This final article in
our three-part series on testing for performance looks at test
execution and results reporting. As a reminder, the Testing for
Performance series is broken into the following parts:
- Part one (http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1298371,00.html): Understand your context, the system, and figure out where to start
- Part two (http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1303733,00.html): Stage the environments, identify data, build out the scripts, and calibrate your tests
- Part three, provide information (this article): Run your tests, analyze results, make them meaningful to the team, and work through the tuning process
As we look at the steps required to provide performance-related
information, we will try to tie the artifacts and activities we use
back to the work that we did in the earlier articles from this series.
Often, as we execute our testing, our understanding of the problem
space changes, becoming more (or sometimes less) detailed, and we
are required to reconsider some of the decisions we made before we got
down into the details. Our tooling requirements can change, or our
environment needs might expand or contract. As you execute your tests
you learn a lot. It's important to recognize that it's OK for that
learning to change your strategy, approach, tooling and data.
In this article we look at the last of the heuristics: execute, analyze, and report.
- Execute: Continually validate the tests and environment, running new tests and archiving all of the data associated with test execution
- Analyze: Look at test results and collected data to determine requirement compliance, track trends, detect bottlenecks, or evaluate the effectiveness of tuning efforts
- Report: Provide clear and intuitive reports for the intended audience so that critical performance issues get resolved
Those three heuristics capture the essence of gathering,
understanding and reporting performance information. You can't have a
tool make sense of your results for you. You also can't have a tool
communicate those results to the various project stakeholders in a
meaningful way. That's where performance testers become more than just
tool-jocks (people who are really good with tools, but not really good
at thinking about how or why they use them).
Make sure everything is in place
In part two of this series (http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1303733,00.html) we walked through staging environments, identifying data, building out scripts and calibrating your tests.
Having a second checklist or a standard notification process for
performance testing can help reduce embarrassing and frustrating
oversights. Many mornings you will come in to find a full inbox because
you failed to notify someone that you were load testing. You might also
have to re-run tests because the right people weren't watching the
system at the right time.
There are also some tasks you may need to repeat in between tests. A
common task for financial service applications is resetting the test
data. That could mean restoring databases, purging records, canceling
pending transactions or updating user information back to a known
state. You may also have to come up with new IP address ranges between
runs, wait for open sessions to time out, clear server caches, or
restart services. Make sure you know what you need to reset between
runs so you can quickly and reliably make those updates.
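To make the reset repeatable, it can help to script it. Below is a minimal sketch in Python, assuming a SQLite copy of the test database; the file names, table names and cleanup statements are hypothetical placeholders for whatever restore and purge steps your own system needs.

# reset_test_data.py -- a minimal sketch of a between-runs reset script.
# The database files, table names and cleanup rules are hypothetical;
# substitute your own restore/purge steps and keep the script under
# version control alongside your execution checklist.
import shutil
import sqlite3

BASELINE_DB = "perf_baseline.db"   # known-good snapshot taken before testing
WORKING_DB = "perf_working.db"     # copy the load generator actually hits

def reset_test_data() -> None:
    # Step 1: restore the working database from the known-good baseline.
    shutil.copyfile(BASELINE_DB, WORKING_DB)

    # Step 2: purge anything that should never carry over between runs,
    # such as pending transactions left by an aborted test.
    conn = sqlite3.connect(WORKING_DB)
    try:
        conn.execute("DELETE FROM transactions WHERE status = 'PENDING'")
        conn.execute("UPDATE users SET session_token = NULL")
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    reset_test_data()
    print("Test data reset to known state.")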
Execute your tests and collect all the data
Executing your test could be as simple as scheduling some scripts and
walking away, or it could be much more involved. Different tests can
require different levels of interaction. Sometimes you might need to
manually intervene during the test (clearing a file or log, kicking off
additional scripts, or actively monitoring something), or you might
just want to measure something manually using "wall-clock" time while
the system is under load.
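As an illustration of that "wall-clock" spot check, here is a minimal Python sketch that times a handful of requests by hand while the scripted load runs. The endpoint URL and sample count are assumptions; in practice you might just as easily time a browser interaction with a stopwatch.

# wall_clock_check.py -- a minimal sketch for spot-checking one transaction
# "by hand" while the scripted load is running.
import time
import urllib.request

TARGET_URL = "http://example.test/login"   # hypothetical page or service under test
SAMPLES = 5

def timed_request(url: str) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()                     # pull the full body, like a real user
    return time.perf_counter() - start

if __name__ == "__main__":
    for i in range(SAMPLES):
        elapsed = timed_request(TARGET_URL)
        print(f"sample {i + 1}: {elapsed:.3f}s")
        time.sleep(2)                       # small think time between samples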
While tests are running, it can be helpful to monitor their
execution. If there's a problem, many times you'll notice it soon
enough to end the test run early and start it up again without having
to reschedule execution. It can also be useful to watch streaming
application logs, performance test data usage, and performance
monitoring applications and utilities. Often, just watching the data stream by will surface problems worth a closer look (http://www.satisfice.com/blog/archives/33). If you see a concerning pattern in the transaction timing, you'll want to investigate why.
If the generated workload looks correct, move on to the system
performance characteristics. Start with basics such as CPU and memory
utilization for each server involved, and then move on to things like
average queue depth, average message time spent in queue, and how load
balancing was distributed for the run. You are looking for anything
that might be out of tolerance. Again, you may need to work with a
cross-functional team to determine what those tolerances should be set to.
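If you don't already have monitoring tooling on every box, even a simple sidecar script can capture the basics for later review and archiving. This is a minimal sketch, assuming the third-party psutil package is installed on the server being watched; the tolerance thresholds are placeholders to be agreed with that cross-functional team.

# monitor_utilization.py -- run on each server during the test and stop it
# with Ctrl+C when the run ends. Thresholds below are hypothetical.
import csv
import time
import psutil

CPU_TOLERANCE = 85.0    # percent, placeholder tolerance
MEM_TOLERANCE = 90.0    # percent, placeholder tolerance
SAMPLE_SECONDS = 15

with open("utilization_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "cpu_pct", "mem_pct", "out_of_tolerance"])
    while True:
        cpu = psutil.cpu_percent(interval=SAMPLE_SECONDS)   # averages over the interval
        mem = psutil.virtual_memory().percent
        flagged = cpu > CPU_TOLERANCE or mem > MEM_TOLERANCE
        writer.writerow([time.strftime("%H:%M:%S"), cpu, mem, flagged])
        f.flush()                                           # keep the log current mid-run
        if flagged:
            print(f"out of tolerance: cpu={cpu}%, mem={mem}%")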
Finally, analyze the transaction response times. Often it
can be useful to look at the response times captured in your
performance scripts alongside response times from other monitoring
tools (for example, Introscope or WhatsUp Gold). Move most of the data
into Excel so you can manipulate it at will. (Don't forget to save the
original before you edit.) There you can look for trends and patterns
in the average, max, min and percentile response times. When you
examine response times, you are often looking for the measured times to
trend either toward or away from a performance objective (target
numbers, SLAs or hard limits like timeouts), or for something
inconsistent with past performance history.
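Excel works well for ad hoc exploration, but a small script can produce the same min/average/percentile roll-up consistently from run to run. This sketch assumes a simple two-column CSV export (transaction name, seconds, no header row) and a hypothetical three-second objective; adjust both to your own data format and targets.

# response_time_summary.py -- a minimal roll-up of per-transaction response times.
import csv
import statistics

RESULTS_FILE = "transaction_times.csv"   # hypothetical export: one "name,seconds" row per sample
OBJECTIVE_SECONDS = 3.0                  # hypothetical objective/SLA

def percentile(sorted_values, pct):
    # Nearest-rank percentile over an already-sorted list.
    k = max(0, int(round(pct / 100.0 * len(sorted_values))) - 1)
    return sorted_values[k]

times = {}
with open(RESULTS_FILE, newline="") as f:
    for row in csv.reader(f):
        if len(row) != 2:
            continue                     # skip blank or malformed lines
        name, seconds = row
        times.setdefault(name, []).append(float(seconds))

for name, values in sorted(times.items()):
    values.sort()
    p90 = percentile(values, 90)
    verdict = "OK" if p90 <= OBJECTIVE_SECONDS else "over objective"
    print(f"{name}: n={len(values)} min={values[0]:.2f} "
          f"avg={statistics.mean(values):.2f} p90={p90:.2f} max={values[-1]:.2f} ({verdict})")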
As you look at the results, create a list of the things that appear
interesting to you and any questions you have. This list becomes useful
as you think about the next test you want to run. It also helps when
you interact with other team members (since it is impossible to
remember the minute details of the results).
Report your results
The performance tester becomes a very
popular person once he starts executing tests. Everyone wants to know
about those test results, and they want to know the minute the tests
have been completed. There is a lot of pressure to make preliminary
results public. Often this creates a tension between the performance
tester, who doesn't want to share results he doesn't yet understand and
doesn't want people acting on bad data, and the project team, which
demands results quickly because performance testing is so critical and
occurs so late in the project.
Two steps can help you deal with this situation. First, make your raw
data available as soon as you get it
all pulled together. Get in the habit of publishing the data in a
common location (file share, CM tool, SharePoint, wiki, etc.) where at
least the technical stakeholders can get to the data they may need to
review. Second, hold a review meeting for the technical team shortly
after the run. Hold it after you've done your preliminary checks to see if you even have results worth looking at, but before
you do any in-depth analysis. It's at this meeting that you might
coordinate a cross-functional team to dig into the logs, errors and
response times.
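The "publish the raw data right away" habit mentioned above is easy to automate. Here is a minimal sketch that copies a run's output folder to a timestamped directory on a shared location; the share path and folder layout are assumptions, and a CM tool, SharePoint library or wiki attachment would serve the same purpose.

# publish_raw_results.py -- copy this run's raw output to a common location.
import shutil
import time
from pathlib import Path

RUN_OUTPUT_DIR = Path("output")                    # where this run's logs and results landed
SHARE_ROOT = Path(r"\\fileshare\perf-results")     # hypothetical shared location

def publish(run_name: str) -> Path:
    stamp = time.strftime("%Y%m%d-%H%M%S")
    destination = SHARE_ROOT / f"{stamp}-{run_name}"
    # Copy everything verbatim; analysis happens later, on copies of these files.
    shutil.copytree(RUN_OUTPUT_DIR, destination)
    return destination

if __name__ == "__main__":
    print("published to", publish("login-load-200-users"))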
Once you have completed any in-depth analysis and have findings to
share, pull them together in a quick document and call together another
meeting. Try not to provide results without two things:
- A chance for you to editorialize on the data so people don't draw their own conclusions without understanding the context for the test
- A chance for people to ask questions in real time
In the results, try to include a summary of the test (model,
scenarios, data, etc.), the version/configuration of the application
tested, current results and how they trend against the targets,
historical results, charts to illustrate key data, and a bulleted
summary of findings, recommendations or next steps (if any). Only after
a face-to-face meeting (or in preparation for one) should you send out
the results in an email.
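One way to keep those reports consistent is to stamp out the same skeleton every time and fill it in before the meeting. The sections in this sketch mirror the list above; the file name and headings are assumptions, not a required format.

# results_summary.py -- write an empty interim results document to fill in.
SECTIONS = [
    "Test summary (model, scenarios, data)",
    "Application version / configuration tested",
    "Current results vs. targets",
    "Historical results",
    "Key charts",
    "Findings, recommendations, next steps",
]

def write_skeleton(path="interim_results.txt", run_name="(run name)"):
    with open(path, "w") as f:
        f.write(f"Performance test results: {run_name}\n\n")
        for section in SECTIONS:
            f.write(f"{section}\n----\n(TODO)\n\n")

if __name__ == "__main__":
    write_skeleton()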
If (for whatever reason) you can't pull everyone together for a
results review, send your findings first to key technical stakeholders
(DBAs, programmers, infrastructure, etc.) so they can add their
analysis and comments. Even this simple step, while slowing things down
only a little, may save you hours of heartache from misunderstandings
on your part or the part of the reviewers.
The more times you iterate through execution and results reporting
with a particular team or for a particular application or system, the
easier and faster the process becomes. You develop heuristics and
shortcuts for most things, and the time from test completion to interim
results can be as short as a few hours, if not minutes. Like anything,
the more you do it, the better your understanding and the faster you
become.
Likewise, you also can become sloppy if you get stuck in a routine.
Make sure you have effective safeguards in place. Reporting inaccurate
results or drawing false conclusions will happen, and most people won't
hold it against you if you do it once or twice. But don't let it become
a habit. If you continue to make those mistakes because of shortcuts,
you'll quickly lose the respect and trust of the team. Without that,
the work becomes more difficult and you become less effective.
At the end of this phase
At the end of this phase you should have an initial set of results to
review with your team. You may not have your final results, but you
should have some idea of what types of errors you're getting, whether
your scripts are calibrated correctly, some preliminary response data,
and an idea of system utilization under load. Going forward you need to
constantly prioritize the next test to run, creating any new test
assets as you move along. If you've done a good job of collecting your
initial data and analyzing the results as you get them, you should be
able to adapt rather quickly.
Here is a possible summary of some of the work products from the execution and reporting phase:
- Checklists to support test execution
- Performance test logs and baselines
- Logged defects or performance issues
- Lists for further investigation:
- Errors (script, data, application, etc.)
- High response times (database transactions, application events, end-user response times, etc.)
- Unpredictable behavior (queue depth, load balancing, server utilization, etc.)
- Open questions on architecture or infrastructure
- Interim or final performance test result documents
- Updated documents from the assessment phase (strategy, diagrams, usage models and test ideas lists)
- Updated assets from the build out phase (scripts, environments, tools, data, etc.)
If you are in a contract-heavy or highly formalized project
environment, you might be done after this phase, or you might carry
work over into another iteration, statement of work, or formal tuning
phase. If you are in a more agile environment, you are now ready to dig
in, get dirty, and start performance tuning or debugging any of the
more problematic issues identified. The important point to remember is
that this phase is about manipulation, observation and making sense of
the information you gather. You move freely along the continuum of data
generation, data gathering and data analysis.