Since the beginning of 2001 and the failure of many dot-coms,
return on investment (ROI) has again become an important measurement
for evaluating whether a project should be undertaken. ROI, as a term
representing value above expense, has become part of the business
lexicon. It is especially useful when comparing one IT project to
another, or when deciding whether to undertake a project at all.
In this paper we will examine the ROI derived from automated
testing, covering both functional and scalability testing (load,
stress, performance, volume, etc.), and try to determine guidelines
for calculating that ROI.
ROI = Net Present Value of (or benefit derived from) Investment / Initial Cost
This equation may appear straightforward, but the difficulty lies in
determining the value of the intangible benefits derived from
automated testing, since the effort does not directly produce
revenue. We will therefore look at ROI and test automation in broader
terms, rather than in the explicit terms of the formula above, and
try to determine its value.
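To make the formula concrete, here is a minimal worked calculation. All of the dollar figures and hours below are hypothetical examples chosen for illustration; they are not quotes from any vendor or from this paper.

```python
# Illustrative ROI calculation for a test-automation effort.
# All figures are hypothetical examples, not vendor quotes.

def roi(net_benefit, initial_cost):
    """ROI = net benefit derived from the investment / initial cost."""
    return net_benefit / initial_cost

# Hypothetical initial cost: tool license + hardware + training
initial_cost = 5_000 + 2_000 + 25_000          # $32,000

# Hypothetical benefit: manual-testing hours avoided over 8 builds
hours_saved_per_build = 120
builds = 8
hourly_rate = 50                               # fully loaded tester rate
net_benefit = hours_saved_per_build * builds * hourly_rate  # $48,000

print(f"ROI = {roi(net_benefit, initial_cost):.2f}")  # ROI = 1.50
```

A ratio above 1.0 means the benefit exceeds the initial cost; the hard part, as noted above, is putting a defensible dollar value on the benefit.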
What is test automation?
Test automation is the use of test tools to automate the
exercising of business and system transactions and requirements, in
order to verify application and architecture correctness and
scalability/performance.
Most automated testing tools provide editors, compilers and
fully functional programming languages, e.g., C, Basic, Java, or
JavaScript.
Warning!
- Automation tools are NOT macro recorders, but fully functioning programming environments, and must be treated as such.
- Record and playback ("point-and-click") will surely result in failure.
- Your engineer will need programming skills to:
  - create functions
  - access Win32 API functions
  - read/write files
  - use ODBC connections to make SQL calls
  - utilize COM functionality
  - perform data correlation of complex SQL calls and web transactions
  - apply other programming techniques
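As a rough sketch of the kind of programming this implies, the snippet below shows a reusable verification function that reads expected results from a data source and checks them against the database behind the application under test. Python and the standard-library sqlite3 module stand in here for a commercial tool's scripting language and its ODBC interface; the table, column and function names are all hypothetical.

```python
# Sketch only: a modular, data-driven verification function of the
# kind an automation engineer writes. Python/sqlite3 stand in for a
# tool's scripting language and ODBC; all names are hypothetical.
import csv
import io
import sqlite3

def verify_orders(conn, expected_rows):
    """Compare each expected (id, status) pair against the database."""
    failures = []
    for order_id, status in expected_rows:
        row = conn.execute(
            "SELECT status FROM orders WHERE id = ?", (order_id,)
        ).fetchone()
        if row is None or row[0] != status:
            failures.append(order_id)
    return failures

# Toy database representing the application's back end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("A1", "shipped"), ("A2", "open")])

# Expected results would normally live in an external data file.
data_file = io.StringIO("A1,shipped\nA2,open\n")
expected = list(csv.reader(data_file))

print(verify_orders(conn, expected))  # [] -> no mismatches
```

The point is not this particular code but the shape of it: functions, file I/O and SQL access are everyday requirements in automation scripts, which is why record-and-playback alone fails.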
Who are the main testing tool vendors?
The main tool vendors for performance and functional
automation testing are listed below. All have their strong points and
work in most mainstream environments, but each has architectures
(front-end development tools, protocols and databases) with which its
tools work best.
Company | Web Site | Functional Testing Product | Performance Testing Product
Rational Software | www.rational.com | TeamTest for Functional Testing | TeamTest for Performance Testing
Mercury Interactive | www.merc-int.com | WinRunner | LoadRunner
Compuware | www.compuware.com | QARun | QALoad
Segue | www.segue.com | SilkTest | SilkPerformer
Empirix | www.empirix.com | e-Tester | e-Load
Radview | www.radview.com | WebFT | WebLoad
Sitraka | www.sitraka.com | JProbe | PerformaSure
What are the initial costs incurred in test automation?
There are basically four groups of costs associated with test automation.
- The cost of the software. Software for functional testing costs
approximately $5,000 per user. A performance testing tool can run
from $35,000 for 500 virtual testers and only the HTTP protocol (web)
up to $300,000, depending upon features, number of virtual testers
and the architecture (and protocols) to be tested.
- The cost of the hardware. Hardware cost is negligible for
functional testing: a high-end workstation can be purchased for under
$2,000. By high-end, we mean a 1 GHz processor, 256 MB of RAM and an
Ethernet port. To implement functional automation, a workstation will
be needed for each engineer. A machine with reduced specifications,
such as a 733 MHz processor and 96 MB of RAM, would also work.
- The cost of hardware for performance/load/stress testing is
significantly higher. When speaking about any type of multi-user
test, tool vendors speak in terms of virtual users (or virtual
testers). A virtual user is a process or thread that emulates a
transaction in such a way that the middle-tier components and the
back-end database cannot tell the difference between the virtual user
and an actual user.
- Multi-user testing (commonly referred to as 'performance testing'
by the tool vendors) requires a master machine to act as the
scheduler and coordinate the tests, and agent machines to drive the
virtual user scripts. Virtual users consume computer resources: a
virtual user can consume between 1 and 6 megabytes of RAM, and CPU
saturation typically occurs at approximately 250 virtual users.
Therefore, one high-end PC with a 1 GHz processor and 1 GB of RAM may
be able to drive 250 virtual users. The cost to purchase this
hardware would be $4,300 to run a 100-user test (source: Dell) and
$337,000 for a 5,000-user test (source: Sun). Some software vendors
and professional services firms (RTTS among them) provide performance
testing software, and the hardware to run the virtual users, on a
rental basis, greatly reducing the cost per project.
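The sizing arithmetic above can be sketched in a few lines. The per-virtual-user RAM figure and the ~250-virtual-users-per-machine saturation point come from the text; the worst-case assumption and the ceiling-division formula for agent count are ours.

```python
# Rough load-generator sizing using the figures quoted above:
# 1-6 MB of RAM per virtual user, with CPU saturation at roughly
# 250 virtual users per 1 GHz / 1 GB agent machine.
import math

def agents_needed(virtual_users, vus_per_machine=250):
    """Number of agent machines to drive the given virtual-user load."""
    return math.ceil(virtual_users / vus_per_machine)

def ram_needed_mb(virtual_users, mb_per_vu=6):
    """Worst-case RAM across agents, assuming 6 MB per virtual user."""
    return virtual_users * mb_per_vu

for vus in (100, 500, 5_000):
    print(vus, "VUs ->", agents_needed(vus), "agents,",
          ram_needed_mb(vus), "MB RAM worst case")
```

Sketches like this make it easy to see why a 5,000-user test needs a farm of agent machines rather than a single workstation.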
- The cost of trained personnel. Either train and mentor an internal
resource ($75,000 for salary and benefits, plus $25,000 for training
and mentoring over the course of 2,000 hours to become proficient),
or contract a skilled resource from a professional services firm
(from $650 to $2,500 per day, depending upon the firm contracted and
the skill set and experience of the resource).
- The cost of scripting (or coding) the test cases. On the
functional side, the cost is the up-front time for setup, plus the
fact that scripting test cases takes about five times longer than
manual testing in the initial startup period. (Hence the break-even
point for functional test automation is said to be at least five
anticipated builds.)
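The five-build break-even heuristic can be checked with simple cumulative-cost arithmetic. The 5x scripting factor comes from the text; the per-build manual effort and the 10% per-build maintenance cost are hypothetical assumptions added for the sketch.

```python
# Break-even sketch for functional automation: scripting takes ~5x
# the manual effort up front (from the text), replay is nearly free,
# and we assume a hypothetical 10% maintenance cost per later build.
def cumulative_cost(builds, manual_hours=40,
                    scripting_factor=5, maint_fraction=0.1):
    """Return (manual, automated) cumulative hours after N builds."""
    manual = manual_hours * builds
    automated = (manual_hours * scripting_factor
                 + manual_hours * maint_fraction * (builds - 1))
    return manual, automated

for b in range(1, 8):
    manual, automated = cumulative_cost(b)
    marker = "<- automation cheaper" if automated <= manual else ""
    print(f"build {b}: manual {manual}h, automated {automated}h {marker}")
```

With these assumptions the curves cross at build six, consistent with the "at least five builds" rule of thumb; with zero maintenance they cross at exactly build five.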
What are the tangible benefits of automated testing?
Speed and accuracy – Automated testing is faster and more accurate
than manual testing. It can be as much as 50 times faster, depending
upon the speed of the driver machine and the speed with which the
application processes information (inserts, updates, deletes and
views). Test tools are also much more accurate than manual test
input: the average typist makes 3 mistakes for every 1,000
keystrokes, whereas automation tools never tire, get bored, take
shortcuts or make assumptions about what works.
Accessibility – Automation tools allow access to objects, data,
communication protocols, and operating systems that manual testers
cannot access. This allows for a test suite with much greater depth
and breadth.
Accumulation – Once tests are developed, long-term benefits are
derived through reuse. Applications change and gain complexity over
time, and the number of tests always increases as the
application/architecture matures. Engineers can continually add to
the test suite rather than testing the same functionality over and
over again by hand.
Manageability – Automation tools provide the ability to manage test artifacts.
Discovery of issues – Automated testing assists with the discovery of
issues early in the development process, reducing costs (see figure 1
below).
Repeatability – An automation suite provides a repeatable process
for verifying functionality on the functional side and scalability on
the performance side.
Availability – Scripts can run any time during the day or night unattended.
What are the intangible benefits of test automation?
Formal process – Automation forces a more formal process on test
teams, due to the explicitness of the artifacts and the flow of
information required.
Retention of customers – When sites do not function correctly or
perform poorly, customers may leave and never come back. What is the
cost to your business of that scenario? Performing correct and
systematic automated testing helps assure a quality experience for the
customer – both internal and external.
Greater job satisfaction for testers – Test engineers no longer
manually execute the same test cases over and over. Instead, they use
a programming-style IDE and language that is more challenging,
rewarding and portable to other positions (e.g., development).
What is a rule of thumb for determining whether there is sufficient ROI to undertake functional test automation?
- When testing the functionality of an application, the heuristic
(or "rule of thumb") is whether there will be at least five builds.
For scalability/performance testing, automation pays off for any
application with more than a "handful" of concurrent users on a site
that is critical to either internal (employees) or external
customers.
- There are six primary reasons for failure in automation:
- Lack of a structured automation methodology.
- Test automation is not treated as a project with proper project
planning (i.e., scope, resources, time-to-market).
- Testing is performed at the end of the development cycle (the
waterfall method).
- No modularization (use of functions) in automation scripts.
- Test engineers are untrained in the tool interface and programming techniques.
- After initially creating the automation suite, the customer does
not maintain it for future builds.
The solutions to these reasons for failure are listed below.
Implement a structured automation methodology.
- Implement a pragmatic approach to testing that is manageable,
repeatable, measurable, improvable, automated and risk-based (see
figure 2).
- Manageable, such that the project can be decomposed into modular, defined tasks with assigned resources and timelines.
- Repeatable, such that others can easily carry forward the process that has been defined.
- Measurable, such that the effort is quantifiable - how many
defects are found in each stage, what is the trend of different
severities of defects, how close is the testing cycle to completion,
how long does a transaction take to complete?
- Improvable, such that each build becomes more efficient at finding
defects. The goal of this measurable and improvable process is to
surface more defects in the testing life cycle so that fewer are
found in production.
- Automated, to build a data-driven regression and
scalability/performance suite that takes advantage of the best-of-breed
testing software.
- And risk-based, by targeting the test types and application
functionality that are most crucial to the usage of the
application(s).
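The "measurable" goal above - counting defects per stage and tracking severity trends - amounts to simple aggregation over a defect log. The sketch below shows the idea; the build names, severities and counts are hypothetical.

```python
# Sketch of the "measurable" goal: aggregate defects found per build
# and per severity so trends are visible. All data is hypothetical.
from collections import Counter

defects = [
    ("build1", "critical"), ("build1", "major"), ("build1", "major"),
    ("build2", "major"),    ("build2", "minor"), ("build2", "minor"),
    ("build3", "minor"),
]

per_build = Counter(build for build, _ in defects)
per_severity = Counter(sev for _, sev in defects)

print(dict(per_build))     # {'build1': 3, 'build2': 3, 'build3': 1}
print(dict(per_severity))  # {'critical': 1, 'major': 3, 'minor': 3}
```

A declining per-build count with a shrinking share of critical defects is the trend an improvable process looks for as the release approaches.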
- Treat testing as a project. Treat test automation as you would a
development project, and manage the scope, resources and
time-to-market adequately. These three variables have
interdependencies. Since both resources ("I only have budget for x
testers.") and time-to-market ("If we don't deliver this software by
x date, we're all out of jobs.") are typically constants rather than
variables, the only component that is truly a variable is scope: only
"z" scope can be tackled by "y" resources in "x" time. If the scope
of the effort needs to be increased, there are two choices: increase
the resources or extend the time frame. Conversely, if the scope of
the work required exceeds what the available resources can perform in
the time available, the scope must be decreased.
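The x/y/z trade-off reduces to one line of arithmetic once you pick a productivity figure. The figure below (test cases per tester-week) is a hypothetical assumption; the structure of the calculation is what matters.

```python
# The scope/resources/time trade-off as arithmetic: with "y" testers
# for "x" weeks, only "z" scope fits. The productivity figure
# (test cases per tester-week) is a hypothetical assumption.
def feasible_scope(testers, weeks, cases_per_tester_week=25):
    """Test cases that fit given fixed resources and time."""
    return testers * weeks * cases_per_tester_week

planned_cases = 800
capacity = feasible_scope(testers=4, weeks=6)   # 600 cases
shortfall = planned_cases - capacity            # 200 cases over capacity
print(capacity, shortfall)
```

With resources and dates fixed, the 200-case shortfall must be cut from scope - or resources or time must grow - which is exactly the point of the paragraph above.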
Move testing up in the software development lifecycle. The test
process should begin where the development process does: at the
beginning. Some development teams still follow the Waterfall
development process (see figure 5), which dictated that testing was
done in stage 5 and was 10% of the entire development effort (Gartner
suggests 30-40%). This process was well-suited to the stability of
mainframes, but is ill-suited to complex, multi-tiered, iterative
system development, where defect detection proves much too costly
(see figure 6). Moving the test process up in the software
engineering cycle minimizes the cost of defects and provides more
time for effective test planning, design, execution and tracking.
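The cost argument for early testing is often illustrated with phase multipliers - a defect costs far more to fix the later it is found. The multipliers and base cost below are purely illustrative assumptions, not figures from this paper or its referenced figures.

```python
# Illustrative only: hypothetical phase multipliers for the cost of
# fixing one defect, showing why late detection is expensive.
phase_multiplier = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

base_cost = 100  # hypothetical dollars to fix a defect at requirements time
for phase, m in phase_multiplier.items():
    print(f"{phase:<12} ${base_cost * m:,}")
```

Whatever the exact multipliers, the shape of the curve is the same: catching a defect before coding starts is orders of magnitude cheaper than catching it in production.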
Continued in Part 2...