Case Study: Costly Automation Failures
In 1996, one large corporation set out to evaluate the
commercial test automation tools available at the time. They brought in
eager technical sales staff from the various vendors, watched
demonstrations, and performed fairly thorough internal evaluations of
each tool.
By 1998, they had chosen one particular vendor and
placed an initial order for over $250,000 worth of product licensing,
maintenance contracts, and onsite training. The tools and training were
distributed throughout the company to various test departments--each
working on its own projects.
None of these test projects had anything in common.
The applications were vastly different, and each project had its own
schedule and deadlines to meet. Yet every one of these departments
began separately coding functionally identical common libraries:
routines for setting up the Windows test environment, routines for
accessing the Windows programming interface, file-handling routines,
string utilities, database access routines--the list of code
duplication was disheartening!
For their test designs, they each captured
application-specific interactive tests using the capture/replay tools.
Some groups went a step further and modularized key reusable sections,
creating reusable libraries of application-specific test functions or
scenarios. This was intended to reduce the code duplication and
maintenance burden that pure captured test scripts so readily produce.
For some of the projects, this might have been appropriate if done with
sufficient planning and an appropriate automation framework, but that
was seldom the case.
With all these modularized libraries, testers could
create functional automated tests in the automation tool's proprietary
scripting language through a combination of interactive test capture,
manual editing, and manual scripting.
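To illustrate the kind of modularization these teams were attempting,
here is a minimal sketch in Python. The original scripts were written
in each vendor's proprietary scripting language, so the "gui" driver
object, the window and control names, and the application path below
are all invented stand-ins, not any real tool's API. The point is only
the contrast: a raw captured script spells out every interaction, while
a reusable application-specific function hides those steps behind one
call that many tests can share.

    # Hypothetical sketch: 'gui' and all window/control names are invented
    # stand-ins for a vendor's proprietary capture/replay scripting API.

    # Raw captured script: every interaction is spelled out, so every UI
    # change means editing these same steps in every test that repeats them.
    def captured_login_test(gui):
        gui.start_app(r"C:\Apps\Orders\orders.exe")
        gui.set_text("LoginWindow", "UserNameField", "tester01")
        gui.set_text("LoginWindow", "PasswordField", "secret")
        gui.click("LoginWindow", "OKButton")
        assert gui.window_exists("MainWindow")

    # Modularized version: the same steps wrapped as a reusable,
    # application-specific function shared by many test scripts.
    def login(gui, user, password):
        """Log in to the application; one place to fix when the UI changes."""
        gui.start_app(r"C:\Apps\Orders\orders.exe")
        gui.set_text("LoginWindow", "UserNameField", user)
        gui.set_text("LoginWindow", "PasswordField", password)
        gui.click("LoginWindow", "OKButton")
        assert gui.window_exists("MainWindow")

    def test_create_order(gui):
        login(gui, "tester01", "secret")   # reuse instead of re-capturing
        # ... order-entry steps would follow here ...

Each department built libraries of exactly this sort--but each built
its own, incompatible with everyone else's.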
One problem was that, as separate test teams, they did
not think past their own individual projects. And although each was
setting up something of a reusable framework, each framework was
completely unique--even where the common library functions were the
same! This meant duplicate development, duplicate debugging, and
duplicate maintenance. Understandably, each project still had looming
deadlines, and each was forced to limit its automation efforts in order
to get real testing done.
As changes to the various applications began
breaking automated tests, script maintenance and debugging became a
significant challenge. Upgrades to the automation tools themselves also
caused significant and unexpected script failures; in some cases, teams
had to revert (downgrade) to older versions of the tools. Allocating
resources for continued test development and test code maintenance
became a difficult issue.
Eventually, most of these automation projects were
put on hold. By the end of 1999--less than two years from the inception
of this large-scale automation effort--over 75% of the test automation
tools were back on the shelf, waiting for another chance at some later
date.
For more details, see:
http://safsdev.sourceforge.net/FRAMESDataDrivenTestAutomationFrameworks.htm