Approaches to Automation
There are three broad options in Test Automation:
Full Manual
· Reliance on manual testing
· Responsive and flexible
· Inconsistent
· Low implementation cost
· High repetitive cost
· Required for automation
· High skill requirements
Partial Automation
· Redundancy possible but requires duplication of effort
· Flexible
· Consistent
· Automates repetitive and high-return tasks
Full Automation
· Reliance on automated testing
· Relatively inflexible
· Very consistent
· High implementation cost
· Economies of scale in repetition, regression, etc.
· Low skill requirements
Fully manual testing has the benefit of being relatively cheap and effective. But as the quality of the product improves, the cost of finding each additional bug rises. Large-scale manual testing also implies large testing teams, with the associated costs of space, overhead and infrastructure. Manual testing is far more responsive and flexible than automated testing, but it is prone to tester error through fatigue.
Fully automated testing is very consistent and allows the repetition of similar tests at very little marginal cost. However, the setup and purchase costs of such automation are very high, and maintenance can be equally expensive. Automation is also relatively inflexible and requires rework to adapt to changing requirements.
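For example, once one automated check exists, each additional similar case costs little more than a new data row. The following sketch uses pytest and a hypothetical is_leap_year() function purely for illustration; it is not taken from any particular tool or project.

import pytest

def is_leap_year(year):
    """Hypothetical function under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Each tuple below is one more regression case at almost no marginal cost.
@pytest.mark.parametrize(
    "year, expected",
    [
        (2000, True),   # divisible by 400
        (1900, False),  # divisible by 100 but not by 400
        (2024, True),   # ordinary leap year
        (2023, False),  # common year
    ],
)
def test_is_leap_year(year, expected):
    assert is_leap_year(year) == expected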
Partial Automation incorporates automation only where the most benefit can be achieved. The advantage is that it targets the tasks that gain the most from automation and so achieves the best return on the effort. It also retains a large component of manual testing, which keeps the test team flexible and provides redundancy by backing up automation with manual testing. The disadvantage is that it does not deliver benefits as extensive as either extreme solution.
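A rough way to decide which tasks fall on the automation side of the split is a break-even estimate: automation pays for itself once the effort saved per run has recovered the cost of building the scripts. The runs_to_break_even() helper and the figures below are a minimal sketch with made-up numbers, not a prescribed costing model.

import math

def runs_to_break_even(manual_cost_per_run, automation_cost, automated_cost_per_run):
    """Smallest number of test runs after which automating a task
    becomes cheaper than repeating it manually (None if it never does)."""
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    if saving_per_run <= 0:
        return None  # automation never pays off for this task
    return math.ceil(automation_cost / saving_per_run)

# Illustrative figures: 8 hours to run a suite manually, 40 hours to
# automate it, 0.5 hours per automated run.
print(runs_to_break_even(8, 40, 0.5))  # -> 6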
Choosing the right tool
· Take time to define the tool requirements in terms of technology, process, applications, people skills, and organization.
· During tool evaluation, prioritize which test types
are the most critical to your success and judge the candidate tools on
those criteria.
· Understand the tools and their trade-offs. You may need a multi-tool solution to achieve higher levels of test-type coverage. For example, you may need to combine a capture/playback tool with a load-test tool to cover your performance test cases.
· Involve potential users in the definition of tool requirements and evaluation criteria.
· Build an evaluation scorecard to compare each tool’s performance against a common set of criteria. Rank the criteria in terms of relative importance to the organization; a minimal weighted-scorecard sketch follows this list.
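As one illustration of such a scorecard, the snippet below weights each criterion by its importance and ranks two hypothetical tools. The criteria, weights and raw scores are invented for the example, not recommended values.

# Relative importance of each criterion (weights sum to 1.0).
weights = {
    "test-type coverage": 0.35,
    "fit with team skills": 0.25,
    "licence and running cost": 0.20,
    "vendor support": 0.20,
}

# Hypothetical candidate tools scored 1-5 against the same criteria.
candidates = {
    "Tool A": {"test-type coverage": 4, "fit with team skills": 2,
               "licence and running cost": 3, "vendor support": 5},
    "Tool B": {"test-type coverage": 3, "fit with team skills": 5,
               "licence and running cost": 4, "vendor support": 3},
}

def weighted_score(scores, weights):
    """Combine raw criterion scores into a single weighted total."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank the candidates by weighted score, best first.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: weighted_score(kv[1], weights),
                           reverse=True):
    print(f"{name}: {weighted_score(scores, weights):.2f}")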