What Should I Not Automate?
For all applications, there are tests that simply cannot be automated. Identify these tests as early as possible, before sitting down to automate them. Skipping this step leads to discovering days or weeks later that it would have been easier and more efficient to test manually than to develop and maintain the code that automates the test. That discovery affects the schedule and, in some cases, the morale of the person doing the automation. Automate everything you can, as long as it makes sense to automate.
An example is a test case that checks the different cursor states; for instance, when you select Size from the system menu, the default cursor changes to a directional cursor. The requirement is to verify the appearance of the directional cursor.
From this brief description, the test case may not lend itself to automation. One question that always needs to be asked before automating a manual test case is whether the test lends itself to automation at all. Automating something at this level of difficulty, such as verifying every available cursor state in an application, will most likely be a large time investment. Given that investment, you probably will not recoup it by running the automated test case often enough, within a reasonable period, to truly make the effort worthwhile.
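A rough break-even calculation makes the point concrete. The sketch below is illustrative only; the helper function and all of the figures in it are assumptions, not numbers from the text. It simply compares the hours spent developing and maintaining an automated check against the hours saved per run.

```python
# Hypothetical break-even sketch; the numbers are illustrative assumptions.

def break_even_runs(dev_hours, maintenance_hours_per_run,
                    manual_hours_per_run, automated_hours_per_run):
    """Return the number of runs needed before automation pays off,
    or None if an automated run saves no time over a manual one."""
    saved_per_run = manual_hours_per_run - (automated_hours_per_run
                                            + maintenance_hours_per_run)
    if saved_per_run <= 0:
        return None  # automation never recovers its cost
    return dev_hours / saved_per_run

# Example: a cursor-appearance check that takes 0.1 hour to run manually
# but 40 hours to script, with ongoing upkeep on every run.
runs = break_even_runs(dev_hours=40, maintenance_hours_per_run=0.05,
                       manual_hours_per_run=0.1, automated_hours_per_run=0.01)
print(runs)  # 1000.0 runs before the scripting effort is recovered
```

Under these assumed figures the test would have to run a thousand times before the automation paid for itself, which is unlikely for a check of this kind within any reasonable regression cycle.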
High maintenance cost is one indicator that a test should be run manually. Another is the need for human judgment to assess the correctness of the result, or for extensive, ongoing human intervention to keep the test running. For these reasons, the following tests are a good fit for manual testing:
•Installation, setup, operations, and maintenance - In many cases, these tests involve loading CD-ROMs and tapes, changing hardware, and other ongoing handholding by the tester.
•Configuration and compatibility - Like operations and maintenance testing, these tests require reconfiguring systems and networks and installing software and hardware, all of which require human intervention.
•Error handling and recovery - Again, the need
to force errors—by powering off a server, for example—means that people
must stay engaged during test execution.
•Localization - Only a human tester with the appropriate skills can decide whether a translation makes sense, is culturally offensive, or is otherwise inappropriate. (Currency, date, and time testing can be automated, but the need to rerun these tests for regression is limited.)
•Usability
- As with localization, human judgment is needed to check for problems
with the facility, simplicity and elegance of the user interface and
workflows.
•Documentation and help - Like usability and localization, checking documentation requires human judgment.
Typically, there is no return on investment in trying to automate these kinds of tests.