Why do manual testing?
Even in this age of short development cycles and automated-test-driven development, manual testing contributes vitally to the software development process. Here are a number of good reasons to do manual testing:
- By giving end users repeatable sets of instructions for using prototype software, manual testing involves them early in each development cycle and draws invaluable feedback from them that can prevent "surprise" application builds that fail to meet real-world usability requirements.
- Manual test scripts give testers something to use while automated scripts are still being built and debugged.
- Manual test scripts can be used to provide feedback to development teams in the form of a set of repeatable steps that lead to bugs or usability problems.
- If done thoroughly, manual test scripts can also form the basis for help or tutorial files for the application under test.
- Finally, in a nod toward test-driven development, manual test scripts can be given to the development staff to provide a clear description of the way the application should flow across use cases.
In summary, manual testing fills a gap in the testing repertoire and makes an invaluable contribution to the software development process.
Problems with using Excel or Word to manage manual testing
Because manual test script creation is, well, manual, organizations may not realize all of the benefits of manual test scripting in today's short development cycles. Typically, testing staffs use Excel spreadsheets (or, less typically, Word tables) to record testing steps, expected results, and pass/fail state in the required timeframes. They compile the results of testing either manually or with homegrown programs that import the Excel spreadsheets or Word tables, process the test results, and then produce reports.
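A minimal sketch of what such a homegrown results-compilation program might look like is shown below. It assumes each test script has been exported from Excel to a CSV file with a "Status" column holding pass/fail values; the column names and file layout are assumptions made for illustration, not an actual standard.

```python
# Hypothetical "homegrown" results-compilation script.
# Assumes each manual test script was exported to CSV with a "Status"
# column (pass/fail); these column names are assumptions, not a standard.
import csv
import sys
from collections import Counter

def summarize(csv_path):
    """Count the outcomes recorded in one exported test script."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row.get("Status", "").strip().lower() or "not run"] += 1
    return counts

if __name__ == "__main__":
    for path in sys.argv[1:]:
        counts = summarize(path)
        total = sum(counts.values())
        print(f"{path}: {total} steps "
              f"({counts['pass']} pass, {counts['fail']} fail, "
              f"{total - counts['pass'] - counts['fail']} other)")
```

Every team tends to write (and maintain) its own variation of this kind of script, which is part of the overhead the next section describes.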
But anyone attempting to use Excel spreadsheets or Word tables to manage manual test scripts will run into these problems:
- The need to constantly scroll horizontally and vertically makes these methods inefficient and difficult to use.
- The typically wide formats of Excel spreadsheets or Word tables and the need to reproduce headings on each page make it difficult to print reports.
- It's hard to organize scripts by grouping testing steps.
- There's no standard way to identify expected results within test scripts.
- There's no standard way to report test results.
- It's hard or impossible to reuse test script lines or groups of lines in different test scripts.
A tool that solves these problems would enable test departments to take full advantage of their enlightened decision to incorporate manual test scripts into the application development process.
Manual Tester to the rescue
IBM Rational Manual Tester is the tool that testing staffs have been waiting for to address the problems listed above and make management of manual test scripts simple and straightforward. It doesn't take much hands-on experience with Manual Tester to prompt a tester to ask, "Why use Excel any longer for manual test scripts?" Manual Tester is easy to learn (allow about an hour and a half) thanks to the detailed tutorials that come with it, its smart design, and its intelligent user interface.
The best thing about using this tool to manage manual testing is that the test scripts created by multiple testers will all follow the same standards. Here are the capabilities of Manual Tester that lead to standardization in scripts (see the sketch after this list):
- Displays and prints standard icons identifying script lines by type (script step, verification point, reportable point, or group of steps).
- Allows the reuse of script steps or groups of steps among different scripts.
- Allows easy grouping of steps, which makes grouping an easily enforceable organizational standard.
- Associates script lines with standard sets of tester-defined properties, such as Name, Compare Data, and links to attachments (which can be opened within Manual Tester).
- Provides an interface that enforces standardized expected results (for example, "inconclusive," "pass," "fail," "error") during execution of scripts.
- Allows for standardized comparisons of data during execution of scripts.
- Allows the test team to select font and color for the text of script steps.
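The sketch below is not Manual Tester's file format or API; it is a hypothetical illustration of the kind of structure these capabilities imply: typed script lines (steps, verification points, groups), reusable groups shared among scripts, and a fixed set of result values. All names and fields are invented for illustration.

```python
# Hypothetical data model illustrating the standardization ideas above;
# names and fields are invented and are NOT Manual Tester's actual format.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Union

class Result(Enum):          # standardized result values recorded at execution time
    PASS = "pass"
    FAIL = "fail"
    ERROR = "error"
    INCONCLUSIVE = "inconclusive"

@dataclass
class Step:                  # an ordinary script step
    name: str
    instruction: str

@dataclass
class VerificationPoint:     # a step whose outcome is checked against expected data
    name: str
    expected: str            # analogue of a "Compare Data" property
    result: Result = Result.INCONCLUSIVE

@dataclass
class Group:                 # a reusable group of lines shared among scripts
    name: str
    lines: List[Union[Step, VerificationPoint, "Group"]] = field(default_factory=list)

# A group defined once can be reused in several scripts:
login = Group("Log in", [Step("Open app", "Start the client"),
                         VerificationPoint("Login prompt", "Login dialog is shown")])
smoke_test = Group("Smoke test", [login, Step("Open report view", "Click Reports")])
regression = Group("Regression", [login, Step("Run batch job", "Click Run")])
```

Because every script is built from the same typed building blocks, reporting and reuse follow the same rules no matter which tester wrote the script.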