Their priority rules are:
Design tests based on user scenarios that
span system components, not on the components themselves.
In today's trendy terminology, Petschenik is saying you
should base tests on use cases, not product architecture.
This rule is based on the observation that "developers
will tend to check individual components more carefully
than they check how these components are supposed to work
together." System testing should fill in the gaps of
developer testing, not replicate it.
In my experience, system test organizations break
this rule all too often: testers test features in
isolation, simply working through the reference manual (which is organized
alphabetically or by feature group). For example, one
tester might test the editor. Another might test the
printing code. No one will discover that the product
crashes when you edit, print, use the editor to revise
some pages, then print again - something that millions of
users will do.
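
As a minimal sketch of what a scenario-spanning test might look like, here is a hypothetical pytest test. The myproduct module and the Editor/Printer classes and methods are invented for illustration; they are not from Petschenik's paper or any real product:

    # Hypothetical scenario test: it walks one plausible user workflow across
    # components instead of exercising the editor and the printer in isolation.
    from myproduct import Editor, Printer   # assumed module, for illustration only

    def test_edit_print_revise_print_again():
        editor = Editor()
        printer = Printer()

        editor.open("report.doc")
        editor.insert(page=1, text="Draft heading")
        printer.print_document(editor.document())       # first print

        editor.insert(page=3, text="Revised figures")   # revise some pages
        # print again: the step that component-level tests never reach
        printer.print_document(editor.document())
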
Retesting old features is more important
than testing new features. This rule is justified by two
observations. The first: "The old capabilities [...]
are of more immediate importance to our users than the
new capabilities." Existing users already depend on
old features; they can't depend on the new ones yet,
because they don't have them. The second observation is
that developers test whether their new code now works;
they're much worse at testing whether the old code still
works.
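
One lightweight way to act on this rule is to keep the old-feature scenarios permanently in the suite and make them easy to run first. Here is a sketch using pytest's custom markers; the marker and test names are mine, not Petschenik's, and the markers would need to be registered in pytest.ini to avoid warnings:

    import pytest

    # Old capability that shipping users already depend on: rerun every release.
    @pytest.mark.old_feature
    def test_existing_document_save_and_reload():
        ...   # exercise the long-standing save/reload path end to end

    # New capability in this release: still worth testing, but second in line.
    @pytest.mark.new_feature
    def test_new_autosave_option():
        ...   # exercise the feature added in this release

Running pytest -m old_feature before the full run gives the old-feature checks first claim on limited testing time.
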
"Testing typical situations is more
important than testing boundary cases." There are
two justifications for this rule. First, developers do an
adequate job of testing boundary cases. Second, boundary
cases are uncommon in normal use. A failure in a boundary
case will be seen by few users; a failure in typical use
will affect many users (by definition).
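
To make the contrast concrete, here is a sketch in the same hypothetical pytest suite; the Editor API is again invented for illustration. Under this rule the first test is the higher priority for system testers, while the second covers a legal but rare input that developers are assumed to have checked:

    from myproduct import Editor   # assumed module, for illustration only

    # Typical case: the kind of input users supply every day.
    def test_paste_ordinary_paragraph():
        editor = Editor()
        editor.open("notes.doc")
        editor.paste("A short paragraph of ordinary prose.")
        assert "ordinary prose" in editor.text()

    # Boundary case: legal but rare input; lower priority for system testers.
    def test_paste_empty_string():
        editor = Editor()
        editor.open("notes.doc")
        before = editor.text()
        editor.paste("")                        # empty clipboard contents
        assert editor.text() == before          # document should be unchanged
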