ACCEPTANCE TESTING
Detail a set of acceptance
criteria—conditions that must be met before testing can begin. A smoke test should represent the bare
minimum of acceptance testing.
As noted above, the ideal is to create a
separate document for acceptance criteria that can be reused and referred to
here. If any particular, specialized
test cases not listed in that document will be used, refer to them here.
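For example, the smoke-test minimum can be as small as a single automated check that the product loads at all. The pytest sketch below illustrates the idea; the module name myapp is a hypothetical placeholder for the product's entry point, not a real package.

    # Minimal smoke-test sketch (pytest). "myapp" is a hypothetical
    # placeholder for the product's entry module.
    import pytest

    def test_product_loads():
        myapp = pytest.importorskip("myapp")   # skip cleanly if absent
        assert hasattr(myapp, "__version__")   # assumes a version attribute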
FEATURE LEVEL TESTING
This is the real meat of the test plan. Fill in the test categories below, itemizing the types of tests to be run along with references to the test library or catalog. Individual test cases should not be listed here, and test requirements generally should not be either; the details should exist elsewhere and can be cross-referenced.
Task-Oriented Functional
Tests
This is a detailed section, listing test requirements for program features against functional specifications, user guides, or other design-related documents.
If there are test matrices available listing these features and their
interdependence (and there should be), refer to them.
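As an illustration, a task-oriented functional test traceable to a specification item might look like the following pytest sketch; the spec section number and the title_case feature are hypothetical stand-ins, not taken from any real product.

    # Task-oriented functional test sketch. The spec reference and the
    # feature under test are illustrative stand-ins.
    def title_case(text):
        """Stand-in for a documented product feature."""
        return text.title()

    def test_spec_4_2_title_casing():
        # Traces to (hypothetical) functional spec section 4.2.
        assert title_case("test plan outline") == "Test Plan Outline"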
Forced-Error Tests
Provide or refer to a list of all error
conditions and messages. Identify the
tests that will be run to force the program into error conditions.
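A sketch of one such test, using pytest: the error condition is forced deliberately and the resulting message is checked. The file name is illustrative.

    # Forced-error test sketch (pytest): drive the code into an error state
    # on purpose and verify the message identifies the problem.
    import pytest

    def test_missing_file_reports_clear_error(tmp_path):
        missing = tmp_path / "does_not_exist.cfg"   # illustrative file name
        with pytest.raises(FileNotFoundError) as excinfo:
            open(missing)
        assert "does_not_exist.cfg" in str(excinfo.value)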
Boundary Tests
Boundary tests are tests carried out at the lines between valid and invalid input, between acceptable and unacceptable system requirements (such as memory, disk space, or timing), and at other limits of performance. They are key to eliminating duplication of effort: a few well-chosen cases at each boundary can stand in for the large ranges of equivalent values between boundaries. Identify the types of boundary tests that will be carried out. Note that such tests can also fall into the categories outlined below, so this section may be removed, or made a sub-section of those categories.
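For instance, a validation routine accepting ages 0 through 130 would be probed at 0, 130, -1, and 131. A hedged pytest sketch, where parse_age and its range are stand-ins for product code:

    # Boundary-test sketch (pytest). "parse_age" is a stand-in for any
    # input-validation routine; the 0-130 range is illustrative.
    import pytest

    def parse_age(text):
        value = int(text)
        if not 0 <= value <= 130:
            raise ValueError("age out of range")
        return value

    @pytest.mark.parametrize("text", ["0", "130"])    # on the boundary: valid
    def test_boundary_valid(text):
        assert parse_age(text) == int(text)

    @pytest.mark.parametrize("text", ["-1", "131"])   # just past it: invalid
    def test_boundary_invalid(text):
        with pytest.raises(ValueError):
            parse_age(text)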
Integration Tests
Identify components or modules that can be
combined and tested independently to reduce dependence on system testing. Identify any test harnesses or drivers that
need to be developed.
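A minimal test driver might combine two components in isolation, as in this sketch; both classes are illustrative stand-ins rather than real modules.

    # Integration-test driver sketch: two stand-in components exercised
    # together without the rest of the system.
    class Parser:                          # stand-in component A
        def parse(self, line):
            key, _, value = line.partition("=")
            return key.strip(), value.strip()

    class Config:                          # stand-in component B
        def __init__(self):
            self.values = {}
        def load(self, parser, lines):
            for line in lines:
                key, value = parser.parse(line)
                self.values[key] = value

    def test_parser_and_config_integrate():
        config = Config()
        config.load(Parser(), ["retries = 3", "host = example.com"])
        assert config.values == {"retries": "3", "host": "example.com"}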
System-Level Tests
Specify the tests that will be carried out to
fully exercise the program as a whole to ensure that all elements of the
integrated system function properly.
Note that when unit and integration testing have been properly performed, the dependence upon full system-level testing is reduced.
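One hedged sketch of a system-level check drives the whole program as a black box through its command line; the executable name myapp and its --version flag are placeholders.

    # System-level test sketch: exercise the installed program end to end.
    # "myapp" is a placeholder executable name.
    import shutil, subprocess, pytest

    def test_program_runs_end_to_end():
        exe = shutil.which("myapp")            # placeholder executable
        if exe is None:
            pytest.skip("product not installed on this machine")
        result = subprocess.run([exe, "--version"],
                                capture_output=True, text=True)
        assert result.returncode == 0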
Real-World User-Level Tests
In contrast to types of testing designed to
find defects, identify tests that will demonstrate the successful functioning
of the program as you expect the customer to use it. What type of workflow tests will be run? What type of “real work” will be carried out
using the program?
Unstructured Tests
Specify the amount of ad-hoc or exploratory
testing that will be carried out.
Identify the scope and the time associated with this form of testing.
Volume Tests
Indicate the types of tests that will be carried
out to see how the program deals with very large amounts of data, or with a
large demand on timely processing. Note
that these tests can rarely be performed without automation; identify the
automation tools, test harnesses, or scripts that will be used. Ensure that the programs developed for the
test automation effort are accompanied by their own sets of requirements,
specifications, and development processes.
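A hedged sketch of an automated volume test: the input is generated programmatically rather than authored by hand, and the one-million-row size is illustrative, not a requirement from this plan.

    # Volume-test sketch: generate a large data set on the fly and confirm
    # the processing routine survives it. The row count is arbitrary.
    import csv, io

    def test_handles_one_million_rows():
        buf = io.StringIO()
        writer = csv.writer(buf)
        for i in range(1_000_000):             # generated, not hand-written
            writer.writerow([i, f"record-{i}"])
        buf.seek(0)
        assert sum(1 for _ in csv.reader(buf)) == 1_000_000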
Stress Tests
Identify the limits under which the program
is expected to perform. These may
include number of transactions per unit time, timeouts, memory constraints,
disk space constraints, and so on.
Volume tests and stress tests are closely related; you may consider
wrapping both into the same category.
How will the product be tested to push the
upper functional limits of the program?
Will specific tools or test suites be used to carry out stress
tests? Ensure that these are reusable.
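A reusable stress check against a stated limit might look like the sketch below; the 1,000 transactions-per-second target and the process stand-in are assumptions, not figures from this plan.

    # Stress-test sketch: push toward a stated transaction-rate limit and
    # fail if it cannot be sustained. Target figures are illustrative.
    import time

    def process(item):                         # stand-in for one transaction
        return item * 2

    def test_sustains_transaction_rate():
        target_tx, budget_s = 1000, 1.0        # assumed limit: 1000 tx/s
        start = time.perf_counter()
        for i in range(target_tx):
            process(i)
        elapsed = time.perf_counter() - start
        assert elapsed <= budget_s, f"{target_tx} tx took {elapsed:.2f}s"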
Performance Tests
Refer to the functional requirements that
specify acceptable performance. Identify
the functions that need to be measured, and the tests needed to show
conformance to the requirements.
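A sketch of a measurement tied to a requirement follows; the 50 ms threshold and the measured operation are placeholders for whatever the functional requirements actually specify.

    # Performance-test sketch: measure and compare against the requirement.
    # The threshold is a placeholder for the documented figure.
    import statistics, time

    REQUIRED_P95_SECONDS = 0.05                # placeholder requirement

    def measured_operation():                  # stand-in for the real function
        sorted(range(10_000), reverse=True)

    def test_p95_latency_meets_requirement():
        samples = []
        for _ in range(100):
            start = time.perf_counter()
            measured_operation()
            samples.append(time.perf_counter() - start)
        p95 = statistics.quantiles(samples, n=20)[18]  # 95th percentile
        assert p95 <= REQUIRED_P95_SECONDS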
REGRESSION TESTING
At each stage of new development or
maintenance, a subset of the regression test library should be run, focusing on
the feature or function that has changed from the previous version. Unit, integration, and system tests are all
viable places for regression testing.
For small maintenance fixes, identify this subset. A good version control system can allow the
building of older versions of the software for comparative purposes.
In the final phase of a complete development
cycle, a full regression test cycle is run.
Identify the test case libraries and suites that will be run.
Whether a subset or a full regression test
run, existing test scripts, matrices and test cases should be used, whether
automation is available or not. Identify
the documents that describe the details.
Emphasize regression tests for functions that are new or that have
changed, for components that have had a history of vulnerability, for high-risk
defects, and
for previously-fixed severe defects.
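One common mechanism for selecting such a subset is test tagging. The pytest sketch below marks a test guarding a previously fixed defect (the defect number is hypothetical), so that running "pytest -m regression" executes only the tagged subset.

    # Regression-subset sketch: tag tests so a subset can be selected with
    # "pytest -m regression". Register the marker in pytest.ini to silence
    # warnings. The defect number is hypothetical.
    import pytest

    @pytest.mark.regression
    def test_defect_1234_stays_fixed():
        # Guards a previously fixed (hypothetical) severe defect:
        # upper-casing of the German sharp s.
        assert "straße".upper() == "STRASSE"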
CONFIGURATION AND
COMPATIBILITY TESTING
If applicable, identify the types of software
and hardware compatibility tests that will be carried out.
List the operating systems, software applications, device drivers, etc., that the product will be tested with or against.
List hardware environments required for
in-house testing.
DOCUMENTATION TESTING/ONLINE
HELP TESTING
Documentation and online help testing will be carried out to verify the technical accuracy of the documented material.
If a license agreement is included in or
displayed by the product, or the portion of it to which this test plan refers,
ensure the correct one is being used (see the next item below).