Software Testing Overview
Purpose of Testing
Testing accomplishes a variety of things, but most importantly it
measures the quality of the software you are developing. This view
presupposes that there are defects in your software waiting to be
discovered, a view that is rarely disproved or even disputed.
Several factors contribute to the importance of making testing a
high priority of any software development effort. These include:
1. Reducing the cost of developing the program. Any minimal
savings gained by delaying testing in the early stages of the
development cycle are almost certain to increase development costs
later. Common estimates indicate that a problem that goes undetected
and unfixed until a program is actually in operation can be 40 to 100
times more expensive to resolve than the same problem caught early in
the development cycle.
2. Ensuring that your application behaves exactly as you describe it to the user. For the vast majority of programs, unpredictability is the least desirable consequence of using an application.
3. Reducing the total cost of ownership.
By providing software that looks and behaves as shown in your
documentation, your customers require fewer hours of training and less
support from product experts.
4. Developing customer loyalty and word-of-mouth market share. Finding
success with a program that offers the kind of quality that only
thorough testing can provide is much easier than trying to build a
customer base on buggy and defect-riddled code.
Organize the Testing Effort
The earlier in the development cycle that testing becomes part of the
effort the better. Planning is crucial to a successful testing effort,
in part because it has a great deal to do with setting expectations.
Considering budget, schedule, and performance in test plans increases
the likelihood that testing does take place and is effective and
efficient. Planning also ensures tests are not forgotten or repeated
unless necessary for regression testing.
Requirements-Based Testing
The requirements section of the software specification does more
than set benchmarks and list features. It also provides the basis for
all testing on the product. After all, testing generally identifies
defects that cause or allow behavior the specification does not
describe; for that reason, the test team should be involved in the
specification-writing process.
Specification writers should maintain the following standards when
presenting requirements:
1. All requirements should be unambiguous and interpretable only one way.
2. All requirements must be testable in a way that ensures the program complies.
3. All requirements should be binding because customers demand them.
You should begin designing test cases as the specification is being
written. Analyze each specification from the viewpoint of how well it
supports the development of test cases. The actual exercise of
developing a test case forces you to think more critically about your
specifications.
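The mapping from requirement to test case can be sketched directly in code. The following is a minimal illustration, assuming a hypothetical requirement and a hypothetical apply_discount function; neither comes from any particular specification:

```python
# Sketch: turning one unambiguous, testable requirement into an
# executable test case. The requirement text and apply_discount
# are hypothetical illustrations, not part of any real product.

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; per the hypothetical spec,
    a discount never exceeds 50% of the list price."""
    capped = min(percent, 50.0)
    return round(price * (1.0 - capped / 100.0), 2)

# Requirement: "A discount shall never exceed 50% of the list price."
# Because it is interpretable only one way, it maps directly to assertions:
def test_discount_is_capped_at_fifty_percent():
    assert apply_discount(100.0, 80.0) == 50.0  # 80% requested, 50% applied

def test_discount_below_cap_applies_as_given():
    assert apply_discount(100.0, 20.0) == 80.0

test_discount_is_capped_at_fifty_percent()
test_discount_below_cap_applies_as_given()
```

Writing the test first often exposes ambiguity in the requirement itself; if the assertion cannot be written, the requirement is not yet testable.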
Develop a Test Plan
The test plan outlines the entire testing process and includes the
individual test cases. To develop a solid test plan, you must
systematically explore the program to ensure coverage is thorough, but
not unnecessarily repetitive. A formal test plan establishes a testing
process that does not depend upon accidental, random testing.
Testing, like development, can easily become a task that
perpetuates itself. As such, the application specifications, and
subsequently the test plan, should define the minimum acceptable
quality to ship the application.
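One way to make the "minimum acceptable quality" concrete is to record the test plan as structured data with an explicit ship gate. The sketch below assumes hypothetical test-case names and a simple rule (all priority-1 cases must pass); a real plan would define its own gate:

```python
# Sketch: a minimal machine-readable test plan with an explicit
# "minimum acceptable quality to ship" rule. Case names are hypothetical.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    priority: int      # 1 = must pass before shipping
    passed: bool = False

plan = [
    TestCase("login succeeds with valid credentials", priority=1),
    TestCase("report renders within 2 seconds", priority=2),
    TestCase("tooltip text matches documentation", priority=3),
]

def ok_to_ship(plan):
    """Minimum acceptable quality: every priority-1 case passes."""
    return all(tc.passed for tc in plan if tc.priority == 1)

plan[0].passed = True  # the only priority-1 case has passed
print(ok_to_ship(plan))
```

Encoding the gate this way keeps the ship decision tied to the plan rather than to ad hoc judgment at release time.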
Test Plan Approaches: Waterfall versus Evolutionary
Two common approaches to testing are the waterfall approach and the evolutionary approach.
The waterfall approach is a traditional approach in which the team
works through a fixed sequence of phases, from requirements analysis to
various types of design and specification, to coding, final testing,
and release. For the test team, this means waiting for a final
specification and then following the pattern set by development. A
significant disadvantage of this approach
is that it eliminates the opportunity for testing to identify problems
early in the process; therefore, it is best used only on small projects
of limited complexity.
An alternative is the evolutionary approach in which you develop a
modular piece (or unit) of an application, test it, fix it, feel
somewhat satisfied with it, and then add another small piece that adds
functionality. You then test the two units as an integrated component,
increasing the complexity as you proceed. Some of the advantages to
this approach are as follows:
1. You have low-cost opportunities to reappraise requirements and refine the design, as you understand the application better.
2. You are constantly delivering a working, useful product. If you are
adding functionality in priority order, you could stop development at
any time and know that the most important work is completed.
3. Rather than trying to develop one huge test plan, you can start with
small, modular pieces of what will become part of the large, final test
plan. In the interim, you can use the smaller pieces to find bugs.
4. You can add new sections to the test plan or go into depth in new areas, and put each section to use as soon as it is ready.
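The steps above can be sketched as: test each unit alone, then test the units as an integrated component. Both units below are hypothetical stand-ins for real modules:

```python
# Sketch of the evolutionary approach: unit tests for each small piece,
# then an integration test once the pieces are combined.

def parse_record(line: str) -> dict:
    """Unit 1: parse a 'name,score' line into a record."""
    name, score = line.split(",")
    return {"name": name.strip(), "score": int(score)}

def top_scorer(records: list) -> str:
    """Unit 2 (added later): pick the highest-scoring name."""
    return max(records, key=lambda r: r["score"])["name"]

# Unit tests, written as each piece was developed:
assert parse_record("ada, 91") == {"name": "ada", "score": 91}
assert top_scorer([{"name": "a", "score": 1}, {"name": "b", "score": 2}]) == "b"

# Integration test, added when the two units were combined:
lines = ["ada, 91", "alan, 87"]
assert top_scorer([parse_record(l) for l in lines]) == "ada"
```

Each new unit brings its own small tests, and the integration tests grow with the application, so the final test plan is assembled incrementally rather than written all at once.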
The range of the arguments associated with different approaches to
testing is very large and well beyond the scope of this documentation.
If the suggestions here do not seem to fit your project, you may want
to do further research.
Optimization
A process closely related to testing is optimization. Optimization
is the process by which bottlenecks are identified and removed by
tuning the software, the hardware, or both. The optimization process
consists of four key phases: collection, analysis, configuration, and
testing. In the first phase of optimizing an application, you need to
collect data to determine the baseline performance. Then by analyzing
this data you can develop theories that identify potential bottlenecks.
After making and documenting adjustments in configuration or code, you
must repeat the initial testing and determine if your theories proved
true. Without baseline performance data, it is impossible to determine
if your modifications helped or hindered your application. For more
information, see Performance Tuning Overview.
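The collection phase described above can be sketched in a few lines: record baseline timings before tuning so that later measurements can be compared against them. The workload function is a hypothetical stand-in for the application under test:

```python
# Sketch of the collection phase of optimization: gather baseline
# performance data before making any changes. workload() is a
# hypothetical stand-in for the real application.
import time

def workload():
    return sum(i * i for i in range(100_000))

def measure(func, runs=5):
    """Collect a baseline: best-of-N wall-clock time in seconds."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return min(times)

baseline = measure(workload)
# After a configuration or code change, measure again and compare
# against the recorded baseline:
tuned = measure(workload)
print(f"baseline={baseline:.6f}s tuned={tuned:.6f}s")
```

Without the recorded baseline value, there is no objective way to tell whether a configuration or code change helped or hindered the application.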