The goal of performance testing is not to find bugs, but to eliminate
bottlenecks and establish a baseline for future regression testing. To
conduct performance testing is to engage in a carefully controlled
process of measurement and analysis. Ideally, the software under test
is already stable enough that this process can proceed smoothly.
A
clearly defined set of expectations is essential for meaningful
performance testing. If you don't know where you want to go in terms of
the performance of the system, then it matters little which direction
you take (remember Alice and the Cheshire Cat?). For example, for a Web
application, you need to know at least two things:
- expected load in terms of concurrent users or HTTP connections
- acceptable response time
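As a concrete starting point, these two targets can be written down in code
before any test run. The sketch below is a minimal Python example; the
PerformanceTarget name and the figures in it are illustrative assumptions, not
values taken from a real project.

```python
# Minimal sketch: encode the two performance targets up front so every test
# run can be checked against the same baseline. All numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class PerformanceTarget:
    concurrent_users: int        # expected load
    max_response_time_s: float   # acceptable response time, in seconds

    def is_met(self, measured_response_time_s: float) -> bool:
        # The target holds when the measured response time stays within budget.
        return measured_response_time_s <= self.max_response_time_s

# Example: 500 concurrent users, responses within 2 seconds (made-up figures).
target = PerformanceTarget(concurrent_users=500, max_response_time_s=2.0)
print(target.is_met(1.4))   # True: within the acceptable response time
```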
Once you know where you want to be, you can start on your way there by
steadily increasing the load on the system while looking for bottlenecks
(a sketch of this ramp-up loop follows the list of tools below). To return
to the Web application example, these bottlenecks can exist at multiple
levels, and to pinpoint them you can use a variety of tools:
- at the application level, developers can use profilers to spot inefficiencies in their code (for example, poor search algorithms); a short profiling example appears after this list
- at the database level, developers and DBAs can use database-specific profilers and query optimizers
- at the operating system level,
system engineers can use utilities such as top, vmstat, iostat (on
Unix-type systems) and PerfMon (on Windows) to monitor hardware
resources such as CPU, memory, swap, disk I/O; specialized kernel
monitoring software can also be used
- at the network level,
network engineers can use packet sniffers such as tcpdump, network
protocol analyzers such as Ethereal, and various utilities such as
netstat, MRTG, ntop, mii-tool
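To make the ramp-up loop concrete, here is a rough Python sketch that raises
the number of simulated concurrent users in steps and stops at the first load
level where the acceptable response time is exceeded. The URL, step size,
timeout, and choice of the 95th percentile are assumptions made for
illustration, not prescribed values.

```python
# Rough sketch of the ramp-up loop: keep adding simulated users until the
# response time budget is blown, which marks the first bottleneck to chase.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # assumption: the Web application under test

def timed_request(_):
    # Issue one HTTP request and return how long it took, in seconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def ramp_load(max_users=500, step=25, acceptable_s=2.0):
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            durations = list(pool.map(timed_request, range(users)))
        p95 = statistics.quantiles(durations, n=20)[-1]   # ~95th percentile
        print(f"{users:4d} users -> 95th percentile {p95:.2f}s")
        if p95 > acceptable_s:
            print("Acceptable response time exceeded; investigate this load level.")
            break

if __name__ == "__main__":
    ramp_load()
```

In a real test the load would come from a dedicated load-generation tool rather
than a thread pool on one machine, but the shape of the loop is the same:
increase load, measure, compare against the target, repeat.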
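As an example of the application-level profiling mentioned in the first item
above, the standard-library cProfile module can make a poor search algorithm
stand out. The data set and both search functions below are made up for the
illustration.

```python
# Illustrative profiling session: cProfile shows the linear scan consuming far
# more time than the binary search over the same (hypothetical) sorted data.
import bisect
import cProfile

HAYSTACK = sorted(range(0, 300_000, 3))   # a plain sorted list

def linear_search(needle):
    # O(n) membership test on a list -- the kind of inefficiency a profiler exposes.
    return needle in HAYSTACK

def binary_search(needle):
    # O(log n) alternative on the same sorted data.
    i = bisect.bisect_left(HAYSTACK, needle)
    return i < len(HAYSTACK) and HAYSTACK[i] == needle

cProfile.run("[linear_search(n) for n in range(0, 3000, 7)]")
cProfile.run("[binary_search(n) for n in range(0, 3000, 7)]")
```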
From a testing point of view,
the activities described above all take a white-box approach, where the
system is inspected and monitored "from the inside out" and from a
variety of angles. Measurements are taken and analyzed, and as a
result, tuning is done.