One Stop Testing Forum : Network & Distributed Systems Testing @ OneStopTesting
Topic: The test plan......... |
Riya (Newbie, joined 15 Feb 2007, 40 posts)
Posted: 19 Feb 2007 at 6:38pm
The test plan

Although frequently overlooked, the test plan is a crucial piece of the puzzle. Make sure it is spelled out in granular detail before testing begins. The test plan defines testing expectations not only for the development group but also for other managers in the organization. By establishing clear expectations before you get into heavy analysis, you avoid after-the-fact grumbling from developers who decide they needed other data points after all. The test plan also provides documentation for comparing future changes to the application.

Simply put, the test plan establishes the following:

* Application dimensions that will be tested, and why
* Configuration of the test architecture
* Testing procedure
* Metrics that will be collected
* Expected outcome(s)

Devote a section of your test plan to each of these points. You may also wish to include a section on testing prerequisites if you require special environment or application configurations. Note any risks that may be apparent in the testing process or the final results. When outlining your expected results, also detail the steps to be taken if the results differ significantly from the initial expectations.

The testing process

The primary goal of testing is to deliver a comprehensive, accurate view of system performance to the engineering organization. Decisions that might be made from collected test data include the following:

* Application configuration settings
* System architecture design decisions
* Coding practices
* Scalability and capacity planning

Performance measurements taken against a distributed computer system are not always consistent. A large number of variables deep inside operating systems and hardware implementations, beyond those at the application level, can affect performance, sometimes quite severely, even if only for very short periods of time. Make sure your test spans enough time to smooth these spikes out of your overall view.
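A quick way to check whether a run has smoothed out is to compute the mean and spread of the per-interval samples. Here is a minimal awk helper; the function name and input format are my own illustration, not from the article:

```shell
# avg: read one numeric sample per line (e.g. requests/sec per interval)
# from stdin and print "mean stddev". Sketch only.
avg() {
    awk '{ s += $1; ss += $1 * $1; n++ }
         END { m = s / n; printf "%.2f %.2f\n", m, sqrt(ss / n - m * m) }'
}
```

For example, piping the samples 100, 110, and 90 through avg reports a mean of 100.00 with a standard deviation of about 8.16; a spread that stays large relative to the mean suggests the run was too short.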
Generally, a measurement of performance at a constant load level should span several minutes, and a measurement of performance at a ranging load level should span several hours. Like the directions on a shampoo bottle, the testing process boils down to a few simple steps:

1. Initialize the test environment.
2. Run the test.
3. Gather the results.
4. Repeat the process.

You can roughly determine an average value and a margin of error with as few as five or six sets of results. Ideally, you should have three times that many data points to work with, but it is often hard to convince your boss that this much work is necessary for the sake of accuracy.

Shell scripts do the trick

As suggested in the previous article, you should automate as much of the process as possible to reduce operator error and save valuable time. Remote shell scripts are a good way to achieve a fair portion of test automation in a multiserver environment. With packages like Cygwin, a UNIX-like environment for Windows, you can even extend these scripts to Windows machines without too much of a headache. The following scripts are useful:

* Build automation: If you need to compare the performance of two application builds, a script that deploys and configures the entire application onto the appropriate machines in your test environment for a specific version number cuts the time spent on configuration.

* Remote monitoring: The best source of system metrics is the system itself. If a top-tier remote monitoring solution isn't financially feasible, consider a batch of scripts that start and stop monitoring tools such as vmstat, iostat, and netstat on the remote hosts. These scripts let you obtain fine-grained system data for free and with little effort. Even if you don't intend to use all of the data now, save it for future use.
* Log file archiving: When the test is over, a single script that collects data from the cluster of machines and archives the log files at a central location saves a round of manual copying.

While it may seem tedious, maintain detailed documentation, even just in a notepad or Word file, of how each test proceeded. When performance problems pop up, this documentation will prove invaluable during troubleshooting.
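The initialize/run/gather/repeat loop itself is easy to automate. The sketch below assumes hypothetical init_test, run_test, and gather_results hooks that you would fill in for your own environment; none of these names come from the article:

```shell
# run_series: repeat the initialize/run/gather cycle N times.
# init_test, run_test, and gather_results are placeholder hooks
# you must define yourself.
run_series() {
    n=$1
    i=1
    while [ "$i" -le "$n" ]; do
        init_test "$i"        # step 1: initialize the test environment
        run_test "$i"         # step 2: run the test
        gather_results "$i"   # step 3: gather the results
        i=$((i + 1))          # step 4: repeat
    done
}
```

Five or six iterations (run_series 6) give you the rough average and margin of error mentioned earlier.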
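A minimal sketch of the remote-monitoring and log-archiving scripts might look like the following. The host names, file paths, and the RSH/RCP indirection are illustrative assumptions, not details from the article:

```shell
# Remote monitoring and log archiving helpers (sketch).
# RSH/RCP default to ssh/scp; they are variables only so the transport
# can be swapped out (e.g. for rsh, or for testing).
RSH=${RSH:-ssh}
RCP=${RCP:-scp}

# Start vmstat on a remote host, logging samples to /tmp, and save its PID.
start_monitor() {
    $RSH "$1" "nohup vmstat 5 > /tmp/vmstat.log 2>&1 & echo \$! > /tmp/mon.pid"
}

# Stop the remote monitor and pull its log into a local logs/ directory.
stop_monitor() {
    $RSH "$1" 'kill "$(cat /tmp/mon.pid)"'
    mkdir -p logs
    $RCP "$1:/tmp/vmstat.log" "logs/vmstat.$1"
}

# Bundle everything collected under logs/ into one archive per test run.
archive_logs() {
    mkdir -p results
    tar -czf "results/$1.tar.gz" logs/
}
```

A test run might then be: start_monitor on each host, run the test, stop_monitor on each host, and finally archive_logs with a run label so results from different builds stay separate.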
© Vyom Technosoft Pvt. Ltd. All Rights Reserved.