Topic: AUTOMATED TESTING -- WHY AND HOW
rose (Moderator)
Posted: 08Feb2007 at 4:03pm
AUTOMATED TESTING - WHY AND HOW

Everyone knows how important testing is, and, with luck, everyone actually does test the software that they release. But do they really? Can they? Even a simple program often has many different possible behaviors, some of which only take place in rather unusual (and hard to duplicate) circumstances. Even if every possible behavior was tested when the program was first released to the users, what about the second release, or even a "minor" modification? The feature being modified will probably be re-tested, but what about other, seemingly unrelated, features that may have been inadvertently broken by the modification? Will every unusual test case from the first release's testing be remembered, much less retried, for the new release, especially if retrying the test would require a lot of preliminary work (e.g. adding appropriate test records to the database)?
This problem arose for us several years ago, when we found that our software was getting so complicated that testing everything before release was a real chore, and a good many bugs (some of them very obvious) were getting out into the field. What's more, I found that I was actually afraid to add new features, concerned that they might break the rest of the software. It was this last problem that really drove home to me the importance of making it possible to quickly and easily test all the features of all our products.

AUTOMATED TESTING

The principle of automated testing is that there is a program (which could be a job stream) that runs the program being tested, feeding it the proper input and checking the output against the output that was expected. Once the test suite is written, no human intervention is needed, either to run the program or to look to see if it worked; the test suite does all that, and somehow indicates (say, by a :TELL message and a results file) whether the program's output was as expected. We, for instance, have over two hundred test suites, all of which can be run overnight by executing one job stream submission command; after they run, another command can show which test suites succeeded and which failed.

These test suites can help in many ways:

* As discussed above, the test suites should always be run before a new version is released, no matter how trivial the modifications to the program.
* If the software is internally different for different environments (e.g. MPE/V vs. MPE/XL), but should have the same external behavior, the test suites should be run in both environments.
* As you're making serious changes to the software, you might want to run the test suites even before the release, since they can tell you what still needs to be fixed.
* If you have the discipline to -- believe it or not -- write the test suite before you've written your program, you can even use the test suite to do the initial testing of your code. After all, you'd have to initially test the code anyway; you might as well use your test suites to do that initial testing as well as all subsequent tests.

Note also that the test suites not only run the program, but set up the proper environment for the program; this might mean filling up a test database, building necessary files, etc.
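The post describes this in terms of MPE job streams and :TELL messages, but the idea carries over to any environment. As a rough illustration only, here is a minimal harness sketch in Python; the program name (./myprog), the tests/ directory layout, and the input.txt / expected.txt file names are assumptions made for this example, not part of the original setup.

#!/usr/bin/env python3
"""Minimal automated-test harness sketch.

Assumed layout: each test case is a directory under tests/ containing
input.txt (fed to the program on stdin) and expected.txt (the output we
expect). PROGRAM is the command for the program under test.
"""
import subprocess
import sys
from pathlib import Path

PROGRAM = ["./myprog"]          # hypothetical program under test
TESTS_DIR = Path("tests")       # hypothetical test-case layout
RESULTS_FILE = Path("results.txt")

def run_case(case: Path) -> bool:
    """Run one test case and return True if the actual output matched."""
    stdin_data = (case / "input.txt").read_text()
    expected = (case / "expected.txt").read_text()
    proc = subprocess.run(PROGRAM, input=stdin_data,
                          capture_output=True, text=True)
    return proc.stdout == expected

def main() -> int:
    # A real suite would first set up the environment here (load a test
    # database, build scratch files, etc.), as the post describes.
    failures = []
    with RESULTS_FILE.open("w") as results:
        for case in sorted(TESTS_DIR.iterdir()):
            if not case.is_dir():
                continue
            ok = run_case(case)
            results.write(f"{case.name}: {'PASS' if ok else 'FAIL'}\n")
            if not ok:
                failures.append(case.name)
    # Summarize, the way the post's :TELL message and results file would.
    print(f"{len(failures)} test case(s) failed" if failures else "all test cases passed")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())

Running the script once after every change plays the role of the overnight job stream: it exercises every recorded case and leaves a results file you can scan for failures.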
rgds,
rose
Edited by rose - 08Feb2007 at 4:07pm