Topic: Principles of Software Testing |
Posted by Mithi25 (Senior Member) on 10Aug2009 at 11:55pm
Principles of Software Testing
THE EIGHT BASIC PRINCIPLES OF TESTING

1) DEFINE THE EXPECTED OUTPUT OR RESULT

More often than not, the tester approaches a test case without a set of predefined and expected results. The danger in this lies in the tendency of the eye to see what it wants to see. Without knowing the expected result, erroneous output can easily be overlooked. This problem can be avoided by carefully pre-defining all expected results for each of the test cases. Sounds obvious? You'd be surprised how many people miss this point while doing the self-assessment test.
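To make the principle concrete, here is a minimal sketch in Python with pytest. The compute_discount function and its expected values are hypothetical, invented purely for illustration; the point is that every case carries its expected result before the test ever runs.

import pytest

def compute_discount(price, code):
    # Hypothetical function under test.
    rates = {"SAVE10": 0.10, "SAVE25": 0.25}
    return round(price * (1 - rates.get(code, 0.0)), 2)

# Each case is written down with its predefined expected result,
# so the outcome is never judged by eye after the fact.
@pytest.mark.parametrize("price,code,expected", [
    (100.00, "SAVE10", 90.00),
    (100.00, "SAVE25", 75.00),
    (100.00, "BOGUS", 100.00),  # unknown code: no discount expected
])
def test_discount_matches_expected(price, code, expected):
    assert compute_discount(price, code) == expected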
2) DON'T TEST YOUR OWN PROGRAMS

The attitudinal problem is not the only consideration for this principle. System errors can be caused by an incomplete or faulty understanding of the original design specifications; it is likely that the programmer would carry these misunderstandings into the test phase.
3) INSPECT THE RESULTS OF EACH TEST COMPLETELY

As obvious as it sounds, this simple principle is often overlooked. In many test cases, an after-the-fact review of earlier test results shows that errors were present but overlooked because no one took the time to study the results.
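One way to honour this principle in code is to assert on the complete output rather than spot-checking a single field. A minimal sketch, assuming a hypothetical build_invoice function and invented field names:

def build_invoice(order):
    # Hypothetical function under test.
    subtotal = sum(item["price"] for item in order["items"])
    return {
        "customer": order["customer"],
        "subtotal": subtotal,
        "tax": round(subtotal * 0.08, 2),
    }

def test_invoice_checked_in_full():
    order = {"customer": "ACME", "items": [{"price": 50.0}, {"price": 25.0}]}
    # Asserting only invoice["subtotal"] could pass while the tax is wrong;
    # comparing the full structure inspects every result at once.
    assert build_invoice(order) == {
        "customer": "ACME",
        "subtotal": 75.0,
        "tax": 6.0,
    }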
4) INCLUDE TEST CASES FOR INVALID OR UNEXPECTED CONDITIONS

Programs already in production often cause errors when used in some new or novel fashion. This stems from the natural tendency to concentrate on valid and expected input conditions during a testing cycle. Using invalid or unexpected input conditions significantly increases the error detection rate.
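A minimal sketch of such negative cases in Python with pytest; parse_age and its validation rules are hypothetical:

import pytest

def parse_age(text):
    # Hypothetical function under test.
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

# Invalid and unexpected inputs get their own cases, not just happy paths.
@pytest.mark.parametrize("bad_input", ["", "abc", "-1", "999", "12.5"])
def test_parse_age_rejects_invalid_input(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)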
5) TEST THE PROGRAM TO SEE IF IT DOES WHAT IT IS NOT SUPPOSED TO DO AS WELL AS WHAT IT IS SUPPOSED TO DO

It's not enough to check if the test produced the expected output. New systems, and especially new modifications, often produce unintended side effects such as unwanted disk files or destroyed records. A thorough examination of data structures, reports, and other output can often show that a program is doing what it is not supposed to do and therefore still contains errors.
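A minimal sketch of testing for what the program must not do; archive_record and the record-store layout here are hypothetical:

def archive_record(store, key):
    # Hypothetical: move one record from 'active' to 'archived'.
    store["archived"][key] = store["active"].pop(key)

def test_archive_touches_only_the_target_record():
    store = {"active": {"a": 1, "b": 2}, "archived": {}}
    archive_record(store, "a")
    # What it is supposed to do:
    assert store["archived"] == {"a": 1}
    # What it is NOT supposed to do: destroy or alter unrelated records.
    assert store["active"] == {"b": 2}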
6) AVOID DISPOSABLE TEST CASES UNLESS THE PROGRAM ITSELF IS DISPOSABLE

Test cases should be documented so they can be reproduced. With a non-structured approach to testing, test cases are often created on-the-fly: the tester sits at a terminal, generates test input, and submits it to the program. The test data simply disappears when the test is complete.

Reproducible test cases become important later when a program is revised, whether due to the discovery of bugs or because the user requests new options. In such cases, the revised program can be put through the same extensive tests that were used for the original version. Without saved test cases, the temptation is strong to test only the logic handled by the modifications. This is unsatisfactory because changes which fix one problem often create a host of other, apparently unrelated problems elsewhere in the system. As considerable time and effort are spent in creating meaningful tests, tests which are not documented or cannot be duplicated should be avoided.
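A minimal sketch of non-disposable test cases: the inputs and expected outputs live in a version-controlled table that any future revision can re-run unchanged. normalize_phone and its cases are hypothetical; an external JSON or CSV file would serve equally well:

import pytest

def normalize_phone(raw):
    # Hypothetical function under test.
    digits = "".join(ch for ch in raw if ch.isdigit())
    return f"+1-{digits[:3]}-{digits[3:6]}-{digits[6:]}"

SAVED_CASES = [  # documented once, reproducible on every revision
    ("(555) 123-4567", "+1-555-123-4567"),
    ("555.123.4567", "+1-555-123-4567"),
    ("5551234567", "+1-555-123-4567"),
]

@pytest.mark.parametrize("raw,expected", SAVED_CASES)
def test_normalize_phone(raw, expected):
    assert normalize_phone(raw) == expected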
7) DO NOT PLAN TESTS ASSUMING THAT NO ERRORS WILL BE FOUND

Testing should be viewed as a process that locates errors, not one that proves the program works correctly. The reasons for this were discussed earlier.
8) THE PROBABILITY OF LOCATING MORE ERRORS IN ANY ONE MODULE IS DIRECTLY PROPORTIONAL TO THE NUMBER OF ERRORS ALREADY FOUND IN THAT MODULE

At first glance, this may seem surprising. However, it has been shown that if certain modules or sections of code contain a high number of errors, subsequent testing will discover more errors in that particular section than in other sections.

Consider a program that consists of two modules, A and B. If testing reveals five errors in module A and only one error in module B, module A will likely display more errors than module B in any subsequent tests. Why is this so? There is no definitive explanation, but it is probably because the error-prone module is inherently complex or was badly programmed. By identifying the most "bug-prone" modules, the tester can concentrate efforts there and achieve a higher rate of error detection than if all portions of the system were given equal attention.

Extensive testing of the system after modifications have been made is referred to as regression testing.