Software Testing Dictionary
Acceptance Test. Formal tests (often performed by a customer) to determine whether or not a
system has satisfied predetermined acceptance criteria. These tests are
often used to enable the customer (either internal or external) to
determine whether or not to accept a system.
Ad Hoc Testing. Testing carried out using no recognised test case design technique. [BCS]
Alpha Testing Testing of a software product or system conducted at the developer’s site by the customer.
Assertion Testing. (NBS)
A dynamic analysis technique which inserts assertions about the
relationship between program variables into the program code. The truth
of the assertions is determined as the program executes.
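A minimal Python sketch of the idea, assuming a hypothetical transfer() routine whose invariant (the two balances must keep the same total) is written into the code as an assertion and checked while the program runs:

    # Assertion testing sketch: an assertion about the relationship between
    # program variables is embedded in the code and evaluated at run time.
    # The transfer() function and its invariant are hypothetical examples.
    def transfer(balance_a, balance_b, amount):
        total_before = balance_a + balance_b
        balance_a -= amount
        balance_b += amount
        # The total of the two balances must be preserved by a transfer.
        assert balance_a + balance_b == total_before, "invariant violated"
        return balance_a, balance_b

    if __name__ == "__main__":
        print(transfer(100, 50, 30))   # assertion holds while the program runs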
Automated Testing. Software testing that is assisted by software technology and does not
require operator (tester) input, analysis, or evaluation.
Background testing. The execution of normal functional testing while the SUT is
exercised by a realistic workload. This workload is processed
“in the background” as far as the functional testing is concerned.
[Load Testing Terminology by Scott Stirling]
Bug:
glitch, error, goof, slip, fault, blunder, boner, howler, oversight,
botch, delusion, elision [B. Beizer, 1990]; also defect, issue, problem.
Beta Testing. Testing conducted at one or more customer sites by the end-user of a delivered software product or system.
Benchmarks Programs that provide performance comparison for software, hardware, and systems.
Benchmarking
is a specific type of performance test with the purpose of determining
performance baselines for comparison. [Load Testing Terminology by
Scott Stirling]
Big-bang testing
Integration testing where no incremental testing takes place prior to
all the system’s components being combined to form the system.[BCS]
Black box testing.
A testing method where the application under test is viewed as a black
box and the internal behavior of the program is completely ignored.
Testing occurs based upon the external specifications. Also known as
behavioral testing, since only the external behaviors of the program
are evaluated and analyzed.
Boundary Value Analysis (BVA). BVA differs from
equivalence partitioning in that it focuses on “corner cases”, values
at or just outside the range defined by the specification. This
means that if a function expects all values in the range of negative 100 to
positive 1000, test inputs would include negative 101 and positive
1001. Because BVA derives such extreme values, it is often used as a technique for
stress, load or volume testing. This type of validation is usually
performed after positive functional validation has completed
(successfully) using requirements specifications and user documentation.
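A hedged illustration in Python, using a hypothetical process() function specified to accept integers from negative 100 to positive 1000; the test inputs sit on and just outside those boundaries:

    # Boundary value analysis sketch for a hypothetical process() function
    # specified to accept integers in the range -100 to +1000.
    import unittest

    def process(value):
        # Hypothetical implementation under test: reject anything out of range.
        if not -100 <= value <= 1000:
            raise ValueError("out of range")
        return value * 2

    class BoundaryValueTests(unittest.TestCase):
        def test_values_on_the_boundaries_are_accepted(self):
            for value in (-100, -99, 999, 1000):
                self.assertEqual(process(value), value * 2)

        def test_values_just_outside_the_boundaries_are_rejected(self):
            for value in (-101, 1001):
                with self.assertRaises(ValueError):
                    process(value)

    if __name__ == "__main__":
        unittest.main()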
Breadth test.
– A test suite that exercises the full scope of a system from a
top-down perspective, but does not test any aspect in detail [Dorothy
Graham, 1999]
Cause Effect Graphing.
(1) [NBS] Test data selection technique. The input and output domains
are partitioned into classes and analysis is performed to determine
which input classes cause which effect. A minimal set of inputs is
chosen which will cover the entire effect set. (2)A systematic method
of generating test cases representing combinations of conditions. See:
testing, functional.[G. Myers]
Clean test.
A test whose primary purpose is validation; that is, tests designed to
demonstrate the software's correct working (syn. positive test). [B.
Beizer 1995]
Code Inspection.
A manual [formal] testing [error detection] technique where the
programmer reads source code, statement by statement, to a group who
ask questions analyzing the program logic, analyzing the code with
respect to a checklist of historically common programming errors, and
analyzing its compliance with coding standards. Contrast with code
audit, code review, code walkthrough. This technique can also be
applied to other software and configuration items. [G.Myers/NBS] Syn:
Fagan Inspection
Code Walkthrough.
A manual testing [error detection] technique where program logic
[structure] is traced manually [mentally] by a group with a small set
of test cases, while the state of program variables is manually
monitored, to analyze the programmer’s logic and
assumptions.[G.Myers/NBS] Contrast with code audit, code inspection,
code review.
Coexistence Testing. Coexistence
isn’t enough. It also depends on load order, how virtual space is
mapped at the moment, hardware and software configurations, and the
history of what took place hours or days before. It’s probably an
exponentially hard problem rather than a square-law problem. [from
Quality Is Not The Goal. By Boris Beizer, Ph. D.]
Compatibility bug
A revision to the framework breaks a previously working feature: a new
feature is inconsistent with an old feature, or a new feature breaks an
unchanged application rebuilt with the new framework code. [R. V.
Binder, 1999]
Compatibility Testing.
The process of determining the ability of two or more systems to
exchange information. In a situation where the developed software
replaces an already working program, an investigation should be
conducted to assess possible compatibility problems between the new
software and other programs or systems.
Composability testing
–testing the ability of the interface to let users do more complex
tasks by combining different sequences of simpler, easy-to-learn tasks.
[Timothy Dyck, ‘Easy’ and other lies, eWEEK April 28, 2003]
Condition Coverage.
A test coverage criteria requiring enough test cases such that each
condition in a decision takes on all possible outcomes at least once,
and each point of entry to a program or subroutine is invoked at least
once. Contrast with branch coverage, decision coverage, multiple
condition coverage, path coverage, statement coverage.[G.Myers]
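A small Python sketch, with a hypothetical can_enter() decision containing two conditions; the two calls make each condition take both outcomes at least once (note that this alone does not guarantee decision coverage, since the decision is false in both calls):

    # Condition coverage sketch. The decision below contains two conditions,
    # (age >= 18) and (has_id); the inputs make each condition evaluate to
    # both True and False at least once. can_enter() is a hypothetical example.
    def can_enter(age, has_id):
        if age >= 18 and has_id:
            return True
        return False

    # (age >= 18): True in the first call, False in the second.
    # (has_id):    False in the first call, True in the second.
    # The decision itself is False both times, so condition coverage here
    # does not imply decision coverage.
    assert can_enter(21, False) is False
    assert can_enter(16, True) is False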
Conformance directed testing. Testing that seeks to establish conformance to requirements or specification. [R. V. Binder, 1999]
CRUD Testing. Build CRUD matrix and test all object creation, reads, updates, and deletion. [William E. Lewis, 2000]
Data-Driven testing
An automation approach in which the navigation and functionality of the
test script is directed through external data; this approach separates
test and control data from the test script. [Daniel J. Mosley, 2002]
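A minimal Python sketch of the approach; the login() function and the login test data (normally kept in an external file such as a CSV) are hypothetical, and an in-memory sample is used so the sketch runs as-is:

    # Data-driven testing sketch: the test logic is fixed, while inputs and
    # expected results come from external data separated from the script.
    import csv
    import io

    # Stand-in for an external file such as login_cases.csv (hypothetical).
    TEST_DATA = io.StringIO(
        "user,password,expected\n"
        "admin,secret,True\n"
        "admin,wrong,False\n"
        "guest,secret,False\n"
    )

    def login(user, password):
        # Hypothetical system under test.
        return user == "admin" and password == "secret"

    def run_data_driven_tests(source):
        for row in csv.DictReader(source):
            expected = row["expected"] == "True"
            actual = login(row["user"], row["password"])
            print("PASS" if actual == expected else "FAIL", dict(row))

    run_data_driven_tests(TEST_DATA)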
Data flow testing Testing in which test cases are designed based on variable usage within the code.[BCS]
Database testing. Check the integrity of database field values. [William E. Lewis, 2000]
Defect
The difference between the functional specification (including user
documentation) and actual program text (source code and data). Often
reported as a problem and stored in a defect-tracking and
problem-management system.
Defect
Also called a fault or a bug, a defect is an incorrect part of code
that is caused by an error. An error of commission causes a defect of
wrong or extra code. An error of omission results in a defect of
missing code. A defect may cause one or more failures.[Robert M.
Poston, 1996.]
Defect.
A flaw in the software with the potential to cause a failure. [Systematic
Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
Defect Age.
A measurement that describes the period of time from the introduction
of a defect until its discovery. [Systematic Software Testing by Rick
D. Craig and Stefan P. Jaskiel 2002]
Defect Density.
A metric that compares the number of defects to a measure of size
(e.g., defects per KLOC). Often used as a measure of software quality.
[Systematic Software Testing by Rick D. Craig and Stefan P. Jaskiel
2002]
Defect Discovery Rate. A
metric describing the number of defects discovered over a specified
period of time, usually displayed in graphical form. [Systematic
Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
Defect Removal Efficiency (DRE). A
measure of the number of defects discovered in an activity versus the
number that could have been found. Often used as a measure of test
effectiveness. [Systematic Software Testing by Rick D. Craig and Stefan
P. Jaskiel 2002]
Defect Seeding. The
process of intentionally adding known defects to those already in a
computer program for the purpose of monitoring the rate of detection
and removal, and estimating the number of defects still remaining. Also
called Error Seeding. [Systematic Software Testing by Rick D. Craig and
Stefan P. Jaskiel 2002]
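A worked Python sketch of one common way to use the seeding counts; the simple ratio estimator below is an assumption for illustration, not part of the definition above:

    # Defect seeding estimate sketch, using the counts of seeded and native
    # defects found during testing. Assumed estimator (not from the source):
    #   estimated native defects ~= seeded_total * native_found / seeded_found
    def estimate_total_native_defects(seeded_total, seeded_found, native_found):
        if seeded_found == 0:
            raise ValueError("no seeded defects found; estimate is undefined")
        return seeded_total * native_found / seeded_found

    # Example: 20 defects seeded, 15 of them found, 45 native defects found.
    estimated = estimate_total_native_defects(20, 15, 45)   # -> 60.0
    remaining = estimated - 45                              # -> 15.0 still latent
    print(estimated, remaining)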
Defect Masked. An
existing defect that hasn’t yet caused a failure because another defect
has prevented that part of the code from being executed. [Systematic
Software Testing by Rick D. Craig and Stefan P. Jaskiel 2002]
Depth test. A test case that exercises some part of a system to a significant level of detail. [Dorothy Graham, 1999]
Decision Coverage.
A test coverage criteria requiring enough test cases such that each
decision has a true and false result at least once, and that each
statement is executed at least once. Syn: branch coverage. Contrast
with condition coverage, multiple condition coverage, path coverage,
statement coverage.[G.Myers]
Dirty testing. Negative testing. [Beizer]
Dynamic testing. Testing, based on specific test cases, by execution of the test object or running programs [Tim Koomen, 1999]
End-to-End testing.
Similar to system testing; the ‘macro’ end of the test scale; involves
testing of a complete application environment in a situation that
mimics real-world use, such as interacting with a database, using
network communications, or interacting with other hardware,
applications, or systems if appropriate.
Equivalence Partitioning:
An approach where classes of inputs are categorized for product or
function validation. This usually does not include combinations of
input, but rather a single representative value from each class. For example,
with a given function there may be several classes of input that may be
used for positive testing. If a function expects an integer and receives
an integer as input, this is considered a positive test
assertion. On the other hand, if a character or any other input class
other than integer is provided, this is considered a negative
test assertion or condition.
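A brief Python illustration, assuming a hypothetical set_age() function that accepts integer ages from 0 to 130; one representative value is tested per equivalence class:

    # Equivalence partitioning sketch: inputs are grouped into classes
    # (valid integers in range, integers out of range, non-integers) and
    # one representative value is exercised per class. set_age() is
    # a hypothetical system under test.
    def set_age(value):
        if not isinstance(value, int):
            raise TypeError("age must be an integer")
        if not 0 <= value <= 130:
            raise ValueError("age out of range")
        return value

    assert set_age(35) == 35            # valid class: integer within range
    try:
        set_age(200)                    # invalid class: integer out of range
    except ValueError:
        pass
    try:
        set_age("thirty")               # invalid class: non-integer input
    except TypeError:
        pass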
Error:
An error is a mistake of commission or omission that a person makes. An
error causes a defect. In software development one error may cause one
or more defects in requirements, designs, programs, or tests.[Robert M.
Poston, 1996.]
Errors:
The amount by which a result is incorrect. Mistakes are usually a
result of a human action. Human mistakes (errors) often result in
faults contained in the source code, specification, documentation, or
other product deliverable. Once a fault is encountered, the end result
will be a program failure. The failure usually has some margin of
error, either high, medium, or low.
Error Guessing:
Another common approach to black-box validation. Black-box testing is
when everything other than the source code may be used for
testing. This is the most common approach to testing. Error guessing is
when random inputs or conditions are used for testing. Random in this
case includes values produced by a computerized random number
generator, or ad hoc values or test conditions provided by an engineer.
Error guessing.
A test case design technique where the experience of the tester is used
to postulate what faults exist, and to design tests specially to expose
them [from BS7925-1]
Error seeding.
The purposeful introduction of faults into a program to test the
effectiveness of a test suite or other quality assurance program. [R.
V. Binder, 1999]
Exception Testing. Identify error messages and exception-handling processes and the conditions that trigger them. [William E. Lewis, 2000]
Exhaustive Testing.(NBS) Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
Exploratory Testing:
An interactive process of concurrent product exploration, test design,
and test execution. The heart of exploratory testing can be stated
simply: The outcome of this test influences the design of the next
test. [James Bach]
Failure:
A failure is a deviation from expectations exhibited by software and
observed as a set of symptoms by a tester or user. A failure is caused
by one or more defects. The Causal Trail. A person makes an error that
causes a defect that causes a failure.[Robert M. Poston, 1996]
Follow-up testing.
We vary a test that yielded a less-than-spectacular failure. We vary the
operation, data, or environment, asking whether the underlying fault in
the code can yield a more serious failure or a failure under a broader
range of circumstances. [Measuring the Effectiveness of Software
Testers, Cem Kaner, STAR East 2003]
Formal Testing. (IEEE)
Testing conducted in accordance with test plans and procedures that
have been reviewed and approved by a customer, user, or designated
level of management. Antonym: informal testing.
Free Form Testing. Ad hoc or brainstorming using intuition to define test cases. [William E. Lewis, 2000]
Functional Decomposition Approach.
An automation method in which the test cases are reduced to fundamental
tasks, navigation, functional tests, data verification, and return
navigation; also known as Framework Driven Approach. [Daniel J. Mosley,
2002]
Functional testing
Application of test data derived from the specified functional
requirements without regard to the final program structure. Also known
as black-box testing.
Gray box testing
Tests involving inputs and outputs, but test design is educated by
information about the code or the program operation of a kind that
would normally be out of scope of view of the tester.[Cem Kaner]
Gray box testing
Test designed based on the knowledge of algorithm, internal states,
architectures, or other high-level descriptions of the program
behavior. [Doug Hoffman]
Gray box testing
Examines the activity of back-end components during test case
execution. Two types of problems that can be encountered during
gray-box testing are:
– A component encounters a failure of some kind, causing the
operation to be aborted. The user interface will typically indicate
that an error has occurred.
– The test executes in full, but the content of the results is
incorrect. Somewhere in the system, a component processed data
incorrectly, causing the error in the results.
[Elfriede Dustin. "Quality Web Systems: Performance, Security & Usability."]
High-level tests. These tests involve testing whole, complete products [Kit, 1995]
Inspection A
formal evaluation technique in which software requirements, design, or
code are examined in detail by a person or group other than the author to
detect faults, violations of development standards, and other problems
[IEEE94]. A quality improvement process for written material that
consists of two dominant components: product (document) improvement and
process improvement (document production and inspection).
Integration The process of combining software components or hardware components or both into an overall system.
Integration testing
– testing of combined parts of an application to determine if they
function together correctly. The ‘parts’ can be code modules,
individual applications, client and server applications on a network,
etc. This type of testing is especially relevant to client/server and
distributed systems.
Integration Testing.
Testing conducted after unit and feature testing. The intent is to
expose faults in the interactions between software modules and
functions. Either top-down or bottom-up approaches can be used. A
bottom-up method is preferred, since it leads to earlier unit testing
(step-level integration). This method is contrary to the big-bang
approach, where all source modules are combined and tested in one step.
The big-bang approach to integration should be discouraged.
Interface Tests
Programs that provide test facilities for external interfaces and
function calls. Simulation is often used to test external interfaces
that currently may not be available for testing or are difficult to
control. For example, hardware resources such as hard disks and memory
may be difficult to control. Therefore, simulation can provide the
characteristics or behaviors for a specific function.
Internationalization testing (I18N)
– testing related to handling foreign text and data within the program.
This would include sorting, importing and exporting text and data,
correct handling of currency and date and time formats, string parsing,
upper and lower case handling and so forth. [Clinton De Young, 2003].
Interoperability Testing.
Testing that measures the ability of your software to communicate across the
network on multiple machines from multiple vendors, each of whom may
have interpreted a design specification critical to your success
differently.
Inter-operability Testing.
True inter-operability testing concerns testing for unforeseen
interactions with other packages with which your software has no direct
connection. In some quarters, inter-operability testing labor equals
all other testing combined. This is the kind of testing that I say
shouldn’t be done because it can’t be done. [from Quality Is Not The
Goal. By Boris Beizer, Ph. D.]
Latent bug A bug that has been dormant (unobserved) in two or more releases. [R. V. Binder, 1999]
Lateral testing. A test design technique based on lateral thinking principles, to identify faults. [Dorothy Graham, 1999]
Load testing Testing
an application under heavy loads, such as testing of a web site under a
range of loads to determine at what point the system’s response time
degrades or fails.
Load-stress test. A test designed to determine how heavy a load the application can handle.
Load-stability test. A test designed to determine whether a Web application will remain serviceable over an extended time span.
Load-isolation test.
The workload for this type of test is designed to contain only the
subset of test cases that caused the problem in previous testing.
Master Test Planning.
An activity undertaken to orchestrate the testing effort across levels
and organizations.[Systematic Software Testing by Rick D. Craig and
Stefan P. Jaskiel 2002]
Monkey Testing (smart monkey testing). Inputs are generated from probability distributions that
reflect actual expected usage statistics, e.g., from user profiles.
There are different levels of IQ in smart monkey testing. In the
simplest, each input is considered independent of the other inputs.
For example, a given test might require an input vector with five components:
in low IQ testing, these are generated independently, while in high IQ
monkey testing the correlation (e.g., the covariance) between these
input distributions is taken into account. In all branches of smart
monkey testing, the input is considered as a single event.
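A simplified Python sketch of low-IQ smart monkey testing, where each input is drawn independently from an assumed usage distribution; the actions, weights, and handle_action() stub are hypothetical:

    # Smart monkey testing sketch: inputs are drawn at random, but from a
    # distribution that reflects an assumed user profile rather than uniformly.
    import random

    ACTIONS = ["browse", "search", "add_to_cart", "checkout"]
    WEIGHTS = [0.55, 0.30, 0.10, 0.05]   # assumed usage statistics (user profile)

    def handle_action(action):
        # Hypothetical system under test; a real harness would drive the UI or API.
        return f"handled {action}"

    def run_monkey_test(iterations=1000, seed=42):
        random.seed(seed)                # fixed seed so failures can be reproduced
        for _ in range(iterations):
            action = random.choices(ACTIONS, weights=WEIGHTS, k=1)[0]
            handle_action(action)

    if __name__ == "__main__":
        run_monkey_test()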
Maximum Simultaneous Connection testing. This is a test performed to determine the number of connections which the firewall or Web server is capable of handling.
Mutation testing.
A testing strategy where small variations to a program are inserted (a
mutant), followed by execution of an existing test suite. If the test
suite detects the mutant, the mutant is "retired." If undetected,
the test suite must be revised. [R. V.
Binder, 1999]
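A hand-rolled Python illustration of the idea (real mutation tools generate and execute mutants automatically); the discounted() function and its mutant are hypothetical:

    # Mutation testing sketch: a small change (mutant) is introduced into the
    # code under test; if the existing test suite fails against the mutant,
    # the mutant is detected ("killed").
    def discounted(amount):          # original: 10% discount from 100 upward
        return amount * 0.9 if amount >= 100 else amount

    def discounted_mutant(amount):   # mutant: ">=" changed to ">"
        return amount * 0.9 if amount > 100 else amount

    def test_suite(fn):
        # Existing tests; returns True when every assertion passes.
        try:
            assert fn(50) == 50
            assert fn(100) == 90.0   # boundary case; distinguishes the mutant
            assert fn(200) == 180.0
            return True
        except AssertionError:
            return False

    print("original passes:", test_suite(discounted))           # True
    print("mutant killed:", not test_suite(discounted_mutant))  # True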
Multiple Condition Coverage.
A test coverage criteria which requires enough test cases such that all
possible combinations of condition outcomes in each decision, and all
points of entry, are invoked at least once.[G.Myers] Contrast with
branch coverage, condition coverage, decision coverage, path coverage,
statement coverage.
Negative test. A test whose primary purpose is falsification; that is, tests designed to break the software. [B. Beizer, 1995]
Orthogonal array testing:
A technique that can be used to reduce the number of combinations and provide
maximum coverage with a minimum number of test cases. Note that
it is an old and proven technique: OAT was introduced for the
first time by Plackett and Burman in 1946 and was implemented by G.
Taguchi in 1987.
Orthogonal array testing: Mathematical technique to determine which variations of parameters need to be tested. [William E. Lewis, 2000]
Oracle.
Test Oracle: a mechanism to produce the predicted outcomes to compare
with the actual outcomes of the software under test. [from BS7925-1]
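A small Python sketch in which a trusted (if slow) reference implementation serves as the oracle that predicts outcomes for a hypothetical sort_under_test() function:

    # Test oracle sketch: a trusted reference implementation produces the
    # predicted outcomes that are compared with the actual outcomes of the
    # software under test. Both functions here are illustrative.
    def sort_under_test(values):
        # Hypothetical implementation under test.
        return sorted(values)

    def oracle(values):
        # Trusted reference: a simple insertion sort used only to predict outcomes.
        result = []
        for v in values:
            i = 0
            while i < len(result) and result[i] <= v:
                i += 1
            result.insert(i, v)
        return result

    for case in ([3, 1, 2], [], [5, 5, 1], [-1, 0, -2]):
        assert sort_under_test(case) == oracle(case), f"mismatch for {case}"
    print("all oracle comparisons passed")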
Parallel Testing
Testing a new or an alternate data processing system with the same
source data that is used in another system. The other system is
considered as the standard of comparison. Syn: parallel run.[ISO]
Penetration testing The process of attacking a host from outside to ascertain remote security vulnerabilities.
Performance Testing. Testing conducted to evaluate the compliance of a system or component with specific performance requirements [BS7925-1]
Performance testing
can be undertaken to: 1) show that the system meets specified
performance objectives, 2) tune the system, 3) determine the factors in
hardware or software that limit the system’s performance, and 4)
project the system’s future load-handling capacity in order to
schedule its replacements [Software System Testing and Quality
Assurance. Beizer, 1984, p. 256]
Preventive Testing
Building test cases based upon the requirements specification prior to
the creation of the code, with the express purpose of validating the
requirements [Systematic Software Testing by Rick D. Craig and Stefan
P. Jaskiel 2002]
Prior Defect History Testing. Test cases are created or rerun for every defect found in prior tests of the system. [William E. Lewis, 2000]
Qualification Testing. (IEEE)
Formal testing, usually conducted by the developer for the consumer, to
demonstrate that the software meets its specified requirements. See:
acceptance testing.
Quality. The degree to which a program possesses a desired combination of attributes that enable it to perform its specified end use.
Quality Assurance (QA)
Consists of planning, coordinating and other strategic activities
associated with measuring product quality against external requirements
and specifications (process-related activities).
Quality Control (QC) Consists of monitoring, controlling and other tactical activities associated with the measurement of product quality goals.
Our definition of Quality:
Achieving the target (not conformance to requirements as used by many
authors) & minimizing the variability of the system under test
Race condition defect.
Many concurrent defects result from data-race conditions. A data-race
condition may be defined as two accesses to a shared variable, at least
one of which is a write, with no mechanism used by either to prevent
simultaneous access. However, not all race conditions are defects.
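A minimal Python sketch of a data race on a shared counter, and of the lock that prevents simultaneous access; whether the unsafe version actually loses updates on a given run depends on thread scheduling:

    # Data-race sketch: two threads perform a read-modify-write on a shared
    # variable with no synchronization, so updates may be lost. Guarding the
    # same access with a Lock prevents simultaneous access.
    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(times):
        global counter
        for _ in range(times):
            value = counter        # read ...
            counter = value + 1    # ... then write; another thread may interleave here

    def safe_increment(times):
        global counter
        for _ in range(times):
            with lock:             # only one thread at a time may read and write
                counter += 1

    def run(worker, times=100_000):
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(times,)) for _ in range(2)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("unsafe total (may be < 200000 if the race occurs):", run(unsafe_increment))
    print("safe total (always 200000):", run(safe_increment))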
Recovery testing. Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing.
Testing conducted for the purpose of evaluating whether or not a change
to the system (all CM items) has introduced a new failure. Regression
testing is often accomplished through the construction, execution and
analysis of product and system tests.
Regression Testing.
– testing that is performed after making a functional improvement or
repair to the program. Its purpose is to determine if the change has
regressed other aspects of the program [Glenford J.Myers, 1979]
Reengineering. The
process of examining and altering an existing system to reconstitute it
in a new form. May include reverse engineering (analyzing a system and
producing a representation at a higher level of abstraction, such as
design from code), restructuring (transforming a system from one
representation to another at the same level of abstraction),
redocumentation (analyzing a system and producing user and support
documentation), forward engineering (using software products derived
from an existing system, together with new requirements, to produce a
new system), and translation (transforming source code from one
language to another or from one version of a language to another).
Reference testing.
A way of deriving expected outcomes by manually validating a set of
actual outcomes. A less rigorous alternative to predicting expected
outcomes in advance of test execution. [Dorothy Graham, 1999]
Reliability testing. Verify the probability of failure free operation of a computer program in a specified environment for a specified time.
Reliability
of an object is defined as the probability that it will not fail under
specified conditions, over a period of time. The specified conditions
are usually taken to be fixed, while the time is taken as an
independent variable. Thus reliability is often written R(t) as a
function of time t, the probability that the object will not fail
within time t.
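A worked example in Python under an assumed constant failure rate; the exponential model R(t) = exp(-lambda * t) is a common simplification and an assumption here, not implied by the definition above:

    # Reliability sketch under an assumed constant failure rate (exponential
    # model). The failure rate used is an illustrative assumption.
    import math

    def reliability(t, failure_rate):
        """Probability that the object does not fail within time t."""
        return math.exp(-failure_rate * t)

    failure_rate = 0.002           # assumed failures per hour
    for t in (100, 500, 1000):
        print(f"R({t} h) = {reliability(t, failure_rate):.3f}")
    # R(100 h) ~= 0.819, R(500 h) ~= 0.368, R(1000 h) ~= 0.135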
Any
computer user would probably agree that most software is flawed, and
the evidence for this is that it does fail. All software flaws are
designed in — the software does not break, rather it was always broken.
But unless conditions are right to excite the flaw, it will go
unnoticed — the software will appear to work properly. [Professor Dick
Hamlet. Ph.D.]
Range Testing. For each input, identify the range over which the system behavior should be the same. [William E. Lewis, 2000]
Risk management. An
organized process to identify what can go wrong, to quantify and assess
associated risks, and to implement/control the appropriate approach for
preventing or handling each risk identified.
Robust test.
A test that compares a small amount of information, so that unexpected
side effects are less likely to affect whether the test passes or
fails. [Dorothy Graham, 1999]
Sanity Testing
– typically an initial testing effort to determine if a new software
version is performing well enough to accept it for a major testing
effort. For example, if the new software is often crashing systems,
bogging down systems to a crawl, or destroying databases, the software
may not be in a ’sane’ enough condition to warrant further testing in
its current state.
Scalability testing
is a subtype of performance test where performance requirements for
response time, throughput, and/or utilization are tested as load on the
SUT is increased over time. [Load Testing Terminology by Scott Stirling
]
Sensitive test.
A test that compares a large amount of information, so that it is more
likely to detect unexpected differences between the actual and expected
outcomes of the test. [Dorothy Graham, 1999]
Skim Testing A
testing technique used to determine the fitness of a new build or
release of an AUT to undergo further, more thorough testing. In
essence, a “pretest” activity that could form one of the acceptance
criteria for receiving the AUT for testing [Testing IT: An
Off-the-Shelf Software Testing Process by John Watkins]
Smoke test
describes an initial set of tests that determine whether a new version of
an application performs well enough for further testing. [Louise Tamres,
2002]
Specification-based test. A test whose inputs are derived from a specification.
Spike testing.
Testing performance or recovery behavior when the system under test
(SUT) is stressed with a sudden and sharp increase in load; this should be
considered a type of load test. [Load Testing Terminology by Scott
Stirling]
STEP (Systematic Test and Evaluation Process) Software Quality Engineering’s copyrighted testing methodology.
State-based testing Testing with test cases developed by modeling the system under test as a state machine [R. V. Binder, 1999]
State Transition Testing.
Technique in which the states of a system are first identified and then
test cases are written to test the triggers that cause a transition from
one state to another. [William E. Lewis, 2000]
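A compact Python sketch: a hypothetical order-processing state machine and test cases that exercise its triggers, including an invalid transition:

    # State transition testing sketch: the system is modeled as a state machine
    # and test cases exercise the triggers that move it between states.
    # The order-processing states and events are hypothetical.
    TRANSITIONS = {
        ("new", "pay"): "paid",
        ("paid", "ship"): "shipped",
        ("new", "cancel"): "cancelled",
        ("paid", "cancel"): "cancelled",
    }

    def next_state(state, event):
        try:
            return TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"invalid transition: {event!r} in state {state!r}")

    # Test cases derived from the model: each valid trigger, plus an invalid one.
    assert next_state("new", "pay") == "paid"
    assert next_state("paid", "ship") == "shipped"
    assert next_state("new", "cancel") == "cancelled"
    try:
        next_state("shipped", "pay")      # no such transition in the model
    except ValueError:
        pass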
Static testing. Source code analysis: analysis of source code to expose potential defects without executing the program.
Statistical testing.
A test case design technique in which a model of the
statistical distribution of the input is used to construct representative test
cases. [BCS]
Stealth bug. A bug that removes information useful for its diagnosis and correction. [R. V. Binder, 1999]
Storage test.
Study how memory and space are used by the program, either in resident
memory or on disk. If there are limits on these amounts, storage tests
attempt to prove that the program will exceed them. [Cem Kaner, 1999,
p55]
Stress / Load / Volume test.
Tests that exercise the system with a high degree of activity, for example
using boundary conditions as inputs or running multiple copies of a program
in parallel.
Structural Testing.
(1)(IEEE) Testing that takes into account the internal mechanism
[structure] of a system or component. Types include branch testing,
path testing, statement testing. (2) Testing to ensure each program
statement is made to execute during testing and that each program
statement performs its intended function. Contrast with functional
testing. Syn: white-box testing, glass-box testing, logic driven
testing.
System testing Black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
Table testing. Test access, security, and data integrity of table entries. [William E. Lewis, 2000]
Test Bed.
An environment containing the hardware, instrumentation, simulators,
software tools, and other support elements needed to conduct a test
[IEEE 610].
Test Case. A set of test inputs, executions, and expected results developed for a particular objective.
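A minimal Python (unittest) sketch of a single test case with its inputs, execution step, and expected result; the add() function is a hypothetical object under test:

    # Test case sketch: a set of test inputs, an execution, and an expected
    # result developed for one particular objective.
    import unittest

    def add(a, b):
        return a + b

    class AddTwoPositiveIntegers(unittest.TestCase):
        def test_add_two_positive_integers(self):
            # Inputs: 2 and 3. Execution: call add(). Expected result: 5.
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()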