Testing Directory
Category: Software Testing @ OneStopTesting
Forum Name: Beginners @ OneStopTesting
Forum Description: New to the Club...!!! Don't Worry, We are here for you...!!! Learn the very basics of Software Testing and other pertinent information.
URL: http://forum.onestoptesting.com/forum_posts.asp?TID=6659
Topic: Testing Directory
Posted By: Mithi25
Subject: Testing Directory
Date Posted: 23Jul2009 at 10:59pm
Acceptance Criteria - Exit criteria that a component, system, or application must satisfy in order to be accepted by an end user, customer, or other authorized entity.
Acceptance Testing:
Testing conducted to enable a user/customer to determine whether to
accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (for example, visual, hearing, or cognitive impairments).
Accuracy - The capability of a software product or application to provide the agreed results or effects with the specified degree of precision.
Actual Result - Behavior produced or observed when a component or system is tested.
Ad Hoc Testing:
A testing phase where the tester tries to break the system by randomly
trying the system's functionality. Can include negative testing as
well. See also Monkey Testing.
Adaptability - The capability of a software product to be adapted for different specified environments without applying actions other than those provided for this purpose for the software considered.
Agile Testing:
Testing practice for projects using agile methodologies, treating
development as the customer of testing and emphasizing a test-first
design paradigm. See also Test Driven Development.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Alpha Testing - Operational testing by intended users/customers or an independent test team at the developers' site.
Branch Testing: Testing in which all branches in the program source code are tested at least once.
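To make branch testing concrete, here is a minimal sketch (a hypothetical shipping_cost function with pytest-style tests; none of these names come from the post) in which two test cases together exercise both branches of a single if statement:

# Code under test (hypothetical example for illustration only).
def shipping_cost(order_total: float) -> float:
    # Orders of 50 or more ship free; smaller orders pay a flat 5.99 fee.
    if order_total >= 50:      # branch taken when the condition is true
        return 0.0
    return 5.99                # branch taken when the condition is false

# One test per branch: together they exercise both outcomes of the condition.
def test_free_shipping_branch():
    assert shipping_cost(75.00) == 0.0

def test_flat_fee_branch():
    assert shipping_cost(10.00) == 5.99

Running both tests (for example with pytest) executes the true and the false outcome of the condition, which is exactly what branch testing asks for.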
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
Business Process Based Testing -
An approach to software testing in which test cases are designed based
on descriptions and knowledge of business processes and components.
CAST: Computer Aided Software Testing.
Capture/Replay Tool:
A test tool that records test input as it is sent to the software under
test. The input cases stored can then be used to reproduce the test at
a later time. Most commonly applied to GUI test tools.
CMM:
The Capability Maturity Model for Software (CMM or SW-CMM) is a model
for judging the maturity of the software processes of an organization
and for identifying the key practices that are required to increase the
maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and their associated output effects, which can be used to design test cases.
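As a rough illustration of turning cause-effect relationships into test cases, consider a hypothetical login rule (not from the post): the causes are "username is registered" and "password matches", and the effects are "access granted" or "error shown". Each combination of causes becomes one test case:

# Hypothetical system under test: grants access only when both causes hold.
def login(username_registered: bool, password_matches: bool) -> str:
    if username_registered and password_matches:
        return "access granted"
    return "error shown"

# Each row of the cause-effect table (C1, C2, expected effect) is one test case.
CAUSE_EFFECT_TABLE = [
    (True,  True,  "access granted"),
    (True,  False, "error shown"),
    (False, True,  "error shown"),
    (False, False, "error shown"),
]

def test_cause_effect_combinations():
    for c1, c2, expected in CAUSE_EFFECT_TABLE:
        assert login(c1, c2) == expected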
Changeability - capability of the software or system to enable specified modifications to be implemented.
Classification Tree Method - A black-box test design technique in which test cases, described by means of a classification tree, are designed to execute combinations of representatives of input and output domains.
Code Complete: Phase
of development where functionality is implemented in entirety; bug
fixes are all that are left. All functions found in the Functional
Specifications have been implemented.
Code Coverage:
An analysis method that determines which parts of the software have
been executed (covered) by the test case suite and which parts have not
been executed and therefore may require additional attention.
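As a sketch of what a coverage analysis reports, assume the third-party coverage.py tool and the hypothetical module below; because the test suite only ever passes is_member=True, the non-member branch would show up as unexecuted:

# discount.py -- hypothetical code under test.
def discount(price: float, is_member: bool) -> float:
    if is_member:
        return price * 0.9   # executed by the test below
    return price             # never executed: flagged by the coverage report

# test_discount.py -- this suite covers only the member branch.
def test_member_discount():
    assert discount(100.0, is_member=True) == 90.0

# Typical invocation (assuming coverage.py is installed):
#   coverage run -m pytest test_discount.py
#   coverage report -m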
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code against a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Path Testing: Testing in which all paths in the program source code are tested at least once.
Penetration Testing: Evaluating the security of a computer system or network by simulating an attack from a malicious source.
Quality Assurance:
All those planned or systematic actions necessary to provide adequate
confidence that a product or service is of the type and quality needed
and expected by the customer.
Quality Audit: A
systematic and independent examination to determine whether quality
activities and related results comply with planned arrangements and
whether these arrangements are implemented effectively and are suitable
to achieve objectives.
Quality Circle: A group
of individuals with related interests that meet at regular intervals to
consider problems or other matters related to the quality of outputs of
a process and to the correction of problems or to the improvement of
quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing:
Confirms that the program recovers from expected or unexpected events
without loss of data or functionality. Events can include shortage of
disk space, unexpected loss of communication, or power outage conditions.
Regression Testing:
Retesting a previously tested program following modification to ensure
that faults have not been introduced or uncovered as a result of the
changes made.
Release Candidate: A pre-release
version, which contains the desired functionality of the final version,
but which needs to be tested for bugs (which ideally should be removed
before the final version is released).
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it's basically operational.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing:
Testing which confirms that the program can restrict access to
authorized personnel and that the authorized personnel can access the
functions available to their security level.
Smoke Testing: A
quick-and-dirty test that the major functions of a piece of software
work. Originated in the hardware testing practice of turning on a new
piece of hardware for the first time and considering it a success if it
does not catch on fire.
Soak Testing: Running a
system at high load for a prolonged period of time. For example,
running several times more transactions in an entire day (or night)
than would be expected in a busy day, to identify any performance
problems that appear after a large number of transactions have been
executed.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing
that verifies the program under test stores data files in the correct
directories and that it reserves sufficient space to prevent unexpected
termination resulting from lack of space. This is external storage as
opposed to internal storage.
Stress Testing:
Testing conducted to evaluate a system or component at or beyond the
limits of its specified requirements to determine the load under which
it fails and how. Often this is performance testing using a very high
level of simulated load.
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
Testability:
The degree to which a system or component facilitates the establishment
of test criteria and the performance of tests to determine whether
those criteria have been met.
Testing:
· The process of exercising software to verify that it satisfies specified requirements and to detect errors.
· The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
· The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Bed:
An execution environment configured for testing. May consist of
specific hardware, OS, network topology, configuration of the product
under test, other application or system software, etc. The Test Plan
for a project should enumerate the test bed(s) to be used.
Test Case:
· A Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
· A set of
inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or
to verify compliance with a specific requirement.
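As a sketch of how the pieces of a Test Case listed above (objective, prerequisites, inputs, verification steps, expected outcome) map onto an automated test, here is a minimal unittest example; the withdrawal logic and the requirement ID are hypothetical:

import unittest

class TestWithdrawal(unittest.TestCase):
    # Objective: verify that a withdrawal within the balance succeeds
    # (hypothetical requirement REQ-ATM-017, used only for illustration).

    def setUp(self):
        # Prerequisite / precondition: an account with a known starting balance.
        self.balance = 200.0

    def withdraw(self, amount):
        # Stand-in for the system under test.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

    def test_withdraw_within_balance(self):
        remaining = self.withdraw(50.0)          # input / test step
        self.assertEqual(remaining, 150.0)       # verification: expected outcome

if __name__ == "__main__":
    unittest.main()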
Test Driven Development:
Testing methodology associated with Agile Programming in which every
chunk of code is covered by unit tests, which must all pass all the
time, in an effort to eliminate unit-level and regression bugs during
development. Practitioners of TDD write a large number of tests, often roughly as many lines of test code as there are lines of production code.
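A minimal sketch of the test-first rhythm described above, using a hypothetical slugify function and pytest-style assertions:

# Step 1 (red): the test is written first and fails, because slugify does not exist yet.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough production code to make the test pass.
def slugify(text: str) -> str:
    return text.strip().lower().replace(" ", "-")

# Step 3 (refactor): clean up the code while keeping the test passing, then repeat
# the cycle for the next small piece of behavior.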
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment:
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts, including stubs and test drivers.
Test First Design:
Test-first design is one of the mandatory practices of Extreme
Programming (XP). It requires that programmers do not write any
production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests.
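A minimal sketch of such a harness using only the standard-library unittest module (the tests directory name is an assumption):

import unittest

def run_all_tests(start_dir: str = "tests") -> bool:
    # Discover every test_*.py module under start_dir and run it.
    suite = unittest.TestLoader().discover(start_dir, pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    raise SystemExit(0 if run_all_tests() else 1)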
Test Plan:
A document describing the scope, approach, resources, and schedule of
intended testing activities. It identifies test items, the features to
be tested, the testing tasks, who will do each task, and any risks
requiring contingency planning.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification:
A document specifying the test approach for a software feature or
combination of features, and the inputs, predicted results, and execution
conditions for the associated tests.
Test Suite:
A collection of tests used to validate the behavior of a product. The
scope of a Test Suite varies from organization to organization. There
may be several Test Suites for a particular product for example. In
most cases however a Test Suite is a high level concept, grouping
together hundreds or thousands of tests related by what they are
intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.