Manual Definitions
Acceptance Testing
Formal testing conducted to enable a user, customer, or other authorized entity to
determine whether to accept a system or component.
Ad hoc Testing
Testing carried out using no recognized test case design technique.
Alpha Testing
Simulated or actual operational testing at an in-house site not otherwise involved
with the software developers.
Bebugging
The process of intentionally adding known faults to those already in a computer
program for the purpose of monitoring the rate of detection and removal, and
estimating the number of faults remaining in the program.
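To make the estimate concrete, here is a minimal sketch (all figures hypothetical): if testing finds a known fraction of the seeded faults, the same detection rate is assumed to hold for the real ones.

    # Hypothetical fault-seeding arithmetic; the numbers are illustrative.
    seeded_total = 20        # faults deliberately inserted
    seeded_found = 15        # seeded faults detected so far
    indigenous_found = 30    # real (non-seeded) faults detected so far

    detection_rate = seeded_found / seeded_total              # 0.75
    estimated_indigenous = indigenous_found / detection_rate  # 40.0
    estimated_remaining = estimated_indigenous - indigenous_found

    print(f"estimated real faults remaining: {estimated_remaining:.0f}")  # 10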
Behaviour
The combination of input values and preconditions and the required response for a
function of a system. The full specification of a function would normally comprise one
or more behaviours.
Benchmarking
Comparing your product to the best competitors'.
Beta Testing
Operational testing at a site not otherwise involved with the software developers.
Big-bang Testing
Integration testing where no incremental testing takes place prior to all the system's
components being combined to form the system.
Bottom-up Testing
An approach to testing where the lowest level components are tested first, then used
to facilitate the testing of higher level components. The process is repeated until the
component at the top of the hierarchy is tested.
Boundary Value
An input value or output value which is on the boundary between equivalence
classes, or an incremental distance either side of the boundary.
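For example, a hypothetical rule that accepts quantities from 1 to 100 has boundaries at 1 and 100; the boundary values are those points plus one increment either side:

    # Hypothetical range check: valid quantities are 1..100 inclusive.
    def in_range(quantity):
        return 1 <= quantity <= 100

    # Boundary values: each boundary plus an incremental distance (1) either side.
    for q in (0, 1, 2, 99, 100, 101):
        print(q, in_range(q))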
Branch
A conditional transfer of control from any statement to any other statement in a
component, or an unconditional transfer of control from any statement to any other
statement in the component except the next statement, or when a component has
more than one entry point, a transfer of control to an entry point of the component.
Branch Condition
A condition within a decision.
Branch Coverage
The percentage of branches that have been exercised by a test case suite.
Bug
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Bug Seeding
The process of intentionally adding known faults to those already in a computer
program for the purpose of monitoring the rate of detection and removal, and
estimating the number of faults remaining in the program.
C-use
A data use not in a condition.
Capture/Playback Tool
A test tool that records test input as it is sent to the software under test. The input
cases stored can then be used to reproduce the test at a later time.
Capture/Replay Tool
A test tool that records test input as it is sent to the software under test. The input
cases stored can then be used to reproduce the test at a later time.
CAST
Acronym for computer-aided software testing.
Certification
The process of confirming that a system or component complies with its specified
requirements and is acceptable for operational use.
Code-based Testing
Designing tests based on objectives derived from the implementation (e.g., tests that
execute specific control flow paths or use specific data items).
Compatibility Testing
Testing whether the system is compatible with other systems with which it should
communicate.
Component
A minimal software item for which a separate specification is available.
Component Testing
The testing of individual software components.
Condition
A Boolean expression containing no Boolean operators. For instance, A < B is a
condition, but A AND B is not.
Condition Coverage
The percentage of branch condition outcomes in every decision that have been
exercised by a test case suite.
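A minimal sketch of what this measures, using a made-up function whose single decision contains two branch conditions:

    # Hypothetical decision with two branch conditions, a < b and c > 0.
    # (Python short-circuits 'and', so the cases are chosen so that each
    # condition is actually evaluated to both TRUE and FALSE.)
    def accept(a, b, c):
        return a < b and c > 0

    accept(1, 2, 5)    # a < b TRUE,  c > 0 TRUE
    accept(1, 2, -5)   # a < b TRUE,  c > 0 FALSE
    accept(2, 1, 0)    # a < b FALSE (c > 0 not reached)

These three cases exercise all four branch condition outcomes, giving 100% condition coverage.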
Condition Outcome
The evaluation of a condition to TRUE or FALSE.
Conformance Testing
The process of testing that an implementation conforms to the specification on which
it is based.
Control Flow
An abstract representation of all possible sequences of events in a program's
execution.
Correctness
The degree to which software conforms to its specification.
Coverage
The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test case suite.
A measure applied to a set of tests. The most important are called Code Coverage
and Path Coverage. There are a number of intermediate types of coverage defined,
moving up the ladder from Code Coverage (weak) to Path Coverage (strong, but
hard to get). These definitions are called things like Decision-Condition Coverage,
but they're seldom important in the real world. For the curious, they are covered in
detail in Glenford Myers' book, The Art of Software Testing.
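For instance, a single test can give full Code (statement) Coverage of a function while leaving a branch outcome untouched; a minimal sketch:

    # One call executes every statement (100% statement coverage) but only
    # the TRUE outcome of the decision, so branch coverage is 50%.
    def clamp(x, limit):
        if x > limit:
            x = limit
        return x

    clamp(10, 5)          # TRUE branch: every statement runs
    # clamp(3, 5) would also be needed for full branch coverage.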
Coverage Item
An entity or property used as a basis for testing.
Customer Satisfaction
Meeting or exceeding a customer's expectations for a product or service.
Data Definition
An executable statement where a variable is assigned a value.
Data Use
An executable statement where the value of a variable is accessed.
Debugging
The process of finding and removing the causes of failures in software.
Decision
A program point at which the control flow has two or more alternative routes.
Decision Condition
A condition within a decision.
Decision Coverage
The percentage of decision outcomes that have been exercised by a test case suite.
Decision Outcome
The result of a decision (which therefore determines the control flow alternative
taken).
Design
The creation of a specification from concepts.
Design-based Testing
Designing tests based on objectives derived from the architectural or detail design of
the software (e.g., tests that execute specific invocation paths or probe the worst
case behaviour of algorithms).
Desk Checking
The testing of software by the manual simulation of its execution.
Dirty Testing
Testing aimed at showing software does not work.
Documentation Testing
Testing concerned with the accuracy of documentation.
Domain
The set from which values are selected.
Domain Testing
A test case design technique for a component in which test cases are designed to
execute representatives from equivalence classes.
Dynamic Analysis
The process of evaluating a system or component based upon its behaviour during
execution.
Emulator
A device, computer program, or system that accepts the same inputs and produces
the same outputs as a given system.
Entry Point
The first executable statement within a component.
Equivalence Class
A portion of the component's input or output domains for which the component's
behaviour is assumed to be the same from the component's specification.
An Equivalence Class (EC) of input values is a group of values that all cause the
same sequence of operations to occur.
In Black Box terms, they are all treated the same way according to the specs.
Different input values within an Equivalence Class may give different answers, but
the answers are produced by the same procedure.
In Glass Box terms, they all cause execution to go down the same path.
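A minimal sketch of partitioning a made-up input domain into equivalence classes and testing one representative from each:

    # Hypothetical score validator: the spec treats 0..49 as "fail",
    # 50..100 as "pass", and everything else as invalid. Each class is
    # handled by the same procedure, so one representative per class suffices.
    def grade(score):
        if score < 0 or score > 100:
            raise ValueError("out of range")
        return "pass" if score >= 50 else "fail"

    representatives = {"invalid low": -5, "fail": 30, "pass": 75, "invalid high": 120}
    for name, value in representatives.items():
        try:
            print(name, "->", grade(value))
        except ValueError:
            print(name, "-> rejected")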
Equivalence Partition
A portion of the component's input or output domains for which the component's
behaviour is assumed to be the same from the component's specification.
Error
A human action that produces an incorrect result.
Error Guessing
A test case design technique where the experience of the tester is used to postulate
what faults might occur, and to design tests specifically to expose them.
Error Seeding
The process of intentionally adding known faults to those already in a computer
program for the purpose of monitoring the rate of detection and removal, and
estimating the number of faults remaining in the program.
Executable Statement
A statement which, when compiled, is translated into object code, which will be
executed procedurally when the program is running and may perform an action on
program data.
Exit Point
The last executable statement within a component.
Failure
Deviation of the software from its expected delivery or service.
Fault
A manifestation of an error in software. A fault, if encountered, may cause a failure.
Feasible Path
A path for which there exists a set of input values and execution conditions that
causes it to be executed.
Feature Testing
Test case selection that is based on an analysis of the specification of the component
without reference to its internal workings.
Flow Charting
Creating a 'map' of the steps in a process.
Functional Chunk
The fundamental unit of testing.
Its precise definition is "The smallest piece of code for which all the inputs and
outputs are meaningful at the spec level." This means we can test it Black Box: the
tests can be designed before the code arrives, without regard to how it was coded,
and we can still tell whether the results it gives are correct.
Functional Specification
The document that describes in detail the characteristics of the product with regard
to its intended capability.
Independence
Separation of responsibilities, which ensures the accomplishment of objective
evaluation.
Infeasible Path
A path that cannot be exercised by any set of possible input values.
Input
A variable (whether stored within a component or outside it) that is read by the
component.
Input Domain
The set of all possible inputs.
Input Value
An instance of an input.
Integration
The process of combining components into larger assemblies.
Integration Testing
Testing performed to expose faults in the interfaces and in the interaction between
integrated components.
Interface Testing
Integration testing where the interfaces between system components are tested.
Isolation Testing
Component testing of individual components in isolation from surrounding
components, with surrounding components being simulated by stubs.
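A minimal sketch of a stub (all names made up): the component under test calls a simulated collaborator that returns canned answers.

    # Hypothetical billing component tested in isolation; the real tax
    # service is replaced by a stub with a fixed, predictable answer.
    class TaxServiceStub:
        def rate_for(self, region):
            return 0.20            # canned value, no real lookup

    def total_price(net, region, tax_service):
        return net * (1 + tax_service.rate_for(region))

    assert abs(total_price(100.0, "anywhere", TaxServiceStub()) - 120.0) < 1e-9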
LCSAJ
A Linear Code Sequence And Jump, consisting of the following three items
(conventionally identified by line numbers in a source code listing): the start of the
linear sequence of executable statements, the end of the linear sequence, and the
target line to which control flow is transferred at the end of the linear sequence.
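A hypothetical fragment, with the conventional listing line numbers shown in comments:

    def sign_floor(x):
        s = 0           # line 1: start of a linear sequence
        if x < 0:       # line 2: end of the sequence when the condition is
            s = -1      #         false and control jumps past line 3
        return s        # line 4: the jump's target line
    # So (1, 2, 4) is one LCSAJ; when the condition is true, lines 1-4
    # execute as a single linear sequence ending at the exit point.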
LCSAJ Coverage
The percentage of a component's LCSAJs that are exercised by a test case suite.
LCSAJ Testing
A test case design technique for a component in which test cases are designed to
execute LCSAJs.
Logic-coverage Testing
Test case selection that is based on an analysis of the internal structure of the
component.
Logic-driven Testing
Test case selection that is based on an analysis of the internal structure of the
component.
Maintainability Testing
Testing whether the system meets its specified objectives for maintainability.
Manufacturing
Creating a product from specifications.
Metrics
Ways to measure: e.g., time, cost, customer satisfaction, quality.
N-switch Coverage
The percentage of sequences of N-transitions that have been exercised by a test case
suite.
N-switch Testing
A form of state transition testing in which test cases are designed to execute all valid
sequences of N-transitions.
N-transitions
A sequence of N+1 transitions.
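A minimal sketch for a made-up two-state machine; 1-switch (N = 1) testing would exercise every valid sequence of two consecutive transitions:

    # Hypothetical turnstile: states LOCKED/UNLOCKED, events coin/push.
    TRANSITIONS = {
        ("LOCKED", "coin"): "UNLOCKED",
        ("LOCKED", "push"): "LOCKED",
        ("UNLOCKED", "coin"): "UNLOCKED",
        ("UNLOCKED", "push"): "LOCKED",
    }

    def run(state, events):
        for event in events:
            state = TRANSITIONS[(state, event)]
        return state

    # Two of the two-transition sequences a 1-switch test set would cover:
    assert run("LOCKED", ["coin", "push"]) == "LOCKED"
    assert run("LOCKED", ["coin", "coin"]) == "UNLOCKED"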
Negative Testing
Testing aimed at showing software does not work.
Outcome
Actual outcome or predicted outcome. This is the outcome of a test.
Output
A variable (whether stored within a component or outside it) that is written to by the
component.
Output Domain
The set of all possible outputs.
Output Value
An instance of an output.
P-use
A data use in a predicate.
Partition Testing
A test case design technique for a component in which test cases are designed to
execute representatives from equivalence classes.
Path
A sequence of executable statements of a component, from an entry point to an exit
point.
Path Coverage
The percentage of paths in a component exercised by a test case suite.
A set of tests gives Path Coverage for some code if the set goes down each path
at least once.
The difference between this and Code Coverage is that Path Coverage means not just
"visiting" a line of code, but also includes how you got there and where you're going
next. It therefore uncovers more bugs, especially those caused by Data Coupling.
However, it's impossible to get this level of coverage except perhaps for a tiny critical
piece of code.
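The combinatorial blow-up is easy to see in a sketch: each independent decision doubles the number of paths, and loops make the count unbounded.

    # Hypothetical function with three independent decisions: 2**3 = 8
    # paths, although as few as two tests achieve full branch coverage.
    def adjust(x, a, b, c):
        if a:
            x += 1
        if b:
            x *= 2
        if c:
            x -= 3
        return x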
Path Sensitizing
Choosing a set of input values to force the execution of a component to take a given
path.
Path Testing
A test case design technique in which test cases are designed to execute paths of a
component.
Performance Testing
Testing conducted to evaluate the compliance of a system or component with
specified performance requirements.
Portability Testing
Testing aimed at demonstrating the software can be ported to specified hardware or
software platforms.
Process
What is actually done to create a product.
Progressive Testing
Testing of new features after regression testing of previous features.
Recovery Testing
Testing aimed at verifying the system's ability to recover from varying degrees of
failure.
Regression Testing
Retesting of a previously tested program following modification to ensure that faults
have not been introduced or uncovered as a result of the changes made.
Re-running a set of tests that used to work to make sure that changes to the system
didn't break anything. It's usually run after each set of maintenance or enhancement
changes, but is also very useful for Incremental Integration of a system.
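A minimal sketch of such a suite using Python's standard unittest module (the component and cases are made up):

    import unittest

    def slugify(title):                      # component under maintenance
        return title.strip().lower().replace(" ", "-")

    class RegressionSuite(unittest.TestCase):
        # Cases that passed before the latest change; a failure here means
        # the change broke previously working behaviour.
        def test_basic(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_whitespace(self):
            self.assertEqual(slugify("  Trimmed  "), "trimmed")

    if __name__ == "__main__":
        unittest.main()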
Requirements-based Testing
Designing tests based on objectives derived from requirements for the software
component (e.g., tests that exercise specific functions or probe the non-functional
constraints such as performance or security).
Result
Actual outcome or predicted outcome. This is the outcome of a test.
Review
A process or meeting during which a work product, or set of work products, is
presented to project personnel, managers, users or other interested parties for
comment or approval.
Security Testing
Testing whether the system meets its specified security objectives.
Statement
An entity in a programming language, which is typically the smallest indivisible unit
of execution.
Statement Coverage
The percentage of executable statements in a component that have been exercised
by a test case suite.
Statement Testing
A test case design technique for a component in which test cases are designed to
execute statements.
Static Analysis
Analysis of a program carried out without executing the program.
Static Testing
Testing of an object without execution on a computer.
Statistical Testing
A test case design technique in which a model of the statistical distribution of the
input is used to construct representative test cases.
Storage Testing
Testing whether the system meets its specified storage objectives.
Stress Testing
Testing conducted to evaluate a system or component at or beyond the limits of its
specified requirements.
Structural Coverage
Coverage measures based on the internal structure of the component.
Structural Testing
Test case selection that is based on an analysis of the internal structure of the
component.
Syntax Testing
A test case design technique for a component or system in which test case design is
based upon the syntax of the input.
System Testing
The process of testing an integrated system to verify that it meets specified
requirements.
Test
Testing the product for defects.
Test Automation
The use of software to control the execution of tests, the comparison of actual
outcomes to predicted outcomes, the setting up of test preconditions, and other test
control and test reporting functions.
Test Case
A set of inputs, execution preconditions, and expected outcomes developed for a
particular objective, such as to exercise a particular program path or to verify
compliance with a specific requirement.
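Expressed in code, a hypothetical test case bundles those parts explicitly:

    def withdraw(balance, amount):           # made-up function under test
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    # Precondition: the account balance is 100.
    # Input:        a withdrawal of 40.
    # Expected outcome: a new balance of 60.
    assert withdraw(100, 40) == 60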
Test Coverage
The degree, expressed as a percentage, to which a specified coverage item has been
exercised by a test case suite.
Test Driver
A program or test tool used to execute software against a test case suite.
Test Environment
A description of the hardware and software environment in which the tests will be
run, and any other software with which the software under test interacts when under
test, including stubs and test drivers.
Test Execution
The processing of a test case suite by the software under test, producing an
outcome.
Test Harness
A testing tool that comprises a test driver and a test comparator.
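A minimal sketch of the two parts (names illustrative): a driver that runs the software under test over a case suite, and a comparator that checks actual against predicted outcomes.

    def run_suite(function_under_test, suite):
        failures = []
        for inputs, predicted in suite:
            actual = function_under_test(*inputs)   # driver: execute the test
            if actual != predicted:                 # comparator: check outcome
                failures.append((inputs, predicted, actual))
        return failures

    suite = [((2, 3), 5), ((0, 0), 0)]
    print(run_suite(lambda a, b: a + b, suite))     # [] means all cases passed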
Test Outcome
Actual outcome or predicted outcome. This is the outcome of a test.
Test Plan
A record of the test planning process detailing the degree of tester independence,
the test environment, the test case design techniques and test measurement
techniques to be used, and the rationale for their choice.
Test Procedure
A document providing detailed instructions for the execution of one or more test
cases.
Test Records
For each test, an unambiguous record of the identities and versions of the
component under test, the test specification, and actual outcome.
Test Script
Commonly used to refer to the automated test procedure used with a test harness.
Test Specification
For each test case, the coverage item, the initial state of the software under test, the
input, and the predicted outcome.
Test Target
A set of test completion criteria.
Testing
The process of exercising software to verify that it satisfies specified requirements
and to detect errors.
Top-down Testing
An approach to integration testing where the component at the top of the component
hierarchy is tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components. The process is
repeated until the lowest level components have been tested.
Usability Testing
Testing the ease with which users can learn and use a product.
Validation
Determination of the correctness of the products of software development with
respect to the user needs and requirements.
Verification
The process of evaluating a system or component to determine whether the products
of the given development phase satisfy the conditions imposed at the start of that
phase.
Volume Testing
Testing where the system is subjected to large volumes of data.