Technical Terms Used in Testing World
In this tutorial you will learn about technical terms used in the testing world, from
Acceptance Testing and Assertion Testing through to Usability and Validation.
Acceptance testing: Testing conducted to determine whether or not a system satisfies its
acceptance criteria and to enable the customer to determine whether or not to accept the
system.
Assertion Testing: A dynamic analysis technique which inserts assertions about the
relationship between program variables into the program code. The truth of the assertions is
determined as the program executes.
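A minimal sketch of assertion testing in Python, using a hypothetical `divide` function: assertions about the relationship between its variables are inserted into the code, and their truth is checked as the program executes.

```python
def divide(total, count):
    # Assertion about an input variable, checked at run time.
    assert count > 0, "count must be positive"
    result = total / count
    # Assertion about the relationship between program variables.
    assert result * count == total, "result must reconstruct the total"
    return result

# The assertions hold for this execution; an invalid input
# (e.g. count=0) would raise AssertionError instead.
print(divide(10, 4))
```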
Boundary Value: (1) A data value that corresponds to a minimum or maximum input,
internal, or output value specified for a system or component. (2) A value that lies at, just
inside, or just outside a specified range of valid input and output values.
Boundary Value Analysis: A selection technique in which test data are chosen to lie along
"boundaries" of the input domain [or output range] classes, data structures, procedure
parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
Branch Coverage: A test coverage criterion which requires that for each decision point each
possible branch be executed at least once.
Beta Testing: Acceptance testing performed by the customer in a live application of the
software, at one or more end user sites, in an environment not controlled by the developer.
Boundary Value Testing: A testing technique using input values at, just below, and just
above, the defined limits of an input domain; and with input values causing outputs to be at,
just below, and just above, the defined limits of an output domain.
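A small sketch of boundary value testing, assuming a hypothetical validator whose valid input domain is 1 through 100 inclusive; the test inputs sit at, just below, and just above each limit.

```python
def accept_quantity(qty):
    # Hypothetical function under test: valid domain is 1..100 inclusive.
    return 1 <= qty <= 100

# Boundary value test cases: values at, just below, and just above
# the defined limits of the input domain.
boundary_cases = {
    0: False,    # just below the lower limit
    1: True,     # at the lower limit
    2: True,     # just above the lower limit
    99: True,    # just below the upper limit
    100: True,   # at the upper limit
    101: False,  # just above the upper limit
}

for value, expected in boundary_cases.items():
    assert accept_quantity(value) == expected
```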
Branch Testing: Testing technique to satisfy coverage criteria which require that for each
decision point, each possible branch [outcome] be executed at least once. Contrast with
path testing and statement testing. See: branch coverage.
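A sketch of the branch coverage criterion, using a hypothetical `classify` function with two decision points. Two well-chosen test cases execute every branch outcome at least once, whereas full path coverage would need four cases.

```python
def classify(n):
    # Two decision points, so four branch outcomes to cover.
    if n < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    if n % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Branch coverage: each outcome of each decision executed at least once.
# These two cases achieve it; covering every *path* would take four.
assert classify(-3) == ("negative", "odd")
assert classify(4) == ("non-negative", "even")
```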
Compatibility Testing: The process of determining the ability of two or more systems to
exchange information. In a situation where the developed software replaces an already
working program, an investigation should be conducted to assess possible compatibility
problems between the new software and other programs or systems.
Cause Effect Graph: A Boolean graph linking causes and effects. The graph is actually a
digital-logic circuit (a combinatorial logic network) using a simpler notation than standard
electronics notation.
Cause Effect Graphing: A test data selection technique. The input and output
domains are partitioned into classes and analysis is performed to determine which input
classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect
set. It is a systematic method of generating test cases representing combinations of
conditions.
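A simplified sketch of the idea, assuming a hypothetical login check: the causes (input conditions) are combined with Boolean logic, and a minimal set of inputs is chosen so that every effect is produced at least once.

```python
def login_effect(valid_user, valid_password):
    # Hypothetical cause-effect rules expressed as Boolean combinations.
    if valid_user and valid_password:
        return "grant access"
    if valid_user and not valid_password:
        return "password error"
    return "user error"

# Minimal input set covering the entire effect set: one case per effect,
# rather than all four cause combinations.
cases = [
    ((True, True), "grant access"),
    ((True, False), "password error"),
    ((False, False), "user error"),
]
for inputs, effect in cases:
    assert login_effect(*inputs) == effect
```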
Code Inspection: A manual [formal] testing [error detection] technique where the
programmer reads source code, statement by statement, to a group who ask questions
analyzing the program logic, analyzing the code with respect to a checklist of historically
common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A manual testing [error detection] technique where program [source
code] logic [structure] is traced manually [mentally] by a group with a small set of test cases,
while the state of program variables is manually monitored, to analyze the programmer's logic
and assumptions.
Coverage Analysis: Determining and assessing measures associated with the invocation of
program structural elements to determine the adequacy of a test run. Coverage analysis is
useful when attempting to execute each statement, branch, path, or iterative structure in a
program.
Criticality: The degree of impact that a requirement, module, error, fault, failure, or other
item has on the development or operation of a system.
Error: A discrepancy between a computed, observed, or measured value or condition and the
true, specified, or theoretically correct value or condition.
Error Guessing: A test data selection technique. The selection criterion is to pick
values that seem likely to cause errors.
Error Seeding: The process of intentionally adding known faults to those already in a
computer program for the purpose of monitoring the rate of detection and removal, and
estimating the number of faults remaining in the program. Contrast with mutation analysis.
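A sketch of the classic seeding estimate (sometimes called the Lincoln-Petersen or Mills estimate): if testing finds a given fraction of the seeded faults, the same detection rate is assumed to apply to the indigenous faults, and the number still latent is extrapolated. The function name and figures are illustrative.

```python
def estimate_remaining(seeded, seeded_found, indigenous_found):
    # Detection rate observed on the intentionally seeded faults.
    detection_rate = seeded_found / seeded
    # Assume the same rate applies to the real (indigenous) faults.
    estimated_total = indigenous_found / detection_rate
    # Estimated indigenous faults still remaining in the program.
    return estimated_total - indigenous_found

# Example: 20 faults seeded; testing found 15 of them plus 30 real faults.
# Detection rate 0.75 -> estimated 40 real faults in total -> 10 remain.
remaining = estimate_remaining(seeded=20, seeded_found=15, indigenous_found=30)
print(remaining)  # -> 10.0
```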
Exception: An event that causes suspension of normal program execution. Types include
addressing exception, data exception, operation exception, overflow exception, protection
exception, and underflow exception.
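In Python, for example, a zero divisor raises an exception that suspends normal execution and transfers control to a handler; the `safe_reciprocal` name below is illustrative.

```python
def safe_reciprocal(x):
    try:
        return 1.0 / x  # a zero divisor raises an exception ...
    except ZeroDivisionError:
        # ... which suspends normal execution and transfers control here.
        return float("inf")

assert safe_reciprocal(4) == 0.25
assert safe_reciprocal(0) == float("inf")
```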
Exhaustive Testing: Executing the program with all possible combinations of values for
program variables. This type of testing is feasible only for small, simple programs.
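Exhaustive testing is tractable here only because the input domain is tiny: a two-input Boolean function has just four combinations. A sketch, with a hypothetical `xor` implementation:

```python
from itertools import product

def xor(a, b):
    # Function under test.
    return (a or b) and not (a and b)

# Exhaustive testing: execute the program with *every* combination of
# input values. Feasible only because the domain has 2 * 2 = 4 cases.
for a, b in product([False, True], repeat=2):
    assert xor(a, b) == (a != b)
```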
Failure: The inability of a system or component to perform its required functions within
specified performance requirements.
Fault: An incorrect step, process, or data definition in a computer program which causes the
program to perform in an unintended or unanticipated manner.
Functional Testing: (1) Testing that ignores the internal mechanism or structure of a system or
component and focuses on the outputs generated in response to selected inputs and execution
conditions. (2) Testing conducted to evaluate the compliance of a system or component with
specified functional requirements and corresponding predicted results.
Interface Testing: Testing conducted to evaluate whether systems or components pass data
and control correctly to one another.
Mutation Testing: A testing methodology in which two or more program mutations are
executed using the same test cases to evaluate the ability of the test cases to detect
differences in the mutations.
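A tiny illustration of the idea: a program and one mutation of it are run against the same test cases. A test that both versions pass cannot distinguish them (the mutant "survives"), which reveals a weakness in the test; a stronger test "kills" the mutant. All names are illustrative.

```python
def original(x):
    return x + 1

def mutant(x):
    # Mutation: the '+' operator replaced with '-'.
    return x - 1

def weak_test(fn):
    # This test cannot tell the mutant apart from the original ...
    return fn(0) in (1, -1)

def strong_test(fn):
    # ... but this one distinguishes them.
    return fn(2) == 3

assert weak_test(original) and weak_test(mutant)        # mutant survives
assert strong_test(original) and not strong_test(mutant)  # mutant killed
```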
Parallel Testing: Testing a new or an altered data processing system with the same source
data that is used in another system. The other system is considered as the standard of
comparison.
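A minimal sketch of parallel testing: the same source data is fed to the new implementation and to an existing system treated as the standard of comparison (here, Python's built-in `sorted` stands in for the trusted system; `new_sort` is a hypothetical replacement).

```python
def new_sort(items):
    # New implementation under test (a simple insertion sort).
    out = []
    for item in items:
        i = 0
        while i < len(out) and out[i] <= item:
            i += 1
        out.insert(i, item)
    return out

# Parallel testing: run the same source data through both systems and
# compare; the built-in sorted() serves as the standard of comparison.
source_data = [5, 3, 8, 1, 9, 2, 2]
assert new_sort(source_data) == sorted(source_data)
```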
Path Testing: Testing to satisfy coverage criteria that each logical path through the program
be tested. Often paths through the program are grouped into a finite set of classes. One path
from each class is then tested.
Qualification Testing: Formal testing, usually conducted by the developer for the consumer,
to demonstrate that the software meets its specified requirements.
Quality Assurance: (1) The planned systematic activities necessary to ensure that a
component, module, or system conforms to established technical requirements. (2) All actions
that are taken to ensure that a development organization delivers products that meet
performance requirements and adhere to standards and procedures. (3) The policy,
procedures, and systematic actions established in an enterprise for the purpose of providing
and maintaining some degree of confidence in data integrity and accuracy throughout the life
cycle of the data, which includes input, update, manipulation, and output. (4) The actions,
planned and performed, to provide confidence that all systems and components that influence
the quality of the product are working as expected individually and collectively.
Quality Control: The operational techniques and procedures used to achieve quality
requirements.
Regression Testing: Rerunning test cases which a program has previously executed correctly
in order to detect errors spawned by changes or corrections made during software
development and maintenance.
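In practice this means keeping a suite of previously passing cases and rerunning it after every change; the `slugify` function and its cases below are illustrative.

```python
def slugify(title):
    # Function under maintenance; any change to it must keep
    # the previously passing cases passing.
    return title.strip().lower().replace(" ", "-")

# Regression suite: cases the program has executed correctly before,
# rerun to detect errors spawned by later changes or corrections.
regression_cases = [
    ("Hello World", "hello-world"),
    ("  Trim Me ", "trim-me"),
    ("already-slug", "already-slug"),
]

def run_regression():
    return all(slugify(inp) == expected for inp, expected in regression_cases)

assert run_regression()
```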
Review: A process or meeting during which a work product or set of work products is
presented to project personnel, managers, users, customers, or other interested parties for
comment or approval. Types include code review, design review, formal qualification review,
requirements review, test readiness review.
Risk: A measure of the probability and severity of undesired effects.
Risk Assessment: A comprehensive evaluation of the risk and its associated impact.
Static Analysis: Analysis of a program that is performed without executing the program. The
process of evaluating a system or component based on its form, structure, content, or
documentation is also called static analysis.
Statement Testing: Testing to satisfy the criterion that each statement in a program be
executed at least once during program testing.
Storage Testing: Determining whether or not certain processing conditions use
more storage [memory] than estimated.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits
of its specified requirements.
Structural Testing: (1) Testing that takes into account the internal mechanism [structure] of a
system or component. Types include branch testing, path testing, statement testing. (2)
Testing to ensure each program statement is made to execute during testing and that each
program statement performs its intended function.
System Testing: The process of testing an integrated hardware and software system to verify
that the system meets its specified requirements. Such testing may be conducted in both the
development environment and the target environment.
Test: An activity in which a system or component is executed under specified conditions, the
results are observed or recorded and an evaluation is made of some aspect of the system or
component.
Testability: The degree to which a system or component facilitates the establishment of test
criteria and the performance of tests to determine whether those criteria have been met.
Testcase Generator: A software tool that accepts as input source code, test criteria,
specifications, or data structure definitions; uses these inputs to generate test input data; and,
sometimes, determines expected results.
Test Design: Documentation specifying the details of the test approach for a software feature
or combination of software features and identifying the associated tests.
Test Documentation: Documentation describing plans for, or results of, the testing of a
system or component. Types include test case specification, test incident report, test log, test
plan, test procedure, test report.
Test Driver: A software module used to invoke a module under test and, often, provide test
inputs, control and monitor execution, and report test results.
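A minimal sketch of a test driver: it invokes the unit under test, supplies the inputs, and reports the results. The `is_leap_year` function stands in for the hypothetical module under test.

```python
def is_leap_year(year):
    # Unit under test (stand-in for the real module).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def driver(cases):
    # Test driver: invokes the unit, provides inputs, monitors
    # execution, and reports pass/fail per case.
    results = []
    for args, expected in cases:
        actual = is_leap_year(*args)
        results.append((args, actual == expected))
    return results

report = driver([((2000,), True), ((1900,), False), ((2024,), True)])
assert all(passed for _, passed in report)
```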
Test Incident Report: A document reporting on any event that occurs during testing that
requires further investigation.
Test Log: A chronological record of all relevant details about the execution of a test.
Test Phase: The period of time in the software life cycle in which the components of a
software product are evaluated and integrated, and the software product is evaluated to
determine whether or not requirements have been satisfied.
Test Plan: Documentation specifying the scope, approach, resources, and schedule of
intended testing activities. It identifies test items, the features to be tested, the testing tasks,
responsibilities, required resources, and any risks requiring contingency planning. See: test
design, validation protocol.
Test Procedure: A formal document developed from a test plan that presents detailed
instructions for the setup, operation, and evaluation of the results for each defined test.
Test Report: A document describing the conduct and results of the testing carried out for a
system or system component.
Test Result Analyzer: A software tool used to test output data reduction, formatting, and
printing.
Testing: (1) The process of operating a system or component under specified conditions,
observing or recording the results, and making an evaluation of some aspect of the system or
component. (2) The process of analyzing a software item to detect the differences between
existing and required conditions, i.e. bugs, and to evaluate the features of the software item.
Traceability Matrix: A matrix that records the relationship between two or more products;
e.g., a matrix that records the relationship between the requirements and the design of a
given software component. See: traceability, traceability analysis.
Unit Testing: Testing of a module for typographic, syntactic, and logical errors, for correct
implementation of its design, and for satisfaction of its requirements (or) Testing conducted to
verify the implementation of the design for one software element; e.g., a unit or module; or a
collection of software elements.
Usability: The ease with which a user can learn to operate, prepare inputs for, and interpret
outputs of a system or component.
Validation: Establishing documented evidence which provides a high degree of assurance that
a specific process will consistently produce a product meeting its predetermined specifications
and quality attributes.