UNIT 7
VERIFICATION AND VALIDATION
Validation: "Are we building the right product”., The software should do what
the user really requires.
V& V goals
Verification and validation should establish confidence that the software is fit
for purpose. This does NOT mean completely free of defects. Rather, it must
be good enough for its intended use and the type of use will determine the
degree of confidence that is needed.
V & V confidence
Depends on the system's purpose, user expectations and the marketing environment.
Software function
•The level of confidence depends on how critical the software is to
an organisation.
User expectations
•Users may have low expectations of certain kinds of software.
Marketing environment
•Getting a product to market early may be more important than
finding defects in the program.
Program testing
Can reveal the presence of errors, NOT their absence. Testing is the only
validation technique for non-functional requirements, as the software has to be
executed to see how it behaves. It should be used in conjunction with static
verification to provide full V & V coverage.
Types of testing
• Defect testing: Tests designed to discover system defects. A
successful defect test is one which reveals the presence of defects in
a system. Covered in Chapter 23
• Validation testing: Intended to show that the software meets its
requirements. A successful test is one that shows that a requirement
has been properly implemented.
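To make the distinction concrete, here is a minimal JUnit 5 sketch; the
requirement and the choice of Math.floorDiv are invented for illustration. A
validation test confirms a stated requirement, while a defect test probes for
anomalous behaviour.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

class DivisionTests {
    // Validation test: shows the (invented) requirement
    // "division returns the integer quotient" is implemented.
    @Test
    void quotientRequirementIsMet() {
        assertEquals(3, Math.floorDiv(7, 2));
    }

    // Defect test: tries to make the program misbehave,
    // here by probing the zero-divisor boundary.
    @Test
    void zeroDivisorIsRejected() {
        assertThrows(ArithmeticException.class, () -> Math.floorDiv(1, 0));
    }
}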
V & V planning
Careful planning is required to get the most out of testing and inspection
processes. Planning should start early in the development process. The plan
should identify the balance between static verification and testing. Test
planning is about defining standards for the testing process rather than
describing product tests.
Tested items
The products of the software process that are to be tested should be specified.
Constraints
Constraints affecting the testing process, such as staff shortages, should be
anticipated in this section.
Software inspections
These involve people examining the source representation with the aim of
discovering anomalies and defects.
Inspections do not require execution of a system, so they may be used before
implementation.
They may be applied to any representation of the system (requirements,
design, configuration data, test data, etc.).
They have been shown to be an effective technique for discovering program
errors.
Inspection success
Many different defects may be discovered in a single inspection. In
testing, one defect may mask another, so several executions are
required. Inspections reuse domain and programming knowledge, so
reviewers are likely to have seen the types of error that commonly
arise.
Inspections and testing
Inspections and testing are complementary and not opposing
verification techniques. Both should be used during the V & V
process. Inspections can check conformance with a specification but
not conformance with the customer's real requirements, and they
cannot check non-functional characteristics such as performance or
usability.
Program inspections
Formalised approach to document reviews. Intended explicitly for
defect detection (not correction). Defects may be logical errors,
anomalies in the code that might indicate an erroneous condition
(e.g. an uninitialised variable) or non-compliance with standards.
Inspection pre-conditions
• A precise specification must be available.
• Team members must be familiar with the organisation standards.
• Syntactically correct code or other system representations must be
available.
• An error checklist should be prepared.
• Management must accept that inspection will increase costs early
in the software process.
• Management should not use inspections for staff appraisal i.e.
finding out who makes mistakes.
Inspection procedure
• System overview presented to inspection team.
• Code and associated documents are distributed to inspection
team in advance.
• Inspection takes place and discovered errors are noted.
• Modifications are made to repair discovered errors.
• Re-inspection may or may not be required.
Inspection roles
• Author or owner: the programmer or designer responsible for the program or document.
• Inspector: finds errors, omissions and inconsistencies.
• Reader: paraphrases the code or document at the inspection meeting.
• Scribe: records the results of the inspection meeting.
• Chairman or moderator: manages the process and facilitates the inspection.
Inspection checklists
• Checklist of common errors should be used to drive the inspection.
• Error checklists are programming language dependent and reflect the
characteristic errors that are likely to arise in the language.
• In general, the 'weaker' the type checking, the larger the checklist.
• Examples: Initialisation, Constant naming, loop termination, array
bounds, etc.
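For illustration, a short Java fragment deliberately seeded with the kinds of
fault such a checklist targets (the code is invented; the comments mark what
an inspector would flag):

// Illustrative fragment seeded with faults an inspection checklist targets.
class ChecklistExample {
    static int sum(int[] scores) {
        int total = 0;
        // Loop termination / array bounds fault: '<=' overruns the array,
        // throwing ArrayIndexOutOfBoundsException on the last iteration.
        for (int i = 0; i <= scores.length; i++) {
            total += scores[i];
        }
        return total;
    }

    // Constant naming fault: house standards would usually demand MAX_RETRIES.
    static final int maxretries = 3;
}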
Inspection rate
About 500 statements/hour can be covered during the overview, around 125
source statements/hour during individual preparation, and 90-125
statements/hour during the inspection meeting itself. Inspection is therefore
an expensive process: inspecting 500 lines costs about 40 man-hours of effort
(roughly, a four-person team spending an hour on the overview and four hours
each on preparation and the meeting), about £2800 at UK rates.
Testing phases
Defect testing
• The goal of defect testing is to discover defects in programs
• A successful defect test is a test which causes a program to
behave in an anomalous way
• Tests show the presence not the absence of defects
Testing policies
Only exhaustive testing can show a program is free from defects.
However, exhaustive testing is impossible.
Testing policies define the approach to be used in selecting system
tests, for example:
•All functions accessed through menus should be tested;
•Combinations of functions accessed through the same menu should
be tested.
Testing approaches
• Architectural validation: Top-down integration testing is better
at discovering errors in the system architecture.
• System demonstration: Top-down integration testing allows a
limited demonstration at an early stage in the development.
• Test implementation: Often easier with bottom-up integration
testing.
• Test observation: Problems with both approaches. Extra code
may be required to observe tests.
Release testing
• The process of testing a release of a system that will be
distributed to customers.
• Primary goal is to increase the supplier's confidence that the
system meets its requirements.
• Release testing is usually black-box or functional testing:
• Based on the system specification only;
• Testers do not have knowledge of the system implementation.
Black-box testing
Testing guidelines
Testing guidelines are hints for the testing team to help them choose
tests that will reveal defects in the system
• Choose inputs that force the system to generate all error
messages;
• Design inputs that cause buffers to overflow;
• Repeat the same input or input series several times;
• Force invalid outputs to be generated;
• Force computation results to be too large or too small.
System tests
1. Test the login mechanism using correct and incorrect logins to check
that valid users are accepted and invalid users are rejected.
2. Test the search facility using different queries against known sources to
check that the search mechanism is actually finding documents.
3. Test the system presentation facility to check that information about
documents is displayed properly.
4. Test the mechanism to request permission for downloading.
5. Test the e-mail response indicating that the downloaded document is
available.
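As a sketch of how the first of these might be automated, assuming a
hypothetical AuthService with a login method and a single known account
(none of which appear in the system described above):

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Hypothetical authentication service used only for illustration.
class AuthService {
    boolean login(String user, String password) {
        // Stand-in rule: one known valid account.
        return "alice".equals(user) && "secret".equals(password);
    }
}

class LoginSystemTest {
    private final AuthService auth = new AuthService();

    @Test
    void validUserIsAccepted() {
        assertTrue(auth.login("alice", "secret"));
    }

    @Test
    void invalidUserIsRejected() {
        assertFalse(auth.login("mallory", "guess"));
        assertFalse(auth.login("alice", "wrong"));
    }
}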
Use cases
Use cases can be a basis for deriving the tests for a system. They
help identify operations to be tested and help design the required
test cases. From an associated sequence diagram, the inputs and
outputs to be created for the tests can be identified.
Performance testing
Part of release testing may involve testing the emergent properties
of a system, such as performance and reliability. Performance tests
usually involve planning a series of tests where the load is steadily
increased until the system performance becomes unacceptable.
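A minimal sketch of such a stepped-load test; the doubling schedule, the
200 ms threshold and the handleRequest stand-in are all assumptions made for
illustration:

// Stepped-load sketch: raise the request count until the mean
// response time crosses an (assumed) acceptability threshold.
class PerformanceProbe {
    public static void main(String[] args) {
        final long limitNanos = 200_000_000L;      // assumed 200 ms ceiling
        for (int load = 100; load <= 10_000; load *= 2) {
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                handleRequest();                   // stand-in for the real operation
            }
            long perRequest = (System.nanoTime() - start) / load;
            System.out.printf("load %5d: %d ns/request%n", load, perRequest);
            if (perRequest > limitNanos) {
                System.out.println("Performance became unacceptable at load " + load);
                break;
            }
        }
    }

    static void handleRequest() {
        // Placeholder workload; a real test would call the system under test.
        Math.sqrt(Math.random());
    }
}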
Stress testing
• Exercises the system beyond its maximum design load. Stressing
the system often causes defects to come to light.
• Stress testing tests failure behaviour. Systems should not
fail catastrophically: stress testing checks for unacceptable loss of
service or data.
• Stress testing is particularly relevant to distributed systems, which
can exhibit severe degradation as a network becomes overloaded.
Component testing
• Component or unit testing is the process of testing individual
components in isolation.
Interface testing
• Objectives are to detect faults due to interface errors or invalid
assumptions about interfaces.
Interface types
• Parameter interfaces: Data passed from one procedure to
another.
• Shared memory interfaces: Block of memory is shared between
procedures or functions.
• Procedural interfaces: Sub-system encapsulates a set of
procedures to be called by other sub-systems.
• Message passing interfaces: Sub-systems request services from
other sub-systems
Interface errors
• Interface misuse: A calling component calls another component
and makes an error in its use of its interface e.g. parameters in the
wrong order.
• Interface misunderstanding: A calling component embeds
assumptions about the behaviour of the called component which are
incorrect.
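A small invented example of interface misuse: because both parameters share a
type, swapping them compiles cleanly, and only testing or inspection catches
the error.

// Interface misuse sketch: both parameters are doubles, so passing
// them in the wrong order is a silent fault, not a compile error.
class Account {
    static double monthlyPayment(double principal, double months) {
        return principal / months;
    }
}

class InterfaceMisuseDemo {
    public static void main(String[] args) {
        double principal = 1200.0, months = 12.0;
        System.out.println(Account.monthlyPayment(principal, months)); // 100.0, correct use
        System.out.println(Account.monthlyPayment(months, principal)); // 0.01, misuse
    }
}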
Examples of testable requirements:
• The user shall be able to search either all of the initial set of databases or select a
subset from it.
• The system shall provide appropriate viewers for the user to read documents in the
document store.
• Every order shall be allocated a unique identifier (ORDER_ID) that the user shall
be able to copy to the account's permanent storage area.
Partition testing
• Input data and output results often fall into different classes
where all members of a class are related.
• Each of these classes is an equivalence partition or domain
where the program behaves in an equivalent way for each class
member.
• Initiate user searches for items that are known to be present
and known not to be present, where the set of databases
includes 1 database.
• Initiate user searches for items that are known to be present
and known not to be present, where the set of databases
includes 2 databases.
• Initiate user searches for items that are known to be present
and known not to be present, where the set of databases
includes more than 2 databases.
• Select one database from the set of databases and initiate
user searches for items that are known to be present and
known not to be present.
• Select more than one database from the set of databases
and initiate searches for items that are known to be present
and known not to be present.
• Test cases should be chosen from each partition.
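A minimal sketch of one test case per partition, assuming an invented routine
that accepts only 5-digit identifiers, so the input partitions are: fewer than
5 digits, exactly 5 digits, and more than 5 digits:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Invented routine: accepts identifiers of exactly 5 digits,
// giving three input partitions (<5 digits, 5 digits, >5 digits).
class IdChecker {
    static boolean valid(int id) {
        return id >= 10_000 && id <= 99_999;
    }
}

class PartitionTest {
    @Test void belowPartition()  { assertFalse(IdChecker.valid(9_999)); }   // <5 digits
    @Test void withinPartition() { assertTrue(IdChecker.valid(50_000)); }   // 5 digits
    @Test void abovePartition()  { assertFalse(IdChecker.valid(100_000)); } // >5 digits
}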
Equivalence partitioning
Search routine specification
Pre-condition
-- the sequence has at least one element
T'FIRST <= T'LAST
Post-condition
-- the element is found and is referenced by L
( Found and T (L) = Key )
or
-- the element is not in the array
( not Found and
not (exists i, T'FIRST <= i <= T'LAST, T (i) = Key ))
Input sequence (T)            Key   Output (Found, L)
17                            17    true, 1
17                            0     false, ??
17, 29, 21, 23                17    true, 1
41, 18, 9, 31, 30, 16, 45     45    true, 7
17, 18, 21, 23, 29, 41, 38    23    true, 4
21, 23, 29, 33, 38            25    false, ??
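A Java rendering of this routine and of the tabulated cases may make the
partitions concrete; the class and method names are my own, and the '??' rows
only check Found, since L is unconstrained when the key is absent.

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Linear search matching the specification above: result[0] is Found
// (1 or 0), result[1] is L, a 1-based index as in the table.
class Search {
    static int[] find(int[] t, int key) {
        for (int i = 0; i < t.length; i++) {
            if (t[i] == key) return new int[] {1, i + 1};
        }
        return new int[] {0, -1}; // Found = false, L unconstrained ('??')
    }
}

class SearchTest {
    @Test
    void tabulatedCases() {
        assertArrayEquals(new int[] {1, 1}, Search.find(new int[] {17}, 17));
        assertEquals(0, Search.find(new int[] {17}, 0)[0]);
        assertArrayEquals(new int[] {1, 1}, Search.find(new int[] {17, 29, 21, 23}, 17));
        assertArrayEquals(new int[] {1, 7}, Search.find(new int[] {41, 18, 9, 31, 30, 16, 45}, 45));
        assertArrayEquals(new int[] {1, 4}, Search.find(new int[] {17, 18, 21, 23, 29, 41, 38}, 23));
        assertEquals(0, Search.find(new int[] {21, 23, 29, 33, 38}, 25)[0]);
    }
}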
Structural testing
Sometimes called white-box testing. Derivation of test cases
according to program structure. Knowledge of the program is used
to identify additional test cases. The objective is to exercise all program
statements (not all path combinations).
Path testing
• The objective of path testing is to ensure that the set of test
cases is such that each path through the program is executed at least
once.
• The starting point for path testing is a program flow graph that
shows nodes representing program decisions and arcs representing
the flow of control.
• Statements with conditions are therefore nodes in the flow
graph.
Independent paths
•1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 14
•1, 2, 3, 4, 5, 14
•1, 2, 3, 4, 5, 6, 7, 11, 12, 5, …
•1, 2, 3, 4, 6, 7, 2, 11, 13, 5, …
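As a small self-contained illustration (my own example, not the flow graph
behind the numbered paths above), a routine with a loop decision and a branch
decision, plus tests that together execute each path through them at least
once:

import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.Test;

// Illustrative routine with two decisions: the loop test and the sign test.
class Paths {
    static int absSum(int[] xs) {
        int sum = 0;
        for (int x : xs) {            // decision A: loop entered or not
            if (x < 0) sum -= x;      // decision B: negative branch
            else       sum += x;      //             non-negative branch
        }
        return sum;
    }
}

class PathTest {
    @Test void loopNotEntered()    { assertEquals(0, Paths.absSum(new int[] {})); }   // A false
    @Test void negativeBranch()    { assertEquals(5, Paths.absSum(new int[] {-5})); } // A true, B true
    @Test void nonNegativeBranch() { assertEquals(5, Paths.absSum(new int[] {5})); }  // A true, B false
}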
Test automation
• Testing is an expensive process phase. Testing workbenches
provide a range of tools to reduce the time required and total testing
costs.
• Systems such as JUnit support the automatic execution of tests.
• Most testing workbenches are open systems because testing
needs are organisation-specific.
• They are sometimes difficult to integrate with closed design
and analysis workbenches.
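For instance, tests such as the earlier sketches can be run programmatically
through the JUnit Platform Launcher API (a sketch; LoginSystemTest is the
hypothetical class from the release-testing example):

import java.io.PrintWriter;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import static org.junit.platform.engine.discovery.DiscoverySelectors.selectClass;

// Minimal programmatic test run via the JUnit Platform Launcher.
public class RunAllTests {
    public static void main(String[] args) {
        LauncherDiscoveryRequest request = LauncherDiscoveryRequestBuilder.request()
                .selectors(selectClass(LoginSystemTest.class))
                .build();
        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.execute(request, listener);
        listener.getSummary().printTo(new PrintWriter(System.out));
    }
}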
A testing workbench