Chapter 8 – Software Verification and Validation
Program Testing
• To demonstrate to the developer and the customer that the software meets its
requirements.
• For custom software, this means that there should be at least one test for every
requirement in the requirements document. For generic software products, it means
that there should be tests for all of the system features, plus combinations of these
features, that will be incorporated in the product release.
• To discover situations in which the behavior of the software is incorrect, undesirable or
does not conform to its specification.
• Defect testing is concerned with rooting out undesirable system behavior such as
system crashes, unwanted interactions with other systems, incorrect computations
and data corruption.
Validation And Defect Testing
• Validation testing
• To demonstrate to the developer and the system customer
that the software meets its requirements
• A successful test shows that the system operates as
intended.
• Defect testing
• To discover faults or defects in the software where its
behavior is incorrect or not in conformance with its
specification
• A successful test is a test that makes the system perform
incorrectly and so exposes a defect in the system.
An Input-output Model of Program Testing
Verification vs Validation
• Verification:
"Are we building the product right?"
• The software should conform to its specification.
• Validation:
"Are we building the right product?"
• The software should do what the user really requires.
V & V confidence
• Aim of V & V is to establish confidence that the system is ‘fit for purpose’.
• Depends on system’s purpose, user expectations and marketing
environment
• Software purpose
• The level of confidence depends on how critical the software is to
an organisation.
• User expectations
• Users may have low expectations of certain kinds of software.
• Marketing environment
• Getting a product to market early may be more important than
finding defects in the program.
Inspections and Testing
• Involve people examining the source representation with the aim of discovering
anomalies and defects
• Do not require execution of a system
► May be used before implementation.
• May be applied to any representation of the system
► Requirements, design, test data, etc.
• Very effective technique for discovering errors
• Many different defects may be discovered in a single inspection
► In testing, one defect may mask another so several executions are required.
• Reuse of domain and programming knowledge
► Reviewers are likely to have seen the types of error that commonly arise.
Development Testing
• Development testing includes all testing activities that are carried out by
the team developing the system.
• Unit testing, where individual program units or object classes are
tested. Unit testing should focus on testing the functionality of objects
or methods.
• Component testing, where several individual units are integrated to
create composite components. Component testing should focus on
testing component interfaces.
• System testing, where some or all of the components in a system are
integrated and the system is tested as a whole. System testing should
focus on testing component interactions.
Unit Testing: The Weather Station Example
• Need to define test cases for reportWeather, calibrate, test, startup and
shutdown.
• Using a state model, identify sequences of state transitions to be tested
and the event sequences to cause these transitions
• For example:
• Shutdown -> Running-> Shutdown
• Configuring-> Running-> Testing -> Transmitting -> Running
• Running-> Collecting-> Running-> Summarizing -> Transmitting ->
Running
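These transition sequences can be exercised against a simple state-machine sketch. The event names and transition table below are illustrative assumptions, not the real weather station design:

```python
# Minimal state-machine sketch of the weather station (illustrative only).
class WeatherStation:
    # (current state, event) -> next state; events are assumed names.
    TRANSITIONS = {
        ("Shutdown", "restart"): "Running",
        ("Running", "shutdown"): "Shutdown",
        ("Configuring", "reconfigure_done"): "Running",
        ("Running", "test"): "Testing",
        ("Testing", "transmit"): "Transmitting",
        ("Transmitting", "done"): "Running",
        ("Running", "collect"): "Collecting",
        ("Collecting", "done"): "Running",
        ("Running", "summarize"): "Summarizing",
        ("Summarizing", "transmit"): "Transmitting",
    }

    def __init__(self, state="Shutdown"):
        self.state = state

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state


def run_sequence(start, events):
    """Drive the station through a sequence of events; return states visited."""
    ws = WeatherStation(start)
    return [ws.handle(e) for e in events]


# Shutdown -> Running -> Shutdown
assert run_sequence("Shutdown", ["restart", "shutdown"]) == ["Running", "Shutdown"]
# Running -> Collecting -> Running -> Summarizing -> Transmitting -> Running
assert run_sequence(
    "Running", ["collect", "done", "summarize", "transmit", "done"]
) == ["Collecting", "Running", "Summarizing", "Transmitting", "Running"]
```

Each test case here is a sequence of events derived from the state model; an attempted transition not in the model is itself a useful (defect-revealing) test input.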
Automated Testing
• Whenever possible, unit testing should be automated so that tests are run
and checked without manual intervention.
• In automated unit testing, you make use of a test automation framework
(such as JUnit) to write and run your program tests.
• Unit testing frameworks provide generic test classes that you extend to
create specific test cases. They can then run all of the tests that you have
implemented and report, often through some GUI, on the success or
otherwise of the tests.
Automated Test Components
1. A setup part, where you initialize the system with the test case, namely
the inputs and expected outputs.
2. A call part, where you call the object or method to be tested.
3. An assertion part, where you compare the result of the call with the
expected result. If the assertion evaluates to true, the test has been
successful; if false, it has failed.
Unit Test Effectiveness
• The test cases should show that, when used as expected, the component
that you are testing does what it is supposed to do.
• If there are defects in the component, these should be revealed by test
cases.
• This leads to two types of unit test case:
• The first of these should reflect normal operation of a program and
should show that the component works as expected.
• The other kind of test case should be based on testing experience of
where common problems arise. It should use abnormal inputs to check
that these are properly processed and do not crash the component.
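Both kinds of test case can be sketched against a hypothetical component, `parse_temperature`, invented here for illustration:

```python
# Hypothetical component: parse a temperature reading string into Celsius.
def parse_temperature(text):
    value = float(text)             # raises ValueError on malformed input
    if not -90.0 <= value <= 60.0:  # plausible surface-temperature range
        raise ValueError(f"temperature out of range: {value}")
    return value


# Type 1: normal operation -- the component does what it is supposed to do.
assert parse_temperature("21.5") == 21.5

# Type 2: abnormal inputs -- these should be cleanly rejected, not crash
# the component or corrupt its state.
for bad in ["", "abc", "9999"]:
    try:
        parse_temperature(bad)
        raise AssertionError(f"expected rejection of {bad!r}")
    except ValueError:
        pass  # properly reported as an error
```

The abnormal-input cases encode testing experience: empty strings, non-numeric text, and out-of-range values are the kinds of inputs where components commonly fail.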
Testing Strategies
• Partition testing, where you identify groups of inputs that have common
characteristics and should be processed in the same way.
• You should choose tests from within each of these groups.
• Guideline-based testing, where you use testing guidelines to choose test
cases.
• These guidelines reflect previous experience of the kinds of errors that
programmers often make when developing components.
Partition Testing
• Input data and output results often fall into different classes where all
members of a class are related.
• Each of these classes is an equivalence partition or domain where the
program behaves in an equivalent way for each class member.
• Test cases should be chosen from each partition.
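A minimal partition-testing sketch, assuming a hypothetical rule that valid usernames are 4 to 10 characters long:

```python
# Hypothetical specification: a username is valid if it is 4-10 characters.
def username_ok(name):
    return 4 <= len(name) <= 10


# The length input falls into three equivalence partitions:
#   too short (< 4) | valid (4..10) | too long (> 10)
# Choose tests from inside each partition and at its boundaries.
assert username_ok("abcd")        # lower boundary of the valid partition
assert username_ok("abcdef")      # value from the middle of the partition
assert username_ok("a" * 10)      # upper boundary of the valid partition
assert not username_ok("abc")     # "too short" partition, at the boundary
assert not username_ok("a" * 11)  # "too long" partition, at the boundary
```

Because the program behaves equivalently for every member of a partition, one mid-partition value plus the boundary values gives good coverage with few tests.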
Equivalence Partitioning
Equivalence Partitions
Testing Guidelines (Sequences)
• Choose inputs that force the system to generate all error messages
• Design inputs that cause input buffers to overflow
• Repeat the same input or series of inputs numerous times
• Force invalid outputs to be generated
• Force computation results to be too large or too small.
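Two of these guidelines (repeating inputs, and forcing the error path) can be sketched for a hypothetical validation function:

```python
# Hypothetical component: a code is valid if it is exactly 4 digits.
def is_valid(code):
    return len(code) == 4 and code.isdigit()


# Guideline: repeat the same input many times. The answer must not drift --
# drift would indicate hidden state or resource leakage in the component.
assert all(is_valid("1234") for _ in range(1000))

# Guideline: force the invalid/error path with malformed inputs.
assert not is_valid("12a4")
assert not is_valid("")
```

Guidelines like buffer overflow and extreme computation results are harder to show portably, but follow the same pattern: deliberately push the component toward its limits rather than exercising only comfortable inputs.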
Component Testing
• Interface misuse
• A calling component calls another component and makes an error in its
use of its interface e.g. parameters in the wrong order.
• Interface misunderstanding
• A calling component embeds assumptions about the behaviour of the
called component which are incorrect.
• Timing errors
• The called and the calling component operate at different speeds and
out-of-date information is accessed.
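Interface misuse, the first of these error classes, can be sketched with hypothetical `transfer` functions (all names here are invented for illustration):

```python
# Interface with positional parameters: nothing stops a caller from
# passing them in the wrong order.
def transfer(amount, account_id):
    return {"account": account_id, "amount": amount}


# Interface misuse: the caller swaps the parameters -- no crash, just
# silently wrong data, which is exactly why such defects are hard to find.
bad = transfer("ACC-1", 250)
assert bad["amount"] == "ACC-1"  # the "amount" is now an account id

# A more defensive interface: keyword-only parameters plus a type check
# turn the misuse into an immediate, visible error.
def transfer_safe(*, amount, account_id):
    if not isinstance(amount, (int, float)):
        raise TypeError("amount must be numeric")
    return {"account": account_id, "amount": amount}


good = transfer_safe(amount=250, account_id="ACC-1")
assert good == {"account": "ACC-1", "amount": 250}
```

Interface tests should deliberately attempt this kind of misuse; a well-designed interface fails loudly rather than accepting mis-ordered or mis-typed parameters.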
Interface Testing Guidelines
A Usage Scenario for the MHC-PMS
Kate is a nurse who specializes in mental health care. One of her responsibilities is to visit patients at home to check
that their treatment is effective and that they are not suffering from medication side-effects. On a day for home
visits, Kate logs into the MHC-PMS and uses it to print her schedule of home visits for that day, along with
summary information about the patients to be visited. She requests that the records for these patients be
downloaded to her laptop. She is prompted for her key phrase to encrypt the records on the laptop. One of the
patients that she visits is Jim, who is being treated with medication for depression. Jim feels that the medication is
helping him but believes that it has the side-effect of keeping him awake at night. Kate looks up Jim’s record and is
prompted for her key phrase to decrypt the record. She checks the drug prescribed and queries its side effects.
Sleeplessness is a known side effect so she notes the problem in Jim’s record and suggests that he visits the clinic to
have his medication changed. He agrees so Kate enters a prompt to call him when she gets back to the clinic to
make an appointment with a physician. She ends the consultation and the system re-encrypts Jim’s record. After
finishing her consultations, Kate returns to the clinic and uploads the records of patients visited to the database.
The system generates a call list for Kate of those patients who she has to contact for follow-up information and
make clinic appointments.
Performance Testing
User Testing
• Alpha testing
• Users of the software work with the development team to test the
software at the developer’s site.
• Beta testing
• A release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system
developers.
• Acceptance testing
• Customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer
environment. Primarily for custom systems.
The Acceptance Testing Process
Stages In The Acceptance Testing Process