Chapter 8:

Software Testing

Program Testing

 Testing is intended to show that a program does what it is
intended to do and to discover program defects before it is put
into use.
 When you test software, you execute a program using
artificial data.
 You check the results of the test run for errors, anomalies or
information about the program’s non-functional attributes.
 Testing can reveal the presence of errors, NOT their absence.
 Testing is part of a more general verification and validation
process, which also includes static validation techniques.

Program Testing Goals

 To demonstrate to the developer and the customer that
the software meets its requirements.
 For custom software, this means that there should be at least
one test for every requirement in the requirements document.
 For generic software products, it means that there should be
tests for all of the system features, plus combinations of these
features, that will be incorporated in the product release.
 To discover situations in which the behavior of the
software is incorrect, undesirable or does not
conform to its specification.
 Defect testing is concerned with rooting out undesirable system
behavior such as system crashes, unwanted interactions with
other systems, incorrect computations and data corruption.
Validation and Defect Testing

 The first goal leads to validation testing
 You expect the system to perform correctly using a given set of
test cases that reflect the system’s expected use.
 The second goal leads to defect testing
 The test cases are designed to expose defects. The test cases
in defect testing can be deliberately obscure and need not reflect
how the system is normally used.

Testing Process Goals

 Validation Testing
 To demonstrate to the developer and the system customer that
the software meets its requirements
 A successful test shows that the system operates as intended.
 Defect Testing
 To discover faults or defects in the software where its behavior is
incorrect or not in conformance with its specification
 A successful test is a test that makes the system perform
incorrectly and so exposes a defect in the system.
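The contrast between the two goals can be sketched in code. A minimal Python illustration (the `divide` function and its flaw are invented for the example, not taken from the slides):

```python
def divide(a, b):
    """Deliberately naive: no guard against b == 0."""
    return a / b

# Validation test: reflects expected use; it "succeeds" when the
# system behaves as intended.
assert divide(10, 2) == 5

# Defect test: a deliberately awkward input; it "succeeds" when it
# makes the system misbehave, exposing the missing zero check.
try:
    divide(1, 0)
    defect_exposed = False
except ZeroDivisionError:
    defect_exposed = True

assert defect_exposed
```

Note the inverted meaning of "success": the validation test succeeds by passing, the defect test succeeds by provoking a failure.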

An Input-Output Model of Program Testing

Verification vs. Validation

 Verification:
“Are we building the product right?”
The software should conform to its specification.
 Validation:
“Are we building the right product?”
The software should do what the user really requires.

V & V Confidence

 Aim of V & V is to establish confidence that the system is
‘fit for purpose’.
 Depends on system’s purpose, user expectations and
marketing environment
 Software purpose
• The level of confidence depends on how critical the software is to an
organization.
 User expectations
• Users may have low expectations of certain kinds of software.
 Marketing environment
• Getting a product to market early may be more important than
finding defects in the program.

Inspections and Testing

 Software Inspections: Concerned with analysis of the
static system representation to discover problems
(static verification)
 May be supplemented by tool-based document and code
analysis.
 Software Testing: Concerned with exercising and
observing product behaviour (dynamic verification)
 The system is executed with test data and its operational
behaviour is observed.

Inspections and Testing

Software Inspections

 These involve people examining the source representation
with the aim of discovering anomalies and defects.
 Inspections do not require execution of a system so may
be used before implementation.
 They may be applied to any representation of the system
(requirements, design, configuration data, test data, etc.).
 They have been shown to be an effective technique for
discovering program errors.

Advantages of Inspections

 Because inspection is a static process, you don’t have to
be concerned with interactions between errors. During
testing, errors can mask (hide) other errors, but not in
inspection.
 Incomplete versions of a system can be inspected
without additional costs. If a program is incomplete, then
you need to develop specialized test harnesses to test
the parts that are available.
 In addition to searching for program defects, an
inspection can also consider broader quality attributes of
a program, such as compliance with standards,
portability and maintainability.
Inspections and Testing

 Inspections and testing are complementary and not
opposing verification techniques.
 Both should be used during the V & V process.
 Inspections can check conformance with a specification
but not conformance with the customer’s real
requirements.
 Inspections cannot check non-functional characteristics
such as performance, usability, etc.

A Model of the Software Testing Process

Stages of Testing

 Development testing, where the system is tested during
coding to discover bugs and defects.
 In the development environment
 Release testing, where a separate testing team tests a
complete version of the system before it is released to users.
 In a controlled environment
 User testing, where users or potential users of a system
test the system in their own environment.
 In the production environment

Development Testing

 Development testing includes:


 Unit testing, where individual program units or object classes
are tested. Unit testing should focus on testing the functionality
of objects or methods.
 Integration/Component testing, where several individual units
are integrated to create composite components. Component
testing should focus on testing component interfaces.
 System testing, where some or all of the components in a
system are integrated and the system is tested as a whole.
System testing should focus on testing component interactions.

Unit Testing

 Unit testing is the process of testing individual
components in isolation.
 It is a defect testing process.
 Units may be:
 Individual functions or methods within an object
 Object classes with several attributes and methods
 Composite components with defined interfaces used to access
their functionality.
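A tiny Python sketch of the second kind of unit (the `Counter` class is invented for illustration, not from the slides): the unit is an object class, and each check exercises one method in isolation.

```python
class Counter:
    """A trivial unit: one attribute, two methods."""

    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

    def reset(self):
        self.value = 0

# Each check targets a single method of the unit in isolation.
c = Counter()
c.increment()
c.increment()
assert c.value == 2      # increment() behaves as expected

c.reset()
assert c.value == 0      # reset() behaves as expected
```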

Automated Testing
Activity: Who can prepare a complete demo on JUnit?
 It may be considered for a bonus!

 Whenever possible, unit testing should be automated so
that tests are run and checked without manual
intervention.
 In automated unit testing, you make use of a test
automation framework (such as JUnit) to write and run
your program tests.
 Unit testing frameworks (e.g. JUnit) provide generic test
classes that you extend to create specific test cases.
They can then run all of the tests that you have
implemented and report, often through some GUI.
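JUnit itself is a Java framework; the same pattern can be sketched with Python’s standard-library unittest (the `area` function is an invented stand-in for code under test): you extend the framework’s generic test class, and the framework runs every test it finds and reports the results.

```python
import unittest

def area(width, height):
    """Invented code under test."""
    return width * height

# Extend the generic test class to create specific test cases.
class AreaTest(unittest.TestCase):
    def test_unit_square(self):
        self.assertEqual(area(1, 1), 1)

    def test_rectangle(self):
        self.assertEqual(area(3, 4), 12)

# The framework collects and runs all implemented tests and reports
# the outcome (here to the console rather than a GUI).
suite = unittest.TestLoader().loadTestsFromTestCase(AreaTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```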

Automated Test Components
Here is what you need to do for the demo

 A setup part, where you initialize the system with the
test case, namely the inputs and expected outputs.
 A call part, where you call the object or method to be
tested.
 An assertion part, where you compare the result of the
call with the expected result. If the assertion evaluates to
true, the test has been successful; if false, it has failed.
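The three parts can be marked out explicitly; a Python sketch, with an invented `to_celsius` function standing in for the code under test:

```python
def to_celsius(fahrenheit):
    """Invented code under test."""
    return (fahrenheit - 32) * 5 / 9

def test_to_celsius():
    # Setup: initialize the test case -- the input and expected output.
    given = 212
    expected = 100

    # Call: invoke the method being tested.
    actual = to_celsius(given)

    # Assertion: compare the result of the call with the expected
    # result; True means the test passed, False means it failed.
    assert actual == expected

test_to_celsius()
```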

General Testing Guidelines

 Choose inputs that force the system to generate all error
messages
 Design inputs that cause input buffers to overflow
 Repeat the same input or series of inputs numerous
times
 Force invalid outputs to be generated
 Force computation results to be too large or too small.
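Two of these guidelines, sketched against an invented `parse_age` validator (the function and its messages are hypothetical): force every error message out, and push valid inputs to the boundaries.

```python
def parse_age(text):
    """Invented validator with two distinct error messages."""
    if not text.isdigit():
        return "error: not a number"
    age = int(text)
    if age > 130:
        return "error: out of range"
    return age

# Choose inputs that force the system to generate all error messages.
assert parse_age("abc") == "error: not a number"
assert parse_age("999") == "error: out of range"

# Push valid inputs to the extreme ends of the range.
assert parse_age("0") == 0
assert parse_age("130") == 130
```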

Interface Testing

 Objectives are to detect faults due to interface errors or
invalid assumptions about interfaces.
 Interface types
 Parameter interfaces: Data passed from one method or
procedure to another.
 Shared memory interfaces: A block of memory is shared between
procedures or functions.
 Procedural interfaces: A sub-system encapsulates a set of
procedures to be called by other sub-systems.
 Message passing interfaces: Sub-systems request services from
other sub-systems.

Interface Errors

 Interface misuse
 A calling component calls another component and makes an
error in its use of its interface e.g. parameters in the wrong
order.
 Interface misunderstanding
 A calling component embeds assumptions about the behaviour
of the called component which are incorrect.
 Timing errors
 The called and the calling component operate at different
speeds and out-of-date information is accessed.
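A sketch of the first category (the `transfer` function and its field names are invented): interface misuse by passing parameters in the wrong order, which runs without error but silently corrupts the data.

```python
def transfer(amount, account_id):
    """Invented called component: expects (amount, account_id)."""
    return {"account": account_id, "amount": amount}

# Correct use of the interface:
ok = transfer(100, "AC-42")
assert ok == {"account": "AC-42", "amount": 100}

# Interface misuse: same values, wrong parameter order. No exception
# is raised -- the record is just silently wrong, which is exactly the
# kind of fault interface testing tries to expose.
bad = transfer("AC-42", 100)
assert bad == {"account": 100, "amount": "AC-42"}
```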

Interface Testing Guidelines

 Design tests so that parameters to a called procedure
are at the extreme ends of their ranges.
 Always test pointer parameters with null pointers.
 Design tests which cause the component to fail.
 Use stress testing in message passing systems.
 In shared memory systems, vary the order in which
components are activated.
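The first two guidelines, sketched in Python (`find_max` is invented for illustration): parameters at the extreme ends of their ranges, and the null-pointer case (None in Python).

```python
def find_max(values):
    """Invented procedure under test."""
    if values is None:
        raise ValueError("values must not be None")
    return max(values)

# Parameters at the extreme ends of their ranges:
assert find_max([-(2**31), 2**31 - 1]) == 2**31 - 1

# Always test pointer parameters with null pointers (None in Python):
try:
    find_max(None)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```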

System Testing

 System testing during development involves integrating
components to create a version of the system and then
testing the integrated system.
 The focus in system testing is testing the interactions
between components.
 System testing checks that components are compatible,
interact correctly and transfer the right data at the right
time across their interfaces.
 System testing tests the emergent behavior of a system.

System and Component Testing (Integration)

 During system testing, reusable components that have
been separately developed and off-the-shelf systems
may be integrated with newly developed components.
The complete system is then tested.
 Components developed by different team members or
sub-teams may be integrated at this stage. System
testing is a collective rather than an individual process.
 In some companies, system testing may involve a separate
testing team with no involvement from designers and
programmers.

Test-driven Development

 Test-driven development (TDD) is an approach to
program development in which you interleave testing
and code development.
 Tests are written before code and ‘passing’ the tests is
the critical driver of development.
 You develop code incrementally, along with a test for
that increment. You don’t move on to the next increment
until the code that you have developed passes its test.
 TDD was introduced as part of agile methods such as
Extreme Programming. However, it can also be used in
plan-driven development processes.
Test-driven Development Diagram

TDD process Activities (5 Steps)

 Start by identifying the increment of functionality that is
required. This should normally be small and
implementable in a few lines of code.
 Write a test for this functionality and implement this as
an automated test.
 Run the test, along with all other tests that have been
implemented. Initially, you have not implemented the
functionality so the new test will fail.
 Implement the functionality and re-run the test.
 Once all tests run successfully, you move on to
implementing the next chunk of functionality.
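The five activities, compressed into one Python sketch; the increment and the `slugify` name are invented for illustration.

```python
# Step 1: identify a small increment -- lower-case a title and join
# its words with hyphens.

# Step 2: write an automated test for it first.
def test_slugify():
    assert slugify("Software Testing") == "software-testing"

# Step 3: running the test now would fail, because slugify() does not
# exist yet -- exactly what TDD expects of a new test.

# Step 4: implement the functionality...
def slugify(title):
    return "-".join(title.lower().split())

# ...and re-run the test.
test_slugify()

# Step 5: all tests pass, so move on to the next increment.
```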
Benefits of Test-driven Development

 Code coverage
 Every code segment that you write has at least one associated
test, so you can be certain all the code has actually been executed.
 Regression testing
 A regression test suite is developed incrementally as a program
is developed.
 Simplified debugging
 When a test fails, it should be obvious where the problem lies.
The newly written code needs to be checked and modified.
 System documentation
 The tests themselves are a form of documentation that describe
what the code should be doing.
Regression Testing

 Regression testing is testing the system to check that
changes have not ‘broken’ previously working code.
 In a manual testing process, regression testing is
expensive but, with automated testing, it is simple and
straightforward. All tests are rerun every time a change
is made to the program.
 Tests must run ‘successfully’ before the change is
committed.

User Testing

 Alpha testing
 Users of the software work with the development team to test
the software at the developer’s site.
 Beta testing
 A release of the software is made available to users to allow
them to experiment and to raise problems that they discover with
the system developers.
 Acceptance testing
 Customers test a system to decide whether or not it is ready to
be accepted from the system developers and deployed in the
customer environment. Primarily for custom systems.

The Acceptance Testing Process

Stages in the Acceptance Testing Process

 Define acceptance criteria
 Plan acceptance testing
 Derive acceptance tests
 Run acceptance tests
 Negotiate test results
 Reject/accept system

Agile Methods and Acceptance Testing

 In agile methods, the user/customer is part of the
development team and is responsible for making
decisions on the acceptability of the system.
 Tests are defined by the user/customer and are
integrated with other tests in that they are run
automatically when changes are made.
 There is no separate acceptance testing process.
 The main problem here is whether or not the embedded
user is ‘typical’ and can represent the interests of all
system stakeholders.

