Seminar On: Software Testing: A Practitioner's Approach

The document summarizes a seminar on software testing for practitioners. The agenda includes discussing testing skills, experiences, practical viewpoints, issues faced during testing and solutions. It will cover test automation, metrics, and tools. The objectives are to share knowledge rather than teach fundamentals. It discusses why testing is needed, definitions, types of testing at different levels, and provides overviews of key testing concepts.


Date : 06-April-2002

Seminar on
SOFTWARE TESTING:
A PRACTITIONER'S
APPROACH

Software Testing

1 / 202

Date : 06-April-2002

Today's Agenda
Expectations
Testing
Testing Types & Levels
Test Assets
Test Approaches
Aspects of
Automation
Metrics from Testing
Test Tools
Q & A and Wrap-up

Software Testing

AGENDA


2 / 202

Date : 06-April-2002

Objectives
Discussing as a group how to fine-tune testing
skills
Sharing experiences in testing
Providing a practical viewpoint on testing
Discussing issues/problems faced during
testing and possible ways to overcome them
Exploring the use of metrics to reduce testing
effort
Highlighting the use of varied test approaches
depending on the environment
Software Testing

3 / 202

Date : 06-April-2002

Non-Objectives
Not to teach fundamentals of testing
Not to create appreciation for testing
Not to give a ready-to-use formula for all
testing issues

Software Testing

4 / 202

Date : 06-April-2002

Why Test?

Software Testing

Nobody is perfect
Bugs in development tools
Certain bugs easier to find in testing
Bugs found on-site can be disastrous
Post release debugging is expensive
To deliver a Quality product to the
customer meeting the specifications

5 / 202

Date : 06-April-2002

What is Quality?
Producer's view
Meeting requirements

Customer's view
Fitness for purpose

ISO 9000
The totality of characteristics of an entity (product or
service) that bear on its ability to satisfy stated or
implied needs.

Software Testing

6 / 202

Date : 06-April-2002

Meeting Customers' Needs


Do it right the first time
still a dream!
requires perfect process
is it possible?? ..Not yet!!

After-the-fact defect removal


normally seen
test & correct approach
costly

Software Testing

7 / 202

Date : 06-April-2002

Cost of software Defects

Software Testing

8 / 202

Date : 06-April-2002

Introduction to Defect

Software Testing

9 / 202

Date : 06-April-2002

Software Quality
[Diagram: QUALITY at the centre, supported by Software Engineering Methods,
Formal Reviews, Metrics, Standards & Procedures, Testing, and SCM & SQA]

Software Testing

10 / 202

Date : 06-April-2002

Definitions
Defect
A deviation from specification standard
Anything that causes customer dissatisfaction

Verification
All QC activities throughout the life cycle that ensure
that interim deliverables meet their input
specification

Validation
The test phase of the life cycle which assures that the
end product meets the user's needs

Software Testing

11 / 202

Date : 06-April-2002

Some Facts
Testing cycle break-up (effort-wise)
Testing - 20%
Debugging - 80%

Software Testing

12 / 202

Date : 06-April-2002

Points to Ponder
80% of all defects occur in 20% of the work

Software Testing

What is the cost of testing


What is the value of any test case
What is the life of a test case
What to test & how
Can we visualize testing as a process rather
than an art

13 / 202

Date : 06-April-2002

Testing
Testing is the process of exercising or
evaluating a system or system component
by manual or automated means to verify
that it satisfies specified requirements.

Software Testing

14 / 202

Date : 06-April-2002

Testing in IBM Global


Services
Purpose of testing activity is to verify that the
software satisfies the specified requirements.

Software Testing

15 / 202

Date : 06-April-2002

Testing
Testing is a process of executing a program with the intent of finding
errors
A good test case is one that has a high probability of finding an error
A successful test case is one that detects an as-yet-undiscovered
error

Software Testing

16 / 202

Date : 06-April-2002

Testing Involves
Plan for testing
Design test conditions, cases
Develop test data
Create required test environment
Execute tests
Analyze actual results with expected
results
Results: Test passed or failed!

Software Testing

17 / 202

Date : 06-April-2002

Tester's Maturity
(Beizer)
Phase 0 - no difference between testing and
debugging. Test only to support debugging
Phase 1 - Purpose is to show that software works
Phase 2 - Purpose is to show that software does not
work

Software Testing

18 / 202

Date : 06-April-2002

Testers Maturity (contd..)


Phase 3 - Purpose not to prove anything, but to
reduce the perceived risk of not working to an
acceptable value
Phase 4 - Testing is mental discipline resulting in low
risk software without much testing effort

Software Testing

19 / 202

Date : 06-April-2002

What can affect Effective


Testing

Software Testing

Optimism that the system works


Negative attitude towards testing
Ego
I don't want the software to fail
Conflict between testers and developers
Improper test planning & test design
Inexperience & insufficient resources
Low management support

20 / 202

Date : 06-April-2002

What can affect Effective


Testing (contd..)
Cost of Testing
Delivery deadlines

Software Testing

21 / 202

Date : 06-April-2002

The Pesticide Paradox


First law : Every method you use to prevent or
find bugs leaves a residue of subtler bugs
against which those methods are ineffectual
Second law : Software complexity (and
therefore that of the bugs) grows to the limits
of our ability to manage that complexity.

Software Testing

22 / 202

Date : 06-April-2002

Testing and Debugging


Purpose of testing
show that a program has bugs!!

Purpose of debugging
find the error/misconception that led to failure and
implement program changes that correct the error

Software Testing

23 / 202

Date : 06-April-2002

Debugging
Debugging is the act of attempting to
determine the cause of the symptoms of
malfunctions detected by testing or by
frenzied user complaints.

Software Testing

24 / 202

Date : 06-April-2002

Types of Testing
Static testing
Dynamic Testing

Software Testing

White box
Black box
Grey box
Functional
Structural

Regression
Stress
Volume
Performance
Guerilla/Ad-hoc
Timing
Smoke test

25 / 202

Date : 06-April-2002

Types of Testing
(contd..)
Static Testing
Verification performed without executing the
systems code
Code inspection
Reverse Engineering

Dynamic Testing
Verification or validation performed by executing the
systems code

Software Testing

26 / 202

Date : 06-April-2002

Types of Testing (contd..)


Black Box Test
Testing based on external specifications without
knowledge of how the system is constructed

White Box Test


Testing based on knowledge of internal structure and
logic. Usually logic driven.

Grey Box test


Testing based on partial knowledge of internal
structure and logic

Software Testing

27 / 202

Date : 06-April-2002

Types of Testing (contd..)


Functional Test
Test that validate business functional requirements
(what the system is supposed to do)

Structural Test
Tests that validate the system architecture (how the
system was implemented)

Regression Test
Testing after changes have been made to ensure that
no unwanted changes were introduced.

Software Testing

28 / 202

Date : 06-April-2002

Types of Testing (contd..)


Stress Test
Tests that validate the capacity of the system to
handle extremely harsh inputs or load with
inadequate resources

Volume Test
Tests that validate the capacity of the system to
handle extremely large volume of data

Performance Test
Testing to check that the system behavior is as
expected in terms of its performance

Software Testing

29 / 202

Date : 06-April-2002

Types of Testing (contd..)


Guerrilla/Ad-hoc Test
Tests that are not driven by a pre-defined path.
Generally done by experienced & novice users

Timing Test
Response time related testing

Smoke Test
Surface test; quick check

Configuration Test
To test that the software is in one piece

Software Testing

30 / 202

Date : 06-April-2002

Types of Testing (contd..)


Some more..
Exception Test
To specifically take care of robustness of software in
case of exceptions

Progressive Test
Testing of new features after regression testing of
previous features

Suspicion Test
Testing suspect components of the system: new programmer,
new technology, new domain

Software Testing

31 / 202

Date : 06-April-2002

Types of Testing (contd..)


Levels of testing
Unit testing
Integration testing
System testing
Acceptance testing
Alpha testing
Beta testing

Software Testing

32 / 202

Date : 06-April-2002

Types of Testing (contd..)


Unit Testing
individual software units or groups of related units

Integration Testing
collection of units to test the interaction among them
(within a sub-system)

Software Testing

33 / 202

Date : 06-April-2002

Types of Testing (contd..)


System Testing
entire software system to be tested for compliance
with specified requirements

Acceptance Testing
complete integrated system to evaluate fitness of
use (users viewpoint)

Software Testing

34 / 202

Date : 06-April-2002

More Definitions
Software Item
Object code, job control code, control data, or a
collection of these items; e.g. COM file, .EXE file,
executable file in UNIX, Shell script.

Test Item
A software item which is the object of testing.

Software Testing

35 / 202

Date : 06-April-2002

Definitions (contd..)
Test Case Specification
Specification of initial conditions, inputs and
expected result(s) for test item(s); Also known as test
assertions.

Test Program
Code which is used to perform the test case
specification.

Software Testing

36 / 202

Date : 06-April-2002

Definitions (contd..)
Test Case
Test case specification and associated test
program(s).

Test Procedure
A sequence of steps specifies how to execute a test.
It can be manual or automated or a combination of
both.

Software Testing

37 / 202

Date : 06-April-2002

Definitions (contd..)
Test
When used as a noun it denotes,
A set of one or more test cases
A set of one or more test procedures
A set of one or more test cases and procedures

When used as a verb it denotes,


The process of verifying the behaviour of an item
against its specification.

Software Testing

38 / 202

Date : 06-April-2002

Definitions (contd..)
Regression Test Bucket
Set of tests to be executed after software is
changed, to show that the software's
behaviour is unchanged except insofar as
required by the change to the software itself.
The Test Coverage Matrix can be used as the
input for building the Regression Test Bucket.

Software Testing

39 / 202
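A minimal Python sketch of the idea above: selecting a regression bucket from a test coverage matrix. The matrix, test-case IDs and module names here are hypothetical, purely for illustration.

# Sketch: pick a regression bucket from a (hypothetical) test coverage matrix
# that maps each test case to the modules it exercises.
coverage_matrix = {
    "TC-01": {"login", "session"},
    "TC-02": {"billing"},
    "TC-03": {"billing", "reports"},
    "TC-04": {"login"},
}

def regression_bucket(changed_modules):
    # Select every test case that touches at least one changed module.
    return sorted(tc for tc, modules in coverage_matrix.items()
                  if modules & set(changed_modules))

print(regression_bucket({"billing"}))   # ['TC-02', 'TC-03']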

Date : 06-April-2002

Definitions (contd..)
Test Plans
Test Plans specify the test case specifications that
will be tested for a specific level of testing. Plans
also contain other information about resources,
schedules, etc.

Software Testing

40 / 202

Date : 06-April-2002

Unit Testing
Unit testing is done to show that the unit does
not satisfy the functional specification and/or
its implemented structure does not match the
intended design structure

Software Testing

41 / 202

Date : 06-April-2002

What is a Unit?
A unit is a testable piece of software, that
can be built and executed under the control
of a test harness or driver. Unit could be a
functionality, subsystem, function, or as
defined by the customer.

Software Testing

42 / 202
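As an illustrative sketch of a unit under test and its harness, here is a minimal Python example using the standard unittest module. The unit (a discount calculation) and its expected behaviour are assumptions made up for this example.

# Minimal sketch of a unit and its test harness (unittest acts as the driver).
import unittest

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()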

Date : 06-April-2002

ETVX

[Diagram: the ETVX model — Entry criteria, Tasks/Activities, Verification, and
Exit criteria applied to each candidate work product]

Software Testing

43 / 202

Date : 06-April-2002

Activities in Unit Testing

Software Testing

Develop Unit Test Plan


Design Unit Test Cases
Develop Unit Test Cases
Execute Unit Test Cases

44 / 202

Date : 06-April-2002

Develop Unit Test Plan


Entry Criteria
An approved Detailed Design (Program Specs.) is
available

Validation
Unit Test Plan (UTP) is reviewed

Exit Criteria
UTP is reviewed and approved

Software Testing

45 / 202

Date : 06-April-2002

Design Unit Test Cases


Entry Criteria
UTP is available
An approved Detailed Design (Program Specs.) is
available

Validation
Design of Unit Test Cases is reviewed

Exit Criteria
Unit Test Case design is reviewed and approved

Software Testing

46 / 202

Date : 06-April-2002

Develop Unit Test Cases


Entry Criteria
Design of Unit Test Cases is available

Validation
Unit Test Cases are reviewed

Exit Criteria
Unit Test Cases are reviewed and approved

Software Testing

47 / 202

Date : 06-April-2002

Execute Unit Test Cases


Entry Criteria
Approved Unit Test Cases are available
Inspected code is available

Validation
Defect Report reviewed

Exit Criteria
Test items passed the unit testing as per pass/fail
criteria in the UTP
Unit testing summary details are available in the
Defect Report form for final cycle

Software Testing

48 / 202

Date : 06-April-2002

Test Drivers / Stubs


A test driver simulates a calling component or
external environment
a test stub simulates a called component.
Why are stubs and drivers required?

Software Testing

49 / 202
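A small Python sketch of the driver/stub idea: the component names (an invoice routine calling a tax service) are hypothetical and stand in for whatever calling and called components the unit really has.

# Sketch: a driver simulates the calling environment, a stub simulates a called component.
def send_invoice(amount, tax_service):
    tax = tax_service.tax_for(amount)      # call into a lower-level component
    return {"amount": amount, "tax": tax, "total": amount + tax}

class TaxServiceStub:
    """Stub: stands in for the called component and returns a canned answer."""
    def tax_for(self, amount):
        return 10.0

def driver():
    """Driver: stands in for the calling component and runs the unit test."""
    result = send_invoice(100.0, TaxServiceStub())
    assert result["total"] == 110.0
    print("send_invoice unit test passed")

if __name__ == "__main__":
    driver()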

Date : 06-April-2002

Stubs and Drivers


[Diagram: a three-level call hierarchy — component A at the top, components
B1, B2, B3 below it, and C1 to C5 at the lowest level — showing where drivers
and stubs are needed when units are tested in isolation]

Software Testing

50 / 202

Date : 06-April-2002

Integration Testing
Integration testing is done to show that even
though the units were individually tested OK,
the integration is incorrect or inconsistent.

Software Testing

51 / 202

Date : 06-April-2002

Typical Integration
Problems

Software Testing

Configuration / version control


I/O format, protocol mismatches
Conflicting data views / usage
Data integrity problems
Wrong call order / wrong parameters
Missing / overlapping functions
Resource problems (memory, etc..)

52 / 202

Date : 06-April-2002

Activities in Integration
Testing

Software Testing

Develop Integration Test Plan


Design Integration Test Cases
Develop Integration Test Cases
Execute Integration Test Cases

53 / 202

Date : 06-April-2002

Develop Integration
Test Plan

Entry Criteria
Reviewed and approved SRS document is available
A draft Design Document is available

Validation
Integration Test Plan (ITP) is reviewed

Exit Criteria
An approved Design Document is available
ITP is reviewed and approved

Software Testing

54 / 202

Date : 06-April-2002

Design Integration Test


Cases
Entry Criteria
ITP is available

Validation
Design of Integration Test Cases is reviewed

Exit Criteria
Integration Test Case design is reviewed and
approved

Software Testing

55 / 202

Date : 06-April-2002

Develop Integration Test


Cases
Entry Criteria
Design of Integration Test Cases is available

Validation
Integration Test Cases are reviewed

Exit Criteria
Integration Test Cases are reviewed and approved

Software Testing

56 / 202

Date : 06-April-2002

Execute Integration Test


Cases
Entry Criteria
approved Integration Test Cases are available
Unit Tested code is available

Validation
Defect Log reviewed

Exit Criteria
All defects found in the test items are logged
Test items passed the integration testing as per
pass/fail criteria in the ITP

Software Testing

57 / 202

Date : 06-April-2002

System Testing
System testing focuses on items that cannot
be attributed to a single component, and aims
to uncover inconsistencies between
components or in their planned interactions.
E.g., Performance, Security, Recovery

Software Testing

58 / 202

Date : 06-April-2002

Activities in System
Testing

Software Testing

Develop System Test Plan


Design System Test Cases
Develop System Test Cases
Execute System Test Cases

59 / 202

Date : 06-April-2002

Develop System Test Plan


Entry Criteria
A draft SRS and acceptance criteria is available

Validation
System Test Plan (STP) is reviewed

Exit Criteria
An approved SRS is available
STP is reviewed and approved

Software Testing

60 / 202

Date : 06-April-2002

Design System Test Cases


Entry Criteria
STP is available

Validation
Design of System Test Cases is reviewed

Exit Criteria
System Test Case design is reviewed and approved

Software Testing

61 / 202

Date : 06-April-2002

Develop System Test Cases


Entry Criteria
Design of System Test Cases is available

Validation
System Test Cases are reviewed

Exit Criteria
System Test Cases are reviewed and approved
System Test Cases are placed under Configuration
Control

Software Testing

62 / 202

Date : 06-April-2002

Execute System Test Cases


Entry Criteria
Approved System Test Cases are available
Unit and/or Integration Tested and baselined code is
available

Validation
Defect Log reviewed

Exit Criteria
All defects found in the test items are logged
Test items passed the system testing as per
pass/fail criteria in the STP

Software Testing

63 / 202

Date : 06-April-2002

Activity and Logic Flow at


IBM

Software Testing

64 / 202

Date : 06-April-2002

Roles and Responsibilities


Project Leader
Overall responsibility for conducting testing
Identifying the software items to be tested
Identifying test team members and allocating
specific units to be tested
Organizing review of system test plan and test cases
by peers and SQA
Approving test plans and test cases
Determining if the test items have successfully
passed the testing phase and then terminating
testing

Software Testing

65 / 202

Date : 06-April-2002

Roles and Responsibilities


(contd..)
Test Team Members
Developing test plan
Designing and developing the test cases
Organize review of unit and integration test cases by
peer test team members
Executing the test cases and logging defects.

Software Testing

66 / 202

Date : 06-April-2002

Roles and Responsibilities


(contd..)
Developers
Supplying test items to test teams
Co-operating with them in debugging failed test
cases
Fixing the defects in the test items in a timely
manner

Software Testing

67 / 202

Date : 06-April-2002

Roles and Responsibilities


(contd..)
SQA
Review system test plan and system test results

Software Testing

68 / 202

Date : 06-April-2002

Check list for Test Plan


Review
Minimal Checklist for Test Plan Review
1. Are the test items identified clearly?
2. Is the scope of testing correctly identified?
3. Are all the pertinent types of tests conducted?
4. Is the intended coverage in tune with the Quality Plan?
5. Is the test strategy feasible and viable?
6. Are proper test tools used?
7. Is the verification mechanism for test outputs correct?
8. Is there sufficient automation of tests to repeat the tests?
9. Is the pass/fail criteria sufficient for the intended use of
the software?
Software Testing

69 / 202

Date : 06-April-2002

Check list for Test Plan


Review (contd..)
10. Are there test case specifications for each
requirement/feature given
in SRS?
11. For each test case specification, is the expected
output documented
clearly?
12. Is there sufficient test coverage to provide
confidence that the
software is adequately tested?
13. Are the schedule and resource requirements
feasible?
14. Is the documentation sufficiently clear?
Software Testing

70 / 202

Date : 06-April-2002

Check list for Test Plan


Review (contd..)
Additional Check list for Test Plan Review
1. Is there a traceability matrix?
2. Is there a mechanism to uniquely identify each test
case specification?
3. Is the test environment (both hardware and
software) documented clearly?
4. Is there a better strategy?
5. Can the level of automation be improved?
6. Are tools being used properly to make the whole
process more efficient?

Software Testing

71 / 202

Date : 06-April-2002

Check list for Test Case


Review
Minimal Check list for Test case / Program Review
1. Will the identified test case specifications help in achieving the
intended test coverage specified in the Test Plan?
2. Are the test programs and test procedures correctly written?
3. Does each test program specify the test case ids to be executed?
4. Does the test program correctly set up the environment? If it does
not, does it clearly identify what setup should be done prior to
execution?
5. Does the test program correctly implement the test case specs?
6. Is the verification of successful completion correct?
7. Does the test program handle abnormal conditions gracefully?
8. Does the program clean up the environment prior to exiting (in
both normal and abnormal conditions)?

Software Testing

72 / 202

Date : 06-April-2002

Check list for Test Case


Review (contd)
Additional Check list for Test case / Program
Review
1. Are the test procedures automated enough?
2. Does the test program report progress messages as the test
programs are executed?
3. Does the test program automatically verify the actual
output with expected output and inform the result?

Software Testing

73 / 202

Date : 06-April-2002

Check List for Executing the


Test Cases
1. Have all the tests been conducted?
2. Ensure that summary details are filled up in Defect
Report for each round of tests.

Software Testing

74 / 202

Date : 06-April-2002

Checklist for Integration Test


Exit Review
Items marked with an asterisk (*) are particularly
important in making this decision.
1. All the software that was to be developed is complete.(*)
2. The inter-component interfaces are completely verified. If
any interface remains unverified, plan to verify it at an
early stage of system/acceptance test.
3. All the planned test cases have been executed.
4. The number of remaining defects is within the
predetermined criteria as specified in the Test plan.
5. Capacity and performance measurements planned for
this module are complete. There should be no risk of
overflow.

Software Testing

75 / 202

Date : 06-April-2002

Checklist for Int. Test Exit


Review (contd..)
6. A draft of the operations manual is complete (*)
7. A detailed plan for system tests and the test
environment have been prepared.
8. Most of the necessary test data and databases are
available.(*)
9. A regression test environment has been prepared.(*)
10. A system/acceptance test plan and a deployment test
plan have been developed.
Agreement has been reached with the client's end-user
departments (or client sponsor) concerning the system
environment, organization, roles, authority, and
responsibilities of the test team and the participants (client's
end-user departments and client sponsor).(*)

Software Testing

76 / 202

Date : 06-April-2002

Checklist for Int. Test Exit


Review (contd..)

11. The deployment criteria have been broken down into


essential items, and adequate standard values have been
defined for each item, including:(*)
Quality requirements
Capacity and performance requirements
Operational requirements
Migration requirements
Maintenance and expandability requirements
Contract fulfilment requirements
In cases where the subsystem integration test is not
executed, and the items were not completed as part of
component integration test, the items will be added to the
completion criteria for the solution generation exit review
Software Testing

77 / 202

Date : 06-April-2002

Acceptance Testing
Testing for implied requirement
Evaluating fitness of use
Should not find bugs which should have been
found in earlier testing phases

Software Testing

78 / 202

Date : 06-April-2002

Alpha Testing
At developer site - by the customer
developer records bugs and usage problems
controlled environment

Software Testing

79 / 202

Date : 06-April-2002

Beta Testing

Software Testing

at one/more customer sites by end user


developer not present
near live situation
customer records problems and conveys to
developer

80 / 202

Date : 06-April-2002

Suspicion testing

Software Testing

When the programmer is less experienced


Component with high failure rate
Late change order
Designer / engineer feel uneasy
Multi-condition coverage

81 / 202

Date : 06-April-2002

Cross platform testing


Navigation testing
Requirements testing
same test on each platform
platform specific tests on respective
platforms

Software Testing

82 / 202

Date : 06-April-2002

Cross platform testing


(contd..)
Regression testing
Ad-hoc Testing (Guerilla Testing)
by experienced testers on respective platforms
(platform anomalies)
by novice users

Software Testing

83 / 202

Date : 06-April-2002

Test Assets

Software Testing

84 / 202

Date : 06-April-2002

Test Repository

Software Testing

Test Strategy
Test Plans
Test Scripts, Test Environment and Test Data
Test Results / Test Log
Defect Log

85 / 202

Date : 06-April-2002

Test Assets

Software Testing

Acceptance Test Plan


System Test Plan
Integration Test Plan
Unit Test Plans
Regression Test Plan

86 / 202

Date : 06-April-2002

Test Plans
Ideally test plans should be prepared as soon
as the corresponding document in the
development cycle is produced.
The preparation of the test plan itself validates
the document in the development cycle.

Software Testing

87 / 202

Date : 06-April-2002

Test Cases
Describe specific functional capability or
feature which needs to be validated.
Based on system specifications / users
production environment / other available lists.
Should also describe the Pass/Fail Criteria

Software Testing

88 / 202

Date : 06-April-2002

Pass/Fail Criteria
Criteria to be used to determine if the test
item has passed or failed testing
e.g.,
Testing should achieve at least 85% code coverage
No critical/Serious defects found in system test case
execution

!! Is it a Bug or a Feature!!
Software Testing

89 / 202

Date : 06-April-2002

Traceability to Requirements
Check necessary and sufficient condition for every
test case
Every requirement must be completely addressed
by one or more test cases
Every test case must address one or more
requirements fully or in part
Good to build a traceability matrix
Peer reviews check completeness

Software Testing

90 / 202
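A minimal Python sketch of the two traceability checks described above; the requirement and test-case IDs are hypothetical.

# Sketch: every requirement covered by at least one test case,
# and every test case traceable to at least one requirement.
trace = {
    "TC-01": ["REQ-1"],
    "TC-02": ["REQ-2", "REQ-3"],
    "TC-03": [],            # a test case with no requirement - should be flagged
}
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

covered = {req for reqs in trace.values() for req in reqs}
uncovered_requirements = requirements - covered
untraced_tests = [tc for tc, reqs in trace.items() if not reqs]

print("Requirements with no test case:", uncovered_requirements)  # {'REQ-4'}
print("Test cases with no requirement:", untraced_tests)          # ['TC-03']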

Date : 06-April-2002

Test Data
This is the data that is required to run and
verify the test. Test data includes:
Initial database contents
Data to be transacted
Expected results

Software Testing

91 / 202

Date : 06-April-2002

Test Environment
Environment under which testing takes place.
Typical points under environment are:
Operating system
Start state of the system
Single user / multi user
Database state

Software Testing

92 / 202

Date : 06-April-2002

Test Script
Test script contains the step by step procedure
comprising the action to be taken and the verification
of results expected.
Test scripts could be manual or automated. It is easy
to automate the test scripts relating to batch/report
programs.
Tools are available to automate the scripting of online programs also.
One script may test for one or more test conditions

Software Testing

93 / 202
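A sketch of an automated test script as a step-by-step procedure — prepare data, run the program, verify expected results. The report program is a stand-in written inside the example so it is self-contained; a real script would drive the actual batch or online program.

# Sketch of an automated test script: actions plus verification of expected results.
def report_program(rows):
    """Stand-in for the program under test: totals the qty column."""
    return {"total": sum(qty for _, qty in rows)}

def test_report_totals():
    # Step 1: set up test data (initial database contents / input file).
    rows = [("A", 2), ("B", 3)]
    # Step 2: action - run the program under test.
    actual = report_program(rows)
    # Step 3: verification - compare actual results with expected results.
    expected = {"total": 5}
    assert actual == expected, f"expected {expected}, got {actual}"
    print("PASS: report totals")

if __name__ == "__main__":
    test_report_totals()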

Date : 06-April-2002

Test Script
Detailed, complete specification of all aspects
of a test including initialization, data,
keystrokes, etc.
In principle, a script can be executed by an
idiot or a computer

Software Testing

94 / 202

Date : 06-April-2002

Test Results / Test Log


When a test is run, the actual results are
compared with the expected results. The test
log should contain pass or fail status of
various tests.
Test logs also contain the actual results in case
of fail. This provides a basis for the
debugging effort.

Software Testing

95 / 202

Date : 06-April-2002

Defect Log
Every defect that is found during testing is
logged in a defect log. The defect log can be
used to track and close the defect and also to
perform statistical analysis
Statistical analysis of defects can be used to
identify the root causes of the defects and help
in improving the development processes.

Software Testing

96 / 202
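A small Python sketch of the kind of statistical analysis a defect log enables; the records and field names are made-up examples.

# Sketch: a defect log and simple counts by severity and root cause.
from collections import Counter

defect_log = [
    {"id": "D-001", "severity": "critical", "root_cause": "requirements", "status": "open"},
    {"id": "D-002", "severity": "minor",    "root_cause": "coding",       "status": "closed"},
    {"id": "D-003", "severity": "major",    "root_cause": "coding",       "status": "closed"},
]

by_severity = Counter(d["severity"] for d in defect_log)
by_root_cause = Counter(d["root_cause"] for d in defect_log)
still_open = [d["id"] for d in defect_log if d["status"] == "open"]

print(by_severity)      # one critical, one major, one minor
print(by_root_cause)    # coding: 2, requirements: 1
print(still_open)       # ['D-001']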

Date : 06-April-2002

How Much To Test?

Software Testing

97 / 202

Date : 06-April-2002

Is complete Testing Possible?


Proving that program is bug-free
practically impossible

what if the verification system itself has bugs!


Can any verification system confirm absence of
bugs!!

Software Testing

98 / 202

Date : 06-April-2002

Aim at...
Not absolute proof
But, a convincing demonstration with
Qualitative measures
Judgement of enough

Software Testing

99 / 202

Date : 06-April-2002

Completion Criteria
Time Runs Out
POOR CRITERIA !!!

Software Testing

100 / 202

Date : 06-April-2002

Completion Criteria (contd..)


e.g., Testing stops when all the test cases
execute without any critical/serious error
e.g., Test shall continue until N number of
errors have been found
e.g., Testing stops when all statements and all
branches are executed and all test cases
execute without failure

Software Testing

101 / 202

Date : 06-April-2002

Methods of Testing
Manual Testing
Automated Testing

Software Testing

102 / 202

Date : 06-April-2002

Problems with Manual


Testing
Testing speed cannot match development
speed
Each Build not fully tested
Test coverage decreases, more bugs left
undetected

Software Testing

103 / 202

Date : 06-April-2002

Automated Testing
First cycle takes more time than manual cycle
After initial test development, test cycles take
less time
Frequent and comprehensive tests possible
Each build can be tested fully - better coverage
Detect more bugs, earlier

Software Testing

104 / 202

Date : 06-April-2002

Common Automated Test


Tools

Software Testing

GUI Test Drivers


Non-GUI Test Drivers
Load & Performance Testing
Test Design Tools
Static Analysis Tools
Test Evaluation Tools
Miscellaneous Tools

105 / 202

Date : 06-April-2002

GUI Test Drivers

Software Testing

WinRunner - Mercury Interactive


QA Partner - Segue
Visual Test - Rational
SilkTest - Segue
CAPBAK - Software Research Inc.
SQA/Suite - Rational
QARun - Compuware Corp.
SQA TeamTest (ERP extension for SAP) Rational
106 / 202

Date : 06-April-2002

Test Design Tools


McCabe Test - McCabe & Associates
SoftTest - Bender & Associates
TDGEN - Software Research Corp.
(Test Data Generation)
TestMaster - Teradyne Software
(test generation from model)

Software Testing

107 / 202

Date : 06-April-2002

Static Analysis Tools


TestWorks/Advisor - Software Inc.
LogiScope
METRIC - Software Research Inc.
(generate useful metrics from source code)
STATIC - Software Research Inc.
ObjectDetail - Object Software Inc.
(entry defect analyzer & metrics generator)

Software Testing

108 / 202

Date : 06-April-2002

Test Evaluation Tools

Software Testing

Object Coverage - Object s/w Inc.


TestWorks/Coverage - Software Research Inc.
C-Cover
JavaScope
Cantata/Cantata++
Visual Basic Coverage Expert
LogiScope

109 / 202

Date : 06-April-2002

Miscellaneous Tools
CGI TESTER
(checks the output from Perl, SP, CGI etc.)
NULLSTONE
(automated compiler performance analysis
tool)
QUANTIFY
(Software quality improvement - performance
bottleneck analysis)
Purify
(run-time error & memory leak detection)
Software Testing

110 / 202

Date : 06-April-2002

Life time of Automated Tests


Automated tests are useful only when the code
changes
Exceptions: timing tests, stress tests, etc.

How soon the product changes, thus making


the test useless?
Modify the test? Or discard / rewrite it?
How well is the test protected from changes to
intervening code?
How stable is the behavior of code under test?

Software Testing

111 / 202

Date : 06-April-2002

Test Tool & Libraries

[Diagram: layered view — the Test Tool and Libraries sit on top of intervening
code, which sits on top of the Code under Test]

Software Testing

112 / 202

Date : 06-April-2002

Value of Automated Tests?


An automated test's value is mostly unrelated
to the specific purpose for which it was
written. It is the accidental things that count;
the unrelated bugs that it finds.

Software Testing

113 / 202

Date : 06-April-2002

When to Automate a Test?

What is the cost of automation?


How many bugs will be missed by automating a test?
What is the severity of these missed bugs?
What is the life time of the test?
Fuzzy estimate of automated test v/s fuzzy estimate
of manual test!?!

ANSWERS MAY BE IMPRECISE BUT THE METHOD


HELPS

Software Testing

114 / 202
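A rough Python sketch of the comparison these questions imply. All the costs and counts below are illustrative assumptions, not figures from the seminar; the point is only that automation pays off when its up-front cost is recovered over the test's expected lifetime of runs.

# Sketch: automate when automation cost plus cheap repeated runs beats repeated manual runs.
def automation_pays_off(cost_to_automate, cost_per_auto_run,
                        cost_per_manual_run, expected_runs):
    automated = cost_to_automate + cost_per_auto_run * expected_runs
    manual = cost_per_manual_run * expected_runs
    return automated < manual

# e.g. 8 hours to automate, 0.1 h per automated run, 1 h per manual run:
print(automation_pays_off(8, 0.1, 1.0, expected_runs=5))    # False - too few runs
print(automation_pays_off(8, 0.1, 1.0, expected_runs=20))   # True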

Date : 06-April-2002

Other Considerations for


Automation
Human can notice bugs that automation ignores
Humans are bad at painstaking and precise
checking of bugs
Humans can't be precise about inputs.
Repeated runs of a manual test are often
slightly different tests
Configuration testing is a very good candidate
for automation (OS, device, browser etc.)

Software Testing

115 / 202

Date : 06-April-2002

Other Considerations for


Automation (contd)
Automation can be considered for a particular
code segment that is liable to have future
changes
What if you found a bug manually and are not able
to reproduce it because you forgot something
you did?
An automated test suite can explore the whole
product very fast, so bugs are found sooner

Software Testing

116 / 202

Date : 06-April-2002

Other Considerations for


Automation (contd..)
Test automation takes time. First bugs often wont
be reported as soon as found
Automated test suites tend to decay over time
(product changes behavior)
Automated tests can be run in different sequence,
if required. Randomness can be advantageous
Automated tests might not pay until next release

Software Testing

117 / 202

Date : 06-April-2002

Testing Techniques
Means by which test conditions / cases are
identified
Types of techniques / Approaches
coverage based
domain based
process maturity based
lifecycle based

Software Testing

118 / 202

Date : 06-April-2002

Coverage Based

Software Testing

Statement coverage
Branch coverage
Condition coverage
Multiple condition coverage
Full path coverage

119 / 202
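A small Python illustration of why branch and condition coverage demand more tests than statement coverage; the function and the chosen inputs are hypothetical.

# Sketch: statement vs. branch vs. multiple-condition coverage.
def fee(amount, is_member):
    charge = amount
    if is_member and amount > 100:   # compound condition
        charge = amount * 0.9
    return charge

# One test executes every statement (100% statement coverage) ...
assert fee(200, True) == 180.0

# ... but branch coverage also needs the false branch of the "if",
# and multiple-condition coverage needs each combination of the two
# conditions (is_member, amount > 100) exercised.
assert fee(200, False) == 200      # false branch via is_member
assert fee(50, True) == 50         # false branch via amount > 100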

Date : 06-April-2002

Domain based

Software Testing

Banking
Manufacturing
Insurance
Engineering Projects
Process control
Avionics

120 / 202

Date : 06-April-2002

Process Maturity based


Approach

Software Testing

121 / 202

Date : 06-April-2002

Maturity Levels
Want to deliver good quality product by testing
it before delivery
Somebody should be responsible for quality of
my products
I want to produce reliable software
I want to produce reliable software every time
I want to produce reliable software every time
with reduced cost/effort

Software Testing

122 / 202

Date : 06-April-2002

Lifecycle/Model based
Approach

Software Testing

123 / 202

Date : 06-April-2002

Lifecycle / Models

Software Testing

Waterfall (V-model)
RAD / Iterative model / Spiral Model
OO model
Client-Server model
Internet Applications

124 / 202

Date : 06-April-2002

Waterfall (V-Model)
Requirements  <->  Acceptance Test  (Acceptance Test Plan)
Analysis      <->  System Test      (STP)
HLD           <->  Integration Test (ITP)
LLD           <->  Unit Testing     (UTP)
Coding

Software Testing

125 / 202

Date : 06-April-2002

Testing in V-Model
Freezing of Test Plans during early stages may not be
practical
Test plans can be in draft until before the actual
testing is to be taken up
Need to have a re-look at the model itself for testing
activities
e.g., Some of unit level test cases may be easier to
test during integration test.

Software Testing

126 / 202

Date : 06-April-2002

RAD Technology and Testing


Specifications not frozen: functions
refined/added
iterative prototyping
high development productivity
testing for RAD needs to be fast, high test
development productivity
should be integrated in development
environment
modular approach

Software Testing

127 / 202

Date : 06-April-2002

Testing Life Cycle - RAD


Test specification cannot be frozen
Refinements in tests should be possible for
each build cycle
Execute tests for each build

Software Testing

128 / 202

Date : 06-April-2002

Testing Life Cycle RAD


(contd..)
Testing life cycle should start at the same time
as RAD life cycle as process is complex, time
consuming.
Later start may
affect time schedule
delay, exposing gaps/flaws in application definition

Software Testing

129 / 202

Date : 06-April-2002

For Effective Testing in RAD


Build test suites from small, reusable test
components
Create automated test scripts quickly from
working applications
Perform tests fast

Software Testing

130 / 202

Date : 06-April-2002

OO Model
Characteristics of OOAD/OOPS
OO functions are generally smaller
interactions among components increase
base class & derived class

Software Testing

131 / 202

Date : 06-April-2002

Testing in OO Model
Fault Based Testing
driven by product specifications
some faults may become less possible (function
level)
some faults might become more possible
(integration)
some new faults to be considered (inheritance)

Software Testing

132 / 202

Date : 06-April-2002

Testing in OO Model
(contd)
Scenario based Testing
driven by user need
use of Use Cases
interaction among subsystems

Software Testing

133 / 202

Date : 06-April-2002

Testing in OO Model (contd..)


Changes in the way we do test design
Not the approach
Can tests for a base class be reused for a
derived class?
Test inputs to both
expected results might differ

Software Testing

134 / 202
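A Python sketch of reusing base-class tests for a derived class by parameterizing the test suite over the class under test; the Stack/BoundedStack classes are hypothetical. The same inputs are applied to both classes, while subclass-specific expectations are added (or overridden) in the derived test class.

# Sketch: base-class tests inherited and rerun against a derived class.
import unittest

class Stack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class BoundedStack(Stack):
    def push(self, x):
        if len(self._items) >= 2:
            raise OverflowError("stack full")
        super().push(x)

class StackTests(unittest.TestCase):
    cls = Stack                      # class under test; overridden below
    def test_push_then_pop(self):
        s = self.cls()
        s.push(1)
        self.assertEqual(s.pop(), 1)

class BoundedStackTests(StackTests):
    cls = BoundedStack               # inherits and reruns the base-class tests
    def test_overflow(self):         # plus behaviour specific to the subclass
        s = self.cls()
        s.push(1); s.push(2)
        with self.assertRaises(OverflowError):
            s.push(3)

if __name__ == "__main__":
    unittest.main()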

Date : 06-April-2002

Client - Server model


Distributed environment
Data & Processes are dispersed
Data & processes may be replicated on
different platforms

Software Testing

135 / 202

Date : 06-April-2002

Testing for Client - Server


model

Software Testing

Client GUI
Target environment
Distributed Database
Non-robust target environment
Non-linear performance relationships

136 / 202

Date : 06-April-2002

Testing for Client - Server


model (contd..)
What it means..
OO like approach
Parallel development of all modules
No incremental model of testing
Integrate ALL modules and test
Put all Together and then Test
Configuration & Compatibility testing

Software Testing

137 / 202

Date : 06-April-2002

GUI Testing
Considerations..
No sequencing of fields
Cross Platform
Mouse and Keyboard interface
Event driven
Contain custom objects
Distributed data and processes

Software Testing

138 / 202

Date : 06-April-2002

Internet Applications

Software Testing

Host based
WWW (non-proprietary n/s)
Multiple client platforms
URLs
Java applets
Security
Performance

139 / 202

Date : 06-April-2002

Testing Server Applications

Software Testing

Volume Testing (size)


Stress Testing (transactions)
Performance Testing
Data Recovery Testing
Error Trapping
Data backup & Restore Testing
Data Security Testing

140 / 202

Date : 06-April-2002

Product Development
Scenario
Typically two phases of development
Feature Development Phase
Product Stabilization Phase

Software Testing

141 / 202

Date : 06-April-2002

Product Development
Scenario (contd..)
Feature Development Phase
Developers do most
Testers role is limited

Product Stabilization Phase


Developers do nothing but fix bugs
Heat is on Testers

Software Testing

142 / 202

Date : 06-April-2002

Testing during Product


Stabilization
Generally three types of testing.
Planned Testing
Guerrilla Testing
Regression Testing

Software Testing

143 / 202

Date : 06-April-2002

Planned Testing
Tester has a prior knowledge of
what approach to take
what a complete set of tests is
what is the time allocated

Software Testing

144 / 202

Date : 06-April-2002

Guerrilla Testing
Opportunistically seek to find severe bugs
less planned
depends on experience of testers
tests are usually not documented & preserved

Software Testing

145 / 202

Date : 06-April-2002

Regression Testing
Rerun tests to see if one that used to pass
now fails

Software Testing

146 / 202

Date : 06-April-2002

Testing during Product


Stabilization
During first part of stabilization, planned tests
dominate
As stabilization proceeds, regression testing
increases
At the end, testing effort shifts entirely to
regression and guerilla testing

Software Testing

147 / 202

Date : 06-April-2002

Towards the end of


Stabilization
Estimates become useless
Other metrics may help
no. of bugs found, active, fixed and verified
no. of low severity, unfixed bugs
changes in bug severity distribution
amount of recent changes to the source code

Software Testing

148 / 202

Date : 06-April-2002

Project Objectives driven


Approach
What are the Quality goals of a project?
Are the quality goals measurable?
How do I build tests to measure these quality
goals?

Software Testing

149 / 202

Date : 06-April-2002

Quality Goals

Software Testing

Functionality
Usability
Supportability
Installability
Performance
Reliability
Maintainability

150 / 202

Date : 06-April-2002

Are these Goals measurable?


Software should be very User friendly
User should be able to install the software
quickly with minimum queries to customer
support
Customer support engineer should be able to
identify the cause of bug within minimal time
and effort
Software should have excellent response time

Software Testing

151 / 202

Date : 06-April-2002

Can we be more precise?

How do we make them measurable?


Are the goals testable?
What should be our test strategy?
How do we design the tests?

Thinking on these aspects might help

Software Testing

152 / 202

Date : 06-April-2002

Reporting Bugs
Bug reporting is a part of an evolving
relationship between tester and developer
Bug report should address two issues:
provide information about the state of the product (main
goal)
provide information about you and the degree to
which the developer can rely on you (usefulness of
testers)

Software Testing

153 / 202

Date : 06-April-2002

Bug Report
Be clear
what is being tested
what is expected to happen
what did happen
what was incorrect about it
if possible, explicit sequence of steps to make bugs
reproducible
if problem does not happen on developers machine,
quickly try to understand configuration dependence
and report it.

Software Testing

154 / 202

Date : 06-April-2002

Bug Report (contd)


De-personalize reports
do not have any hint of personal criticism

Do not try to solve the problem you report


Demonstrate your value with important bugs
important in the eyes of customer
is it a bug or a feature

Software Testing

155 / 202

Date : 06-April-2002

Verification of Testing
Process
Like any other software engineering work
product, testing process also is open for review
and verification
a repeatable, defined, measured and managed
process
is auditable
as part of SQA audits, testing should get
audited for conformance and adequacy

Software Testing

156 / 202

Date : 06-April-2002

Alternatives to Testing?

Software Testing

Formal reviews
design processes
static analysis
language checks
development environment

157 / 202

Date : 06-April-2002

Can Testing be Replaced?


NOT YET!!
Other methods yet to mature, till then cannot replace
testing
Reviews, inspections, walkthroughs, better processes and
methodologies
Can reduce Test effort

Software Testing

158 / 202

Date : 06-April-2002

Metrics of Testing

Software Testing

Why do we need metrics from testing?


What do we do with these metrics?
How to collect metrics?
When to measure?
Who will measure?

159 / 202

Date : 06-April-2002

Sample - Why Metrics?


E.g., Test Performance improvement by
improving processes
The Goal-Question-Metrics paradigm by
Victor Basili may be used
Individual goals of the team/department
identified
Ways to satisfy goals identified by asking the
questions whose answers satisfy the goals
The questions are answered by metrics that
need to be collected and analyzed
Software Testing

160 / 202

Date : 06-April-2002

GQM - example
Goal : Better time management
Question : Are schedules met?
Metric : % delay in schedule for every
milestone
Question : Is estimation of effort done well?
Metric : % difference in actual and estimated
effort

Software Testing

161 / 202
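A tiny Python sketch of the two metrics named above; the planned/actual values are made up for illustration.

# Sketch: % schedule delay per milestone and % effort estimation error.
def pct_delay(planned_days, actual_days):
    return 100.0 * (actual_days - planned_days) / planned_days

def pct_effort_deviation(estimated_effort, actual_effort):
    return 100.0 * (actual_effort - estimated_effort) / estimated_effort

print(pct_delay(planned_days=20, actual_days=23))                   # 15.0
print(pct_effort_deviation(estimated_effort=80, actual_effort=92))  # 15.0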

Date : 06-April-2002

Collect Data
Data from projects over a period of time
Data can also be gathered through
Defect Control System
Test Plans
Test Summary Reports
Personal interviews

Software Testing

162 / 202

Date : 06-April-2002

Possible Metrics
Defect arrival rate
Defects by severity
Defect repair rate
Test effort / effectiveness
Origin of defects
Cost of defect
Defect distribution per cycle
Feature-wise defect distribution
Effort vs. elapsed time
# of test cases vs. defects
Defect removal efficiency by severity

Software Testing
163 / 202
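A Python sketch computing two of the metrics listed above from a defect log; the records below are hypothetical.

# Sketch: defect arrival rate per week and defect removal efficiency.
from collections import Counter

defects = [  # (week found, severity, found_before_release)
    (1, "major", True), (1, "minor", True), (2, "major", True),
    (2, "critical", False), (3, "minor", True),
]

# Defect arrival rate: defects logged per week.
arrival_rate = Counter(week for week, _, _ in defects)
print(dict(arrival_rate))          # {1: 2, 2: 2, 3: 1}

# Defect removal efficiency: share of defects caught before release.
pre_release = sum(1 for _, _, before in defects if before)
print(round(100.0 * pre_release / len(defects), 1), "% removal efficiency")  # 80.0 %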

Date : 06-April-2002

Software Testing

164 / 202

Date : 06-April-2002

Defects distribution per cycle

Software Testing

165 / 202

Date : 06-April-2002

Common Testing Mistakes


Role of Testing
Testing team is responsible for assuring quality
purpose of testing is to find bugs
Not reporting usability problems
No focus on an estimate of quality (and the quality of
the estimate)
reporting bug without putting it into context
starting testing too late (bug detection v/s bug
reduction)

Software Testing

166 / 202

Date : 06-April-2002

Common Testing Mistakes


(contd..)
People Issues
using testing as a transitional job for new programmers
failed programmers made testers
testers with no domain expertise
not seeking candidates from customer support staff and
technical writers
insisting the testers be able to program
a physical separation between testers & developers
believing programmers cant test their own code
programmers are neither trained nor motivated to test.

Software Testing

167 / 202

Date : 06-April-2002

Common Testing Mistakes


(contd..)
Planning the Testing Effort
biased towards functional testing
under emphasizing configuration testing
scheduling stress & load testing to the last minute
not testing the documentation
not testing the installation procedures
over-reliance on beta testing
finishing all testing tasks before moving on to the next
failing to correctly identify risky areas
sticking stubbornly to the test plan

Software Testing

168 / 202

Date : 06-April-2002

Common Testing Mistakes


(contd..)
The testers at work
more attention to running tests than to designing them
test designs not reviewed
being too specific about test inputs and procedures
not checking that the product does not do what it is not
supposed to do
test suites that are understandable only by their owners
poor bug reporting
adding only regression tests when bugs are found
failing to revise next testing efforts based on bugs
reported from customers & others

Software Testing

169 / 202

Date : 06-April-2002

Common Testing Mistakes


(contd..)
Test Automation
Automating all tests
Expecting to rerun manual tests
Using GUI capture/replay tools to reduce test
creation cost
Expecting regression tests to find high proportion of
new bugs

Software Testing

170 / 202

Date : 06-April-2002

Issues that affect Testing

Software Testing

Friction between developers and testers


Bugs found uneconomically late
Developers who dont test
Please debug this for me
Not knowing what is new
Stop asking for all those stupid documents
Untestable code

171 / 202

Date : 06-April-2002

Questions to you as a Test


Manager

Software Testing

When will we be ready to ship?


Which bugs should be fixed?
Where are the danger areas in the project?
Are you working smart?
What does your work mean?

172 / 202

Date : 06-April-2002

Common Testing Mistakes


(contd..)
Code Coverage
code coverage ensures completeness of tests
removing tests from regression test suite just
because they dont add coverage
using coverage as a performance goal for testers
abandoning coverage entirely

Software Testing

173 / 202

Date : 06-April-2002

Issues that affect Testing


(contd..)
Lack of developer introspection about bugs
Testers sharing developers view and not
customers

Stockholm Syndrome- tendency of captives to


bond with their captors and adopt their point
of view.

Software Testing

174 / 202

Date : 06-April-2002

When will we be ready to ship?


Builds made again and again
pressure to show forward progress
when do we fix all the bugs?
Expected number of bugs
average time to resolve each bug

Estimation based on past experience & analogy

Software Testing

175 / 202

Date : 06-April-2002

Which bugs should be fixed?


(contd..)

Software Testing

Perform Risk Analysis


risk of shipping with bug
risk of fixing the bugs
testing the fix
cost of fix and test
resource availability
extent of regression tests
how often do bug fixes fail?

176 / 202

Date : 06-April-2002

Where are the danger areas


in the Project?

Software Testing

Make sure it is really so


look at size and complexity
historical data
Report on product, not people
do not criticise developers
do not appear to be criticizing them
say something good along with bad news!
Warn of hot spots in advance
Track hot spots carefully
Another hot spot: how many regression tests fail?
177 / 202

Date : 06-April-2002

Which bugs should be fixed?


Developers tend to assign low priority
Testers may over-emphasize the severity
Ask the people who live with the
consequences
customer support
marketing

Software Testing

178 / 202

Date : 06-April-2002

A Question?
What can be the effect on choosing which
bugs to fix when top management tracks the
number of open bugs?

Software Testing

179 / 202

Date : 06-April-2002

Where are the danger areas in


the Project?

Software Testing

180 / 202

Date : 06-April-2002

What we can infer?

Software Testing

Database Testing
tests are only 30% complete against the plan
have got 70% coverage
bug finding rate is good
Continue with testing
Stress Testing
10% complete against the plan
65 bugs found which is quite high
might increase effort in this
181 / 202

Date : 06-April-2002

What we can infer? (contd..)

Software Testing

Security Testing
40% of tests planned done
50% coverage
bugs being found as expected
will continue as planned

182 / 202

Date : 06-April-2002

What we can infer? (contd..)


Developers feel
Testers write lot of tests, but it is a wasted
effort. Real bugs are not uncovered.
Is this true?
May be!
May be not!!
How do we justify?

Software Testing

183 / 202

Date : 06-April-2002

What we can infer? (contd..)

Software Testing

GUI Testing
55% of tests done out of planned
80% coverage
only few bugs found
module looks robust
may not do all the tests planned

184 / 202

Date : 06-April-2002

What we can infer? (contd..)

Software Testing

Library Module
85% of tests complete
90% coverage
only 4 bugs found; no yield!
Should have stopped testing earlier; will do so
immediately

185 / 202

Date : 06-April-2002

What is the possible


response?
Oh, well... I can't say; it is difficult to tell!!
We are trying to find as many defects as
possible during our N cycles of testing
We will ensure the bugs are minimal on-site!

Software Testing

186 / 202

Date : 06-April-2002

Extrapolation from Previous


Projects
In the previous release, we found 70% of bugs during
testing, i.e. we may again miss 30% of bugs
because we are doing the testing in the same way.
Based on the past 3 projects, since we are putting in
similar effort and testing techniques, we predict
between 30 and 70 more bugs will be found by customers.

Consistency in process is the key

Software Testing

187 / 202

Date : 06-April-2002

Another Question from PM


I want to know how many bugs testing won't find?
How many bugs testing will find, may be needed to
decide on release schedule
But what about product image at customer site?

Software Testing

188 / 202

Date : 06-April-2002

What is the possible


response? (contd..)
Extrapolate
from previous projects
not just with numbers: some justification will
help

Software Testing

189 / 202

Date : 06-April-2002

A Test Manager
Is a frequent bearer of bad news
As a keeper of data, understands trends and
special occurrences in the project
Ensures that testing team & its work are
represented well to right people
Avoid two data traps
unjustified faith in numbers
rejecting numbers completely because they
are imperfect.

Software Testing

190 / 202

Date : 06-April-2002

Case Study
For your project, identify Quality Goals that are
measurable
Pick up any two of the Goals, and define test
strategy for them
Identify test metrics
Define pass/fail criteria

Software Testing

191 / 202

Date : 06-April-2002

Key to Successful Testing


Define Test Strategy at the start of the project
(use product objectives & life cycle)
Make bug finding a positive activity
Focus on the critical success factors/key areas
- do not focus on all areas equally
Identify stop criteria at the start of the
project
Prepare plans as soon as possible

Software Testing

192 / 202

Date : 06-April-2002

Key to Successful Testing


(contd..)
Make test execution a mechanical exercise
Use Walk throughs /Inspections to make
testing more effective
Measure Testing activities
Analyze metrics of testing
Use metrics to refine processes.

Software Testing

193 / 202

Date : 06-April-2002

Testing - Summary

Software Testing

194 / 202

Date : 06-April-2002

Key to Successful Testing


(contd..)

Software Testing

Try to avoid ad-hoc testing


If possible, use tools for
defect tracking
Test coverage analysis
Regression testing
Test planning
Test data generation

195 / 202

Date : 06-April-2002

Reference Material
Beizer, B., Software System Testing and Quality
Assurance, Van Nostrand Reinhold, 1984.
Beizer, B., Software Testing Techniques, Van
Nostrand Reinhold, 1990.
Myers, G. J., The Art of Software Testing, New
York: John Wiley and Sons, 1979.
Grady, R. and Caswell, D., Software Metrics:
Establishing a Company-Wide Program.

Software Testing

196 / 202

Date : 06-April-2002

Reference Material (contd..)


IEEE Standards Collection - Software
Engineering, IEEE, 1994.
Musa, J. D., and Everett, W. W., Software
Reliability Engineering: Technology for the
1990s, IEEE Software, Nov. 1990.
Pressman, R. S., Software Engineering - A
Practitioner's Approach, McGraw-Hill
International Edition.

Software Testing

197 / 202

Date : 06-April-2002

OAKSYS - What we do in
testing
Take up independent testing (and other V & V)
assignments
Perform traceability between work products
Design test cases
Build test strategy
Develop test suite
Develop test scripts
Perform testing (automated/manual)

Software Testing

198 / 202

Date : 06-April-2002

Q&A

Software Testing

199 / 202

Date : 06-April-2002

Reference Material (contd..)


IEEE Standard for Software Test
Documentation, IEEE Std 829-1998.
When should a test be automated? - Brian
Marick, Proc. of International Quality Week,
May 1998.
Towards Metrics for Process Validation - J. E.
Cook, A. L. Wolf, ICSP 3, October 1994.
Http/www.cs.edu/jcook/papers
Numerous white papers, articles from the
Internet.
Software Testing

200 / 202

Date : 06-April-2002

OAKSYS - How do we do

Have requisite technical skills
Have requisite domain knowledge
Have experience (in Testing and on Tools)
Have defined processes
Have templates.

WE ARE DELIVERING VALUE TO CUSTOMERS

Software Testing

201 / 202

Date : 06-April-2002

Thank You

We can be
Contacted at
IBM

Software Testing

202 / 202
