
SED-305 SOFTWARE TESTING
Momina Shaheen
Department of CS
COMSATS University Islamabad, Lahore Campus

INTRODUCTION TO SOFTWARE TESTING
Lecture 1-2
Outline
• Introduction
• Learning Objectives
• Text / Reference Books
• Rules
• Verification v/s Validation
• Quality Assurance v/s Quality Control
• Formal Definition of Software Testing
• What is a Bug?
• Why do Bugs Occur?
• Symptoms and Root Causes of Bugs
• Cost of Bugs
Learning Objectives
• After completing this course the students will be able to:
• Determine software testing objectives and criteria
• Select and prepare a minimum number of test cases for both black box and white box testing
• Develop and validate a test plan
• Do effective test reporting
• Observe a bug's life cycle and bug categories
• Identify the need for testing
• Perform different types of testing on your projects
• Measure the success of the testing effort
Text / Reference Books

Text Book
• Software Testing by Ron Patton, Second Edition

Reference Books
• Software Testing: Principles and Practices by Srinivasan Desikan and Gopalaswamy Ramesh
• Software Testing Techniques, 2nd Edition, by Boris Beizer
• Software Quality Assurance: From Theory to Implementation by Daniel Galin
Some Rules

Punctuality
Mobile phones should be on silent / vibration mode
Arrive on time
You will be marked absent if you are more than 10 minutes late
No disturbance during the lecture

Attendance
80% attendance is required to appear in the exams
No compromise

Assignments
Should be submitted on time
No late submissions
Should not be copied, otherwise graded zero
No retake

Quizzes
Will be announced
All quizzes will be graded and marks will be uploaded on CU-Online
No retake
Basic Definitions
Revision of some basic concepts

Verification v/s Validation
• Verification: The software should conform to its specification (Are we building the product right?)
• Validation: The software should do what the user really requires (Are we building the right product?)
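To make the distinction concrete, here is a minimal Python sketch; the discount function and its one-line spec are hypothetical, invented purely for illustration:

    # Hypothetical spec: "discount(total) returns total reduced by 10%."
    def discount(total: float) -> float:
        return total * 0.90

    # Verification: are we building the product right?
    # The implementation is checked against the written specification.
    assert discount(100.0) == 90.0  # conforms to the spec

    # Validation: are we building the right product?
    # Suppose the user really wanted the discount only for totals above 50.
    # The code conforms to its spec, yet fails validation:
    print(discount(40.0))  # 36.0, but the user expected 40.0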
Quality Assurance v/s Quality Control

DEFINITION
• Quality Assurance: QA is a set of activities for ensuring quality in the processes by which products are developed.
• Quality Control: QC is a set of activities for ensuring quality in products. These activities focus on identifying defects in the actual products produced.


FOCUS
• Quality Assurance: QA aims to prevent defects, with a focus on the process used to make the product. It is a proactive quality process.
• Quality Control: QC aims to identify (and correct) defects in the finished product. Quality control, therefore, is a reactive process.
GOAL
• Quality Assurance: The goal of QA is to improve development and test processes so that defects do not arise while the product is being developed.
• Quality Control: The goal of QC is to identify defects after a product is developed and before it is released.
HOW
• Quality Assurance: Establish a good quality management system and assess its adequacy. Perform periodic conformance audits of the operations of the system.
• Quality Control: Find and eliminate sources of quality problems through tools and equipment so that the customer's requirements are continually met.
WHAT
• Quality Assurance: Prevention of quality problems through planned and systematic activities, including documentation.
• Quality Control: The activities or techniques used to achieve and maintain the quality of the product, process, and service.
RESPONSIBILITY
• Quality Assurance: Everyone on the team involved in developing the product is responsible for quality assurance.
• Quality Control: Quality control is usually the responsibility of a specific team that tests the product for defects.
Questions
• Verification is an example of?
  • Quality Assurance
  • Quality Control
• Validation is an example of?
  • Quality Assurance
  • Quality Control
EXAMPLE
• Quality Assurance: Verification is an example of QA.
• Quality Control: Validation / software testing is an example of QC.
AS A TOOL
• Quality Assurance: QA is a managerial tool.
• Quality Control: QC is a corrective tool.
ORIENTATION
• Quality Assurance: QA is process oriented.
• Quality Control: QC is product oriented.

Software Testing
Testing is the process of executing a program with the intention of finding errors.

Formal Definition of Software Testing
• Software testing is a formal process carried out by a specialized testing team, in which a software unit, several integrated software units, or an entire software package are examined by running the programs on a computer. All the associated tests are performed according to approved test procedures on approved test cases.
• Formal:
  • Software test plans are part of the project's development and quality plans, scheduled in advance.
  • The test plan is often signed off between the developer and the customer.
  • Ad hoc examination by a colleague, or regular checks by the programming team leader, cannot be considered software tests.
• Specialized testing team:
  • An independent team or external consultants who specialize in testing are assigned to perform these tasks in order to:
    • Eliminate bias
    • Guarantee effective testing
  • Tests performed by the developers themselves will yield poor results.
  • Unit tests, however, continue to be performed by developers in many organizations.
• Running the programs:
  • Any form of quality assurance activity that does not involve running the software (for example, code inspection) cannot be considered a test.
• Approved test procedures:
  • The testing process is performed according to a test plan and testing procedures.
  • These are approved SQA procedures adopted by the developing organization.
• Approved test cases:
  • The test cases to be examined are defined in full by the test plan.
  • No omissions or additions are expected to occur during testing.
What is a Bug?
• Informally, it is "what happens when software fails", whether the failure was
  o Inconvenient
  o Catastrophic
• Terms for software failure: fault, anomaly, problem, inconsistency, failure, incident, error, defect, variance, bug
What is a Bug?
• Formally, we say that a software bug occurs when one or more of the following five rules is true: the software
  o doesn't do something that the product specification says it should do.
  o does something that the product specification says it shouldn't do.
  o does something that the product specification doesn't mention.
  o doesn't do something that the product specification doesn't mention but should.
  o is difficult to understand, hard to use, slow, or will be viewed by the end user as just plain not right.
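As a minimal sketch of the first rule in code (the add function and its one-line spec are hypothetical, invented purely for illustration):

    # Hypothetical one-line spec: "add(a, b) returns the sum of two integers."
    def add(a, b):
        if a < 0:
            return 0        # rule 1: doesn't do what the spec says it should
        return a + b

    print(add(2, 3))    # 5, conforms to the spec
    print(add(-2, 3))   # 0, but the spec says it should be 1: a bug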
Why do Bugs Occur?

Symptoms and Root Causes of Bugs
• Inaccurate understanding of end user needs
• Inability to deal with changing requirements
• Modules that don't fit together
• Software that is hard to maintain or extend
• Late discovery of serious project flaws
• Poor software quality
• Unacceptable software performance
• Team members in each other's way, making it impossible to reconstruct who changed what, when, where, and why
• An untrustworthy build-and-release process

The Cost of Bugs
• The cost of fixing a bug rises dramatically the later in the development process it is found.
SOFTWARE TESTING – SED-305
Momina Shaheen
COMSATS University Islamabad, Lahore Campus
SOFTWARE FAILURES

NASA: Mariner Failure
 A bug in the flight software for Mariner 1 caused the rocket to divert from its intended path.
 Mission control destroyed the rocket over the Atlantic Ocean.
 The investigation discovered that a formula written on paper in pencil was improperly transcribed.
SOFTWARE FAILURES

Rocket Launch Errors
 In 1996, a European Ariane 5 rocket veered off its path a mere 37 seconds after launch.
 As it started disintegrating, it self-destructed.
 The problem was the result of code reuse from the launch system's predecessor.
 More than $370 million were lost due to this error.
SOFTWARE FAILURES

Flight Crashes
 In 1994 in Scotland, a Chinook helicopter crashed, killing all 29 passengers.
 The crash was attributed to a systems error.


SOFTWARE FAILURES

Korean Airliner Crash
 KAL 801 crashed, killing 225 of the 254 people aboard.
 A software design problem was discovered in the barometric altimetry of the Ground Proximity Warning System (GPWS).


SOFTWARE FAILURES

Customer Tracking System
 An application for tracking customer calls.
 The system crashed because it could not support multiple users at once and did not meet the bank's security requirements.
 The failure caused a loss of approximately $200,000.

SOFTWARE TESTING
 Software Testing involves operating a system or an application under controlled conditions and evaluating the results.
 Software testing is normally carried out under controlled conditions.
 The aim of testing is to try to break the software and find the bugs in it.
 Testing is oriented towards "detection" of bugs in the software.
SOFTWARE TESTING: THE V-MODEL

Development phase (left side of the V) | Verification activity | Testing phase (right side of the V)
Requirements | Validate requirements, verify specification | Acceptance testing (release testing)
System Design (architecture, high-level design) | Verify system design | System testing (integration testing of modules)
Module Design (program design, detailed design) | Verify module design | Module testing (integration testing of units)
Implementation of units (classes, procedures, functions) | Verify implementation | Unit testing

Maintenance follows acceptance testing.


SOFTWARE TESTING
Lecture 04
Momina Shaheen
Outline

 What we do
 What does a Software Tester Do?
 What Makes a Good Software Tester?
 Goals for Testing
 Testing Methodologies
What we do?

 Testing consumes half of the labor expended to produce a working program


 Test design and testing take longer than program design and coding
 Software is ephemeral (transient)
 If software is insubstantial (vague) then how much more insubstantial does software
testing seem?
What we do?
 Myth: If we were really good at programming, there would be no bugs to catch.
 If we applied good programming and design practices, there would be no bugs.
 But there are bugs, because we are bad at what we do, and we should feel guilty about it.
 Testing and test design amount to an admission of failure.
 The tedium of testing is just punishment for our errors.
 Punishment for what???
 For being human?
 Guilt for what?
 For not achieving inhuman perfection?
 For not distinguishing between what another programmer thinks and what he says?
 For not solving human communication problems?
What we do?
 Statistics show that programming, done well, will still produce one to three bugs per hundred statements.
 As far as programming errors are concerned: I have them, you have them, we all have them.
 The point is to do what we can to:
 Prevent them
 Discover them as early as possible
 Not feel guilty about them
What we do
 Programmers! Cast out your guilt! Spend half your time in joyous testing and debugging!
 Testers! Break that software and drive it to the ultimate, but do not enjoy the programmer's pain.
What does a Software Tester Do?
 Uncover as many errors (or bugs) as possible in a given product.
 Demonstrate that a given software product matches its requirement specifications.
 Validate the quality of software testing with minimum cost and effort.
 Generate high quality test cases, perform effective tests, and issue correct and helpful problem reports.
What Makes a Good Software Tester?
 They are explorers
 They love to get a new piece of software, install it on their PC, and see what
happens
 They are troubleshooters
 Software testers are good at figuring out why something doesn’t work. They love
puzzles.
 They are relentless (continuous)
 Software testers keep trying. They may see a bug that quickly vanishes or is
difficult to re-create.
 They are creative
 Their job is to think up creative and even off-the-wall approaches to find bugs
 They are (mellowed) perfectionists
 They strive for perfection, but they know when it becomes unattainable and they’re
OK with getting as close as they can
What Makes a Good Software Tester?

 They exercise good judgment


 Software testers need to make decisions about what they will test, how long it will
take, and if the problem they’re looking at is really a bug
 They are tactful and diplomatic
 Software testers are always the bearers of bad news. They have to tell the
programmers that their baby is ugly. Good software testers know how to do so
tactfully and professionally
 They are convincing
 Bugs that testers find won’t always be viewed as severe enough to be fixed.
Testers need to be good at making their points clear, demonstrating why the bug
does indeed need to be fixed

Software Testing is Fun


Goals for Testing
 The main focus of testing and test design should be bug prevention.
 If bugs are not prevented, testing and test design should be able to discover the symptoms caused by bugs.
 Finally, there should be clear diagnoses so that bugs can be easily corrected.
Goals for Testing
 Prevention:
 A prevented bug is better than a detected and corrected bug.
 If a bug is prevented:
 There is no code to correct
 No retesting is needed
 No one is embarrassed
 No memory is consumed
 There are no delays in the schedule
 Designing tests is one of the best bug preventers.
 Test design eliminates bugs at every stage in the creation of software, from conception to specification, to design, coding, and the rest.
Goals for Testing
 Discovery:
 This is the secondary goal of testing.
 A bug is manifested in a deviation from expected behavior.
 A test design must document expectations, the test procedure, and the results of the actual test.
 Different bugs can have the same manifestations, and one bug can have many symptoms.
Testing Methodologies
 Black box testing
 No knowledge of the internal program design or code is required.
 Tests are based on requirements and functionality.
 White box testing
 Knowledge of the internal program design and code is required.
 Tests are based on coverage of code statements, branches, paths, and conditions.
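A small Python sketch may help make the contrast concrete; the classify_triangle function and its tests are hypothetical, invented for illustration:

    # Hypothetical function under test: classify a triangle by side lengths.
    def classify_triangle(a, b, c):
        if a + b <= c or a + c <= b or b + c <= a:
            return "not a triangle"
        if a == b == c:
            return "equilateral"
        if a == b or b == c or a == c:
            return "isosceles"
        return "scalene"

    # Black box tests: derived only from the stated requirements,
    # with no knowledge of the code inside.
    assert classify_triangle(3, 3, 3) == "equilateral"
    assert classify_triangle(3, 4, 5) == "scalene"

    # White box tests: derived from the code itself, chosen to cover
    # each branch, including the degenerate-input path.
    assert classify_triangle(1, 2, 3) == "not a triangle"  # a + b <= c branch
    assert classify_triangle(3, 3, 5) == "isosceles"       # a == b branch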
SOFTWARE TESTING
Lecture 05
Momina Shaheen
CUI Lahore Campus


Outline

 Testing Axioms
 7 Principles of Software Testing
Testing Axioms

 The following are the testing axioms:


 It’s Impossible to Test a Program Completely
 Software Testing Is a Risk-Based Exercise
 Testing Can’t Show That Bugs Don’t Exist
 The More Bugs You Find, the More Bugs There Are
 The Pesticide Paradox
 Not All the Bugs You Find Will Be Fixed
 When a Bug’s a Bug Is Difficult to Say
 Product Specifications Are Never Final
 Software Testers Aren’t the Most Popular Members of a Project Team
 Software Testing Is a Disciplined Technical Profession
It's Impossible to Test a Program Completely
 This is due to the following four reasons:
 The number of possible inputs is very large.
 The number of possible outputs is very large.
 The number of paths through the software is very large.
 The software specification is subjective. You might say that a bug is in the eye of the beholder.
 Multiply all these "very large" possibilities together and you get a set of test conditions that's too large to attempt.
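A rough back-of-the-envelope calculation illustrates the scale; the 32-bit adder and the test-execution rate below are assumptions, not from the slides:

    # Even a function that just adds two 32-bit integers has far too many
    # input combinations to test exhaustively.
    pairs = (2 ** 32) ** 2          # every pair of 32-bit inputs: 2^64
    rate = 10 ** 9                  # optimistic rate: a billion tests per second
    years = pairs / rate / (3600 * 24 * 365)
    print(f"{pairs:.2e} input pairs -> roughly {years:.0f} years of testing")
    # prints: 1.84e+19 input pairs -> roughly 585 years of testing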
It's Impossible to Test a Program Completely
 Even a simple program such as the Windows Calculator is too complex to test completely.
 If you decide to eliminate any of the test conditions because they're redundant or unnecessary, or just to save time, you've decided not to test the program completely.
Software Testing Is a Risk-Based Exercise
 If you decide not to test every possible test scenario, you've chosen to take on risk.
 The product has to be released at some point, so you will need to stop testing; but if you stop too soon, there will still be areas left untested.
 A customer will eventually use the untested areas of the product and discover the bug, and it will be a costly bug then.
 Testers need to learn:
 How to reduce the huge domain of possible tests into a manageable set
 How to make wise, risk-based decisions on what's important to test and what's not
Software Testing Is a Risk-Based Exercise
 Every software project has an optimal test effort.
Testing Can't Show That Bugs Don't Exist
 Software testing can show that bugs exist, but it can't show that bugs don't exist.
 You can perform your tests and find and report bugs, but at no point can you guarantee that there are no longer any bugs to find.
 You can only continue your testing and possibly find more and more bugs.
The More Bugs You Find, the More Bugs There Are
 There are even more similarities between real bugs and software bugs: if you see one, odds are there will be more nearby.
 There are several reasons for this:
 Programmers have bad days:
 Like all of us, programmers can have off days.
 Code written one day may be perfect; code written another may be sloppy.
 One bug can be a tell-tale sign that there are more nearby.
The More Bugs You Find, the More Bugs There Are

 Programmers often make the same mistake:


 Everyone has habits. A programmer who is prone to a certain error will often
repeat it.
 Some bugs are really just the tip of the iceberg:
 Very often the software’s design or architecture has a fundamental problem.
 A tester will find several bugs that at first may seem unrelated but eventually
are discovered to have one primary serious cause.
The Pesticide Paradox
 The more you test software, the more immune it becomes to your tests.
 The same thing happens to insects with pesticides: if you keep applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works.
 The test process repeats each time around the loop, just like the spiral model of development.
 After several passes, all the bugs that those tests would find are exposed; continuing to run them won't reveal anything new.
 To overcome this, software testers must continually write new and different tests to exercise different parts of the program and find more bugs.
The Pesticide Paradox

 Software undergoing the same repetitive tests eventually builds up resistance to them.
Not All the Bugs You Find Will Be Fixed

 One of the sad realities of software testing is that not every bug you find will be fixed
 Don’t be disappointed
 This doesn’t mean that you’ve failed in achieving your goal as a software tester
 It also does not mean that you or your team will release a poor quality product
 It does mean that you need to exercise good judgment and know when perfection isn’t
reasonably attainable
 You need to decide which bugs will be fixed and which ones won’t.
Not All the Bugs You Find Will Be Fixed

 The following are reasons why you might choose not to fix a bug:
 There's not enough time.
 In every project there are always too many software features and too few people to code and test them, with not enough room left in the schedule to finish.
 Example: If you're working on a tax preparation program, April 15 isn't going to move.
 It's really not a bug:
 "It's not a bug, it's a feature!" It's not uncommon for misunderstandings, test errors, or spec changes to result in would-be bugs being dismissed as features.
Not All the Bugs You Find Will Be Fixed
 It’s too risky to fix:
 Software is fragile, intertwined, and sometimes like spaghetti. You might make
a bug fix that causes other bugs to appear.
 Under the pressure to release a product under a tight schedule, it might be
better to leave in the known bug to avoid the risk of creating new, unknown
ones.
 It's just not worth it:
 The following types of bugs are often not removed:
 Bugs that would occur infrequently
 Bugs that appear in little-used features
 Bugs that have workarounds, ways that a user can prevent or avoid the bug
 The decision-making process usually involves:
 software testers
 project managers
 programmers
When a Bug’s a Bug Is Difficult to Say

 If there’s a problem in the software but no one ever discovers it—not programmers, not
testers, and not even a single customer—is it a bug?

???
When a Bug’s a Bug Is Difficult to Say

 The problem is that there’s no definitive answer


 The answer is based on what you and your development team decide works best for
you.
 Recall the rules discussed earlier for deciding when something is called a bug.
 One opinion is that claiming the software does or doesn't do "something" implies that the software was run and that "something", or the lack of it, was witnessed.
 You can't report on what you didn't see.
 You can't claim that a bug exists if you didn't see it.
When a Bug’s a Bug Is Difficult to Say

 The other opinion is it’s not uncommon for two people to have completely different
opinions on the quality of a software product
 One may say that the program is incredibly buggy and the other may say that it’s
perfect
 How can both be right?

???
When a Bug’s a Bug Is Difficult to Say

 Answer:
 One has used the product in a way that reveals lots of bugs. The other hasn’t.
 It is just like:
 “If a tree falls in the forest and there’s no one there to hear it, does it make a sound?”
Product Specifications Are Never Final

 The industry is moving so fast that last year’s cutting-edge products are obsolete this
year
 Software is getting larger and gaining more features and complexity, resulting in longer
and longer development schedules
 The result is a constantly changing product specification
 There’s no other way to respond to the rapid changes
 Example:
 You’re halfway through the planned two year development cycle, and your main
competitor releases a product very similar to yours but with several desirable
features that your product doesn’t have.
 Do you continue with your spec as is and release an inferior product in another
year?
Product Specifications Are Never Final

 As a software tester, you will observe that features will be added that you didn’t plan to
test.
 Features will be changed or even deleted that you had already tested and reported
bugs on
 You need to be flexible in your test planning and test execution
Software Testers Aren’t the Most Popular Members of a
Project Team

 Recall the job of a software tester


 Inspect and critique peer’s work
 Find problems with it
 Publicize what you’ve found
 Can you win a popularity contest doing this job???
 Following are the tips to keep the peace with your fellow teammates:
 Find bugs early:
 It will be more appreciated if you find a serious bug three months before,
rather than one day before, a product’s scheduled release
Software Testers Aren’t the Most Popular Members of a
Project Team

 Temper your enthusiasm


 You get really excited when you find a terrible bug. But, if you bounce into a
programmer’s cubicle with a huge grin on your face and tell her that you just
found the nastiest bug of your career and it’s in her code, she won’t be happy.
 Don’t always report bad news:
 If you find a piece of code surprisingly bug free, tell the world.
Software Testing Is a Disciplined Technical Profession

 Software industry has progressed to the point where professional software testers are
mandatory.
 It’s now too costly to build bad software
 To be fair, not every company is on board yet
 But most software is now developed with a disciplined approach that has software
testers as core, vital members of their staff
 It is now a career choice—a job that requires training and discipline, and allows for
advancement
7 Principles of Software Testing
 Principle 1:
 Exhaustive testing is impossible.
 Unless the application under test has a very simple logical structure and limited input, it is not possible to test all possible combinations of data and scenarios.
 We need an optimal amount of testing based on a risk assessment of the application.
 Principle 2:
 Defect Clustering
 Most of the reported defects are related to a small number of modules within a system.
 Approximately 80% of the problems are found in 20% of the modules.
7 Principles of Software Testing
 Principle 3:
 Pesticide Paradox
 If the same tests are repeated over and over again, eventually the same test cases will no longer find new bugs.
 How to overcome this:
 Test cases need to be regularly reviewed and revised
 Add new and different test cases
 Principle 4:
 Testing shows the presence of defects
 Software testing reduces the probability of undiscovered defects remaining in the software.
 Even if no defects are found, that is not a proof of correctness.
 What if the software doesn't meet the needs and requirements of clients?
7 Principles of Software Testing
 Principle 5:
 Absence of Error is a Fallacy
 Finding and fixing defects does not help if the system build is unusable and does not fulfill the users' needs and requirements.
 Principle 6:
 Early Testing
 Testing should start as early as possible in the Software Development Life Cycle.
 We can start testing as soon as requirements and design documents are available, so that any defects in the requirements or design phase are captured as well.
 When defects are found earlier in the lifecycle, they are much easier and cheaper to fix.
7 Principles of Software Testing
 Principle 7:
 Testing is context dependent
 The way you test an e-commerce site will be different from the way you test a commercial off-the-shelf application.
7 Principles of Software Testing

Principle 1: Testing shows the presence of defects
Principle 2: Exhaustive testing is impossible
Principle 3: Early testing
Principle 4: Defect clustering
Principle 5: Pesticide paradox
Principle 6: Testing is context dependent
Principle 7: Absence of errors is a fallacy
Software Testing
Momina Shaheen

What Will You Learn Today?
 Software Defects
 Faults and Failures
 Categories of Defects

Software Defects
 The term "defect" generally refers to some problem with the software, either with its external behavior or with its internal characteristics.
 The IEEE Standard 610.12 (IEEE, 1990) defines the related terms as follows:
 Failure
 Fault
 Error


Software Defects
 Failure: The inability of a system or component to perform its required functions within specified performance requirements.
 Fault: An incorrect step, process, or data definition in a computer program.
 Error: A human action that produces an incorrect result.
 Failure example (figure): an arithmetic overflow at run time.
 Fault example (figure): a wrong formula in the code.
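A minimal hypothetical sketch of the chain from error to fault to failure, using the definitions above:

    # Hypothetical sketch of the IEEE chain: a human ERROR introduces a
    # FAULT into the code, and executing the fault produces a FAILURE.
    def average(values):
        # FAULT: the formula is wrong (the ERROR was the programmer's
        # mistaken belief that dividing by 2 averages any list).
        return sum(values) / 2

    print(average([10, 20, 30]))   # prints 30.0
    # FAILURE: the observed output (30.0) deviates from the required
    # behaviour (the true average, 20.0).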


Categories of Software Defects
 Errors of Omission: something was left out by accident.
 Errors of Commission: something is done that is wrong.
 Errors of Clarity and Ambiguity: two people reach different interpretations of what is meant.
 Errors of Speed or Capacity: the application works, but not fast enough.
SOFTWARE TESTING
LECTURE # 07
Momina Shaheen

WHAT WILL YOU LEARN TODAY?
 Origins of Defects
‒ Requirements Defects
‒ Design Defects
‒ Coding Defects
‒ Documentation Defects
‒ Fix Defects
‒ Data Defects
‒ Test-Case Defects
‒ Regression Testing
Recap: Categories of Software Defects
 Errors of Omission: something was left out by accident.
 Errors of Commission: something is done that is wrong.
 Errors of Clarity and Ambiguity: two people reach different interpretations of what is meant.
 Errors of Speed or Capacity: the application works, but not fast enough.
Origins of Defects
 Errors in Requirements
 Errors in Design
 Errors in Source Code
 Errors in Documentation
 Errors due to "Bad Fixes"
 Errors in Data and Tables
 Errors in Test Cases


REQUIREMENTS DEFECTS
 All four categories of defect are found in requirements.
 The two most common problems are errors of omission and errors of clarity and ambiguity.
 If requirements errors are not prevented or removed, they flow downstream into design, code, and user manuals.
 Errors which originate in requirements tend to be the most expensive and troublesome to eliminate later.
 For reducing requirements defects, prevention is usually more effective than defect removal.
REQUIREMENTS DEFECTS
 Requirements for large systems can hardly ever be complete, given the observed rate of creeping requirements during the development cycle.
 Since requirements grow at rates between 1% and 3% per month during development, the initial requirements often describe less than 50% of the features that end up in the final version when it is delivered.
 Once deployed, applications continue to change at rates of approximately 5% to 8% new features every year, and perhaps 10% modification to existing features.
COMMON REQUIREMENTS DEFECTS FOR A WEBSITE
 Content Feeds: Discuss potential feeds; these could be either internal or external content.
 Company Profile: Gather a basic company profile: what they specialize in, what their plans are for expansion (such as adding a new product line or service), and who their customers are.
 Competition: Define who their competitors are; this will help you see how they achieve success and analyze potential improvements.
 Sitemap: Site navigation is organized properly according to the content.
 Pages: Total pages.
 Header & Footer: The information in this part is correct.
DESIGN DEFECTS
 Design ranks next to requirements as a source of very troublesome and very expensive errors.
 All four categories of defects are found in software design and specifications, as might be expected.
DESIGN DEFECTS
 The most common forms of design defects are errors of omission, where things are left out, and errors of commission, where something is stated that later turns out to be wrong.
 Errors of clarity and ambiguity are also common, and many performance-related problems originate in the design process as well.
CODING DEFECTS
 All four categories of defects can be found in source code, with errors of commission being dominant while code is under development.
 Perhaps the most surprising aspect of coding defects, when they are studied carefully, is that more than 50% of the serious bugs or errors found in the source code did not truly originate in the source code.
CODING DEFECTS
 A majority of so-called programming errors are really due to the programmer not understanding the design, or the design not correctly interpreting a requirement.
 This is not a surprising situation. Software is one of the most difficult products to visualize prior to building it.
CODING DEFECTS
 Built-in syntax checkers and editors associated with modern programming languages can find many "true" programming errors (such as missed parentheses or looping problems).
 Even poor structure and excessive branching can now be measured and corrected automatically.


CODING DEFECTS
 The kinds of errors that are not easily found are deeper problems in algorithms, or those associated with misinterpretation of the design.
LANGUAGE LEVELS
 Defects in Object-Oriented Programming Languages
‒ Since OO analysis and design has a steep learning curve and is difficult to absorb, some OO projects suffer from worse-than-average quality levels due to problems originating in the design.


DOCUMENTATION DEFECTS
 User documentation, in the form of both manuals and online information, can contain errors of omission and errors of commission.
 The most common kind of problem is errors of clarity and ambiguity.
 Performance-related errors are not often encountered in user information.
FIX DEFECTS
 The phrase "bad fixes" refers to attempts to repair an error which, although the original error may be fixed, introduce a new secondary bug into the application.
 Bad fixes are usually errors of commission, and they are found in every major deliverable, although they are most troublesome for requirements, design, and source code.
FIX DEFECTS
 Bad fixes are very common and can be both annoying and serious.
 From about 5% to more than 20% of attempts to repair bugs may create a new secondary bug.
 For code repairs, bad fixes correlate strongly with high complexity levels, as might be expected.
FIX DEFECTS
 Repairs to ageing legacy applications where the code is poorly structured tend to have higher-than-average bad-fix injection rates.
 Often bad fixes are the result of haste or schedule pressures, which cause the developers to skimp on things like inspecting or testing the repairs.
BAD FIX EXAMPLES
 When attempting to correct a loop problem, such as going through the loop one time too often, the repair goes through the loop one time short of the correct amount; and
 When correcting a branching problem that goes to the wrong subroutine, the repair goes to a different wrong subroutine.
DATA DEFECTS
 The topic of data quality and data defects is usually outside the domain of software quality assurance.
 Since one of the most common business uses of computers is specifically to hold databases, repositories, and data warehouses, the topic of data quality is becoming a major issue.
DATA DEFECTS
 Data errors can be very serious, and they also interact with software errors to create many expensive and troublesome problems.
 Many of the most frustrating problems that human beings encounter with computerized applications can be traced back to data problems.
 Errors in utility bills, financial statements, tax records, motor vehicle registrations, and a host of others are often data errors.
TEST-CASE DEFECTS
 Exploratory research carried out by IBM's software quality assurance group on regression test libraries noted some disturbing findings:
 About 30% of the regression test cases were duplicates that could be removed without reducing testing effectiveness.
 About 12% of the regression test cases contained errors of some kind.
 Coverage of the regression test library ranged between about 40% and 70% of the code; i.e., there were notable gaps which none of the regression test cases managed to reach.
REGRESSION TESTING
 For software, regression means slipping backward, and usually refers to an error made while attempting to add new features or fix bugs in an existing application.
 A regression test means a set of test cases that are run after changes are made to an application.
 The test cases are intended to ensure that every prior feature of the application still works, and that the new material has not caused errors in existing portions of the application.
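A minimal sketch of the idea in Python; the format_name function and its release history are hypothetical, invented for illustration:

    # Suppose format_name() shipped in release 1; release 2 adds an optional
    # title. The old tests are kept and re-run so the new feature cannot
    # silently break prior behaviour.
    def format_name(first, last, title=None):
        name = f"{last}, {first}"
        return f"{title} {name}" if title else name

    # Regression tests: pin down release-1 behaviour.
    assert format_name("Ada", "Lovelace") == "Lovelace, Ada"

    # New tests for the release-2 feature.
    assert format_name("Ada", "Lovelace", title="Dr.") == "Dr. Lovelace, Ada"

    print("regression suite passed")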
DEFECTS IN SECONDARY DESIGN
 The seven fundamental design issues actually describe the application itself. Therefore, errors or defects affect what the software actually does.
 Security
 As hackers and viruses become trickier and more common, every application that deals with business information needs to have security features designed into it. Errors in software security can result in viral invasion or easy penetration by malicious hackers.
DEFECTS IN SECONDARY DESIGN
 Reliability
 This section of the software specifications defines the mean time to failure and mean time between failures targeted for the application. For contract software, reliability requirements should be stated explicitly. There is a strong correlation between reliability and software defect volumes and severity levels, so reliability targets lead directly to the need for effective defect prevention and defect removal operations. In some software development contracts, explicit targets for defect removal efficiency and post-release quality levels are now included.
 Maintainability
 This section of the specifications discusses the assumptions about how software defects will be reported and handled, plus any built-in features of the application to facilitate later maintenance activity. Topics such as maintenance release intervals are also discussed.
DEFECTS IN SECONDARY DESIGN
 Software Dependencies
 Errors in this section can lead to reduced functionality. A by-product of listing specific software dependencies is the ability to explore interfaces in a thorough manner.
 Packaging
 This section, used primarily by commercial software vendors, discusses how the software will be packaged and delivered; i.e., CD-ROM, disk, downloaded from a host, etc. Errors here may affect user satisfaction and market share. The initial packaging decision will also probably affect how subsequent maintenance releases and defect repairs are distributed to users. For example, starting in about 1993, many major software vendors began to use commercial networks such as America Online, CompuServe, and the Internet as a channel for receiving customer queries and defect reports, and also as a channel for downloading updates, new releases, and defect repairs.
RECAP
 Classification of Defects
 Origins of Defects
‒ Requirements Defects
‒ Design Defects
‒ Coding Defects
‒ Documentation Defects
‒ Fix Defects
‒ Data Defects
‒ Test-Case Defects
‒ Regression Testing
SOFTWARE TESTING
Momina Shaheen

Outline
 Test Case
 Test Case Template
 Level of Detail for Test Cases
 Good Test Cases
 Bad Test Cases
 Test Case Organization and Tracking
 Software Testing Life Cycle
 SDLC Models and Testing
 V-Model
 Modified V-Model
 Test Driven Development
Test Case
 A test case is a set of:
 Input values
 Execution preconditions
 Expected results
 Execution postconditions
 developed for a particular objective or test condition, such as:
 To exercise a particular program path
 To verify compliance with a specific requirement
Test Case
 The details of a test case should explain exactly what values or conditions will be sent to the software, and what result is expected.
 A test case can be referenced by one or more test design specifications, and it may reference more than one test procedure.
 Below are the standard fields of a sample test case template.

Test Case Template
 Test Case ID:
 Unique ID for each test case. Follow some convention to indicate the type of test, e.g. 'TC_UI_1' indicating 'user interface test case #1'.
 Product / Ver. / Module:
 Mention the product name, the name of the main module or sub-module, and the version information of the product.
 Test Case Version (Optional):
 Mention the test case version number.
 Use Case Reference(s):
 Mention the use case reference for which the test case is written.
 GUI Reference(s) (Optional):
 Mention the GUI reference for which the test case is written.
 QA Test Engineer / Test Designed By:
 Name of the tester.
 Test Designed Date:
 Date when the test case was written.
 Test Executed By:
 Name of the tester who executed this test; to be filled in after test execution.
 Test Execution Date:
 Date when the test case was executed.
 Test Title/Name:
 Test case title, e.g. verify the login page with valid username and password.
 Test Case Summary/Description:
 Describe the test objective.
 Pre-Requisite/Pre-condition:
 Any prerequisite that must be fulfilled before execution of this test case; list all pre-conditions needed to successfully execute it.
 Dependencies (Optional):
 Mention any dependencies on other test cases or test requirements.
 Test Steps:
 List all test execution steps in detail, in the order in which they should be executed. Make sure to provide as much detail as you can.
 Test Data/Input Specification:
 The test data used as input for the test case. You can provide different data sets with exact values to be used as input.
 Examples: If you're testing Calculator, this may be as simple as 1+1. If you're testing cellular telephone switching software, there could be hundreds or thousands of input conditions. If you're testing a file-based product, it would be the name of the file and a description of its contents.
 Expected Result / Output Specification:
 What the system output should be after test execution. Describe the expected result in detail, including any message/error that should be displayed on screen.
 Examples: Did 1+1 equal 2? Were the thousands of output variables set correctly in the cell phone software? Did all the contents of the file load as expected?
 Actual Result:
 The actual test result, to be filled in after test execution. Describe the system behaviour after test execution.
 Status (Pass/Fail):
 If the actual result is not as per the expected result, mark this test as failed; otherwise, mark it as passed.
 Notes/Comments/Questions:
 To support the above fields: if there are special conditions which can't be described in any of the above fields, or there are questions related to the expected or actual results, mention them here.
 Post-condition:
 What the state of the system should be after executing this test case.
 Environmental Needs (Optional):
 Environmental needs necessary to run the test case, including hardware, software, test tools, facilities, staff, and so on.
 Special Procedural Requirements (Optional):
 Describes anything unusual that must be done to perform the test. Example: testing WordPad probably doesn't need anything special, but testing nuclear power plant software might.
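As an illustration, one filled-in test case might be captured as structured data; all field values below are hypothetical, invented for this sketch:

    # A sketch of one test case using the template fields above.
    test_case = {
        "test_case_id": "TC_UI_1",
        "product_module_version": "MyApp / Login module / v1.2",
        "use_case_reference": "UC-04 (user login)",
        "designed_by": "A. Tester",
        "designed_date": "2021-03-28",
        "title": "Verify login page with valid username and password",
        "description": "Check that a registered user can log in.",
        "precondition": "User account 'demo' exists and is active.",
        "test_steps": [
            "Open the login page",
            "Enter username 'demo' and a valid password",
            "Click the Login button",
        ],
        "test_data": {"username": "demo", "password": "<valid password>"},
        "expected_result": "User is redirected to the dashboard.",
        "actual_result": None,   # filled in after execution
        "status": None,          # Pass/Fail, filled in after execution
        "postcondition": "A session exists for user 'demo'.",
    }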
Level of Detail for Test Cases
 If you follow this level of documentation, you could be writing at least a page of descriptive text for each test case you identify.
 Thousands of test cases could take thousands of pages of documentation.
 The project could be outdated by the time you finish writing.
 Many government projects and industries are required to document their test cases to this level.
 In other cases, you can take some shortcuts.
 Taking a shortcut doesn't mean dismissing or neglecting important information.
 You can use the following test case format for a printer compatibility matrix.
 All the other information that goes with a test case is most likely common to all these cases and could be written once and attached to the table.
Good Test Cases
A good test case has certain characteristics:
1. It should be accurate and test what it is intended to test.
2. No unnecessary steps should be included in it.
3. It should be reusable.
4. It should be traceable to requirements.
5. It should be compliant with regulations.
6. It should be independent, i.e. you should be able to execute it in any order without any dependency on other test cases.
7. It should be simple and clear; any tester should be able to understand it by reading it once.
TIPS for Writing Good Test Cases
 Test only one thing
 Always make sure that your test case tests only one thing; if you try to test multiple conditions in one test case, it becomes very difficult to track results and errors.
 Organize your test cases consistently
 You can organize your test cases in many ways; however, you should always follow the same pattern to organize them.
 Write independent test cases
 Your test cases should not have dependencies on other test cases, i.e. you should be able to execute each test case individually.
 Write small test cases
 Always mention the purpose of each test case clearly.
 Be precise.
Bad Test Cases
1. The test passes but does not test the actual feature
2. Testing irrelevant things
3. Testing multiple things in one assertion
4. Tests swallowing exceptions
5. Tests which depend on excessive setup
6. Tests compatible only with the developer's machine
7. Tests filling log files with loads of text
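As a sketch of items 3 and 4 above (the parse_age function is hypothetical, invented for illustration), compare a bad test with its corrected counterparts:

    import unittest

    def parse_age(text):
        # Hypothetical function under test.
        return int(text)

    class BadTest(unittest.TestCase):
        def test_everything(self):
            # Bad: tests multiple things in one case, and swallows the
            # exception, so the test "passes" even when parsing fails.
            try:
                self.assertEqual(parse_age("42"), 42)
                self.assertEqual(parse_age("oops"), 0)  # never really checked
            except ValueError:
                pass  # swallowed: the failure is hidden

    class GoodTests(unittest.TestCase):
        def test_parses_valid_age(self):
            self.assertEqual(parse_age("42"), 42)

        def test_rejects_non_numeric_age(self):
            with self.assertRaises(ValueError):
                parse_age("oops")

    if __name__ == "__main__":
        unittest.main()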
Test Procedure
 A standardized and documented process that represents the sequence of actions for the execution of test cases; also known as a manual test script.
 The ANSI/IEEE 829 standard lists some important information that needs to be defined:
 Identifier
 A unique identifier that ties the test procedure to the associated test cases and test design.
 Purpose
 The purpose of the procedure and a reference to the test cases that it will execute.
 Special requirements
 Other procedures, special testing skills, or special equipment needed to run the procedure.
Test Procedure
 Procedure steps
 Detailed description of how the tests are to be run:
 Log: tells how and by what method the results and observations will be recorded.
 Setup: explains how to prepare for the test.
 Start: explains the steps used to start the test.
 Procedure: describes the steps used to run the tests.
 Measure: describes how the results are to be determined, for example with a stopwatch or visual determination.
 Shut down: explains the steps for suspending the test for unexpected reasons.
 Restart: tells the tester how to pick up the test at a certain point if there's a failure or after shutting down.
 Stop: describes the steps for an orderly halt to the test.
 Wrap up: explains how to restore the environment to its pre-test condition.
 Contingencies: explains what to do if things don't go as planned.

(Figure: a sample test procedure showing how much detail should be involved.)
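As an illustration, such a procedure might be captured as structured data; the identifier, purpose, and step contents below are hypothetical, invented for this sketch:

    # A sketch of an ANSI/IEEE 829-style test procedure as structured data.
    procedure = {
        "identifier": "TP-LOGIN-001",
        "purpose": "Execute test cases TC_UI_1..TC_UI_5 for the login screen",
        "special_requirements": "Test database with seeded demo accounts",
        "steps": {
            "log": "Record results in the shared test-run spreadsheet",
            "setup": "Restore the demo database; start the application server",
            "start": "Open the application login page in a browser",
            "procedure": "Run TC_UI_1 through TC_UI_5 in order",
            "measure": "Compare each screen with the expected result",
            "shut_down": "Stop the server if an unexpected outage occurs",
            "restart": "Resume from the last completed test case",
            "stop": "Log out and close the browser",
            "wrap_up": "Restore the database to its pre-test snapshot",
            "contingencies": "File a defect and continue with the next case",
        },
    }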
Test Case Organization and Tracking
While considering how the information will be organized and tracked, think about these questions:
 Which test cases do you plan to run?
 How many test cases do you plan to run?
 How long will it take to run them?
 Can you pick and choose test suites (groups of related test cases) to run on particular features or areas of the software?
 When you run the cases, will you be able to record which ones pass and which ones fail?
 Of the ones that failed, which ones also failed the last time you ran them?
 What percentage of the cases passed the last time you ran them?
Test Case Organization and Tracking
To manage your test cases and track their results, there are essentially four possible systems:
 In your head: don't even consider this one, even for the simplest projects.
 Paper/documents: it's possible to manage the test cases for very small projects on paper; tables and charts of checklists have been used effectively.
 Spreadsheet: a popular and very workable method of tracking test cases. Spreadsheets are easy to use, relatively easy to set up, provide good tracking and proof of testing, and give a quick overview.
 Custom database: the ideal method for tracking test cases. Many commercial applications are available to perform this specific task, and you can set up reports and queries that allow you to answer any question regarding the test cases.
The important thing to remember is that the number of test cases can easily be in the thousands, and without a means to manage them, you and the other testers could quickly be lost in a sea of documentation.
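A minimal sketch of spreadsheet-style tracking using Python's standard csv module; the test case IDs and results are hypothetical, invented for illustration:

    import csv

    # One row per test case, one column per test run.
    rows = [
        {"test_case": "TC_UI_1", "run_1": "Pass", "run_2": "Pass"},
        {"test_case": "TC_UI_2", "run_1": "Fail", "run_2": "Pass"},
        {"test_case": "TC_UI_3", "run_1": "Fail", "run_2": "Fail"},  # repeat failure
    ]

    with open("test_tracking.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["test_case", "run_1", "run_2"])
        writer.writeheader()
        writer.writerows(rows)

    # Answers one of the tracking questions above: the latest pass percentage.
    passed = sum(1 for r in rows if r["run_2"] == "Pass")
    print(f"{passed}/{len(rows)} passed in the last run")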
Assignment
 Prepare a document containing three test cases, with the proper test case format, for your final year project.
 Each group member will write his own test case (one test case per group member).
 The actual test case writer's name should be mentioned in the QA Test Engineer field.
Testing Life Cycle

Project Initiation → System Study → Test Plan → Design Test Cases → Test Environment Setup → Execute Test Cases (manual/automated) → Report Defects → Regression Test → Analysis → Summary Reports
Testing Life Cycle

 Project Initiation
 All the necessary analysis is undertaken to allow the project to be planned.
 System Study
 To test, we need to know the product functionality (understanding the product).
 Test Plan
 A systematic approach to testing a system or software.
 Contains a detailed understanding of what the eventual testing workflow will be or should be.
 Design Test Cases
 A test case is the specific procedure for testing a particular requirement by giving specific input to the system and defining the expected results.
 Executing (Manual)
 Usually, test cases written by person A will be executed by person B and vice versa, so that there are more chances of finding bugs.
Testing Life Cycle

 Report defects
 Report defects in an issue logger (e.g., JIRA, HP-QC). Issues will be fixed by the DEV team.
 Regression testing
 Verify whether the new functionality or bug correction affected the previous behaviour.
 Analysis
 Analysis of the testing is done here.
 Test Summary Report
 An important deliverable which is prepared after testing is completed. This document explains various details and activities of the testing performed for the project to the respective stakeholders, like senior management, the client, etc.
SDLC Models and Testing

Waterfall Model
• Sees testing as a post-development (post-coding) activity

Spiral Model
• Tries to break the product up into increments
• Each increment can be tested separately

V-Model
• Similar to the Waterfall Model
• Sees product development as being made up of a number of phases or levels
• Different types of testing are applied at different phases or levels
Product development activity represented as a Waterfall Model

Overall Business Requirements → Software Requirements → High Level Design → Low Level Design → Coding → Testing
Phases of testing for different development phases

Overall Business Requirements → Acceptance Testing
Software Requirements → System Testing
High Level Design → Integration Testing
Low Level Design → Component Testing
Coding → Unit Testing
The V Model

 Overall Business Requirements
 These requirements cover hardware, software, and operational requirements.
 Software Requirements
 The next step is moving from overall requirements to software requirements.
 High Level Design
 The software system is imagined as a set of subsystems that work together.
 Low Level Design
 The high level design gets translated to a more detailed or low level design. Here, data structures, algorithm choices, table layouts, processing logic, exception conditions, etc. are decided.
 Coding
 Program code is written in appropriate languages.
The V Model

 Acceptance Testing
 For overall business requirements, eventually whatever software is developed should fit into and work in the overall context and should be accepted by the end user. This testing is acceptance testing.
 System Testing
 Before product deployment, the product is tested as an entire unit to make sure that all the software requirements are satisfied by the product. This testing of the entire software system is system testing.
 Integration Testing
 High level design views the system as being made up of interoperating and integrated subsystems. The individual subsystems should be integrated and tested. This type of testing corresponds to integration testing.
 Component Testing
 The components that are the outputs of low level design have to be tested independently before being integrated. This type of testing is component level testing.
 Unit Testing
 Coding produces several program units; each of these units has to be tested independently before combining them to form components. The testing of program units forms unit testing.
The V Model
 Planning of testing for different development phases
 Planning phase is not shown as a separate entity since it is common for all
testing phases.
 It is still not possible to execute any of these tests until the product is
actually built.
 In other words, the step called "testing" is now broken down into different
sub-steps.
 It is still the case that all the testing execution related activities are done
only at the end of the life cycle.
The V Model
 Who should design tests
 Execution of the tests cannot be done till the product is built, but the design of tests can be carried out much earlier.
 The skill sets required for designing each type of test belong to the people who actually perform the function of creating the corresponding artifact.
 For example,
 Acceptance tests should be designed by those who formulate the overall business requirements (the customers, where possible).
 Integration tests should be designed by those who know how the system is broken into subsystems, i.e., those who perform the high level design.
 Again, the people doing development know the innards of the program code and thus are best equipped to design the unit tests.
The V Model
 Benefits of early design
 We achieve more parallelism and reduce the end-of-cycle time taken for testing.
 By designing tests for each activity upfront, we are building in better upfront
validation, thus again reducing last-minute surprises.
 Tests are designed by people with appropriate skill sets.
V-Model

Overall Business Requirements → Acceptance Test Design → Acceptance Testing
Software Requirements → System Test Design → System Testing
High Level Design → Integration Test Design → Integration Testing
Low Level Design → Component Test Design → Component Testing
Coding → Unit Test Design → Unit Testing

The left (downward) arm of the V is verification; the right (upward) arm is validation.
V-Model
 Advantages of the V-Model
 Testing activities like planning and test design happen well before coding.
 This saves a lot of time, hence higher chances of success over the waterfall model.
 Proactive defect tracking: defects are found at an early stage, which avoids the downward flow of defects.
 Disadvantages of the V-Model
 It is very rigid and the least flexible.
 No early prototypes of the software are produced.
 If any changes happen midway, then the test documents along with the requirement documents have to be updated.
Modified V-Model
 In the V-Model there is an assumption:
 Even though the activity of test execution was split into the execution of tests of different types, the execution cannot happen until the entire product is built.
 For a given product, the different units and components can be in different stages of evolution.
 For example, one unit may be in development and thus subject to unit testing, whereas another unit may be ready for component testing.
 The V model does not explicitly address this parallelism, commonly found in product development.
Modified V-Model

 In the modified V Model,
 Each unit, component, or module is given explicit exit criteria to pass on to the subsequent stage.
 The units, components, or modules that satisfy a given phase of testing move to the next phase of testing where possible.
 They do not wait for all the units, components, or modules to move from one phase of testing to another.
Test Driven Development
 Test-driven development (TDD) is a method of software development
 The concept is to "get something working now and perfect it later."
 You develop the code incrementally, along with a test for that increment
 Test-driven development was introduced as part of agile methods such as Extreme
Programming
 You don’t move on to the next increment until the code that you have developed
passes its test
Test Driven Development
 After each test, refactoring is done and then the same or a similar test is performed
again.
 Refactoring
 Refactoring is the process of changing a software system in such a way
that it does not alter the external behavior of the code yet improves its
internal structure.
 The testing is usually performed by an automated tool
 You have to be able to run every test each time that you add
functionality or refactor the program
 The process is iterated as many times as necessary until each unit is
functioning according to the desired specifications.
 The tests are embedded in a separate program that runs the tests and
invokes the system that is being tested
 It is possible to run hundreds of separate tests in a few seconds
Test Driven Development
 Procedure (a worked cycle in Python follows this list)
 Think about what you want to do.
 Think about how to test it.
 Write a small test.
 Write just enough code to fail the test.
 Run and watch the test fail.
 Write just enough code to pass the test (and pass all your previous tests).
 Run and watch all of the tests pass.
 If you have any duplicate logic or inexpressive code, refactor to remove duplication and increase expressiveness.
 Run the tests again; they should pass. If they fail, you made a mistake in your refactoring. Fix it now and re-run.
 Repeat the steps above until you can't find any more tests that drive writing new code.
Test Driven Development

 Add a test
 Run the tests
 [Pass] → the new test should have failed; go back and add a test
 [Fail] → make a little change
 Run the tests
 [Fail] → make another little change
 [Pass, development continues] → add a test
 [Pass, development stops] → done
Test Driven Development
 There are two levels of TDD
 Acceptance TDD
Write a single acceptance test, or behavioral specification.
Produce functionality/code to fulfill that test.
Also known as Behavior Driven Development (BDD).
 Developer TDD
Write single developer test
Produce code to fulfill that test.
Simply called TDD
Test Driven Development

Acceptance TDD (outer loop):
 Add an acceptance test
 Run the acceptance tests
 [Fail] → drop into the Developer TDD loop below
 [Pass, development continues] → add another acceptance test
 [Pass, development stops] → done

Developer TDD (inner loop):
 Add a test
 Run the tests
 [Fail] → make a little change and run the tests again
 [Pass, functionality incomplete] → add another test
 [Pass] → return to the acceptance tests
Acceptance TDD vs. Developer TDD

The scenario:
 You're a developer on a team responsible for the company accounting system, implemented in Rails. One day, a business person asks you to implement a reminder system to remind clients of their pending invoices. Because you're practicing BDD, you sit down with that business person and start defining behaviours.
 You open your text editor and start creating pending specs for the behaviours the business user wants:
 It "adds a reminder date when an invoice is created"
 It "sends an email to the invoice's account's primary contact after the reminder date has passed"
 It "marks that the user has read the email"
Acceptance TDD vs. Developer TDD

 Some developers prefer to write test cases on the spot, calling methods in the system and setting up expectations, like so:

    it "adds a reminder date when an invoice is created" do
      current_invoice = create :invoice
      current_invoice.reminder_date.should == 20.days.from_now
    end

 Let's look at this a different way, with a Test-Driven Development approach, and write out pending tests:

    it "after_create an Invoice sets a reminder date to be creation + 20 business days"
    it "Account#primary_payment_contact returns the current payment contact or the client project manager"
    it "InvoiceChecker#mailer finds invoices that are overdue and sends the email"
Test Driven Development

Test Case ID | Description | Input Data | Expected Results | Actual Results | Pass/Fail | Remarks
UT001 | To test that the function isDivisibleByThree returns true if a number is divisible by 3 | 3 | True | | |
UT002 | To test that the function isDivisibleByThree returns false if a number is not divisible by 3 | 2 | False | | |
UT003 | To test that the function isDivisibleByFive returns true if a number is divisible by 5 | 5 | True | | |
UT004 | To test that the function isDivisibleByFive returns false if a number is not divisible by 5 | 6 | False | | |
UT005 | To test that the function isDivisibleByFifteen returns true if a number is divisible by 15 | 30 | True | | |
UT006 | To test that the function isDivisibleByFifteen returns false if a number is not divisible by 15 | 25 | False | | |
UT007 | To test that the function fizzBuzz returns the number if a number is passed to it | 1 | 1 | | |
UT008 | To test that the function fizzBuzz returns FizzBuzz if a number divisible by 15 is passed to it | 30 | FizzBuzz | | |
UT009 | To test that the function fizzBuzz returns Fizz if a number divisible by 3 is passed to it | 9 | Fizz | | |
UT010 | To test that the function fizzBuzz returns Buzz if a number divisible by 5 is passed to it | 20 | Buzz | | |
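A minimal sketch of code these test cases could drive out, in pytest style; the function names are taken from the table, everything else is an assumption:

    def isDivisibleByThree(n):
        return n % 3 == 0

    def isDivisibleByFive(n):
        return n % 5 == 0

    def isDivisibleByFifteen(n):
        return n % 15 == 0

    def fizzBuzz(n):
        if isDivisibleByFifteen(n):
            return "FizzBuzz"   # UT008: 30 -> FizzBuzz
        if isDivisibleByThree(n):
            return "Fizz"       # UT009: 9  -> Fizz
        if isDivisibleByFive(n):
            return "Buzz"       # UT010: 20 -> Buzz
        return n                # UT007: 1  -> 1

    def test_ut001():
        assert isDivisibleByThree(3) is True   # UT001

    def test_ut008():
        assert fizzBuzz(30) == "FizzBuzz"      # UT008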
Benefits of Test Driven Development
 Code coverage
 Every code segment that you write should have at least one associated
test. You can be confident that all of the code in the system has actually
been executed
 Regression testing
 Run regression tests to check that changes to the program have not
introduced new bugs.
 Simplified debugging
 When a test fails, it should be obvious where the problem lies. The newly
written code needs to be checked and modified. You do not need to
use debugging tools to locate the problem
 System documentation
 The tests themselves act as a form of documentation that describe what
the code should be doing.
Test Driven Development Using Standard Libraries
 Test-driven development is of most use when development is done using well-tested standard libraries.
 If you use such libraries, you still need to write tests for the system as a whole.
 If you use test-driven development, you still need a system testing process to check that the system meets the requirements of all of the system stakeholders.
 System testing also tests performance and reliability, and checks that the system does not do things that it shouldn't do.
 Test-driven development is a successful approach for small and medium-sized projects.
SOFTWARE
TESTING
PROCESS
Outline
■ Basic Definitions
■ Fundamentals of the test process
■ Requirement Traceability Matrix
Basic Definitions
■ Test basis
– It is the information or the document that we need to create our own test cases
and start the test analysis.
■ Test analysis
– It is the process of looking at something that can be used to derive test
information.
■ Test Condition
– An item or event of a component or system that could be verified by one or more
test cases, e.g., a function, transaction, feature, quality attribute, or structural
element.
Basic Definitions

■ Test Procedure Specification
– A document specifying a sequence of actions for the execution of a test. Also known as a manual test script.
■ Test Script
– Commonly used to refer to a test procedure specification, especially an automated one.
■ Test Suites
– A collection of test cases that are used to test a software program to show that it has some specified set of behaviors.
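As a brief illustration, here is a minimal test suite using Python's built-in unittest module; the login function is a hypothetical unit under test:

    import unittest

    def login(username, password):
        """Hypothetical function under test."""
        return bool(username) and bool(password)

    class LoginTests(unittest.TestCase):
        def test_rejects_empty_password(self):
            self.assertFalse(login("ahmad", ""))

        def test_accepts_valid_credentials(self):
            self.assertTrue(login("ahmad", "secret"))

    def suite():
        # The suite groups related test cases so they can be run together.
        s = unittest.TestSuite()
        s.addTest(LoginTests("test_rejects_empty_password"))
        s.addTest(LoginTests("test_accepts_valid_credentials"))
        return s

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(suite())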
Software Testing Process
Fundamentals of the Test Process
1. Test Planning and Control
2. Test Analysis and Design
3. Test Implementation and Execution
4. Evaluating Exit Criteria and Reporting
5. Test Closure Activities
1. Test Planning and Control
■ Test plan:
– A document describing the scope, approach, resources and schedule of intended
test activities.
– It consists of the following:
■ The scope, risk and objective of testing
■ The test policy and/or the test strategy
■ List of the features to be tested
■ Details of the testing tasks
■ Who will do each task (Resource Allocation)
■ The test environment
■ The test design techniques
■ Entry and exit criteria to be used
■ Test Schedule
■ Any risks requiring contingency planning
1. Test Planning and Control

■ Test Planning
– The activity of establishing or updating a test plan.
– Continuous process and performed in all project life cycles
■ Test control has the following major tasks:
– To measure and analyze the results of reviews and testing
– To monitor and document progress, test coverage and exit criteria
– To provide overall information on testing
– To initiate corrective actions
– To make decisions
2. Test Analysis and Design
■ The test objectives are a major deliverable for technical test analysts to know what to
test.
■ We use test objectives as our guide to
– Identify and refine the test conditions for each test objective
– Create test cases that exercise the identified test conditions
■ We need to prioritize the test conditions on the basis of likelihood and impact
associated with each quality risk item as we know that testing everything is an
impractical goal.
■ Following steps are followed for analysis and design phase:
– Non-functional Test Objectives
– Identifying and Documenting Test Conditions
– Test Oracles
– Standards
2.1 Non-functional Test Objectives

■ Non-functional test objectives can apply to any test level and exist throughout the lifecycle.
■ Major non-functional test objectives are addressed at the end of the project.
■ If test execution cannot start at a given level, reviews of requirements, design, and code can be conducted.
■ Performance testing should be performed as early as possible.
■ Performance testing should be done at the unit and component level and also at the time of integration.
2.2 Identifying and Documenting Test Conditions

■ Identify functional and non-functional test conditions.
■ Two important choices while identifying and documenting test conditions are:
1. The structure of the documentation for the test conditions
■ Work in parallel with the test basis documents.
■ Generate the high-level test conditions.
■ Elaborate one or more low-level test conditions underneath each high-level test condition.
2.2 Identifying and Documenting Test
Conditions
2. The level of detail we need to describe the test conditions in the
documentation
■ Outline the key features and quality characteristics at a high level.
■ Identify one or more detailed quality risk items for each feature or
characteristic
■ If you have detailed requirements, go directly to the low-level requirements.
■ Traceability from the detailed test conditions to the requirements is
ensured
■ Another approach is to identify high-level test conditions only
– Chosen level of detail and the structure must align with the test strategy or
strategies, and those strategies should align with the test plan or plans
2.2 Identifying and Documenting Test Conditions
■ The next step is to elaborate test conditions into test cases.
■ High-level test case
– A test case without concrete (implementation-level) values for input data and
expected results
– These include test cases for
■ Functional testing
■ System Testing
■ Acceptance Testing
■ Low-level test case
– A test case with concrete (implementation-level) values for input data and
expected results.
– These include test cases for
■ Unit testing
■ Integration testing
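■ For illustration (hypothetical values): a high-level test case might say "verify login with a valid username and password", while the corresponding low-level test case fixes concrete data, e.g., username ahmad, password 1234, with the concrete expected result that the dashboard is displayed.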
2.2 Identifying and Documenting Test
Conditions
■ Test Design is creating a set of inputs for given software that will provide a set of
expected outputs.
■ The purpose is to ensure that the system works well enough that it can be released with as few problems as possible for the average user.
■ Test design process involves defining the following
– Preconditions
– Test environment requirements
– Test inputs and other test data requirements
– Expected results
– Post conditions
2.2 Identifying and Documenting Test Conditions
■ Test Design Techniques
– Static Techniques/Testing
■ Static testing is software testing technique where testing is carried out without
executing the code.
■ This type of testing comes under Verification.
– Dynamic Techniques/Testing
■ Dynamic testing is software testing technique where testing is carried out with
executing the code.
■ This type of testing comes under Validation.
2.3 Test Oracles

■ A test oracle is a source we use to determine the expected results of a test.
■ An oracle can be
– An existing system
– A user manual
– An individual's specialized knowledge
■ Never use the code itself as an oracle, because that simply tests that the compiler, operating system, and hardware work.
■ Higher test levels, like user acceptance test and system test, rely more on the requirements specification, use cases, and defined business processes.
■ Lower test levels, like component test and integration test, rely more on the low-level design specification.
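As a small illustration, the sketch below uses a trusted existing implementation (Python's built-in sorted) as the oracle for a hypothetical new_sort function under test:

    import random

    def new_sort(items):
        """Hypothetical implementation under test."""
        result = list(items)
        result.sort()
        return result

    def test_new_sort_against_oracle():
        for _ in range(100):
            data = [random.randint(0, 99) for _ in range(20)]
            assert new_sort(data) == sorted(data)   # the oracle supplies the expected result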
2.4 Standards
■ The test design specification template includes
– Test design specification identifier (following whatever standard your
company uses for document identification)
– Features to be tested (in this test suite)
– Approach refinements (specific techniques, tools, etc.)
– Test identification (tracing to test cases in suites)
– Feature pass/fail criteria (e.g., how we intend to determine whether
a feature works, such as via a test oracle, a test basis document, or
a legacy system)
2.4 Standards

■ The test case specification template includes
– Test case specification identifier
– Test items (what is to be delivered and tested)
– Input specifications (user inputs, files, etc.)
– Output specifications (expected results, including screens, files, timing, behaviors of various sorts, etc.)
– Environmental needs (hardware, software, people, props (other supporting things), and so forth)
– Special procedural requirements (operator intervention, permissions, etc.)
– Inter-case dependencies (if needed to set up preconditions)
3. Test Implementation and Execution
■ Test implementation has the following tasks:
– To develop and prioritize the test cases by using techniques, and create test data for those tests. We also write test procedures.
– To create test suites from the test cases for efficient test execution.
– To implement and verify the environment.
– To schedule test execution.
3. Test Implementation and Execution
■ Test execution has the following major tasks:
– To check test environments.
– To check traceability between the test basis and test cases.
– To execute test suites and individual test cases following the test procedures, using execution tools.
– To re-execute the tests that previously failed in order to confirm a fix. This is known as confirmation testing or re-testing.
– To log the outcome of the test execution and record the identities and versions of the software under test. The test log is used for the audit trail.
– To compare actual results with expected results.
– Where there are differences between actual and expected results, to report discrepancies as incidents.
4. Evaluating Exit Criteria and Reporting

■ Evaluating exit criteria is the process of defining when to stop testing.
■ The criteria vary from project to project.
■ Evaluating exit criteria has the following major tasks:
– To check the test logs against the exit criteria specified in test planning.
– To assess if more tests are needed or if the exit criteria specified should be changed.
– To write a test summary report for the stakeholders.
4. Evaluating exit criteria and Reporting
■ We can measure properties of the test execution process such as the following:
– Number of test conditions, cases, or test procedures that are planned, executed,
passed, and failed
– Total defects, classified by severity, priority, status, or some other factor
– Change requests proposed, accepted, and tested
– Planned versus actual costs, schedule, effort
– Quality risks, both mitigated and residual
– Lost test time due to blocking events
– Confirmation and regression test results
5. Test Closure Activities
■ Test closure activities are done when the software is delivered.
■ This process collects data from the completed test process and testware.
■ The testing can also be closed for other reasons, such as:
– When all the information needed from the testing has been gathered.
– When a project is cancelled.
– When some target is achieved.
– When a maintenance release or update is done.
5. Test Closure Activities
■ It has the following major tasks:
– Ensure the deliverables have been delivered.
– Ensure incident reports are closed.
– Document all the systems.
– Archive all the testware, the test environment, and the infrastructure for later reuse.
– Hand over the testware to the maintenance organization, which will support the software.
– Evaluate how the testing went and learn lessons for future releases and projects.
Requirements Traceability Matrix (RTM)
■ Process of preparing links between user requirements and all the initiatives that you
take to meet requirements.
■ What?
– All software requirements
– Software coding
– Software design specification
– Test planning
Requirements Traceability Matrix (RTM)
■ Why?
– The project team comes to know which part of the code relates to which client requirement.
– The testing team comes to know which types of test cases they have to prepare.
■ When?
– During the Requirement Management phase of the SDLC.
– It is a deliverable of Requirement Analysis in the STLC.
Requirements Traceability Matrix (RTM)
■ Importance?
– Risk management
– Change management
– The post-change effect can also be tracked.
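For illustration, a minimal RTM might look like this (all IDs and entries are hypothetical):

Requirement ID | Requirement Description | Design Reference | Test Case ID(s) | Status
REQ-01 | User can log in with valid credentials | HLD-3.1 | TC-101, TC-102 | Pass
REQ-02 | Pending-invoice reminder email is sent | HLD-4.2 | TC-201 | Fail

Each row ties one requirement to the design element that implements it and the test cases that verify it, so the impact of a change to, say, REQ-02 can be traced in both directions.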
SOFTWARE TEST
PLAN
By
Sabeen Amjad
Software Test Plan
▪ Test Plan Template
▪ IEEE 829 Format
Outline

▪ Introduction
▪ The goal of test planning
▪ Test planning topics
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items (Functions)
5. Software Risk Issues
6. Features to be tested
7. Features not to be tested
8. Approach (Strategy)
9. Item Pass/Fail Criteria
10. Suspension Criteria and Resumption Requirements
11. Test Deliverables
12. Remaining Test Tasks
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risks and Contingencies
18. Approvals
19. Glossary
Introduction

▪ The most fundamental test document.
▪ The test lead is responsible for creating the comprehensive test plan.
▪ Testers assist in its creation.
▪ A tester can use this information to organize his own testing tasks.
The goal of test planning

▪ The test plan is a by-product of the detailed planning process that is undertaken to create it.
▪ It's the planning process that matters, not the resulting document.
▪ The ultimate goal of the test planning process is communicating (not recording)
▪ The test team's intent
▪ The test team's expectations
▪ The test team's understanding of the testing that's to be performed
Test planning topics vs. Test planning templates

▪ Many software test plan templates are available.
▪ By using these templates, a document can be easily prepared.
▪ The emphasis should be on the planning process and not the document.
▪ So the focus should be on the topics that need to be covered in the test plan.
▪ Planning is a dynamic process, so where these topics do not cover your project, feel free to adjust them accordingly.
TEST PLANNING TOPICS
1- Test Plan Identifier
▪ Unique number to identify the test plan
▪ The number may also identify whether the test plan is
▪ Master plan
▪ Level plan (Unit, Integration, System and acceptance test plan)
▪ Testing Type Specific Test Plans (Performance, Security Test Plan etc)
▪ Keep in mind
▪ Test plans are like other software documentation
▪ They are dynamic in nature and must be kept up to date
▪ They have revision numbers
▪ You may include
▪ Author
▪ Contact information
▪ Revision history information
Document History and Distribution

1. Revision History
Revision # | Revision Date | Description of Change | Author

2. Distribution
Recipient Name | Recipient Organization | Distribution Method
2- References
▪ List all documents that support this test plan
▪ Refer to the actual version/release number of the document
▪ As stored in the configuration management system
▪ Do not duplicate the text from other documents
▪ It will reduce the viability (practicality) of this document and increase the
maintenance effort
▪ Documents that can be referenced include:
▪ Project Plan
▪ Requirements specifications (Software / Business)
▪ High Level design document
▪ Detail design document
▪ Development and Test process standards
▪ Methodology guidelines and examples
▪ Corporate standards and guidelines
3- Introduction
▪ State the purpose of the Plan
▪ Possibly identifying the level of the plan (master etc.)
▪ Executive summary part of the plan
▪ Identify the Scope of the plan in relation to the Software Project plan
▪ Other items may include
▪ Resource and budget constraints
▪ Process to be used for change control
▪ Communication and coordination of key activities
▪ As this is the “Executive Summary” keep information brief and to the point.
4- Test Items (Functions)
▪ Things you intend to test within the scope of the test plan
▪ List of what is to be tested
▪ This can be developed from the software application inventories as well as other
sources of documentation and information.
▪ This can be controlled and defined by local Configuration Management (CM)
process.
▪ Remember, what you are testing is what you intend to deliver to the Client.
▪ This section can be oriented to the level of the test plan
▪ For higher levels it may be by application or functional area
▪ For lower levels it may be by program, unit, module or build
5- Software Risk Issues
▪ Identify what software is to be tested and what the critical areas are, such as:
▪ Delivery of a third party product.
▪ New version of interfacing software
▪ Ability to use and understand a new package/tool, etc.
▪ Extremely complex functions
▪ Modifications to components with a past history of failure
▪ Poorly documented modules or change requests
▪ There are some inherent software risks such as complexity; these need to be
identified
▪ Safety
▪ Multiple interfaces
▪ Impacts of operations on Client
▪ Government regulations and rules
5- Software Risk Issues
▪ Misunderstanding of the original requirements
▪ This can occur at the management, user and developer levels
▪ The past history of defects will help identify potential areas within the software that
are risky.
▪ It is the nature of defects to cluster and clump together.
▪ Good approach to identify risks
▪ To have several brainstorming sessions
▪ Identify them early so that they do not appear as surprise late in the project
▪ Examples: Risks can be
▪ A new tester assigned to test the software of a new nuclear power plant
▪ Time to test a project is too short to meet the schedule
6- Features to be tested
▪ Listing of what is to be tested from the USERS viewpoint of what the system does.
▪ This is not a technical description of the software
▪ Set the level of risk for each feature
▪ Use a simple rating scale such as (H, M, L): High, Medium and Low.
▪ These types of levels are understandable to a User.
▪ Be prepared to discuss why a particular level was chosen
Test Items v/s Features to be tested (Section 4 v/s Section 6)
▪ Section 4 and Section 6 are very similar?
▪ The only true difference is the point of view.
▪ Section 4 is a technical type description including version numbers and other
technical information
▪ Section 6 is from the User’s viewpoint
▪ Users do not understand technical software terminology; they understand
functions and processes as they relate to their jobs.
7- Features not to be tested
▪ Listing of what is NOT to be tested from both
▪ The Users viewpoint of what the system does
▪ Configuration management/version control view
▪ Identify WHY the feature is not to be tested, there can be any number of reasons.
▪ Not to be included in this release of the Software
▪ Low risk, has been used before and is considered stable
▪ Will be released but not tested or documented as a functional part of the
release of this version of the software
▪ Some components previously released are already tested
▪ An outsourcing company may supply pre-tested portion of the product
7- Features not to be tested

▪ Sections 6 and 7 are directly related to Section 5.
▪ What will and will not be tested are directly affected by the levels of acceptable risk within the project.
▪ What does not get tested affects the level of risk of the project.
▪ However, code that slipped through the development cycle untested because of a misunderstanding would be a disaster.
8- Approach (Strategy)
▪ It should be appropriate to the level of the plan (master, acceptance, etc.).
▪ It should be in agreement with all higher and lower levels of plans.
▪ Overall rules and processes should be identified.
▪ Will you use black box testing, white box testing, or a mixed approach?
▪ When will each approach be applied, and to what parts of the software?
▪ Which parts of the code will be tested manually and which through automation?
▪ Are any special tools to be used?
▪ Will the tools be developed, or will existing tools be purchased?
▪ Will the tools require special training?
▪ What metrics will be collected?
▪ How is Configuration Management to be handled?
▪ How many different configurations will be tested?
8- Approach (Strategy)
▪ Combinations of HW, SW and other vendor packages
▪ What levels of regression testing will be done and how much at each test level?
▪ Will regression testing be based on severity of defects detected?
▪ How will elements in the requirements and design that do not make sense or are un-testable be processed?
▪ Would it be better to outsource the entire test effort?
▪ If this is a master test plan
▪ Overall project testing approach and coverage requirements must also be
identified
▪ Specify if there are special requirements for the testing
▪ Only the full component will be tested
▪ A specified segment of grouping of features/components must be tested
together.
8- Approach (Strategy)

▪ Other information that may be useful in setting the approach:
▪ MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved and if the data is available.
▪ How will meetings and other organizational processes be handled?
▪ The test strategy needs to be made by experienced testers.
▪ Everyone in the project team should understand and be in agreement with the test strategy.
9- Item Pass/Fail Criteria
▪ What are the Completion criteria for this plan?
▪ This is a critical aspect of any test plan and should be appropriate to the level of
the plan.
▪ At the Unit test level this could be items such as:
▪ All test cases completed.
▪ A specified percentage of cases completed with a percentage containing
some number of minor defects.
▪ Code coverage tool indicates all code covered.
▪ At the Master test plan level this could be items such as:
▪ All lower level plans completed.
▪ A specified number of plans completed without errors and a percentage with
minor defects.
9- Item Pass/Fail Criteria

▪ This could be
▪ An individual test case level criterion
▪ A unit level plan
▪ General functional requirements for higher level plans
▪ The number and severity of defects located also needs to be considered.
10- Suspension Criteria and Resumption Requirements
▪ Know when to pause in a series of tests.
▪ If the number or type of defects reaches a point where the follow on testing has
no value, it makes no sense to continue the test
▪ Specify
▪ What constitutes stoppage for a test or series of tests
▪ What is the acceptable level of defects that will allow the testing to proceed
▪ Testing after a truly fatal error will generate conditions that may be identified as
defects but are in fact ghost errors caused by the earlier defects that were ignored.
11- Test Deliverables
▪ What is to be delivered as part of this plan?
▪ Test plan document
▪ Test cases
▪ Test design specifications
▪ Tools and their outputs
▪ Simulators
▪ Static and dynamic generators
▪ Error logs and execution logs
▪ Problem reports and corrective actions
▪ One thing that is not a test deliverable is
▪ Software itself that is listed under test items and is delivered by development
12- Remaining Test Tasks
▪ If this is a multi-phase process or if the application is to be released in increments
▪ There may be parts of the application that this plan does not address.
▪ These areas need to be identified to avoid any confusion. Defects should not be
reported back on those future functions.
▪ This will also allow the users and testers to avoid incomplete functions and
prevent waste of resources chasing non-defects.
▪ If the project is being developed as a multi-party process
▪ This plan may only cover a portion of the total functions/features.
▪ This status needs to be identified so that those other areas have plans
developed for them
▪ So to avoid wasting resources tracking defects that do not relate to this plan.
▪ When a third party is developing the software, this section may contain descriptions
of those test tasks belonging to both the internal groups and the external groups.
13- Environmental Needs
▪ Are there any special requirements for this test plan, such as:
▪ Special simulators, static generators etc. required
▪ How will test data be provided?
▪ Are there special collection requirements or specific ranges of data that
must be provided?
▪ How much testing will be done on each component of a multi-part feature?
▪ Specific versions of other supporting software?
14- Staffing and Training Needs
▪ Training on the application/system
▪ Training for any test tools to be used
▪ What is to be tested and who is responsible for the testing and training?
▪ What should be the skill level of the assigned staff?
15- Responsibilities
▪ Who is in charge?
▪ What assigned staff will do?
▪ This issue includes all areas of the plan e.g.
▪ Setting risks
▪ Selecting features to be tested and not tested
▪ Setting overall strategy for this level of plan
▪ Ensuring all required elements are in place for testing
▪ Providing for resolution of scheduling conflicts
▪ Who provides the required training?
▪ Who makes the critical go/no go decisions for items not covered in the test
plans?
15- Responsibilities

▪ The test team's work is driven by many other functional groups: programmers, project managers, technical writers, and so on.
▪ If responsibilities are not planned, testing becomes a comedy show.
▪ Deciding which tasks to list comes with experience. Each project is different, so you can ask, for each project, which tasks tend to be neglected.
16- Schedule
▪ Should be based on realistic and validated estimates
▪ If the estimates for the development of the application are inaccurate, the entire
project plan will slip
▪ And testing is part of the overall project plan
▪ All relevant milestones should be identified with their relationship to the
development process identified.
▪ This will also help in identifying and tracking potential slippage in the schedule
caused by the test process.
▪ It is critical because the features, considered easy to design and code may be very
time consuming to test
▪ Some features may be postponed to a later release based on the test schedule
16- Schedule

▪ Test work typically is not distributed evenly over the entire product development
cycle
▪ Some testing occurs early in the form of reviews
▪ Number of testing tasks, number of people and amount of time spent often
increases over course of project
16- Schedule

▪ When the time allotted for application testing is limited, how can a test manager and team possibly organize, implement, and manage ample test coverage? This situation is called a schedule crunch.
▪ The solution is to avoid absolute dates for starting and stopping tasks in the test schedule.
16- Schedule

▪ If test schedule uses relative dates based on entrance and exit criteria defined by
testing phases, it becomes clear that the testing tasks rely on some other
deliverables being completed first
17- Planning Risks and Contingencies
▪ What are the overall risks to the project with an emphasis on the testing process?
▪ Lack of personnel resources when testing is to begin.
▪ Lack of availability of required hardware, software, data or tools.
▪ Late delivery of the software, hardware or tools.
▪ Delays in training on the application and/or tools.
▪ Changes to the original requirements or designs.
17- Planning Risks and Contingencies
▪ Specify what will be done for various events, for example:
▪ If the requirements change after baselining, the following actions will be taken:
▪ The test schedule and development schedule will move out an appropriate number of days. This rarely occurs, as most projects tend to have fixed delivery dates.
▪ The number of tests performed will be reduced.
▪ The number of acceptable defects will be increased.
▪ The above two items could lower the overall quality of the delivered product.
▪ Resources will be added to the test team.
▪ The test team will work overtime, which could affect team morale.
▪ The scope of the plan may be changed.
▪ There may be some optimization of resources. This should be avoided, if possible, for obvious reasons.
▪ One could just QUIT. A rather extreme option, to say the least.
18- Approvals
▪ Who can approve the process as complete and allow the project to proceed to the
next level?
▪ At the master test plan level, this may be all involved parties.
▪ When determining the approval process, keep in mind who the audience is.
▪ The audience for a unit test level plan is different than that of an integration,
system or master level plan.
▪ The levels and type of knowledge at the various levels will be different as well.
▪ Programmers are very technical but may not have a clear understanding of the
overall business process driving the project.
▪ Users may have varying levels of business insight and very little technical skills.
▪ Always be wary of users who claim high levels of technical skills and
programmers that claim to fully understand the business process.
19- Glossary
▪ Used to define terms and acronyms used in the document, and testing in general
▪ To eliminate confusion and promote consistent communications.
Test Case Template

Project Name:
Test Case ID: Fun_10 | Test Designed by: <Name>
Test Priority (Low/Medium/High): Med | Test Designed date: <Date>
Module Name: Google login screen | Test Executed by: <Name>
Test Title: Verify login with valid username and password | Test Execution date: <Date>
Description: Test the Google login page

Pre-conditions: User has a valid username and password
Dependencies:

Step | Test Steps | Test Data | Expected Result | Actual Result | Status (Pass/Fail) | Notes
1 | Navigate to login page | | | | |
2 | Provide valid username | User = [email protected] | | | |
3 | Provide valid password | Password: 1234 | | | |
4 | Click on Login button | | User should be able to login | User is navigated to dashboard with successful login | Pass |

Post-conditions: User is validated against the database and successfully logs in to the account. The account session details are logged in the database.