SOFTWARE TESTING
Momina Shaheen
Department of CS, COMSATS University Islamabad, Lahore Campus

INTRODUCTION TO SOFTWARE TESTING
Lecture 1-2
• Introduction
• Learning Objectives
• Text / Reference Books
• Rules
• Verification v/s Validation
Punctuality
Arrive on time. You will be marked absent if you are more than 10 minutes late.
No disturbance during the lecture.
ORIENTATION
Quality Assurance v/s Quality Control
Quality Assurance: QA is a managerial tool. QA is process oriented.
Quality Control: QC is a corrective tool.
TESTING – SED305
Flight Crashes
In 1994 in Scotland, a Chinook helicopter crashed, killing all 29 passengers. The crash was due to a systems error.
SOFTWARE TESTING
[Waterfall diagram excerpt: Implementation of Units (classes, procedures, functions) → Unit Testing; Maintenance]
Lecture 04
Momina Shaheen
Outline
What we do
What does a Software Tester Do?
What Makes a Good Software Tester?
Goals for Testing
Testing Methodologies
What we do?
Myth: If we were really good at programming, there would be no bugs to catch. If we applied good programming and design practices, there would be no bugs. But there are bugs, because we are bad at what we do, and we should feel guilty about it.
Testing and test design amount to an admission of failure. The tedium of testing is just punishment for our errors.
Punishment for what? For being human?
Guilt for what? For not achieving inhuman perfection? For not distinguishing between what another programmer thinks and what he says? For not solving human communication problems?
What we do?
Statistics show that programming, done well, will still produce one to three bugs per hundred statements.
As far as programming errors are concerned, I have them, you have them, we all have them.
The point is to do what we can to:
Prevent them
Discover them as early as possible
Not feel guilty about them
What we do
Programmers! Cast out your guilt! Spend half your time in joyous testing and debugging!
Testers! Break that software and drive it to the ultimate, but do not enjoy the programmer's pain.
What does a Software Tester Do?
Uncover as many errors (or bugs) as possible in a given product.
Demonstrate that a given software product matches its requirement specifications.
Validate the quality of software testing using minimum cost and effort.
Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports.
What Makes a Good Software Tester?
They are explorers
They love to get a new piece of software, install it on their PC, and see what
happens
They are troubleshooters
Software testers are good at figuring out why something doesn’t work. They love
puzzles.
They are relentless (continuous)
Software testers keep trying. They may see a bug that quickly vanishes or is
difficult to re-create.
They are creative
Their job is to think up creative and even off-the-wall approaches to find bugs
They are (mellowed) perfectionists
They strive for perfection, but they know when it becomes unattainable and they’re
OK with getting as close as they can
Goals of Testing and Test Design
The main focus of testing and test design should be bug prevention. If bugs are not prevented, testing and test design should be able to discover the symptoms caused by bugs. Finally, there should be clear diagnoses so that bugs can be easily corrected.
Goals for Testing
Prevention:
A prevented bug is better than a detected and corrected bug. If a bug is prevented:
No code to correct
No retesting is needed
No one is embarrassed
No memory is consumed
No delays in the schedule
Designing tests is one of the best bug preventers. Test design eliminates bugs at every stage in the creation of software, from conception to specification, to design, coding, and the rest.
Goals for Testing
Discovery:
This is the secondary goal of testing. A bug manifests as a deviation from expected behavior. A test design must document expectations, the test procedure, and the results of the actual test. Different bugs can have the same manifestations, and one bug can have many symptoms.
Testing Methodologies
Black box testing
No knowledge of internal program design or code required.
Tests are based on requirements and functionality.
White box testing
Knowledge of the internal program design and code required.
Tests are based on coverage of code statements, branches, paths, conditions.
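The two methodologies can be illustrated on one small function. This is a sketch with an invented example (the leap-year function is not from the lecture): black-box cases are derived only from the stated requirement, while white-box cases are chosen after reading the code so that every branch is exercised.

```ruby
# Hypothetical function under test (not from the lecture).
def leap_year?(year)
  (year % 4 == 0 && year % 100 != 0) || year % 400 == 0
end

# Black-box tests: derived only from the requirement
# "divisible by 4, except centuries, unless divisible by 400".
raise "black-box case failed" unless leap_year?(2024)
raise "black-box case failed" if leap_year?(2023)

# White-box tests: chosen by reading the code, so each branch
# of the condition above is exercised at least once.
raise "white-box case failed" if leap_year?(1900)     # hits the % 100 branch
raise "white-box case failed" unless leap_year?(2000) # hits the % 400 branch

puts "all cases pass"
```

Note how the white-box cases (1900, 2000) would be hard to justify without seeing the code, while the black-box cases need no knowledge of the implementation at all.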
Testing Axioms
7 Principles of Software Testing
Testing Axioms
Even a simple program such as the Windows Calculator is too complex to test completely. If you decide to eliminate any test conditions because they're redundant, they're unnecessary, or just to save time, you've decided not to test the program completely.
Software Testing Is a Risk-Based Exercise
If you decide not to test every possible test scenario, you've chosen to take on risk. When the product has to be released you will need to stop testing, but if you stop too soon, there will still be areas left untested. A customer will use the untested areas of the product and discover the bug, and it will be a costly bug then.
Testers need to learn:
How to reduce the huge domain of possible tests into a manageable set
How to make wise risk-based decisions on what's important to test and what's not
Testing Can't Show That Bugs Don't Exist
Software testing can show that bugs exist, but it can't show that bugs don't exist. You can perform your tests and find and report bugs, but at no point can you guarantee that there are no more bugs to find. You can only continue testing and possibly find more.
The More Bugs You Find, the More Bugs There Are
There are many similarities between real bugs and software bugs: if you see one, odds are there will be more nearby. There are several reasons for this.
Programmers have bad days: like all of us, programmers can have off days. Code written one day may be perfect; code written another may be sloppy. One bug can be a tell-tale sign that there are more nearby.
The Pesticide Paradox
The more you test software, the more immune it becomes to your tests. The same thing happens to insects with pesticides: if you keep applying the same pesticide, the insects eventually build up resistance and the pesticide no longer works. The test process repeats each time around the loop, just like the spiral model of development. After several passes, all the bugs that those tests would find are exposed, and continuing to run them won't reveal anything new. To overcome this, software testers must continually write new and different tests to exercise the software and find more bugs.
In short: software undergoing the same repetitive tests eventually builds up resistance to them.
Not All the Bugs You Find Will Be Fixed
One of the sad realities of software testing is that not every bug you find will be fixed. Don't be disappointed: this doesn't mean that you've failed in your goal as a software tester, nor that you or your team will release a poor-quality product. It does mean that you need to exercise good judgment and know when perfection isn't reasonably attainable. You need to decide which bugs will be fixed and which ones won't.
Not All the Bugs You Find Will Be Fixed
There are several reasons why you might choose not to fix a bug:
There's not enough time.
In every project there are always too many software features and too few people to code and test them, with not enough room left in the schedule to finish.
Example: if you're working on a tax preparation program, April 15 isn't going to move.
It's really not a bug.
"It's not a bug, it's a feature!" It's not uncommon for misunderstandings, test errors, or spec changes to result in would-be bugs being dismissed as features.
Not All the Bugs You Find Will Be Fixed
It's too risky to fix.
Software is fragile, intertwined, and sometimes like spaghetti. You might make a bug fix that causes other bugs to appear. Under pressure to release a product on a tight schedule, it might be better to leave in the known bug than to risk creating new, unknown ones.
It's just not worth it.
The following types of bugs are often not removed:
Bugs that would occur infrequently
Bugs that appear in little-used features
Bugs that have workarounds, ways a user can prevent or avoid the bug
The decision-making process usually involves software testers, project managers, and programmers.
When a Bug’s a Bug Is Difficult to Say
If there’s a problem in the software but no one ever discovers it—not programmers, not
testers, and not even a single customer—is it a bug?
???
When a Bug’s a Bug Is Difficult to Say
On the other hand, it's not uncommon for two people to have completely different opinions on the quality of a software product. One may say that the program is incredibly buggy, and the other may say that it's perfect. How can both be right?
???
When a Bug’s a Bug Is Difficult to Say
Answer:
One has used the product in a way that reveals lots of bugs. The other hasn’t.
It is just like:
“If a tree falls in the forest and there’s no one there to hear it, does it make a sound?”
Product Specifications Are Never Final
The industry is moving so fast that last year's cutting-edge products are obsolete this year. Software is getting larger, gaining more features and complexity, and development schedules are getting longer and longer. The result is a constantly changing product specification; there's no other way to respond to the rapid changes.
Example:
You're halfway through a planned two-year development cycle, and your main competitor releases a product very similar to yours but with several desirable features that your product doesn't have. Do you continue with your spec as is and release an inferior product in another year?
Product Specifications Are Never Final
As a software tester, you will observe that features will be added that you didn’t plan to
test.
Features will be changed or even deleted that you had already tested and reported
bugs on
You need to be flexible in your test planning and test execution
Software Testers Aren’t the Most Popular Members of a
Project Team
The software industry has progressed to the point where professional software testers are mandatory: it's now too costly to build bad software. To be fair, not every company is on board yet, but most software is now developed with a disciplined approach that has software testers as core, vital members of the staff. Testing is now a career choice, a job that requires training and discipline and allows for advancement.
7 Principles of Software Testing
Principle 1: Exhaustive testing is impossible.
Unless the application under test has a very simple logical structure and limited input, it is not possible to test all possible combinations of data and scenarios. We need an optimal amount of testing based on a risk assessment of the application.
Principle 2: Defect Clustering
Most reported defects are related to a small number of modules within a system. Approximately 80% of the problems are found in 20% of the modules.
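The 80/20 claim can be checked mechanically against a project's defect database. A minimal sketch, with invented module names and defect counts for illustration:

```ruby
# Hypothetical defect counts per module (illustrative data only).
defects = {
  "payments" => 42, "auth" => 31, "reports" => 5, "ui" => 4,
  "search"   => 3,  "export" => 2, "admin" => 2, "logging" => 1,
  "help"     => 1,  "about"  => 0
}

# Sort modules by defect count and take the top 20% of modules.
top_n  = (defects.size * 0.2).ceil
counts = defects.values.sort.reverse
share  = counts.first(top_n).sum.to_f / counts.sum

puts "Top #{top_n} of #{defects.size} modules hold #{(share * 100).round}% of defects"
# → Top 2 of 10 modules hold 80% of defects
```

Runs like this over real defect data tell you where to concentrate further testing effort.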
Principle 3: Pesticide Paradox
If the same tests are repeated over and over again, eventually they will no longer find new bugs. To overcome this, test cases need to be regularly reviewed and revised, and new and different test cases added.
Principle 4: Testing shows the presence of defects
Software testing reduces the probability of undiscovered defects remaining in the software, but even if no defects are found, that is not a proof of correctness. What if the software doesn't meet the needs and requirements of its clients?
Principle 5: Absence of Errors is a Fallacy
Finding and fixing defects does not help if the system build is unusable and does not fulfill the users' needs and requirements.
Principle 6: Early Testing
Testing should start as early as possible in the Software Development Life Cycle. We can start testing as soon as requirements and design documents are available, so that any defects in the requirements or design phases are captured as well. When defects are found earlier in the lifecycle, they are much easier and cheaper to fix.
Principle 7: Testing is context dependent
The way you test an e-commerce site will be different from the way you test a commercial off-the-shelf application.
Software Defects
What Will You Learn Today?
Faults and Failures
Categories of Defects
The term "defect" generally refers to some problem with the software, either with its external behavior or with its internal characteristics. The IEEE Standard 610.12 (IEEE, 1990) defines the following related terms:
Failure: the inability of a system or component to perform its required functions within specified performance requirements.
Fault: an incorrect step, process, or data definition in a computer program.
Error: the difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct one.
Failure examples: an overflow; a wrong formula.
Errors of Clarity and Ambiguity: two people reach different interpretations of what is meant; the application works, but not fast enough.
SOFTWARE TESTING
LECTURE # 07
Momina Shaheen
Origins of Defects
‒ Requirements Defects
‒ Design Defects
Errors in Requirements
Errors in Design
Errors in Documentation
I. REQUIREMENTS DEFECTS
Requirements for large systems can never be complete, given the observed rate of creeping requirements during the development cycle.
II. DESIGN DEFECTS
Design ranks next to requirements as a source of very troublesome, and very expensive, errors. All four categories of defects are found in software design and specifications, as might be expected.
The most common forms of design defects are errors of omission, where things are left out, and of commission, where something is stated that later turns out to be wrong. Errors of clarity and ambiguity are also common, and many performance-related problems originate in the design process as well.
CODING DEFECTS
All four categories of defects can be found in source code, with errors of commission being dominant while code is under development. Perhaps the most surprising aspect of coding defects, when they are studied carefully, is that more than 50% of the serious bugs or errors found in the source code did not truly originate in the source code.
A majority of so-called programming errors are really due to the programmer not understanding the design, or the design not correctly interpreting a requirement. This is not a surprising situation: software is one of the most difficult products to visualize prior to building it.
Built-in syntax checkers and editors associated with modern programming languages can find many "true" programming errors (such as missed parentheses or looping problems). Even poor structure and excessive branching can now be measured and corrected automatically.
LANGUAGE LEVELS
Defects in Object-Oriented Programming Languages
‒ Since OO analysis and design has a steep learning curve and is difficult to absorb, some OO projects suffer from worse-than-average quality levels due to problems originating in the design.
DOCUMENTATION DEFECTS
The most common kind of problem is errors of clarity and ambiguity.
FIX DEFECTS
The phrase "bad fixes" refers to attempts to repair an error which, although the original error may be fixed, introduce a new secondary bug into the application. Bad fixes are usually errors of commission, and they are found in every major deliverable, although they are most troublesome for requirements, design, and source code.
Bad fixes are very common and can be both annoying and serious. From about 5% to more than 20% of attempts to repair bugs may create a new secondary bug.
Repairs to ageing legacy applications where the code is poorly structured tend to have higher-than-average bad-fix injection rates. Often bad fixes are the result of haste or schedule pressures, which cause developers to skimp on things like inspecting or testing the repairs.
BAD FIX EXAMPLES
When attempting to correct a loop problem, such as going through the loop one time too often, the repair goes through the loop one time short of the correct amount; and when correcting a branching problem that goes to the wrong subroutine, the repair goes to a different wrong subroutine.
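The loop example can be made concrete. This is an invented sketch (the function names are hypothetical): the original bug runs the loop one time too many, and the hasty "fix" overshoots in the other direction instead of repairing the bound.

```ruby
# Original bug: sums n+1 elements instead of n (loop runs once too often).
def sum_first_buggy(arr, n)
  total = 0
  (0..n).each { |i| total += arr[i].to_i }
  total
end

# Bad fix: now the loop runs one time short of the correct amount.
def sum_first_bad_fix(arr, n)
  total = 0
  (0...n - 1).each { |i| total += arr[i].to_i }
  total
end

# Correct repair: exactly n iterations.
def sum_first(arr, n)
  arr.first(n).sum
end

a = [1, 2, 3, 4, 5]
puts sum_first_buggy(a, 3)   # 10 – one element too many
puts sum_first_bad_fix(a, 3) # 3  – one element too few
puts sum_first(a, 3)         # 6  – correct
```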
DATA DEFECTS
The topic of data quality and data defects is usually outside the domain of software quality assurance. But since one of the most common business uses of computers is specifically to hold databases, repositories, and data warehouses, the topic of data quality is becoming a major issue. Data errors can be very serious, and they also interact with software errors to create many expensive and troublesome problems.
TEST-CASE DEFECTS
Exploratory research carried out by IBM's software quality assurance group on regression test libraries noted some disturbing findings:
About 12% of the regression test cases contained errors of some kind.
Coverage of the regression test library ranged between about 40% and 70% of the code; i.e., there were notable gaps which none of the regression test cases managed to reach.
REGRESSION TESTING
For software, regression means slipping backward, and usually refers to an error made while attempting to add new features or fix bugs in an existing application. A regression test means a set of test cases that are run after changes are made to an application. The test cases are intended to ensure that every prior feature of the application still works, and that the new material has not caused errors in existing portions of the application.
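A regression suite can be as simple as a pinned list of input/expected pairs re-run after every change. A minimal sketch, with an invented function and cases for illustration:

```ruby
# Hypothetical function under maintenance.
def discount(price, percent)
  price - price * percent / 100.0
end

# Each case pins down behavior from an earlier release, including an
# old bug fix, so later changes cannot silently regress it.
REGRESSION_CASES = [
  { price: 100, percent: 10, expected: 90.0  },  # basic discount
  { price: 100, percent: 0,  expected: 100.0 },  # zero percent is identity
  { price: 80,  percent: 25, expected: 60.0  }   # case from an old bug report
]

failures = REGRESSION_CASES.reject do |c|
  (discount(c[:price], c[:percent]) - c[:expected]).abs < 1e-9
end

puts failures.empty? ? "all regression cases pass" : "#{failures.size} regression(s)"
# → all regression cases pass
```

Real projects usually run such suites through a test framework, but the principle is the same: the old expectations travel with the code.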
DEFECTS IN SECONDARY DESIGN
The seven fundamental design issues actually describe the application itself; therefore errors or defects here affect what the software actually does.
Security
As hackers and viruses become trickier and more common, every application that deals with business information needs to have security features designed into it. Errors in software security can result in viral invasion or easy dispersion by malicious hackers.
Reliability
This section of the software specification defines the mean time to failure and mean time between failures targeted for the application. For contract software, reliability requirements should be stated explicitly. There is a strong correlation between reliability and software defect volumes and severity levels, so reliability targets lead directly to the need for effective defect prevention and defect removal operations. In some software development contracts, explicit targets for defect removal efficiency and post-release quality levels are now included.
Maintainability
Software Dependencies
Errors in this section can lead to reduced functionality. A by-product of listing specific software dependencies is the ability to explore interfaces in a thorough manner.
Packaging
This section, used primarily by commercial software vendors, discusses how the software will be packaged and delivered; i.e., CD-ROM, disk, downloaded from a host, etc. Errors here may affect user satisfaction and market share. The initial packaging decision will probably also affect how subsequent maintenance releases and defect repairs are distributed to users. For example, starting in about 1993 many major software vendors began to use commercial networks such as America Online, CompuServe, and the Internet as a channel for receiving customer queries and defect reports, and also as a channel for downloading updates, new releases, and defect repairs.
RECAP
Classification of Defects
Origins of Defects
‒ Requirements Defects
‒ Design Defects
‒ Coding Defects
‒ Documentation Defects
‒ Fix Defects
‒ Data Defects
‒ Test-Case Defects
‒ Regression Testing
SOFTWARE TESTING
Momina Shaheen
Outline
Test Case
Test Case Template
Level of Detail for Test Cases
Good Test Cases
Bad Test Cases
Test Case Organization and Tracking
Software Testing Life Cycle
SDLC Models and Testing
V-Model
Modified V-Model
Test Driven Development
Test Case
A test case is a set of:
Input values
Execution preconditions
Expected results
The details of a test case should explain exactly what values or conditions will be sent to the software and what result is expected. A test case can be referenced by one or more test design specs, and it may reference more than one test procedure.
Below are the standard fields of a sample test case template.
Template
Test case Version (Optional): Mention the test case version number.
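The standard fields can be sketched as a data structure. The field names below follow common test case templates, and the concrete values are invented for illustration:

```ruby
# A sample test case record (values are hypothetical).
test_case = {
  id:            "TC-LOGIN-001",
  version:       "1.0",                        # optional field
  description:   "Valid credentials log the user in",
  preconditions: ["A user account exists"],
  input:         { username: "alice", password: "secret" },
  steps:         ["Open the login page", "Enter credentials", "Press Login"],
  expected:      "The dashboard is shown",
  actual:        nil,                          # filled in during execution
  status:        nil                           # Pass / Fail, after execution
}

# The fields that must be present before the case can be executed.
required = [:id, :description, :input, :expected]
missing  = required.reject { |f| test_case.key?(f) }
puts missing.empty? ? "test case is complete" : "missing: #{missing.join(', ')}"
```

Keeping cases in a structured form like this is what later makes organization and tracking (filtering suites, recording pass/fail) mechanical.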
Test Procedure
A standardized and documented process that represents the sequence of actions for the execution of test cases. Also known as a manual test script.
The ANSI/IEEE 829 standard lists some important information that needs to be defined:
Identifier
A unique identifier that ties the test procedure to the associated test cases and test design.
Purpose
The purpose of the procedure and a reference to the test cases that it will execute.
Special requirements
Other procedures, special testing skills, or special equipment needed to run the procedure.
Procedure steps
Detailed description of how the tests are to be run:
Log: Tells how and by what method the results and observations will be recorded.
Setup: Explains how to prepare for the test.
Start: Explains the steps used to start the test.
Procedure: Describes the steps used to run the tests.
Measure: Describes how the results are to be determined, for example with a stopwatch or visual determination.
Shut down: Explains the steps for suspending the test for unexpected reasons.
Restart: Tells the tester how to pick up the test at a certain point if there's a failure or after shutting down.
Stop: Describes the steps for an orderly halt to the test.
Wrap up: Explains how to restore the environment to its pre-test condition.
Contingencies: Explains what to do if things don't go as planned.
[Figure: a test procedure showing how much detail should be involved]
Test Case Organization and Tracking
While considering how the information will be organized and tracked, think about these questions:
Can you pick and choose test suites (groups of related test cases) to run on particular features or areas of the software?
When you run the cases, will you be able to record which ones pass and which ones fail?
Of the ones that failed, which ones also failed the last time you ran them?
What percentage of the cases passed the last time you ran them?
The important thing to remember is that the number of test cases can easily be in the thousands, and without a means to manage them, you and the other testers could quickly be lost in a sea of documentation.
Prepare a document containing
three test cases with proper test case
format for your final year project.
Spiral Model
[Spiral model diagram: Software Requirements → Coding → Testing]
Phases of testing for different development phases
Overall Business Requirements → Acceptance Testing
Software Requirements → System Testing
Low Level Design → Component Testing
The V Model
System Testing
Before product deployment, the product is tested as an entire unit to make sure that all the software requirements are satisfied. This testing of the entire software system is system testing.
Integration Testing
High-level design views the system as being made up of interoperating and integrated subsystems. The individual subsystems should be integrated and tested; this type of testing corresponds to integration testing.
Component Testing
The components that are the outputs of low-level design have to be tested independently before being integrated. This type of testing is component-level testing.
Unit Testing
Coding produces several program units; each of these units has to be tested independently before combining them to form components. The testing of program units forms the unit testing.
The V Model
Planning of testing for different development phases
The planning phase is not shown as a separate entity, since it is common to all testing phases. It is still not possible to execute any of these tests until the product is actually built. In other words, the step called "testing" is now broken down into different sub-steps, but all test-execution activities are still done only at the end of the life cycle.
The V Model
Who should design the tests?
Execution of the tests cannot be done until the product is built, but the design of tests can be carried out much earlier. The skill sets required for designing each type of test belong to the people who actually create the corresponding artifact. For example:
Acceptance tests should be designed by those who formulate the overall business requirements (the customers, where possible).
Integration tests should be designed by those who know how the system is broken into subsystems, i.e. those who perform the high-level design.
The people doing development know the innards of the program code and thus are best equipped to design the unit tests.
The V Model
Benefits of early design
We achieve more parallelism and reduce the end-of-cycle time taken for testing.
By designing tests for each activity upfront, we are building in better upfront
validation, thus again reducing last-minute surprises.
Tests are designed by people with appropriate skill sets.
V-Model
[V-model diagram: Overall Business Requirements → Acceptance Test Design → Acceptance Testing; High Level Design → Integration Test Design → Integration Testing; the left arm is Verification, the right arm is Validation]
V-Model
Advantages of the V-Model
Testing activities like planning and test design happen well before coding. This saves a lot of time, hence higher chances of success over the waterfall model.
Proactive defect tracking: defects are found at an early stage, avoiding the downward flow of defects.
Disadvantages of the V-Model
It is very rigid and least flexible.
No early prototypes of the software are produced.
If any changes happen midway, then the test documents along with the requirement documents have to be updated.
Modified V-Model
In the V-Model there is an assumption: even though the activity of test execution is split into the execution of tests of different types, execution cannot happen until the entire product is built. For a given product, however, the different units and components can be in different stages of evolution. For example, one unit may be in development and thus subject to unit testing, whereas another unit may be ready for component testing. The V-Model does not explicitly address this parallelism, which is commonly found in product development.
[TDD cycle diagram: Add a test → Run the tests; if they fail, make a little change and run the tests again; if they pass, development either continues with a new test or stops]
Test Driven Development
There are two levels of TDD:
Acceptance TDD
Write a single acceptance test, or behavioral specification.
Produce functionality/code to fulfill that test.
Also known as Behavior Driven Development (BDD).
Developer TDD
Write a single developer test.
Produce code to fulfill that test.
Simply called TDD.
Test Driven Development
[Diagram: the Acceptance TDD cycle and the Developer TDD cycle side by side. Each is an add-test → run-tests loop: on failure, produce code and run the tests again; on a pass, development continues with a new test (or, for acceptance TDD, while functionality is incomplete) until development stops]
Acceptance TDD vs. Developer TDD
The scenario:
You’re a developer on a team responsible for the company accounting system,
implemented in Rails. One day, a business person asks you to implement a
reminder system to remind clients of their pending invoices. Because you’re
practicing BDD, you sit down with that business person and start defining
behaviours.
You open your text editor and start creating pending specs for the behaviours
the business user wants:
It "adds a reminder date when an invoice is created"
It "sends an email to the invoice's account's primary contact after
the reminder date has passed"
It "marks that the user has read the email"
Acceptance TDD vs. Developer TDD
Some developers prefer to write test cases on the spot, calling methods in the system and setting up expectations, like so:
it "adds a reminder date when an invoice is created" do
  current_invoice = create :invoice
  current_invoice.reminder_date.should == 20.days.from_now
end
Let’s look at this a different way, with a Test-Driven Development approach, and write out pending tests:
it "after_create an Invoice sets a reminder date to be creation + 20 business days"
it "Account#primary_payment_contact returns the current payment contact or the client
project manager"
it "InvoiceChecker#mailer finds invoices that are overdue and sends the email"
Test Driven Development
| Test Case ID | Description | Input Data | Expected Results | Actual Results | Pass/Fail | Remarks |
|---|---|---|---|---|---|---|
| UT001 | To test that the function isDivisibleByThree returns true if a number is divisible by 3 | 3 | True | | | |
| UT002 | To test that the function isDivisibleByThree returns false if a number is not divisible by 3 | 2 | False | | | |
| UT003 | To test that the function isDivisibleByFive returns true if a number is divisible by 5 | 5 | True | | | |
| UT004 | To test that the function isDivisibleByFive returns false if a number is not divisible by 5 | 6 | False | | | |
| UT005 | To test that the function isDivisibleByFifteen returns true if a number is divisible by 15 | 30 | True | | | |
| UT006 | To test that the function isDivisibleByFifteen returns false if a number is not divisible by 15 | 25 | False | | | |
| UT007 | To test that the function fizzBuzz returns the number if a number is passed to it | 1 | 1 | | | |
| UT008 | To test that the function fizzBuzz returns FizzBuzz if a number divisible by 15 is passed to it | 30 | FizzBuzz | | | |
| UT009 | To test that the function fizzBuzz returns Fizz if a number divisible by 3 is passed to it | 9 | Fizz | | | |
| UT010 | To test that the function fizzBuzz returns Buzz if a number divisible by 5 is passed to it | 20 | Buzz | | | |
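The test cases above can be executed against a sketch like the following. It is written in plain Ruby; the table's camelCase names (isDivisibleByThree, fizzBuzz) appear here in snake_case, the Ruby convention.

```ruby
# Minimal Ruby sketch of the functions that UT001-UT010 exercise.

def divisible_by_three?(n)
  (n % 3).zero?
end

def divisible_by_five?(n)
  (n % 5).zero?
end

def divisible_by_fifteen?(n)
  (n % 15).zero?
end

def fizz_buzz(n)
  return "FizzBuzz" if divisible_by_fifteen?(n) # UT008
  return "Fizz" if divisible_by_three?(n)       # UT009
  return "Buzz" if divisible_by_five?(n)        # UT010
  n                                             # UT007
end

puts fizz_buzz(30) # FizzBuzz
puts fizz_buzz(1)  # 1
```

Note that the order of the checks matters: 30 is divisible by 3 and 5 as well as 15, so the FizzBuzz case must be tested first.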
Benefits of Test Driven Development
Code coverage
Every code segment that you write should have at least one associated
test. You can be confident that all of the code in the system has actually
been executed
Regression testing
Run regression tests to check that changes to the program have not
introduced new bugs.
Simplified debugging
When a test fails, it should be obvious where the problem lies. The newly
written code needs to be checked and modified. You do not need to
use debugging tools to locate the problem
System documentation
The tests themselves act as a form of documentation that describe what
the code should be doing.
Test Driven Development Using
Standard Libraries
Test-driven development is of most use when development is done by using
well-tested standard libraries.
If you reuse such libraries or larger components, you still need to write tests for these systems as a
whole
If you use test-driven development, you still need a system testing process
to check that it meets the requirements of all of the system stakeholders
System testing also tests performance, reliability, and checks that the
system does not do things that it shouldn’t do
Test-driven development is a successful approach for small and medium-
sized projects
SOFTWARE
TESTING
PROCESS
Outline
■ Basic Definitions
■ Fundamental of test processes
■ Requirement Traceability Matrix
Basic Definitions
■ Test basis
– It is the information or the document that we need to create our own test cases
and start the test analysis.
■ Test analysis
– It is the process of looking at something that can be used to derive test
information.
■ Test Condition
– An item or event of a component or system that could be verified by one or more
test cases, e.g., a function, transaction, feature, quality attribute, or structural
element.
1. Test Planning and Control
■ Test plan:
– A document describing the scope, approach, resources and schedule of intended
test activities.
– It consists of the following:
■ The scope, risk and objective of testing
■ The test policy and/or the test strategy
■ List of the features to be tested
■ Details of the testing tasks
■ Who will do each task (Resource Allocation)
■ The test environment
■ The test design techniques
■ Entry and exit criteria to be used
■ Test Schedule
■ Any risks requiring contingency planning
1. Test Planning and Control
■ Test Planning
– The activity of establishing or updating a test plan.
– Continuous process and performed in all project life cycles
■ Test control has the following major tasks:
– To measure and analyze the results of reviews and testing
– To monitor and document progress, test coverage and exit criteria
– To provide overall information on testing
– To initiate corrective actions
– To make decisions
2. Test Analysis and Design
■ The test objectives are a major deliverable for technical test analysts to know what to
test.
■ We use test objectives as our guide to
– Identify and refine the test conditions for each test objective
– Create test cases that exercise the identified test conditions
■ We need to prioritize the test conditions on the basis of likelihood and impact
associated with each quality risk item as we know that testing everything is an
impractical goal.
■ The following steps are followed in the analysis and design phase:
– Non-functional Test Objectives
– Identifying and Documenting Test Conditions
– Test Oracles
– Standards
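The likelihood-and-impact prioritization described above can be sketched as a simple risk ranking. The 1-5 scales and the condition names below are illustrative assumptions, not part of any standard.

```ruby
# Hypothetical sketch: rank test conditions by quality risk, where
# risk priority = likelihood x impact (1-5 scales assumed here).
conditions = [
  { name: "login",         likelihood: 4, impact: 5 },
  { name: "report export", likelihood: 2, impact: 2 },
  { name: "payment",       likelihood: 3, impact: 5 }
]

# Sort highest-risk first; these conditions get test cases earliest.
ranked = conditions.sort_by { |c| -(c[:likelihood] * c[:impact]) }
ranked.each { |c| puts "#{c[:name]}: #{c[:likelihood] * c[:impact]}" }
```

The point of the ranking is that, since testing everything is impractical, the highest-scoring conditions are turned into test cases first.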
2.1 Non-functional Test Objectives
■ Non-functional test objectives can apply to any test level and exist throughout the
lifecycle
■ Major non-functional test objectives are addressed at the end of the project
■ If test execution cannot start at a given level, reviews of requirements, design and
code can be conducted instead
■ Performance testing should be performed as early as possible
■ Performance testing should be done at unit and component level and also at the time of
integration
2.2 Identifying and Documenting Test Conditions
2.3 Test Oracles
3. Test Implementation and Execution
■ Test execution has the following major tasks:
– To check test environments
– To check traceability between test basis and test cases
– To execute test suites and individual test cases following the test procedures, using
execution tools
– To re-execute the tests that previously failed in order to confirm a fix. This is known
as confirmation testing or re-testing
– To log the outcome of the test execution and record the identities and versions of
the software under test. The test log is used for the audit trail
– To compare actual results with expected results
– Where there are differences between actual and expected results, to report the
discrepancies as incidents
4. Evaluating exit criteria and Reporting
5. Test Closure Activities
■ Test closure activities are carried out when the software is delivered
■ This process collects data from the completed test activities and consolidates the testware.
■ Testing can also be closed for other reasons, such as:
– When all the information needed for the testing has been gathered
– When a project is cancelled
– When some target is achieved
– When a maintenance release or update is done
5. Test Closure Activities
■ It has the following major tasks:
– Checking which planned deliverables have actually been delivered
– Ensuring all incident reports have been closed
– Documenting the acceptance of the system
– Archiving the testware, test environment, and test infrastructure for later reuse
– Handing over the testware to the maintenance organization, which will support
the software
– Evaluating how the testing went and learning lessons for future releases and
projects
Requirements Traceability Matrix (RTM)
■ The process of documenting links between user requirements and all the initiatives
undertaken to meet those requirements.
■ What?
– All software requirements
– Software coding
– Software design specification
– Test planning
Requirements Traceability Matrix (RTM)
■ Why?
– The project team can see which parts of the code address which client
requirements
– The testing team knows which types of test cases it has to prepare
■ When?
– During the Requirements Management phase of the SDLC
– It is a deliverable of the Requirement Analysis phase in the STLC
Requirements Traceability Matrix (RTM)
■ Importance?
– Risk management
– Change management
– The downstream effect of a change can also be traced
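As a rough illustration, an RTM can be thought of as a mapping from requirement IDs to the test cases that cover them; a small script can then flag requirements with no traced test case. All IDs here are made up for the example.

```ruby
# Hypothetical sketch: a Requirements Traceability Matrix as a mapping
# from requirement IDs to covering test cases (IDs are illustrative).
rtm = {
  "REQ-001" => ["TC-101", "TC-102"],
  "REQ-002" => ["TC-103"],
  "REQ-003" => []            # no test case traced yet
}

# A traceability check: flag requirements with no linked test case.
uncovered = rtm.select { |_req, tests| tests.empty? }.keys
puts uncovered.inspect # ["REQ-003"]
```

In a real project the same links would also point into design documents and code, so the team can see what is affected when a requirement changes.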
SOFTWARE TEST
PLAN
By
Sabeen Amjad
Software Test Plan
Outline
▪ Test Plan Template
▪ IEEE 829 Format
▪ The test plan is a by-product of the detailed planning process that is undertaken to
create it.
▪ It’s the planning process that matters, not the resulting document
▪ The ultimate goal of the test planning process is communicating (not recording)
▪ Test team’s intent
▪ Test team’s expectations
▪ Test team’s understanding of the testing that’s to be performed
Test planning topics vs. Test planning templates
1. Revision History

| Revision # | Revision Date | Description of Change | Author |
|---|---|---|---|

2. Distribution

| Recipient Name | Recipient Organization | Distribution Method |
|---|---|---|
2- References
▪ List all documents that support this test plan
▪ Refer to the actual version/release number of the document
▪ As stored in the configuration management system
▪ Do not duplicate the text from other documents
▪ It will reduce the viability (practicality) of this document and increase the
maintenance effort
▪ Documents that can be referenced include:
▪ Project Plan
▪ Requirements specifications (Software / Business)
▪ High Level design document
▪ Detail design document
▪ Development and Test process standards
▪ Methodology guidelines and examples
▪ Corporate standards and guidelines
3- Introduction
▪ State the purpose of the Plan
▪ Possibly identifying the level of the plan (master etc.)
▪ Executive summary part of the plan
▪ Identify the Scope of the plan in relation to the Software Project plan
▪ Other items may include
▪ Resource and budget constraints
▪ Process to be used for change control
▪ Communication and coordination of key activities
▪ As this is the “Executive Summary” keep information brief and to the point.
4- Test Items (Functions)
▪ Things you intend to test within the scope of the test plan
▪ List of what is to be tested
▪ This can be developed from the software application inventories as well as other
sources of documentation and information.
▪ This can be controlled and defined by local Configuration Management (CM)
process.
▪ Remember, what you are testing is what you intend to deliver to the Client.
▪ This section can be oriented to the level of the test plan
▪ For higher levels it may be by application or functional area
▪ For lower levels it may be by program, unit, module or build
5- Software Risk Issues
▪ Identify what software is to be tested and what the critical areas are, such as:
▪ Delivery of a third party product.
▪ New version of interfacing software
▪ Ability to use and understand a new package/tool, etc.
▪ Extremely complex functions
▪ Modifications to components with a past history of failure
▪ Poorly documented modules or change requests
▪ There are some inherent software risks such as complexity; these need to be
identified
▪ Safety
▪ Multiple interfaces
▪ Impacts of operations on Client
▪ Government regulations and rules
5- Software Risk Issues
▪ Misunderstanding of the original requirements
▪ This can occur at the management, user and developer levels
▪ The past history of defects will help identify potential areas within the software that
are risky.
▪ It is the nature of defects to cluster and clump together.
▪ Good approach to identify risks
▪ To have several brainstorming sessions
▪ Identify them early so that they do not appear as surprises late in the project
▪ Examples: Risks can be
▪ A new tester assigned to test the software of a new nuclear power plant
▪ Time to test a project is too short to meet the schedule
6- Features to be tested
▪ Listing of what is to be tested from the USERS viewpoint of what the system does.
▪ This is not a technical description of the software
▪ Set the level of risk for each feature
▪ Use a simple rating scale such as (H, M, L): High, Medium and Low.
▪ These types of levels are understandable to a User.
▪ Be prepared to discuss why a particular level was chosen
Test Items v/s Features to be tested (Section 4 v/s Section 6)
▪ Section 4 and Section 6 look very similar, so what is the difference?
▪ The only true difference is the point of view.
▪ Section 4 is a technical type description including version numbers and other
technical information
▪ Section 6 is from the User’s viewpoint
▪ Users do not understand technical software terminology; they understand
functions and processes as they relate to their jobs.
7- Features not to be tested
▪ Listing of what is NOT to be tested from both
▪ The Users viewpoint of what the system does
▪ Configuration management/version control view
▪ Identify WHY the feature is not to be tested, there can be any number of reasons.
▪ Not to be included in this release of the Software
▪ Low risk, has been used before and is considered stable
▪ Will be released but not tested or documented as a functional part of the
release of this version of the software
▪ Some components previously released are already tested
▪ An outsourcing company may supply pre-tested portion of the product
9- Item Pass/Fail Criteria
▪ This could be
▪ An individual test case level criterion
▪ A unit level plan
▪ General functional requirements for higher level plans
▪ The number and severity of defects located needs to be considered
10- Suspension Criteria and Resumption Requirements
▪ Know when to pause in a series of tests.
▪ If the number or type of defects reaches a point where the follow on testing has
no value, it makes no sense to continue the test
▪ Specify
▪ What constitutes stoppage for a test or series of tests
▪ What is the acceptable level of defects that will allow the testing to proceed
▪ Testing after a truly fatal error will generate conditions that may be identified as
defects but are in fact ghost errors caused by the earlier defects that were ignored.
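As an illustration, a suspension rule can be expressed as a simple predicate over the open defect log. The threshold and the severity labels are assumptions for the example, not part of IEEE 829; a real plan would state its own criteria.

```ruby
# Hypothetical suspension rule: pause the test run when the number of
# open critical defects reaches a threshold (threshold assumed here).
SUSPEND_THRESHOLD = 5

def suspend_testing?(open_defects)
  severe = open_defects.count { |d| d[:severity] == :critical }
  severe >= SUSPEND_THRESHOLD
end

defects = [
  { id: 1, severity: :critical },
  { id: 2, severity: :minor },
  { id: 3, severity: :critical }
]
puts suspend_testing?(defects) # false (2 critical, threshold is 5)
```

The matching resumption requirement would then be the inverse condition, e.g. testing resumes once the count of open critical defects falls back below the threshold.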
11- Test Deliverables
▪ What is to be delivered as part of this plan?
▪ Test plan document
▪ Test cases
▪ Test design specifications
▪ Tools and their outputs
▪ Simulators
▪ Static and dynamic generators
▪ Error logs and execution logs
▪ Problem reports and corrective actions
▪ One thing that is not a test deliverable is
▪ The software itself, which is listed under test items and is delivered by development
12- Remaining Test Tasks
▪ If this is a multi-phase process or if the application is to be released in increments
▪ There may be parts of the application that this plan does not address.
▪ These areas need to be identified to avoid any confusion. Defects should not be
reported back on those future functions.
▪ This will also allow the users and testers to avoid incomplete functions and
prevent waste of resources chasing non-defects.
▪ If the project is being developed as a multi-party process
▪ This plan may only cover a portion of the total functions/features.
▪ This status needs to be identified so that those other areas have plans
developed for them
▪ This avoids wasting resources tracking defects that do not relate to this plan.
▪ When a third party is developing the software, this section may contain descriptions
of those test tasks belonging to both the internal groups and the external groups.
13- Environmental Needs
▪ Are there any special requirements for this test plan, such as:
▪ Special simulators, static generators etc. required
▪ How will test data be provided?
▪ Are there special collection requirements or specific ranges of data that
must be provided?
▪ How much testing will be done on each component of a multi-part feature?
▪ Specific versions of other supporting software?
14- Staffing and Training Needs
▪ Training on the application/system
▪ Training for any test tools to be used
▪ What is to be tested and who is responsible for the testing and training?
▪ What should be the skill level of the assigned staff?
15- Responsibilities
▪ Who is in charge?
▪ What assigned staff will do?
▪ This issue includes all areas of the plan e.g.
▪ Setting risks
▪ Selecting features to be tested and not tested
▪ Setting overall strategy for this level of plan
▪ Ensuring all required elements are in place for testing
▪ Providing for resolution of scheduling conflicts
▪ Who provides the required training?
▪ Who makes the critical go/no go decisions for items not covered in the test
plans?
15- Responsibilities
▪ Test work typically is not distributed evenly over the entire product development
cycle
▪ Some testing occurs early in the form of reviews
▪ Number of testing tasks, number of people and amount of time spent often
increases over course of project
16- Schedule
▪ When the time allotted for application testing is limited, how can a test manager and
team possibly organize, implement and manage ample test coverage? This situation is
called a schedule crunch
▪ The solution is to avoid absolute dates for starting and stopping tasks in test
schedule
16- Schedule
▪ If test schedule uses relative dates based on entrance and exit criteria defined by
testing phases, it becomes clear that the testing tasks rely on some other
deliverables being completed first
17- Planning Risks and Contingencies
▪ What are the overall risks to the project with an emphasis on the testing process?
▪ Lack of personnel resources when testing is to begin.
▪ Lack of availability of required hardware, software, data or tools.
▪ Late delivery of the software, hardware or tools.
▪ Delays in training on the application and/or tools.
▪ Changes to the original requirements or designs.
17- Planning Risks and Contingencies
▪ Specify what will be done for various events, for example:
▪ If the requirements change after baselining, the following actions will be taken:
▪ The test schedule and development schedule will move out an appropriate
number of days. This rarely occurs, as most projects tend to have fixed
delivery dates.
▪ The number of tests performed will be reduced.
▪ The number of acceptable defects will be increased.
▪ The above two items could lower the overall quality of the delivered
product
▪ Resources will be added to the test team
▪ The test team will work overtime that could affect the team morale
▪ The scope of the plan may be changed
▪ There may be some optimization of resources. This should be avoided, if
possible, for obvious reasons
▪ One could just QUIT. A rather extreme option to say the least.
18- Approvals
▪ Who can approve the process as complete and allow the project to proceed to the
next level?
▪ At the master test plan level, this may be all involved parties.
▪ When determining the approval process, keep in mind who the audience is.
▪ The audience for a unit test level plan is different than that of an integration,
system or master level plan.
▪ The levels and type of knowledge at the various levels will be different as well.
▪ Programmers are very technical but may not have a clear understanding of the
overall business process driving the project.
▪ Users may have varying levels of business insight and very little technical skills.
▪ Always be wary of users who claim high levels of technical skills and
programmers that claim to fully understand the business process.
19- Glossary
▪ Used to define terms and acronyms used in the document, and testing in general
▪ To eliminate confusion and promote consistent communications.
Test Case Template
Project Name:

| Step | Test Steps | Test Data | Expected Result | Actual Result | Status (Pass/Fail) | Notes |
|---|---|---|---|---|---|---|

Post-conditions:
The user is validated against the database and successfully logs in to the account. The account session details are logged in the database.