Software Testing Lect 1.1
Testing
Testing is a process of identifying defects
Develop test cases and test data
A test case is a formal description of
• A starting state
• One or more events to which the software must
respond
• The expected response or ending state
Test data is a set of starting states and events used
to test a module, group of modules, or entire system
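As an illustration, a test case with this structure might be written as a JUnit test along the following lines. This is only a sketch; the Account class and its methods are assumed for illustration and are not taken from the lecture.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, used only for illustration.
class Account {
    private int balance;
    Account(int openingBalance) { balance = openingBalance; }
    void deposit(int amount)    { balance += amount; }
    int getBalance()            { return balance; }
}

public class AccountTest {
    @Test
    public void depositIncreasesBalance() {
        Account account = new Account(100);      // starting state
        account.deposit(50);                     // event the software must respond to
        assertEquals(150, account.getBalance()); // expected response / ending state
    }
}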
Software Testing
What is Software Testing?
Several definitions:
- The last quality checking point for software on its production line
Testing Objectives
Testing is a process of executing a program with the intention of
finding errors.
Goal Of Tester :-
General Testing Skills
Test Planning
Patient.
Sometimes it takes a lot of back-and-forth to get to the root of a problem.
And programmers have egos; they'll often try to push issues back to the tester.
Passionate.
The best developers are the ones that really care about development and maybe
even get a little excited about it sometimes. Testing isn't that much different.
Read a few posts from James Bach or Michael Bolton and you'll see that
there are passionate testers out there, however implausible that may sound to us
developers.
Creative.
Really exercising a system requires one to try non-intuitive ways of accomplishing
tasks, to go outside the workflow that the program expects of them and do things
that normal users wouldn't do.
Task-oriented people who receive a set of instructions and do the exact same
thing every time are no good for this job.
Skills Required In Tester
Analytical.
Just finding a defect isn't enough - a tester has to be able to figure out
how to reproduce it.
If a report comes in as "intermittent" then there's about a 10% chance it'll
get solved.
Most developers won't even look at a case without a reasonably concise
sequence of repro steps.
Good testers have to be able to retrace their steps and narrow down the
field of possibilities so as to come up with the simplest possible sequence
of actions that trigger a bug.
Not a programmer.
Programmers never want to do any actual testing work.
They'll spend all their time trying to write automated tests and not do what
really matters, which is to make sure the damn thing actually works the
way it is supposed to.
Although there are exceptions to every rule, most programmers simply
find testing boring and will do the absolute minimum amount required.
Need Of Testing
Errors in testing
Mistakes in correction
For eg.
Quality Control versus SQA
Quality Control (QC) is a set of activities carried out with the main
objective of withholding products from shipment if they do not qualify.
Why do we care about Quality?
Quality as Dealing with defects
The term “defect” refers to some problem with the software, either with its external or with its internal characteristics.
Causes of software defects
1. Faulty requirements definition
2. Client-developer communication failures
3. Deliberate deviations from software requirements
4. Logical design errors
5. Coding errors
6. Non-compliance with documentation and coding instructions
7. Shortcomings of the testing process
8. User interface and procedure errors
9. Documentation errors
Software errors, software faults and software failures
[Figure: a software error may become a software fault, which may lead to a software failure.]
Software errors, software faults and software failures
More precisely:
Not all software errors become software faults. In some cases, the software error causes improper functioning of the software; in many other cases, erroneous code lines will not affect the functionality of the software as a whole.
Error, faults, failures
Static Testing
Static testing may examine not only the code, but also the specification, design and the user documents.
It is testing prior to deployment.
Static testing is done by humans or with specialized tools.
Static Testing
Static testing includes :-
Desk checking the code, code reviews, design walkthroughs, and
inspections to check requirements and design documents.
Reviews, walkthroughs and inspections are methods for the detailed
examination of a product by systematically reading its content
on a step-by-step basis to find defects.
Running the syntax and type checkers as part of the
compilation process, or other code analysis tools.
Syntax checking and manually reading the code to find errors
are methods of static testing.
This type of testing is mostly done by the developer himself.
Static testing is usually the first type of testing done on any
system.
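For example, a reviewer desk checking the code, or a static analysis tool run alongside the compiler, could flag a defect like the one below without ever executing the program. The snippet is an illustrative sketch, not code from the lecture.

public class StaticCheckExample {
    // Desk checking or a static analyser would flag this method:
    // == compares object references, not string contents, so the check
    // is almost certainly not what the author intended.
    static boolean isAdmin(String role) {
        return role == "admin";   // should be "admin".equals(role)
    }
}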
Dynamic Testing
Dynamic testing includes :-
Feeding the program with input and observing its behavior.
Checking a certain number of input and output values.
- Unit testing
- Integration Testing
- System testing
- User Acceptance Testing
Test Bed
[Figure: a test driver feeds test cases 1..n to the test object at the point of control (PoC); stubs 1..k stand in for the modules the test object calls; the output is observed at the point of observation (PoO) and compared with the expected test output.]
Incremental approach to execute tests
Steps are:
Determine conditions and preconditions for the test and the goals
that are to be achieved
Specify individual test cases
Determine how to execute the tests (usually chaining together
several test cases)
• Eg: group test cases in such a way that a whole sequence of test
cases is executed (test sequence or test scenario)
• Document it in test procedure specification
• Need test script, e.g., JUnit
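A minimal sketch of such a test procedure as a JUnit test script, chaining two test cases into one test sequence; the ShoppingCart class and its methods are assumed purely for illustration.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, used only for illustration.
class ShoppingCart {
    private int items = 0;
    void add(int n)    { items += n; }
    void remove(int n) { items -= n; }
    int size()         { return items; }
}

public class CartScenarioTest {
    @Test
    public void addThenRemoveScenario() {
        ShoppingCart cart = new ShoppingCart(); // precondition: an empty cart

        cart.add(3);                            // test case 1: adding items
        assertEquals(3, cart.size());

        cart.remove(2);                         // test case 2: chained onto the state left by test case 1
        assertEquals(1, cart.size());
    }
}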
Techniques for Testing
Black Box Testing Techniques(1)
Test object is treated as a black box
The inner structure and design of the test object is unknown
Test cases are derived/designed using the specification or the
requirements of the test object
The behavior of the test object is watched from the outside (PoO is
outside the test object)
It is not possible to control the operating sequence of the object other
than choosing the adequate input test data (PoC is situated outside of
test object)
Used for higher levels of testing
Any test design before the code is written (test-first programming, test-
driven development) is black box driven
Black Box testing
Focus :- I/O behavior :- For any given input, predict the output.
If it matches with the actual output then the module passes the
test.
In black box testing, the software is considered as a box. The
tester only knows what the software is supposed to do – he
can't look in the box to see how it operates, as shown in the figure.
If he gives a certain input, he gets a certain output.
He doesn't know HOW or WHY it happens.
Black Box Testing Can be defined as
It is testing against functional specifications.
It will not test hidden functions and errors associated with them
will not be found.
It tests valid and invalid inputs but cannot possibly test all
inputs.
Black Box Testing Can be defined as
Black box testing is so named as it IS NOT POSSIBLE to look
inside the box to determine the design and implementation
structures of the components you are testing.
The author of the program, who knows too much about the
program internals, should not perform black box testing.
[Figure: in black box testing, both the point of control (PoC), where the test input data is applied, and the point of observation (PoO), where the output is observed, lie outside the test object.]
Black-Box Testing Types
Static Black Box Testing :- Testing the specification
Dynamic Black Box Testing :-
Testing the software without having insight into the details of
the underlying code.
It’s dynamic because the program is running. And it is black
box because you are testing it without knowing exactly how it
works.
Drivers for Testing
Unit testing is also good for catching “rare” bugs that only occur
when specific sets of inputs are fed to a module.
For example, if you wanted to move a fighter in the game, the driver
code would be a call like the sketch below.
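A minimal sketch, assuming a hypothetical Fighter class with move() and getPosition() methods; none of these names are taken from the lecture.

// Illustrative sketch only; the Fighter class and its API are assumed.
class Fighter {
    private int row, col;
    void move(int row, int col) { this.row = row; this.col = col; }
    String getPosition()        { return row + "," + col; }
}

public class FighterDriver {
    public static void main(String[] args) {
        Fighter fighter = new Fighter();
        fighter.move(2, 3);                        // the driver line of code
        System.out.println(fighter.getPosition()); // a test would check this is "2,3"
    }
}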
This driver code would likely be called from the main method. A white-
box test case would execute this driver line of code and check
"fighter.getPosition()" to make sure the player is now on the expected
cell on the board.
Stubs for Testing
A Stub is a dummy procedure, module or unit that stands in for
an unfinished portion of a system.
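A hedged sketch of what such a stub might look like, assuming a hypothetical PricingService module that is not finished yet.

// Hypothetical interface that the real pricing module will eventually implement.
interface PricingService {
    double priceFor(String productId);
}

// Stub standing in for the unfinished pricing module: it returns a fixed,
// canned value so that the modules which call it can be tested now.
class PricingServiceStub implements PricingService {
    @Override
    public double priceFor(String productId) {
        return 9.99;   // dummy answer, no real pricing logic
    }
}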
[Figure: module hierarchy in which main calls A, B and C, with D, E and F below; each module has its own test (test main, test A, test B, test C, test D, test E, test F), and the integrated test combines them all (test main, A, B, C, D, E, F).]
Big Bang Integration
In this type all components are combined at once to form a program.
That is, test all components in isolation, then mix them all together
and see how it works.
This approach is great with smaller systems, but it can end up taking
a lot of time for larger, more complex systems.
However, this approach is not recommended, as both drivers and
stubs are required.
Big Bang Integration
Advantages
Convenient for small systems
Disadvantages
Need driver and stubs for each module
Integration testing can only begin when all
modules are ready
Fault localization difficult
Easy to miss interface faults
Top-down Integration
It is an incremental approach to the construction of the program
structure:
- Depth-first manner
- Breadth-first manner
Top-down Integration
Incremental strategy
1. Start by including the highest-level modules in the test set.
• All other modules are replaced by stubs or mock objects.
2. Integrate (i.e. replace the stub by the real module) the modules
called by the modules already in the test set.
3. Repeat until all modules are in the test set.
[Figure: module hierarchy M1 at the top, with M2, M3, M4 below it and M5, M6, M7, M8 below them; the test set grows step by step, e.g. test main; test main, A, B, C; test main, A, B, C, D, E, F.]
Top-down Integration
Depth First Manner :-
[Figure: depth-first integration of the hierarchy (main; A, B, C; D, E, F): the test set grows along one branch first, e.g. test main; test main, A, C; test main, A, C, D, E, F; and finally test main, A, B, C, D, E, F.]
Steps in Top-down Integration
The main control module is used as a test driver, and stubs are
substituted for all components directly subordinate to the main
control module.
Depending on the integration approach, subordinate stubs are
replaced one at a time with actual components.
Tests are conducted as each component is integrated.
On completion of each set of tests, another stub is replaced
with a real component.
Re-testing may be conducted to ensure that new errors have
not been introduced.
[Figure: the module hierarchy (main calling A, B, C, with D, E, F below) and the intermediate test sets, e.g. test D; test E; test F; test B; test D, E, A; test C, F; building up to test main, A, B, C, D, E, F.]
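Continuing the hypothetical PricingService example from the stub sketch above, a top-down integration test might exercise a real high-level module while the stub still stands in for its unfinished subordinate. All class names here are assumptions for illustration; PricingService is redefined so the sketch stands alone.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical subordinate module, still represented by a stub.
interface PricingService {
    double priceFor(String productId);
}

// Real high-level module under test.
class OrderProcessor {
    private final PricingService pricing;
    OrderProcessor(PricingService pricing) { this.pricing = pricing; }
    double totalFor(String productId, int quantity) {
        return pricing.priceFor(productId) * quantity;
    }
}

public class TopDownIntegrationTest {
    @Test
    public void totalUsesTheSubordinateModule() {
        PricingService stub = productId -> 10.0;        // stub for the unfinished module
        OrderProcessor processor = new OrderProcessor(stub);
        assertEquals(20.0, processor.totalFor("ABC", 2), 0.0001);
        // Later the stub is replaced by the real PricingService and the test is re-run.
    }
}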
Bottom-up Integration
Bottom-up integration is just the opposite of top-down
integration: the components become available and are
integrated in reverse order, starting from the bottom
of the hierarchy.
[Figure: the module hierarchy integrated bottom-up: the lowest-level modules are tested first (test D, test E, test F), then combined with their callers (test D, E, A and test C, F), then test main, building up to test main, A, B, C, D, E, F.]
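In bottom-up integration the test itself acts as the driver for the lowest-level module, and no stubs are needed. A minimal sketch, with the Discount class assumed for illustration:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical lowest-level module, tested first in bottom-up integration.
class Discount {
    double apply(double price, double percent) {
        return price - price * percent / 100.0;
    }
}

public class BottomUpDriverTest {
    @Test
    public void discountIsAppliedToPrice() {
        // The test acts as the driver and calls the low-level module directly.
        Discount discount = new Discount();
        assertEquals(90.0, discount.apply(100.0, 10.0), 0.0001);
        // Once Discount passes, it is integrated with the modules that call it.
    }
}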
Sandwich Integration
Pros :-
Cons :- Does not test the individual subsystems thoroughly
before integration
Function/thread integration
Integrate modules according to
threads/functions they belong to
[Figure: the same module hierarchy (main; A, B, C; D, E, F).]
Smoke Testing
The term smoke testing originated in the hardware industry.
After a piece of hardware or a hardware components(e.g.
transformer) was changed or repaired, the equipment was simply
powered up.
If there was no smoke, the component passed the test.
In software, the term smoke testing describes the process of
validating changed code to identify and fix defects in the software
before the changes are merged into the source product.
It is designed to confirm that changes in the code function as
expected.
Before running a smoke test, code review is conducted, which
focuses on changes in the code.
The tester must work with the developer who has written the code to
understand :
- What changes were made in the code?
- How do the changes affect the functionality?
- How do the changes affect the interdependencies of the various
components?
Smoke Testing
Whenever a new software build is received, the smoke test is
run against the software, verifying that the major functionality still
operates.
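As an illustration, a smoke test is usually a small automated check of the major functionality that is run against every new build. The Application class and its methods below are assumed purely for illustration.

import org.junit.Test;
import static org.junit.Assert.assertTrue;

// Hypothetical application facade, used only for illustration.
class Application {
    boolean start() { return true; }              // stands in for real start-up logic
    boolean login(String user, String password) {
        return user != null && password != null;  // stands in for real login logic
    }
}

public class SmokeTest {
    @Test
    public void majorFunctionalityStillOperates() {
        Application app = new Application();
        assertTrue(app.start());                // smoke check 1: the build starts at all
        assertTrue(app.login("demo", "demo"));  // smoke check 2: the key function is not obviously broken
    }
}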
Regression Testing
Definition :-
Types :-
Regular Regression Testing
Final Regression Testing
Types of regression testing
Regular Regression Testing :-
d.If the result of a particular test case is FAIL using the previous builds
but works with a documented workaround and
Product acceptance :-
While accepting the product, end users execute all existing test
cases to determine whether the product meets the requirements.
Acceptance Testing
Procedure acceptance :-
- All major defects that come up during the first six months of
deployment need to be fixed free of cost.
- All major defects are to be fixed within 48 hours of reporting.
Types of Acceptance Testing
Alpha Test :-
When the first round of bugs has been fixed, the product goes
into beta test.
Types of Acceptance Testing
Beta Test :-
When a test case verifies the requirements of the product with a set of
expected outputs, it is called a “positive test case”.
When a test case does not verify the requirements of the product
with a set of expected outputs, it is called a “negative test case”.
For example,
if the program is supposed to give an error when the user
types “101” into a field that should be between “1” and “100”,
then it is a positive test if the error shows up. If, however, the
application does not give an error when the user types “101”,
then you have a negative test.
Positive and Negative Testing
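The 1-to-100 field example above can be expressed as two test sketches, one feeding a valid input and one feeding the invalid input “101”; the RangeField class is assumed for illustration.

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical field that should accept only values between 1 and 100.
class RangeField {
    boolean accepts(int value) { return value >= 1 && value <= 100; }
}

public class RangeFieldTest {
    @Test
    public void valueInsideRangeIsAccepted() {
        assertTrue(new RangeField().accepts(50));    // valid input should be accepted
    }

    @Test
    public void valueOutsideRangeIsRejected() {
        assertFalse(new RangeField().accepts(101));  // invalid input "101" should be rejected with an error
    }
}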
Symptoms, Root Causes and Software Engineering Practices
Symptoms: Needs not met; Requirements churn; Modules don't fit; Hard to maintain; Late discovery; Poor quality; Poor performance; Colliding developers; Build-and-release problems.
Root Causes: Incorrect requirements; Ambiguous communications; Brittle architectures; Overwhelming complexity; Undetected inconsistencies; Insufficient testing; Subjective assessment; Waterfall development; Uncontrolled change; Insufficient automation.
Software Engineering Practices: Develop Iteratively; Manage Requirements; Use Component Architectures; Model Visually (UML); Continuously Verify Quality; Manage Change.
Software Engineering Practices Reinforce Each Other
[Table excerpt: e.g. Develop Iteratively together with Use Component Architectures validates architectural decisions early on.]
Review: Core Concepts of Software Testing
What is Quality?
Who are the Stakeholders?
What is a Defect?
What are Dimensions of Quality?
What are Test Ideas?
Where are Test Ideas useful?
Give some examples of Test Ideas.
Explain how a catalog of Test Ideas could
be applied to a Test Matrix.
Principles of Software Testing for
Testers
Module 4: Define Evaluation Mission
So? Purpose of Testing?
The typical testing group has two key
priorities.
Find the bugs (preferably in priority order).
Assess the condition of the whole product
(as a user will see it).
Sometimes, these conflict
The mission of assessment is the underlying
reason for testing, from management’s
viewpoint. But if you aren’t hammering hard on
the program, you can miss key risks.
Missions of Test Groups Can Vary
Find defects
Maximize bug count
Block premature product releases
Help managers make ship / no-ship decisions
Assess quality
Minimize technical support costs
Conform to regulations
Minimize safety-related lawsuit risk
Assess conformance to specification
Find safe scenarios for use of the product (find ways to
get it to work, in spite of the bugs)
Verify correctness of the product
Assure quality
A Different Take on Mission: Public vs. Private Bugs
A programmer’s public bug rate includes all
bugs left in the code at check-in.
A programmer’s private bug rate includes
all the bugs that are produced, including the
ones fixed before check-in.
Estimates of private bug rates have ranged
from 15 to 150 bugs per 100 statements.
What does this tell us about our task?
Defining the Test Approach
The test approach (or “testing strategy”)
specifies the techniques that will be used to
accomplish the test mission.
The test approach also specifies how the
techniques will be used.
A good test approach is:
Diversified
Risk-focused
Product-specific
Practical
Defensible
Heuristics for Evaluating Testing Approach
James Bach collected a series of heuristics
for evaluating your test approach. For
example, he says:
Testing should be optimized to find important
problems fast, rather than attempting to find all
problems with equal urgency.
Please note that these are heuristics – they
won’t always be the best choice for your
context. But in different contexts, you’ll find
different ones very useful.
What Test Documentation Should You Use?
Test planning standards and templates
Examples
Some benefits and costs of using IEEE-829
standard based templates
When are these appropriate?
Thinking about your requirements for test
documentation
Requirements considerations
Questions to elicit information about test
documentation requirements for your project
Write a Purpose Statement for Test Documentation
Try to describe your core documentation
requirements in one sentence that doesn’t
have more than three components.
Examples:
The test documentation set will primarily
support our efforts to find bugs in this version,
to delegate work, and to track status.
The test documentation set will support ongoing
product and test maintenance over at least 10
years, will provide training material for new
group members, and will create archives
suitable for regulatory or litigation use.
Review: Define Evaluation Mission
What is a Test Mission?
What is your Test Mission?
What makes a good Test Approach (Test
Strategy)?
What is a Test Documentation Mission?
What is your Test Documentation Goal?
Principles of Software Testing for
Testers
Module 5: Test & Evaluate
Test and Evaluate – Part One: Test
In this module, we drill into Test and Evaluate.
This addresses the “How?” question: How will you test those things?
Test and Evaluate – Part One: Test
This module focuses on the activity Implement Test.
Earlier, we covered Test-Idea Lists, which are input here.
In the next module, we’ll cover Analyze Test Failures, the second
half of Test and Evaluate.
Review: Defining the Test Approach
In Module 4, we covered Test Approach
A good test approach is:
Diversified
Risk-focused
Product-specific
Practical
Defensible
The techniques you apply should follow
your test approach
Discussion Exercise 5.1: Test Techniques
There are as many as 200 published testing
techniques. Many of the ideas are
overlapping, but there are common themes.
Similar sounding terms often mean different
things, e.g.:
User testing
Usability testing
User interface testing
What are the differences among these
techniques?
Dimensions of Test Techniques
Think of the testing you do in terms of five
dimensions:
Testers: who does the testing.
Coverage: what gets tested.
Potential problems: why you're testing (what
risk you're testing for).
Activities: how you test.
Evaluation: how to tell whether the test passed
or failed.
Test techniques often focus on one or two
of these, leaving the rest to the skill and
imagination of the tester.
Test Techniques—Dominant Test Approaches
Of the 200+ published Functional Testing techniques,
there are ten basic themes.
They capture the techniques in actual practice.
In this course, we call them:
Function testing
Equivalence analysis
Specification-based testing
Risk-based testing
Stress testing
Regression testing
Exploratory testing
User testing
Scenario testing
Stochastic or Random testing
“So Which Technique Is the Best?”
Each has strengths and weaknesses.
Think in terms of complement.
There is no “one true way”.
Mixing techniques can improve coverage.
[Figure: techniques A through H arranged around the five dimensions (testers, coverage, potential problems, activities, evaluation), each technique covering some dimensions better than others.]
Apply Techniques According to the LifeCycle
Test Approach changes over the project
Some techniques work well in early phases;
others in later ones
Align the techniques to iteration objectives
[Figure: the same technique/dimension chart, with different techniques emphasized at different points in the lifecycle.]
Applying Opposite Techniques to Boost Coverage
Contrast these two techniques:
Regression
• Inputs: old test cases and analyses leading to new test cases
• Outputs: archival test cases, preferably well documented, and bug reports
• Better for: reuse across multi-version products
Exploration
• Inputs: models or other analyses that yield new tests
• Outputs: scribbles and bug reports
• Better for: finding new bugs, scouting new areas, risks, or ideas
Applying Complementary Techniques Together
Regression testing alone suffers fatigue
The bugs get fixed and new runs add little info
Symptom of weak coverage
Combine automation w/ suitable variance
E.g. Risk-based equivalence analysis
Coverage of the combination can beat the sum of the parts.
[Figure: Venn diagram of the equivalence, risk-based and regression techniques overlapping.]
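A minimal sketch of such a combination: an automated regression test whose inputs come from equivalence analysis of the riskier boundary values, rather than from a single fixed case. The Validator class and the chosen equivalence classes are assumptions, not course material.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical module under regression test.
class Validator {
    boolean isValidPercentage(int value) { return value >= 0 && value <= 100; }
}

public class PercentageRegressionTest {
    @Test
    public void oneRepresentativePerEquivalenceClass() {
        Validator validator = new Validator();
        // Representatives of each equivalence class, plus the risky boundary values.
        int[]     inputs   = { -1, 0, 50, 100, 101 };
        boolean[] expected = { false, true, true, true, false };
        for (int i = 0; i < inputs.length; i++) {
            assertEquals("input " + inputs[i], expected[i], validator.isValidPercentage(inputs[i]));
        }
    }
}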
How To Adopt New Techniques
1. Answer these questions:
What techniques do you use in your test approach
now?
What is its greatest shortcoming?
What one technique could you add to make the
greatest improvement, consistent with a good test
approach:
• Risk-focused?
• Product-specific?
• Practical?
• Defensible?
2. Apply that additional technique until proficient
3. Iterate