Testing - Foundation
1.1.1 Why is testing necessary - Why do we test?
(the common answer is:) To find bugs!
...but consider also:
To reduce the impact of failures at the client's site (live defects) and
ensure that they will not affect costs & profitability
To decrease the rate of failures (increase the product's reliability)
To improve the quality of the product
To ensure requirements are implemented fully & correctly
To validate that the product is fit for its intended purpose
To verify that required standards and legal requirements are met
To maintain the company's reputation
Find defects
Assess the level of quality of the software product and provide related
information to the stakeholders
Prevent defects
Reduce risk of operational incidents
Increase the product quality
A programmer (or analyst) can make an error (mistake), which produces a defect
(fault, bug) in the program's code. If such a defect in the code is executed, the
system will fail to do what it should do (or will do something it should not do),
causing a failure.
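A minimal sketch of this error - defect - failure chain (the function and values are hypothetical):

# error (mistake): the programmer intended range(n) but wrote range(n - 1)
def sum_first_n(values, n):
    total = 0
    for i in range(n - 1):   # defect (fault, bug) in the code
        total += values[i]
    return total

# executing the defect causes a failure: the system does the wrong thing
print(sum_first_n([1, 2, 3], 3))   # prints 3 instead of the expected 6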
1.1.2 Why is testing necessary - Causes of errors
Defects are caused by human errors!
Why? Because of:
Time pressure - the more pressure we are under, the more likely we are to
make mistakes
Code complexity or new technology
Too many system interactions
Requirements not clearly defined, changed & not properly documented
We make wrong assumptions about missing bits of information!
Poor communication
Poor training
1.1.2 Why is testing necessary - Causes of software defects - Defect taxonomies
(Boris Beizer)
Requirements (incorrect, logic, completeness, verifiability, documentation, changes)
Features and functionality (correctness, missing case, domain and boundary,
messages, exception mishandled)
Structural (control flow, sequence, data processing)
Data (definition, structure, access, handling)
Implementation and Coding
Integration (internal and external interfaces)
System (architecture, performance, recovery, partitioning, environment)
Test definition and execution (test design, test execution, documentation, reporting)
(Cem Kaner (b1))
User interface (functionality, communication, missing, performance, output)
Error handling (prevention, detection, recovery)
Boundary (numeric, loops)
Calculation (wrong constants, wrong operation order, over & underflow)
Initialization (data item, string, loop control)
Control flow (stop, crash, loop, if-then-else)
Data handling (data type, parameter list, values)
Race & Load conditions (event sequence, no resources)
Source and version control (old bug reappear)
Testing (fail to notice, fail to test, fail to report)
1.1.3 Why is testing necessary - The role of testing in the software life cycle
Testers cooperate with:
Analysts to review the specifications for completeness and correctness,
ensure that they are testable
Designers to improve interfaces testability and usability
Programmers to review the code and assess structural flaws
Project managers to estimate and plan testing, develop test cases, perform
tests, report bugs, and assess quality and risks
Quality assurance staff to provide defect metrics
Interactions with these project roles are complex; a RACI matrix (Responsible,
Accountable, Consulted, Informed) helps clarify who does what.
1.1.4 Why is testing necessary - What is quality?
Quality (ISO) = The totality of the characteristics of an entity that bear on its
ability to satisfy stated or implied needs
1.1.4 Why is testing necessary - Quality attributes
1.1.5 Why is testing necessary - How much testing is enough?
Five basic criteria are often used to decide when to stop testing.
Software Reliability Engineering can help also to determine when to stop testing,
by taking into consideration aspects like failure intensity.
1.2 What is testing - Definition of testing
Testing = the process concerned with planning the necessary static and dynamic
activities, preparation and evaluation of software products and related deliverables,
in order to:
determine that they satisfy specified requirements
demonstrate that they are fit for the intended use
detect defects, help and motivate the developers to fix them
measure, assess and improve the quality of the software product
There are two basic types of testing: execution-based and non-execution-based
Other definitions:
(IEEE) Testing = the process of analyzing a software item to detect the differences
between existing and required conditions and to evaluate its features
(Myers (b3)) Testing = the process of executing a program with the intent of finding
errors
(Craig & Jaskiel (b5)) Testing = a concurrent lifecycle process of engineering, using
and maintaining test-ware in order to measure and improve the quality of the
software being tested
1.2 What is testing - Testing schools
Analytic School - testing is rigorous, academic and technical
Testing is a branch of CS/Mathematics
Testing techniques must have a logic-mathematical form
Key Question: Which techniques should we use?
Require precise and detailed specifications
Factory School - testing is a way to measure progress, with emphasis on cost and
repeatable standards
Testing must be managed & cost effective
Testing validates the product & measures development progress
Key Questions: How can we measure whether we're making progress? When will we be done?
Require clear boundaries between testing and other activities (start/stop criteria)
Encourage standards (V-model), best practices, and certification
Quality School - emphasizes process & quality, acting as the gatekeeper
Software quality requires discipline
Testers may need to police developers to follow the rules
Key Question: Are we following a good process?
Testing is a stepping stone to process improvement
Context-Driven School - emphasizes people, setting out to find the bugs that will be
most important to stakeholders
Software is created by people. People set the context
Testing finds bugs acting as a skilled, mental activity
Key Question: What testing would be most valuable right now?
Expect changes. Adapt testing plans based on test results
Testing research requires empirical and psychological study
1.3 General testing principles
Testing shows presence of defects, but cannot prove that there are no more
defects; testing can only reduce the probability of undiscovered defects
Pareto rule (defect clustering): usually 20% of the modules contain 80% of the
bugs
Early testing: testing activities should start as soon as possible (including here
planning, design, reviews)
Pesticide paradox: if the same set of tests is repeated over and over, no new
bugs will be found; test cases should be reviewed and modified, and new test
cases developed
Verification and Validation: discovering defects cannot help a product that is not fit
for the users' needs
1.3 General testing principles - heuristics of software testing
Controllability - The better we control the software, the more the testing process
can be automated and optimized
Simplicity - The less there is to test, the more quickly we can test it
Stability - The fewer the changes, the fewer are the disruptions to testing
Understandability - The more information we have, the smarter we test
Suitability - The more we know about the intended use of the software, the
better we can organize our testing to find important bugs
1.4.1 Fundamental test process - phases
1.4.1 Fundamental test process - planning & control
Planning
1. Determine scope
Study project documents, used software life-cycle specifications, product desired
quality attributes
Clarify test process expectations
2. Determine risks
Choose quality risk analysis method (e.g. FMEA)
Document the list of risks, probability, impact, priority, identify mitigation actions
3. Estimate testing effort, determine costs, develop schedule
Define necessary roles
Decompose test project into phases and tasks (WBS)
Schedule tasks, assign resources, set-up dependencies
4. Refine plan
Select test strategy (how to do it, what test types at which test levels)
Select metrics to be used for defect tracking, coverage, monitoring
Define entry and exit criteria
Control
Measure and analyze results
Monitor testing progress, coverage, exit criteria
Assign or reallocate resources, update the test plan schedule
Initiate corrective actions
Make decisions
1.4.2 Fundamental test process - analysis & design
1.4.2 Fundamental test process - what is a test oracle?
The expected result (test outcome) must be defined at the test analysis stage
Who will decide that expected result = actual result when the test is executed? The test oracle!
Oracles in use = simplification of risk: do not assess pass - fail, but instead
problem - no problem
Possible issues:
false alarms
missed bugs
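A minimal sketch of a comparison oracle using the problem / no problem idea; the function and tolerance are assumptions:

# instead of a strict pass/fail verdict, report "problem / no problem"
# within a tolerance (values are hypothetical)
def oracle(expected, actual, tolerance=0.01):
    return "no problem" if abs(expected - actual) <= tolerance else "problem"

print(oracle(10.0, 10.005))  # no problem: within tolerance
print(oracle(10.0, 10.5))    # problem: flag for investigation
# a tolerance set too wide misses bugs; one set too narrow raises false alarms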
1.4.3 Fundamental test process - implementation & execution
Test implementation:
Develop and prioritize test cases, create test data, test harnesses and automation scripts
Create test suites from the test cases
Check test environment
Test execution:
Execute (manually or automatically) the test cases (suites)
Use Test Oracles to determine if test passed or failed
Log the outcome of test execution
Report incidents (bugs) and try to discover whether they are caused by the test
data, the test procedure, or by genuine defects
Expand test activities as necessary, according to the testing mission
(see Rex Black (b4))
1.4.3 Fundamental test process - prioritizing the test cases
It is not possible to test everything; we must do our best in the time available
Testing must be risk-based, ensuring that the errors that do get through to the
client's production system have the smallest possible impact and frequency
of occurrence
What to watch?
1.4.4 Fundamental test process - evaluating exit criteria and reporting
Test reporting:
Write the test summary report for the stakeholders' use
1.4.5 Fundamental test process - test closure
1.5 The psychology of testing
Testing is regarded as a destructive activity
(we run tests to make the software fail)
A good tester:
Should always have a critical approach
Must keep attention to detail
Must have analytical skills
Should have good verbal and written communication skills
Must analyse and work with incomplete facts
Must learn quickly about the product being tested
Should be able to quickly prioritise
Should be a planned, organised kind of person
Rex Black's top 10 professional errors:
Fall in Love with a Tool
Write Bad Bug Reports
Fail to Define the Mission
Ignore a Key Stakeholder
Deliver Bad News Badly
Take Sole Responsibility for Quality
Be an Un-appointed Process Cop
Fail to Fire Someone who Needs Firing
Forget You're Providing a Service
Ignore Bad Expectations
Also, a good tester must have good knowledge about: (see also Brian Marick's article)
The customer's business workflows
The product architecture and interfaces
The software project process
Testing techniques and practices
1.5 The psychology of testing
The best tester isn't the one who finds the most bugs; the best tester is the one who
gets the most bugs fixed (Cem Kaner)
2.1.1 The V testing model - Verification & Validation
Verification = confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Validation = confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification is the dominant activity at the unit, integration and system testing levels; validation is a mandatory activity at the acceptance testing level.
2.1.1 The W testing model - dynamic testing
2.1.1 The W testing model - static testing
2.1.2 Software development models - Waterfall
2.1.2 Software development models - Waterfall
Waterfall weaknesses
2.1.2 Software development models - Rapid Prototype Model
2.1.2 Software development models - Rapid Prototype Model
2.1.2 Software development models - Incremental Model
2.1.2 Software development models - Incremental Model
2.1.2 Software development models - Spiral Model
2.1.2 Software development models - Spiral Model
The model is complex; developers, managers, and customers may find it
too complicated to use
Considerable risk assessment expertise is required
Hard to define objective, verifiable milestones that indicate readiness to
proceed to the next iteration
May be expensive - time spent planning, resetting objectives, doing risk
analysis, and prototyping may be excessive
2.1.2 Software development models - Rational Unified Process
2.1.3 Software development models - Testing life cycle
Planning, analysis and design of a testing activity should be done during the
corresponding development activity
2.2.1 Test levels - Component testing
Target: single software modules, components that are separately testable
Access to the code being tested is mandatory; usually involves the programmer
o Functional tests
o Non-functional tests (stress test)
o Structural tests (statement coverage, branch coverage)
2.2.2 Test levels - Component Integration testing
Target: the interfaces between components and interfaces with other parts of the system
Increase the number of components, create & test subsystems and finally the complete system
driver: a software component or test tool that replaces a component that takes
care of the control and/or the calling of a component or system
System integration testing: we check the data exchanged between our system and other external systems
Additional difficulties:
Multiple platforms
Communications between platforms
Management of the environments
2.2.3 Test levels - System testing
System testing = the process of testing an integrated system to verify that it meets
specified requirements
Black box testing techniques may be used (e.g. a business rule decision table; a sketch follows below)
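A minimal sketch of such a decision table, with hypothetical membership and discount rules:

# Business-rule decision table (rules are hypothetical):
# conditions: customer is a member?, order total > 100
# action: discount percentage
DECISION_TABLE = {
    (True, True): 15,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}

def discount(is_member, total):
    return DECISION_TABLE[(is_member, total > 100)]

# one test case per rule (column) of the table:
assert discount(True, 200) == 15
assert discount(True, 50) == 10
assert discount(False, 200) == 5
assert discount(False, 50) == 0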
2.2.4 Test levels - Acceptance testing
Acceptance testing = formal testing with respect to user needs, requirements,
and business processes, conducted to determine whether or not a system satisfies
the acceptance criteria and to enable the user, customers or other authorized entity
to determine whether or not to accept the system
The main focus is not to find defects, but to assess readiness for deployment
It is not necessarily the final testing level; a final system integration testing session
can be executed after the acceptance tests
May also be executed after component testing (component usability acceptance)
Usually involves client representatives
Typical forms:
User acceptance: business-aware users verify the main features
Operational acceptance testing: backup-restore, security, maintenance
Alpha and Beta testing: performed by customers or potential users
Alpha: at the developer's site
Beta: at the customer's site
2.3.1 Test types - Functional testing
Specification based:
uses Test Cases, derived from the specifications (Use Cases)
business process based, using business scenarios
2.3.2 Test types - Non-Functional testing
Performance testing
Load testing (how much load can be handled by the system?)
Stress testing (evaluate system behavior at limits and out of limits)
Usability testing
Reliability testing
Portability testing
Maintainability testing
2.3.2 Test types - Non-Functional testing - Usability
2.3.2 Test types - Non-Functional testing - Installability
2.3.2 Test types - Non-Functional testing - Load, Stress, Performance, Volume testing
Load test = a test type concerned with measuring the behavior of a component
or system with increasing load (e.g. number of parallel users and/or number of
transactions), to determine what load can be handled by the component or system
Spike test = periodically pushing the system beyond its specified limits for
short periods of time
Endurance test = a load test performed over a long time interval (week(s))
Volume test = testing where the system is subjected to large volumes of data
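A minimal load-test sketch, assuming a hypothetical send_request stand-in for a call to the system under test:

import threading
import time

def send_request():
    # stand-in for a real call to the system under test (hypothetical)
    time.sleep(0.01)

def load_test(parallel_users=50, requests_per_user=20):
    # increase parallel_users across runs to find the load the system handles
    def user():
        for _ in range(requests_per_user):
            send_request()
    threads = [threading.Thread(target=user) for _ in range(parallel_users)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("elapsed seconds:", round(time.time() - start, 2))

load_test()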
2.3.3 Test types - Structural testing
Targeted to test:
Used also to help measure coverage (the % of items exercised by tests)
2.3.4 Test types - Confirmation & regression testing
2.4 Maintenance testing
Includes:
3.1 Reviews and the testing process
Reviews
Why review?
To identify errors as soon as possible in the development lifecycle
Reviews offer the chance to find omissions and errors in the software
specifications
3.1 Reviews and the testing process
When to review?
As soon as a software artifact is produced, before it is used as the basis for the
next step in development
Benefits include:
Early defect detection
Reduced testing costs and time
Can find omissions
Risks:
If misused, reviews can lead to friction between project team members
The errors & omissions found should be regarded as a positive outcome, and
the author should not take them personally
No follow-up is made to ensure corrections have been made
Witch-hunts when things are going wrong
3.2.1 Phases of a formal review
Planning: define scope, select participants, allocate roles, define entry &
exit criteria
Kick-off: distribute documents, explain objectives, process, check entry
criteria
Individual preparation: each participant studies the documents, takes
notes, raises questions and comments
Review meeting: meeting participants discuss and log defects, make
recommendations
Rework: fixing defects (by the author)
Follow-up: verify again, gather metrics, check exit criteria
3.2.2 Roles in a formal review
3.2.3 Types of review
3.3 Static analysis by tools
4. Test design techniques - glossary
4.1 Test design - test development process
1. Identify test conditions
2. Develop test cases
Other taxonomy:
Specification based: test cases are built from the specifications of the module
Structure based: information about how the module is constructed (design, code)
is used to derive the test cases
Experience based: the tester's knowledge about the specific domain and the
likely defects is used
4.3.1 Black box techniques - equivalence partitioning
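A minimal sketch of the technique, assuming a hypothetical age field that accepts values 18..65:

# one valid partition (18..65) and two invalid ones (<18, >65)
def is_eligible(age):
    return 18 <= age <= 65

# one representative value per partition is enough:
assert is_eligible(40)          # valid partition: 18..65
assert not is_eligible(10)      # invalid partition: below 18
assert not is_eligible(70)      # invalid partition: above 65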
4.3.4 Black box techniques - state transition tables
4.3.4 Black box techniques - state transition tables - example
Ticket buy - web application
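A minimal sketch of a state transition table for such a flow; the states and events below are assumptions, not the original diagram:

TRANSITIONS = {
    ("browsing", "select_ticket"): "cart",
    ("cart", "checkout"): "payment",
    ("payment", "pay_ok"): "confirmed",
    ("payment", "pay_fail"): "cart",
}

def next_state(state, event):
    # undefined (state, event) pairs are invalid: remain in the current state
    return TRANSITIONS.get((state, event), state)

# state transition testing: exercise every valid transition at least once,
# plus some invalid events
assert next_state("browsing", "select_ticket") == "cart"
assert next_state("cart", "checkout") == "payment"
assert next_state("payment", "pay_ok") == "confirmed"
assert next_state("payment", "pay_fail") == "cart"
assert next_state("browsing", "pay_ok") == "browsing"  # invalid event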
4.3.5 Black box techniques - requirements based testing
Best practices:
Validate requirements (what) against objectives (why)
Apply use cases against requirements
Perform ambiguity reviews
Involve domain experts in requirements reviews
Create cause-effect diagrams
Check logical consistency of test scenarios
Validate test scenarios with domain experts and users
Walk through scenarios comparing with design documents
Walk through scenarios comparing with code
4.3.5 Black box techniques - syntax testing
Here is a representation of the syntax for a floating point number, float, in
Backus Naur Form (BNF):
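A minimal sketch of such a grammar (the exact rules in the original may differ):

<float>  ::= <digits> "." <digits> | <digits> "." <digits> <exp>
<exp>    ::= "e" <sign> <digits> | "E" <sign> <digits>
<sign>   ::= "+" | "-" | ""
<digits> ::= <digit> | <digit> <digits>
<digit>  ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"

Test cases then exercise valid and invalid strings against each rule (e.g. "1.5", "2.5e+3", and the mutated "2.5e+", which must be rejected).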
Syntax testing is the only black box technique without a coverage metric assigned.
4.4 White box techniques - Control flow
Modules of code are converted to graphs, the paths through the graphs are analyzed, and
test cases are created from that analysis. There are different levels of coverage.
Example:
a;          // statement, always executed
if (b) {    // decision: b can be TRUE or FALSE
c;          // executed only when b is TRUE
}
d;          // statement, always executed
In case b is TRUE, executing the code will result in 100% statement coverage; in
case b is FALSE, statement c is not executed and a single test achieves less than 100%
4.4.1 White box techniques - statement coverage - exercise
}
else {
e;
}
How many test cases are needed to get 100% statement coverage?
4.4.2 White box techniques - branch & decision coverage - glossary
For components with one entry point 100% Branch Coverage is equivalent to
100% Decision Coverage
4.4.2 White box techniques - branch & decision coverage - example
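A minimal sketch of what such an example can look like; the classify function and its tests are hypothetical:

# the code has one decision, so 100% decision coverage requires test
# cases exercising both the TRUE and the FALSE outcome
def classify(x):
    if x >= 0:
        return "non-negative"
    else:
        return "negative"

assert classify(5) == "non-negative"   # decision outcome: TRUE
assert classify(-3) == "negative"      # decision outcome: FALSE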
4.4.2 White box techniques - LCSAJ coverage
4.4.3 White box techniques - data flow coverage
Just as one would not feel confident about a program without executing every
statement in it as part of some test, one should not feel confident about a program
without having seen the effect of using the value produced by each and every
computation.
Data flow coverages:
All defs = number of exercised definition-use pairs / number of variable definitions
All c(omputation)-uses = number of exercised definition/c-use pairs / number of definition/c-use pairs
All p(redicate)-uses = number of exercised definition/p-use pairs / number of definition/p-use pairs
All uses = number of exercised definition-use pairs / number of definition-use pairs
Branch condition = Boolean operand values executed / total Boolean operand values
Branch condition combination = Boolean operand value combinations executed / total Boolean operand value combinations
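A minimal sketch of definition-use pairs; the function f is hypothetical:

def f(a, b):
    x = a + b         # definition of x
    if x > 10:        # p-use of x: x appears in a predicate
        return x * 2  # c-use of x: x appears in a computation
    return 0

# "all p-uses" requires tests covering each definition/p-use pair:
assert f(6, 6) == 24   # x = 12, predicate TRUE, c-use also reached
assert f(1, 2) == 0    # x = 3, predicate FALSE
# "all uses" requires covering the c-use (x * 2) as well as both
# predicate outcomes for the definition of x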
4.6 Choosing test techniques
Factors used to choose:
Product or system type
Standards
Product's requirements
Available documentation
Determined risks
Schedule constraints
Cost constraints
Used software development life cycle model
Testers' skills and (domain) experience
(additional materials: Unit Test design, exercises)
5.1.1 Test organization & independence
Pluses:
Testers are not influenced by the other project members
Can act as the customers voice
More objectivity in evaluating the product quality issues
Minuses:
Risk of isolation from the development team
Communication issues
Developers can lose ownership of quality
5.1.2 Tasks of the test leader
5.1.2 Tasks of the tester
Test plan (ISTQB definition): it identifies, amongst others, test items, the features to
be tested, the testing tasks, who will do each task, the degree of tester independence,
the test environment, the test design techniques and test measurement techniques to
be used, and the rationale for their choice, and any risks requiring contingency planning.
5.2.1-5.2.2-5.2.3 Test planning
Determine scope
o Study project documents, used software life-cycle specifications, product desired quality attributes
o Identify and communicate with other stakeholders
o Clarify test process expectations
Determine risks
o Choose quality risk analysis method (e.g. FMEA)
o Document the list of risks, probability, impact, priority; identify mitigation actions
Estimate testing effort, determine costs, develop schedule
o Define necessary roles
o Decompose test project into phases and tasks (WBS)
o Schedule tasks, assign resources, set up dependencies
o Develop a budget
o Obtain commitment for the plan from the stakeholders
Refine plan
o Define roles' detailed responsibilities
o Select test strategy, test levels
Test strategy issues (alternatives): preventive approach, reactive approach, risk-based, model (standard) based
Choosing testing techniques (white and/or black box)
o Select metrics to be used for defect tracking, coverage, monitoring
o Define entry and exit criteria
Exit criteria: coverage measures, defect density or trend measures, cost, residual risk estimation, time or market based
Estimation - two approaches:
based on metrics (historical data)
made by domain experts
5.2.5 Test strategies
Test approach (test strategy) = The chosen approaches and decisions made that follow from the
test project's and test team's goal or mission.
The mission is typically effective and efficient testing, and the strategies are the general policies,
rules, and principles that support this mission. Test tactics are the specific policies, techniques,
processes, and the way testing is done.
One way to classify test approaches or strategies is based on the point in time at which the bulk of
the test design work is begun:
Preventative approaches, where tests are designed as early as possible.
Reactive approaches, where test design comes after the software or system has been produced.
Or, another taxonomy:
Analytical - such as risk-based testing
Model-based - such as stochastic testing
Methodical - such as failure-based, experience-based
Process- or standard-compliant
Dynamic and heuristic - such as exploratory testing
Consultative
Regression-averse
5.3 Test progress monitoring, reporting & control
5.4 Configuration management
Configuration Management:
identifies the current configuration (hardware, software) in the life cycle of
the system, together with any changes that are in the course of being
implemented.
provides traceability of changes through the lifecycle of the system.
permits the reconstruction of a system whenever necessary
5.4 Configuration management
5.5 Risk & Testing
5.6 Incident management
Incident = any significant, unplanned event that occurs during testing that
requires subsequent investigation and/or correction
The incident reports raised against product defects are also called bug reports.
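A minimal sketch of a typical incident (bug) report structure; the fields shown are common examples, not a mandated template:

incident = {
    "id": "BUG-101",
    "summary": "Checkout total ignores discount for member accounts",
    "steps_to_reproduce": ["log in as member", "add item > 100", "checkout"],
    "expected_result": "15% discount applied",
    "actual_result": "no discount applied",
    "severity": "major",      # impact on the system
    "priority": "high",       # urgency of the fix
    "status": "new",          # new -> assigned -> fixed -> verified -> closed
}
print(incident["summary"])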
5.6 Incident management
6.1.1 Test tool classification
6.1.2 Tool support - Management of testing
Bug tracking
6.1.3 Tool support - Static testing
Review support:
Process support
Communications support
Team support
Static analysis:
Coding standards
WEB site structure
Metrics
Modeling:
SQL database management
6.1.4 Tool support - Test specification
Test design:
From requirements
From design models
6.1.5 Tool support - Test execution and logging
6.1.6 Tool support - Performance and monitoring
Dynamic analysis:
Time dependencies
Memory leaks
Load testing
Stress testing
Monitoring
6.2.1 Tool support - benefits
6.2.1 Tool support - risks
6.2.2 Test automation - classic mistakes (Shrini Kulkarni)
Developer:
use Test Driven
Development methods
manage Unit Testing
analyze code coverage
use code static analysis
use code profiler to handle
performance issues
Tester:
manage test cases
manage test suites
manage manual testing
manage bug tracking
record / play WEB tests
run load tests
report test results
6.2.2 Tool support - testing in Agile distributed environment
http://agile2008toronto.pbwiki.com/Evolution+of+tools+and+practices+of+a+distributed+agile+team
6.2.2 Introducing a tool into an organization
Note: there are many free testing tools available, some of them also online
( www.testersdesk.com )
ISTQB Foundation Exam guidelines
40 multiple-choice questions (4 options each)
1 hour exam
Score >= 65% (>= 26 correct answers) to pass
Question mix: 50% K1, 30% K2, 20% K3
Chapter 1 - 7 questions
Chapter 2 - 6 questions
Chapter 3 - 3 questions
Chapter 4 - 12 questions
Chapter 5 - 8 questions
Chapter 6 - 4 questions
K1: The candidates will recognize, remember and recall a term or concept.
K2: The candidates can select the reasons or explanations for statements related to the topic. They can summarize, compare, classify and give examples for concepts of testing.
K3: The candidates can select the correct application of a concept or technique and/or apply it to a given context.