Principles of Software Testing: Satzinger, Jackson, and Burd

This document provides an overview of software testing principles. It presents testing as a process of identifying defects by developing test cases and test data, and surveys types of testing such as unit testing, integration testing, usability testing, and user acceptance testing. Unit testing exercises individual components in isolation before integration; integration testing evaluates behavior when components are combined; usability and acceptance testing involve end users to determine whether requirements are fulfilled. Programmers, quality assurance personnel, and users all play a role in software testing.


Principles of Software Testing

Part I
Satzinger, Jackson, and Burd
Testing
 Testing is a process of identifying defects
 Develop test cases and test data
 A test case is a formal description of
• A starting state
• One or more events to which the software must
respond
• The expected response or ending state
 Test data is a set of starting states and events used
to test a module, group of modules, or entire system
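A test case, then, is structured data. Below is a minimal sketch in Python of one way to write that down; the dataclass, the banking fields, and the values are invented for illustration, not part of the course material.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        starting_state: dict         # e.g., {"balance": 100}
        events: list                 # events the software must respond to
        expected_ending_state: dict  # the expected response / ending state

    # Test data: a set of starting states and events used to test a module.
    test_data = [
        TestCase({"balance": 100}, [("withdraw", 30)], {"balance": 70}),
        TestCase({"balance": 100}, [("withdraw", 150)], {"balance": 100}),
    ]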
Testing discipline activities
Test types and detected defects
Unit Testing
 The process of testing individual methods, classes, or components before they are integrated with other software
 Two methods for isolated testing of units
 Driver
• Simulates the behavior of a method that sends a message to the method being tested
 Stub
• Simulates the behavior of a method that has not yet been written
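A small Python sketch of both isolation methods, assuming a hypothetical TaxCalculator unit that depends on a not-yet-written rate service:

    class RateServiceStub:
        """Stub: simulates a collaborator that has not been written yet."""
        def current_rate(self, region):
            return 0.10  # canned answer, enough to exercise the unit

    class TaxCalculator:
        """The unit under test (hypothetical)."""
        def __init__(self, rate_service):
            self.rate_service = rate_service

        def tax(self, amount, region):
            return amount * self.rate_service.current_rate(region)

    def driver():
        """Driver: simulates the caller that sends messages to the unit."""
        calc = TaxCalculator(RateServiceStub())
        assert calc.tax(200, "EU") == 20.0

    driver()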
Integration Testing
 Evaluates the behavior of a group of methods
or classes
 Identifies interface compatibility, unexpected
parameter values or state interaction, and run-time
exceptions
 System test
 Integration test of the behavior of an entire system
or independent subsystem
 Build and smoke test
 System test performed daily or several times a
week
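As a sketch, an integration test exercises two components together so that interface mismatches surface; a build-and-smoke run would execute a handful of such quick checks against every build. The components below are invented:

    class OrderStore:
        """First component: stores order totals."""
        def __init__(self):
            self._orders = {}
        def save(self, order_id, total):
            self._orders[order_id] = total
        def load(self, order_id):
            return self._orders[order_id]

    class BillingService:
        """Second component: depends on OrderStore's interface."""
        def __init__(self, store):
            self.store = store
        def invoice(self, order_id):
            return round(self.store.load(order_id) * 1.2, 2)  # + 20% VAT

    def test_billing_integrates_with_store():
        store = OrderStore()
        store.save("A-1", 50.0)
        assert BillingService(store).invoice("A-1") == 60.0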
Usability Testing
 Determines whether a method, class,
subsystem, or system meets user requirements
 Performance test
 Determines whether a system or subsystem can
meet time-based performance criteria
• Response time specifies the desired or maximum
allowable time limit for software responses to
queries and updates
• Throughput specifies the desired or minimum
number of queries and transactions that must be
processed per minute or hour
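Both criteria can be asserted directly in test code. In the Python sketch below, the thresholds and the query() stand-in are assumptions for illustration, not figures from the course:

    import time

    RESPONSE_LIMIT_S = 0.5        # max allowable response time (assumed)
    THROUGHPUT_MIN_PER_MIN = 600  # min transactions per minute (assumed)

    def query():
        time.sleep(0.01)  # stand-in for a real query or update

    def test_response_time():
        start = time.perf_counter()
        query()
        assert time.perf_counter() - start <= RESPONSE_LIMIT_S

    def test_throughput():
        start = time.perf_counter()
        count = 0
        while time.perf_counter() - start < 1.0:  # sample one second
            query()
            count += 1
        assert count * 60 >= THROUGHPUT_MIN_PER_MIN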
User Acceptance Testing
 Determines whether the system fulfills user
requirements
 Involves the end users
 Acceptance testing is a very formal activity in most development projects
Who Tests Software?
 Programmers
 Unit testing
 Testing buddies can test other programmers’ code
 Users
 Usability and acceptance testing
 Volunteers are frequently used to test beta versions
 Quality assurance personnel
 All testing types except unit and acceptance
 Develop test plans and identify needed changes
Part II

Principles of Software Testing for Testers
Module 0: About This Course
Course Objectives
 After completing this course, you will be a more knowledgeable software tester. You will be better able to:
 Understand and describe the basic concepts of
functional (black box) software testing.
 Identify a number of test styles and techniques and
assess their usefulness in your context.
 Understand the basic application of techniques used to
identify useful ideas for tests.
 Help determine the mission and communicate the status
of your testing with the rest of your project team.
 Characterize a good bug report, peer-review the reports
of your colleagues, and improve your own report writing.
 Understand where key testing concepts apply within the
context of the Rational Unified Process.
Course Outline
0 – About This Course
1 – Software Engineering Practices
2 – Core Concepts of Software Testing
3 – The RUP Testing Discipline
4 – Define Evaluation Mission
5 – Test and Evaluate
6 – Analyze Test Failure
7 – Achieve Acceptable Mission
8 – The RUP Workflow As Context
Principles of Software Testing for
Testers
Module 1: Software Engineering Practices
(Some things Testers should know about them)
Objectives
 Identify some common software
development problems.
 Identify six software engineering practices
for addressing common software
development problems.
 Discuss how a software engineering
process provides supporting context for
software engineering practices.
Symptoms of Software Development Problems
 User or business needs not met
 Requirements churn
 Modules don’t integrate
 Hard to maintain
 Late discovery of flaws
 Poor quality or poor user experience
 Poor performance under load
 No coordinated team effort
 Build-and-release issues
Trace Symptoms to Root Causes

Symptoms               Root Causes                  Software Engineering Practices
Needs not met          Incorrect requirements       Develop Iteratively
Requirements churn     Ambiguous communications     Manage Requirements
Modules don’t fit      Brittle architectures        Use Component Architectures
Hard to maintain       Overwhelming complexity      Model Visually (UML)
Late discovery         Undetected inconsistencies   Continuously Verify Quality
Poor quality           Insufficient testing         Manage Change
Poor performance       Subjective assessment
Colliding developers   Waterfall development
Build-and-release      Uncontrolled change
                       Insufficient automation
Software Engineering Practices Reinforce Each Other
 Develop Iteratively
 Manage Requirements: ensures users are involved as requirements evolve
 Use Component Architectures: validates architectural decisions early on
 Model Visually (UML): addresses complexity of design/implementation incrementally
 Continuously Verify Quality: measures quality early and often
 Manage Change: evolves baselines incrementally

Principles of Software Testing for
Testers
Module 2: Core Concepts of Software
Testing
Objectives
 Introduce foundation topics of functional
testing
 Provide stakeholder-centric visions of
quality and defect
 Explain test ideas
 Introduce test matrices
Module 2 Content Outline
Definitions
 Defining functional testing
 Definitions of quality
 A pragmatic definition of defect
 Dimensions of quality
 Test ideas
 Test idea catalogs
 Test matrices
Functional Testing
 In this course, we adopt a common, broad
current meaning for functional testing. It is
 Black box
 Interested in any externally visible or
measurable attributes of the software other than
performance.
 In functional testing, we think of the
program as a collection of functions
 We test it in terms of its inputs and outputs.
How Some Experts Have Defined Quality
 Fitness for use (Dr. Joseph M. Juran)
 The totality of features and characteristics of a
product that bear on its ability to satisfy a given
need (American Society for Quality)
 Conformance with requirements (Philip Crosby)
 The total composite product and service
characteristics of marketing, engineering,
manufacturing and maintenance through which
the product and service in use will meet
expectations of the customer (Armand V.
Feigenbaum)
 Note absence of “conforms to
specifications.”
Quality As Satisfiers and Dissatisfiers
 Joseph Juran distinguishes between
Customer Satisfiers and Dissatisfiers as key
dimensions of quality:
 Customer Satisfiers
• the right features
• adequate instruction
 Dissatisfiers
• unreliable
• hard to use
• too slow
• incompatible with the customer’s equipment
A Working Definition of Quality

Quality is value to some person.
– Gerald M. Weinberg

Change Requests and Quality
 A “defect” – in the eyes of a project stakeholder – can include anything about the program that causes the program to have lower value.
 It’s appropriate to report any aspect of the software that, in your opinion (or in the opinion of a stakeholder whose interests you advocate), causes the program to have lower value.
Dimensions of Quality: FURPS
 Functionality: e.g., test the accurate workings of each usage scenario
 Usability: e.g., test the application from the perspective of convenience to the end user
 Reliability: e.g., test that the application behaves consistently and predictably
 Performance: e.g., test online response under average and peak loading
 Supportability: e.g., test the ability to maintain and support the application under production use
A Broader Definition of Dimensions of Quality
 Accessibility
 Capability
 Compatibility
 Concurrency
 Conformance to standards
 Efficiency
 Installability and uninstallability
 Localizability
 Maintainability
 Performance
 Portability
 Reliability
 Scalability
 Security
 Supportability
 Testability
 Usability

Collectively, these are often called Qualities of Service, Nonfunctional Requirements, Attributes, or simply the -ilities.
Test Ideas
 A test idea is a brief statement that
identifies a test that might be useful.
 A test idea differs from a test case, in that
the test idea contains no specification of the
test workings, only the essence of the idea
behind the test.
 Test ideas are generators for test cases:
potential test cases are derived from a test
ideas list.
 A key question for the tester or test analyst is which ideas are worth trying.
Exercise 2.3: Brainstorm Test Ideas (1/2)
 We’re about to brainstorm, so let’s review…
 Ground Rules for Brainstorming
 The goal is to get lots of ideas. You brainstorm together
to discover categories of possible tests—good ideas
that you can refine later.
 There are more great ideas out there than you think.
 Don’t criticize others’ contributions.
 Jokes are OK, and are often valuable.
 Work later, alone or in a much smaller group, to
eliminate redundancy, cut bad ideas, and refine and
optimize the specific tests.
 Often, these meetings have a facilitator (who runs the
meeting) and a recorder (who writes good stuff onto
flipcharts). These two keep their opinions to themselves.
Exercise 2.3: Brainstorm Test Ideas (2/2)
 A field can accept integer values between
20 and 50.
 What tests should you try?
A Test Ideas List for Integer-Input Tests
 Common answers to the exercise would include:

Test        Why it’s interesting          Expected result
20          Smallest valid value          Accepts it
19          Smallest - 1                  Reject, error msg
0           0 is always interesting       Reject, error msg
Blank       Empty field, what’s it do?    Reject? Ignore?
49          Valid value                   Accepts it
50          Largest valid value           Accepts it
51          Largest + 1                   Reject, error msg
-1          Negative number               Reject, error msg
4294967296  2^32, overflow integer?       Reject, error msg
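The list above translates almost mechanically into a parametrized test. In this Python sketch, accepts() is a hypothetical stand-in for the field under test:

    import pytest

    def accepts(text):
        """Hypothetical stand-in for the field being tested."""
        try:
            n = int(text)
        except ValueError:
            return False
        return 20 <= n <= 50

    @pytest.mark.parametrize("value, should_accept", [
        ("20", True),           # smallest valid value
        ("19", False),          # smallest - 1
        ("0", False),           # 0 is always interesting
        ("", False),            # empty field
        ("49", True),           # valid value
        ("50", True),           # largest valid value
        ("51", False),          # largest + 1
        ("-1", False),          # negative number
        ("4294967296", False),  # 2^32, overflow integer?
    ])
    def test_integer_field(value, should_accept):
        assert accepts(value) == should_accept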
Discussion 2.4: Where Do Test Ideas Come From?
 Where would you derive Test Ideas Lists?
 Models
 Specifications
 Customer complaints
 Brainstorm sessions among colleagues
A Catalog of Test Ideas for Integer-Input Tests
 Nothing
 Valid value
 At LB of value
 At UB of value
 At LB of value - 1
 At UB of value + 1
 Outside of LB of value
 Outside of UB of value
 0
 Negative
 At LB number of digits or chars
 At UB number of digits or chars
 Empty field (clear the default value)
 Outside of UB number of digits or chars
 Non-digits
 Wrong data type (e.g., decimal into integer)
 Expressions
 Space
 Non-printing char (e.g., Ctrl+char)
 DOS filename reserved chars (e.g., "\ * . :")
 Upper ASCII (128-254)
 Upper case chars
 Lower case chars
 Modifiers (e.g., Ctrl, Alt, Shift-Ctrl, etc.)
 Function key (F2, F3, F4, etc.)
The Test-Ideas Catalog
 A test-ideas catalog is a list of related test
ideas that are usable under many
circumstances.
 For example, the test ideas for numeric input
fields can be catalogued together and used for
any numeric input field.
 In many situations, these catalogs are
sufficient test documentation. That is, an
experienced tester can often proceed with
testing directly from these without creating
documented test cases.
Apply a Test Ideas Catalog Using a Test Matrix

(Diagram: a matrix with the catalog’s test ideas as rows and one column per field name.)
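One plausible way to hold such a matrix in code is to cross a catalog with the list of fields and track the status of each cell. The catalog entries and field names below are invented placeholders:

    CATALOG = ["nothing", "valid value", "at LB", "at UB",
               "LB - 1", "UB + 1", "negative", "non-digits"]
    FIELDS = ["quantity", "age", "discount_percent"]

    # One cell per (field, test idea); all start out untested.
    matrix = {(f, idea): "untested" for f in FIELDS for idea in CATALOG}

    matrix[("age", "at LB")] = "pass"  # mark cells as you execute them
    remaining = sum(1 for v in matrix.values() if v == "untested")
    print(remaining, "cells left to test")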
Review: Core Concepts of Software Testing
 What is Quality?
 Who are the Stakeholders?
 What is a Defect?
 What are Dimensions of Quality?
 What are Test Ideas?
 Where are Test Ideas useful?
 Give some examples of Test Ideas.
 Explain how a catalog of Test Ideas could
be applied to a Test Matrix.
Principles of Software Testing for
Testers
Module 4: Define Evaluation Mission
So? Purpose of Testing?
 The typical testing group has two key
priorities.
 Find the bugs (preferably in priority order).
 Assess the condition of the whole product
(as a user will see it).
 Sometimes, these conflict
 The mission of assessment is the underlying
reason for testing, from management’s
viewpoint. But if you aren’t hammering hard on
the program, you can miss key risks.
Missions of Test Groups Can Vary
 Find defects
 Maximize bug count
 Block premature product releases
 Help managers make ship / no-ship decisions
 Assess quality
 Minimize technical support costs
 Conform to regulations
 Minimize safety-related lawsuit risk
 Assess conformance to specification
 Find safe scenarios for use of the product (find ways to
get it to work, in spite of the bugs)
 Verify correctness of the product
 Assure quality
A Different Take on Mission: Public vs. Private Bugs
 A programmer’s public bug rate includes all
bugs left in the code at check-in.
 A programmer’s private bug rate includes
all the bugs that are produced, including the
ones fixed before check-in.
 Estimates of private bug rates have ranged
from 15 to 150 bugs per 100 statements.
 What does this tell us about our task?
Defining the Test Approach
 The test approach (or “testing strategy”)
specifies the techniques that will be used to
accomplish the test mission.
 The test approach also specifies how the
techniques will be used.
 A good test approach is:
 Diversified
 Risk-focused
 Product-specific
 Practical
 Defensible
Heuristics for Evaluating Testing Approach
 James Bach collected a series of heuristics
for evaluating your test approach. For
example, he says:
 Testing should be optimized to find important
problems fast, rather than attempting to find all
problems with equal urgency.
 Please note that these are heuristics – they won’t always be the best choice for your context. But in different contexts, you’ll find different ones very useful.
What Test Documentation Should You Use?
 Test planning standards and templates
 Examples
 Some benefits and costs of using IEEE-829
standard based templates
 When are these appropriate?
 Thinking about your requirements for test
documentation
 Requirements considerations
 Questions to elicit information about test
documentation requirements for your project
Write a Purpose Statement for Test Documentation
 Try to describe your core documentation
requirements in one sentence that doesn’t
have more than three components.
 Examples:
 The test documentation set will primarily
support our efforts to find bugs in this version,
to delegate work, and to track status.
 The test documentation set will support ongoing
product and test maintenance over at least 10
years, will provide training material for new
group members, and will create archives
suitable for regulatory or litigation use.
Review: Define Evaluation Mission
 What is a Test Mission?
 What is your Test Mission?
 What makes a good Test Approach (Test
Strategy)?
 What is a Test Documentation Mission?
 What is your Test Documentation Goal?
Principles of Software Testing for
Testers
Module 5: Test & Evaluate
Test and Evaluate – Part One: Test
 In this module, we drill into
Test and Evaluate
 This addresses the “How?”
question:
 How will you test those
things?
Test and Evaluate – Part One: Test
 This module focuses
on the activity
Implement Test
 Earlier, we covered
Test-Idea Lists, which
are input here
 In the next module,
we’ll cover Analyze
Test Failures, the
second half of Test
and Evaluate
Review: Defining the Test Approach
 In Module 4, we covered Test Approach
 A good test approach is:
 Diversified
 Risk-focused
 Product-specific
 Practical
 Defensible
 The techniques you apply should follow
your test approach
Discussion Exercise 5.1: Test Techniques
 There are as many as 200 published testing
techniques. Many of the ideas are
overlapping, but there are common themes.
 Similar sounding terms often mean different
things, e.g.:
 User testing
 Usability testing
 User interface testing
 What are the differences among these
techniques?
Dimensions of Test Techniques
 Think of the testing you do in terms of five
dimensions:
 Testers: who does the testing.
 Coverage: what gets tested.
 Potential problems: why you're testing (what
risk you're testing for).
 Activities: how you test.
 Evaluation: how to tell whether the test passed
or failed.
 Test techniques often focus on one or two
of these, leaving the rest to the skill and
imagination of the tester.
Test Techniques—Dominant Test Approaches
 Of the 200+ published Functional Testing techniques,
there are ten basic themes.
 They capture the techniques in actual practice.
 In this course, we call them:
 Function testing
 Equivalence analysis
 Specification-based testing
 Risk-based testing
 Stress testing
 Regression testing
 Exploratory testing
 User testing
 Scenario testing
 Stochastic or Random testing
“So Which Technique Is the Best?”
 Each has strengths and weaknesses
 Think in terms of complement
 There is no “one true way”
 Mixing techniques can improve coverage

(Diagram: Techniques A–H arranged around the five dimensions: Testers, Coverage, Potential problems, Activities, Evaluation.)
Apply Techniques According to the Lifecycle
 Test Approach changes over the project
 Some techniques work well in early phases; others in later ones
 Align the techniques to iteration objectives

Inception / Elaboration                      Construction / Transition
A limited set of focused tests               Many varied tests
A few components of software under test      Large system under test
Simple test environment                      Complex test environment
Focus on architectural & requirement risks   Focus on deployment risks

Module 5 Agenda
 Overview of the workflow: Test and Evaluate
 Defining test techniques
 Individual techniques
 Function testing
 Equivalence analysis
 Specification-based testing
 Risk-based testing
 Stress testing
 Regression testing
 Exploratory testing
 User testing
 Scenario testing
 Stochastic or Random testing
 Using techniques together
At a Glance: Function Testing
Tag line: Black box unit testing
Objective: Test each function thoroughly, one at a time.
Testers: Any
Coverage: Each function and user-visible variable
Potential problems: A function does not work in isolation
Activities: Whatever works
Evaluation: Whatever works
Complexity: Simple
Harshness: Varies
SUT readiness: Any stage
Strengths & Weaknesses: Function Testing
 Representative cases
 Spreadsheet, test each item in isolation.
 Database, test each report in isolation
 Strengths
 Thorough analysis of each item tested
 Easy to do as each function is implemented
 Blind spots
 Misses interactions
 Misses exploration of the benefits offered by the
program.
At a Glance: Equivalence Analysis (1/2)
Tag line: Partitioning, boundary analysis, domain testing
Objective: There are too many test cases to run. Use a stratified sampling strategy to select a few test cases from a huge population.
Testers: Any
Coverage: All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables.
Potential problems: Data, configuration, error handling
At a Glance: Equivalence Analysis (2/2)
Activities: Divide the set of possible values of a field into subsets, and pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a “best representative” for each subset, and to run tests with these representatives. Advanced approach: combine tests of several “best representatives”; there are several approaches to choosing an optimal small set of combinations.
Evaluation: Determined by the data
Complexity: Simple
Harshness: Designed to discover harsh single-variable tests and harsh combinations of a few variables
SUT readiness: Any stage
Strengths & Weaknesses: Equivalence Analysis
 Representative cases
 Equivalence analysis of a simple numeric field.
 Printer compatibility testing (multidimensional variable,
doesn’t map to a simple numeric field, but stratified
sampling is essential)
 Strengths
 Find highest probability errors with a relatively small set
of tests.
 Intuitively clear approach, generalizes well
 Blind spots
 Errors that are not at boundaries or in obvious special
cases.
 The actual sets of possible values are often unknowable.
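A sketch of the core move, stratified sampling biased toward boundaries, using the 20–50 integer field from the earlier exercise; the partition names and members are illustrative:

    partitions = {
        "below range": [-1, 0, 19],
        "in range":    [20, 35, 50],
        "above range": [51, 4294967296],
    }

    def representatives(parts):
        """Pick boundary values as each subset's best representatives."""
        picks = []
        for name, values in parts.items():
            picks.append((name, min(values)))
            picks.append((name, max(values)))
        return picks

    for subset, value in representatives(partitions):
        print(f"test {value!r} as a representative of {subset}")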
Optional Exercise 5.2: GUI Equivalence Analysis
 Pick an app that you know and some dialogs
 MS Word and its Print, Page setup, Font format dialogs
 Select a dialog
 Identify each field, and for each field
• What is the type of the field (integer, real, string, ...)?
• List the range of entries that are “valid” for the field
• Partition the field and identify boundary conditions
• List the entries that are almost too extreme and too
extreme for the field
• List a few test cases for the field and explain why the
values you chose are the most powerful representatives of
their sets (for showing a bug)
• Identify any constraints imposed on this field by other
fields
At a Glance: Specification-Based Testing
Tag line: Verify every claim
Objective: Check conformance with every statement in every spec, requirements document, etc.
Testers: Any
Coverage: Documented requirements, features, etc.
Potential problems: Mismatch of implementation to spec
Activities: Write & execute tests based on the specs. Review and manage docs & traceability.
Evaluation: Does behavior match the spec?
Complexity: Depends on the spec
Harshness: Depends on the spec
SUT readiness: As soon as modules are available
Strengths & Weaknesses: Spec-Based Testing
 Representative cases
 Traceability matrix, tracks test cases associated with each
specification item.
 User documentation testing
 Strengths
 Critical defense against warranty claims, fraud charges,
loss of credibility with customers.
 Effective for managing scope / expectations of regulatory-
driven testing
 Reduces support costs / customer complaints by ensuring
that no false or misleading representations are made to
customers.
 Blind spots
 Any issues not in the specs, or treated badly in the specs/documentation.
Traceability Tool for Specification-Based Testing
The Traceability Matrix
Stmt 1 Stmt 2 Stmt 3 Stmt 4 Stmt 5
Test 1 X X X
Test 2 X X
Test 3 X X X
Test 4 X X
Test 5 X X
Test 6 X X
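The same matrix is easy to keep as a data structure that can also report gaps. The entries below are made up and do not reproduce the X placement above:

    coverage = {
        "Test 1": {"Stmt 1", "Stmt 2", "Stmt 4"},
        "Test 2": {"Stmt 2", "Stmt 3"},
        "Test 3": {"Stmt 1", "Stmt 3", "Stmt 5"},
        "Test 4": {"Stmt 4", "Stmt 5"},
    }
    all_statements = {f"Stmt {i}" for i in range(1, 6)}

    uncovered = all_statements - set().union(*coverage.values())
    print("statements with no test:", sorted(uncovered) or "none")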
Optional Exercise 5.5: What “Specs” Can You Use?
 Challenge:
 Getting information in the absence of a spec
 What substitutes are available?
 Example:
 The user manual – think of this as a commercial
warranty for what your product does.
 What other “specs” can you/should you be
using to test?
Exercise 5.5—Specification-Based Testing
 Here are some ideas for sources that you can consult when specifications are incomplete or incorrect.
 Software change memos that come with new
builds of the program
 User manual draft (and previous version’s
manual)
 Product literature
 Published style guide and UI standards
Definitions—Risk-Based Testing
 Three key meanings:
1. Find errors (risk-based approach to the technical
tasks of testing)
2. Manage the process of finding errors (risk-based
test management)
3. Manage the testing project and the risk posed by
(and to) testing in its relationship to the overall
project (risk-based project management)

 We’ll look primarily at risk-based testing (#1), proceeding later to risk-based test management.
 The project management risks are very important, but out of scope for this class.
At a Glance: Risk-Based Testing
Tag line: Find big bugs first
Objective: Define, prioritize, and refine tests in terms of the relative risk of issues we could test for
Testers: Any
Coverage: By identified risk
Potential problems: Identifiable risks
Activities: Use qualities of service, risk heuristics, and bug patterns to identify risks
Evaluation: Varies
Complexity: Any
Harshness: Harsh
SUT readiness: Any stage
Strengths & Weaknesses: Risk-Based Testing
 Representative cases
 Equivalence class analysis, reformulated.
 Test in order of frequency of use.
 Stress tests, error handling tests, security tests.
 Sample from predicted-bugs list.
 Strengths
 Optimal prioritization (if we get the risk list right)
 High power tests
 Blind spots
 Risks that were not identified, or that turn out to be more likely than expected.
 Some “risk-driven” testers seem to operate subjectively.
• How will I know what coverage I’ve reached?
• Do I know that I haven’t missed something critical?
Optional Exercise 5.6: Risk-Based Testing
 You are testing Amazon.com
(Or pick another familiar application)
 First brainstorm:
 What are the functional areas of the app?
 Then evaluate risks:
• What are some of the ways that each of these
could fail?
• How likely do you think they are to fail? Why?
• How serious would each of the failure types be?
At a Glance: Stress Testing
Tag line: Overwhelm the product
Objective: Learn what failure at extremes tells about changes needed in the program’s handling of normal cases
Testers: Specialists
Coverage: Limited
Potential problems: Error handling weaknesses
Activities: Specialized
Evaluation: Varies
Complexity: Varies
Harshness: Extreme
SUT readiness: Late stage
Strengths & Weaknesses: Stress Testing
 Representative cases
 Buffer overflow bugs
 High volumes of data, device connections, long
transaction chains
 Low memory conditions, device failures, viruses, other
crises
 Extreme load
 Strengths
 Expose weaknesses that will arise in the field.
 Expose security risks.
 Blind spots
 Weaknesses that are not made more visible by stress.
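A toy stress loop in Python, escalating input size until error handling gives way; parse_record() is a stand-in for real code under test:

    def parse_record(text):
        """Stand-in for the real code under stress."""
        return text.split(",")

    def stress():
        size = 1
        while size <= 10**7:          # escalate until something breaks
            try:
                parse_record("x," * size)
            except Exception as exc:  # includes MemoryError
                print(f"failed near {size * 2:,} chars: {exc!r}")
                return
            size *= 10
        print("survived every size tried; push further or vary the data")

    stress()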
At a Glance: Regression Testing
Tag line: Automated testing after changes
Objective: Detect unforeseen consequences of change
Testers: Varies
Coverage: Varies
Potential problems: Side effects of changes; unsuccessful bug fixes
Activities: Create automated test suites and run against every (major) build
Complexity: Varies
Evaluation: Varies
Harshness: Varies
SUT readiness: For unit – early; for GUI – late
Strengths & Weaknesses—Regression Testing
 Representative cases
 Bug regression, old fix regression, general functional
regression
 Automated GUI regression test suites
 Strengths
 Cheap to execute
 Configuration testing
 Regulator friendly
 Blind spots
 “Immunization curve”
 Anything not covered in the regression suite
 Cost of maintaining the regression suite
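A minimal regression check in the pytest style: rerun a fixed suite against each build and compare with recorded expectations. All names and values are invented:

    # Expected outputs recorded from the last accepted build.
    EXPECTED = {"report_total": 1234.50, "row_count": 42}

    def run_build_under_test():
        """Stand-in: exercise the new build, collect the same outputs."""
        return {"report_total": 1234.50, "row_count": 42}

    def test_no_regression():
        # Any unforeseen consequence of change shows up as a mismatch.
        assert run_build_under_test() == EXPECTED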
At a Glance: Exploratory Testing
Tag line: Simultaneous learning, planning, and testing
Objective: Simultaneously learn about the product and about the test strategies to reveal the product and its defects
Testers: Explorers
Coverage: Hard to assess
Potential problems: Everything unforeseen by planned testing techniques
Activities: Learn, plan, and test at the same time
Evaluation: Varies
Complexity: Varies
Harshness: Varies
SUT readiness: Medium to late; use cases must work
Strengths & Weaknesses: Exploratory Testing
 Representative cases
 Skilled exploratory testing of the full product
 Rapid testing & emergency testing (including thrown-over-
the-wall test-it-today)
 Troubleshooting / follow-up testing of defects.
 Strengths
 Customer-focused, risk-focused
 Responsive to changing circumstances
 Finds bugs that are otherwise missed
 Blind spots
 The less we know, the more we risk missing.
 Limited by each tester’s weaknesses (can mitigate this with
careful management)
 This is skilled work, juniors aren’t very good at it.
At a Glance: User Testing
Tag line: Strive for realism. Let’s try real humans (for a change).
Objective: Identify failures in the overall human/machine/software system.
Testers: Users
Coverage: Very hard to measure
Potential problems: Items that will be missed by anyone other than an actual user
Activities: Directed by user
Evaluation: User’s assessment, with guidance
Complexity: Varies
Harshness: Limited
SUT readiness: Late; has to be fully operable
Strengths & Weaknesses—User Testing
 Representative cases
 Beta testing
 In-house lab using a stratified sample of target market
 Usability testing
 Strengths
 Expose design issues
 Find areas with high error rates
 Can be monitored with flight recorders
 Can focus in-house tests on controversial areas
 Blind spots
 Coverage not assured
 Weak test cases
 Beta test technical results are mixed
 Must distinguish marketing betas from technical betas
At a Glance: Scenario Testing
Tag line: Instantiation of a use case. Do something useful, interesting, and complex.
Objective: Challenging cases to reflect real use
Testers: Any
Coverage: Whatever the stories touch
Potential problems: Complex interactions that happen in real use by experienced users
Activities: Interview stakeholders & write screenplays, then implement tests
Evaluation: Any
Complexity: High
Harshness: Varies
SUT readiness: Late. Requires stable, integrated functionality.
Strengths & Weaknesses: Scenario Testing
 Representative cases
 Use cases, or sequences involving combinations of use
cases.
 Appraise product against business rules, customer data,
competitors’ output
 Hans Buwalda’s “soap opera testing.”
 Strengths
 Complex, realistic events. Can handle (help with)
situations that are too complex to model.
 Exposes failures that occur (develop) over time
 Blind spots
 Single function failures can make this test inefficient.
 Must think carefully to achieve good coverage.
At a Glance: Stochastic or Random Testing (1/2)
Tag line: Monkey testing. High-volume testing with new cases all the time.
Objective: Have the computer create, execute, and evaluate huge numbers of tests. The individual tests are not all that powerful, nor all that compelling. The power of the approach lies in the large number of tests. These broaden the sample, and they may test the program over a long period of time, giving us insight into longer-term issues.
At a Glance: Stochastic or Random Testing (2/2)
Testers: Machines
Coverage: Broad but shallow. Problems with stateful apps.
Potential problems: Crashes and exceptions
Activities: Focus on test generation
Evaluation: Generic, state-based
Complexity: Complex to generate, but individual tests are simple
Harshness: Weak individual tests, but huge numbers of them
SUT readiness: Any
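A monkey-testing sketch in Python: huge numbers of random inputs, evaluated only generically (did anything crash or raise?). The function under test, with its planted defect, is a placeholder:

    import random

    def under_test(a, b):
        """Stand-in with a planted defect: fails when b == 0."""
        return a / b

    def monkey(iterations=100_000, seed=1):
        rng = random.Random(seed)  # seeded so failures can be replayed
        for i in range(iterations):
            a, b = rng.randint(-10**9, 10**9), rng.randint(-5, 5)
            try:
                under_test(a, b)   # generic evaluation: no exception = pass
            except Exception as exc:
                print(f"iteration {i}: {exc!r} on inputs {(a, b)}")
                return
        print("no crashes in", iterations, "random tests")

    monkey()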
Combining Techniques (Revisited)
 A test approach should be diversified
 Applying opposite techniques can improve coverage
 Often one technique can extend another

(Diagram: Techniques A–H arranged around the five dimensions: Testers, Coverage, Potential problems, Activities, Evaluation.)
Applying Opposite Techniques to Boost Coverage
Contrast these two techniques:

Regression
 Inputs: old test cases and analyses leading to new test cases
 Outputs: archival test cases, preferably well documented, and bug reports
 Better for: reuse across multi-version products

Exploration
 Inputs: models or other analyses that yield new tests
 Outputs: scribbles and bug reports
 Better for: finding new bugs; scouting new areas, risks, or ideas
Applying Complementary Techniques Together
 Regression testing alone suffers fatigue
 The bugs get fixed and new runs add little info
 Symptom of weak coverage
 Combine automation w/ suitable variance
 E.g., risk-based equivalence analysis
 Coverage of the combination can beat the sum of the parts

(Diagram: Equivalence, Risk-based, and Regression testing overlapping.)
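One sketch of adding that variance: keep the automated regression assertions, but draw each run’s inputs at random from the equivalence partitions (seeded, so any failure is reproducible). The accepts() stand-in reuses the earlier 20–50 field:

    import random

    PARTITIONS = {
        "valid": range(20, 51),
        "low":   range(-5, 20),
        "high":  range(51, 60),
    }

    def accepts(n):
        """Hypothetical field under test."""
        return 20 <= n <= 50

    def test_partitions_with_variance(seed=None):
        rng = random.Random(seed)
        for name, pool in PARTITIONS.items():
            value = rng.choice(list(pool))  # fresh representative each run
            assert accepts(value) == (name == "valid"), (name, value)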
How To Adopt New Techniques
1. Answer these questions:
 What techniques do you use in your test approach
now?
 What is its greatest shortcoming?
 What one technique could you add to make the
greatest improvement, consistent with a good test
approach:
• Risk-focused?
• Product-specific?
• Practical?
• Defensible?
2. Apply that additional technique until proficient
3. Iterate
