ST Chapter2

This document discusses various techniques for software testing, including both static (non-execution) and dynamic (execution-based) methods. It covers topics such as reviews/inspections, static analysis, black box testing, white box testing, test planning, test case design, and different levels of testing from unit to system. The goal is to validate and verify software functionality and quality through a systematic approach.

Uploaded by Hung Pham

Đà Nẵng, August 31, 2023

SOFTWARE TESTING

Chapter 2.
Software Testing Techniques

Nguyễn Quang Vũ, PhD.

E-mail: [email protected]
Mobile: 0901.982.982
Types of systematic technique

Static (non-execution)
• examination of documentation, source code listings, etc.

Dynamic
• Functional / Non-Functional (Black Box)
  • based on behaviour / functionality of software
• Structural (White Box)
  • based on structure of software
A Brief Introduction

The taxonomy of techniques (reconstructed from the slide diagram):

Static
• Reviews: Inspection, Walkthroughs, Desk-checking, etc.
• Static Analysis: Control Flow, Data Flow, Symbolic Execution, etc.

Dynamic
• Behavioural
  • Non-functional: Usability, Performance, etc.
  • Functional: Equivalence Partitioning, Boundary Value Analysis, Cause-Effect Graphing, Random, State Transition, etc.
• Structural: Statement, Arcs, Branch/Decision, Branch Condition, Branch Condition Combination, LCSAJ, Definition-Use, etc.
Static Testing
• Reviews and the test process
• Types of review
• Static analysis
People techniques
• individual:
• desk-checking, data-stepping, proof-reading
• group:
• Reviews (informal & formal): for consensus
• Walkthrough: for education
• Inspection (most formal): to find faults

Static techniques do not execute code


Benefits of reviews
• Development productivity improvement
• Reduced development timescales
• Reduced testing time and cost
• Lifetime cost reductions
• Reduced fault levels
• Improved customer relations
• etc.
Reviews are cost-effective
• 10 times reduction in faults reaching test,
testing cost reduced by 50% to 80%
(Handbook of Walkthroughs, Inspections & Technical Reviews - Freedman & Weinberg)

• reduce faults by a factor of 10


(Structured Walkthroughs – Yourdon)
• 25% reduction in schedules, remove 80% -
95% of faults at each stage, 28 times reduction
in maintenance cost, many others
(Software Inspection - Gilb & Graham)
What can be Inspected?
Anything written down can be Inspected
• policy, strategy, business plans, marketing or advertising
material, contracts
• system requirements, feasibility studies, acceptance test
plans
• test plans, test designs, test cases, test results
• system designs, logical & physical
• software code
• user manuals, procedures, training material
What can be reviewed?

• anything which could be Inspected


• i.e. anything written down
• plans, visions, “big picture”, strategic
directions, ideas
• project progress
• work completed to schedule, etc.
• “Should we develop this” marketing options
Costs of reviews
• Rough guide: 5%-15% of development effort
• half day a week is 10%
• Effort required for reviews
• planning (by leader / moderator)
• preparation / self-study checking
• meeting
• fixing / editing / follow-up
• recording & analysis of statistics / metrics
• process improvement (should!)
Types of review of documents
Informal Review: undocumented
• widely viewed as useful and cheap (but no one
can prove it!) A helpful first step for chaotic
organisations.
Technical Review: (or peer review)
• includes peer and technical experts, no
management participation. Normally
documented, fault-finding. Can be rather
subjective.
Decision-making Review:
• group discusses document and makes a
decision about the content, e.g. how
something should be done, go or no-go
decision, or technical comments
Types of review of documents
Walkthrough
• author guides the group through a document and his or
her thought processes, so all understand the same
thing, consensus on changes to make

Inspection:
• formal individual and group checking, using sources and
standards, according to generic and specific rules and
checklists, using entry and exit criteria, Leader must be
trained & certified, metrics required
Reviews in general 1
• Objectives / goals
• validation & verification against
specifications & standards
• achieve consensus (excluding Inspection)
• process improvement (ideal, included in
Inspection)
Reviews in general 2
• Activities
• planning
• overview / kick-off meeting (Inspection)
• preparation / individual checking
• review meeting (not always)
• follow-up (for some types)
• metrics recording & analysis (Inspections
and sometimes reviews)
Reviews in general 3
• Roles and responsibilities
• Leader / moderator - plans the review /
Inspection, chooses participants, helps &
encourages, conducts the meeting,
performs follow-up, manages metrics
• Author of the document being reviewed /
Inspected
• Reviewers / Inspectors - specialised fault-
finding roles for Inspection
• Managers - excluded from some types of
review, need to plan project time for review
/ Inspection
• Others: e.g. Inspection/ review Co-ordinator
Reviews in general 4
• Deliverables
• Changes (edits) in review product
• Change requests for source documents (predecessor
documents to product being reviewed / Inspected)
• Process improvement suggestions
• to the review / Inspection process
• to the development process which produced the
product just reviewed / Inspected
• Metrics (Inspection and some types of review)
Reviews in general 5
• Pitfalls (they don’t always work!)
• lack of training in the technique (especially
Inspection, the most formal)
• lack of or quality of documentation - what is
being reviewed / Inspected
• Lack of management support - “lip service”
- want them done, but don’t allow time for
them to happen in project schedules
• Failure to improve processes (gets
disheartening just getting better at finding
the same thing over again)
Inspection is more and better
• entry criteria
• training
• optimum checking rate
• prioritising the words
• standards
• process improvement
• exit criteria
• quantified estimates of remaining major faults per page
At first glance ..
• Here's a document: review this (or Inspect it)
Reviews: time and size determine rate
• Time: 2 hrs?
• Size: 100 pages?
• Checking rate: 50 pages per hour
Review "Thoroughness"?
• an ordinary review finds some faults (some minor, one major); these are fixed, and the document is then considered corrected and OK
Inspection: time and rate determine size
• Time: 2 hrs?
• Optimum checking rate: 1 page* per hour
• Size: 2 pages (at the optimum rate)

* 1 page = 300 important words
Software (dynamic) testing techniques
• Black-Box (Functional/Non-Functional) Testing:
Is testing that ignores the internal mechanism
of a system or component and focuses solely on
the outputs generated in response to selected
inputs and execution conditions
• White-Box (Glass-Box/Structural) Testing: is
testing that takes into account the internal
mechanism of a system or component
• Grey-Box Testing
A Brief Introduction
• Test Plan: A test plan is a document describing the scope,
approach, resources, and schedule of intended test
activities. It identifies test items, the features to be tested,
the testing tasks, who will do each task, and any risks
requiring contingency plans.
• Test cases: A test case is a set of test inputs, execution
conditions, and expected results developed for a particular
objective, such as to exercise a particular program path or
to verify compliance with a specific requirement.
An example!
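As an illustration of the two definitions above, a minimal sketch in Python: a test case bundles inputs, execution conditions, and an expected result. The feature under test (a withdraw operation) and its values are hypothetical, not from the slides.

```python
# A test case bundles inputs, execution conditions, and an expected result.
# The feature under test (a withdraw operation) is a hypothetical example.

def withdraw(balance, amount):
    """Return the new balance, or raise ValueError for an invalid request."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

test_cases = [
    # (test id, inputs, expected result)
    ("TC-01 valid withdrawal", (100, 30), 70),
    ("TC-02 withdraw exact balance", (100, 100), 0),
]

for name, (balance, amount), expected in test_cases:
    assert withdraw(balance, amount) == expected, name
```

Each row exercises one objective (a particular path or requirement), which is exactly what a test plan enumerates and schedules.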
Six levels of (dynamic) testing
• Unit Testing: Unit testing is the testing of individual hardware or software
units or groups of related units
• Integration Testing: Integration test is testing in which software
components, hardware components, or both are combined and tested to
evaluate the interaction between them
• System Testing: System testing is testing conducted on a complete,
integrated system to evaluate the system compliance with its specified
requirements
• Functional Testing
• Stress Testing
• Performance Testing
• Usability Testing
• Security Testing
• …
Six levels of testing (cont'd)
• Acceptance Testing: Acceptance testing is formal testing
conducted to determine whether or not a system satisfies
its acceptance criteria (the criteria the system must satisfy
to be accepted by a customer) and to enable the customer
to determine whether or not to accept the system
• Regression Testing: Regression testing is selective retesting
of a system or component to verify that modifications have
not caused unintended effects and that the system or
component still complies with its specified requirements
• Alpha/Beta Testing
Unit/Component testing
• lowest level
• tested in isolation
• most thorough look at detail
• error handling
• interfaces
• usually done by programmer
• also known as unit, module, program testing
Integration testing
in the small
• more than one (tested) component
• communication between components
• what the set can perform that is not possible
individually
• non-functional aspects if possible
• integration strategy: big-bang vs incremental (top-
down, bottom-up, functional)
• done by designers, analysts, or
independent testers
Big-Bang Integration
• In theory:
• if we have already tested components why not
just combine them all at once? Wouldn’t this save
time?
• (based on false assumption of no faults)
• In practice:
• takes longer to locate and fix faults
• re-testing after fixes more extensive
• end result? takes more time
Incremental Integration
• Baseline 0: tested component
• Baseline 1: two components
• Baseline 2: three components, etc.
• Advantages:
• easier fault location and fix
• easier recovery from disaster / problems
• interfaces should have been tested in component
tests, but ..
• add to tested baseline
Top-Down Integration
• Baselines:
  • baseline 0: component a
  • baseline 1: a + b
  • baseline 2: a + b + c
  • baseline 3: a + b + c + d
  • etc.
• Need to call lower-level components not yet integrated
• Stubs: simulate missing components
(component hierarchy diagram: a at the top; b, c below it; then d, e, f, g; then h, i, j, k, l, m; with n, o at the lowest level)
Stubs
• Stub (Baan: dummy sessions) replaces a called component
for integration testing
• Keep it Simple
• print/display name (I have been called)
• reply to calling module (single value)
• computed reply (variety of values)
• prompt for reply from tester
• search list of replies
• provide timing delay
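Following the guidelines above, a stub is only a few lines. This Python sketch uses hypothetical component names (tax_service_stub, invoice_total) and shows the "print/display name" and "computed reply" patterns:

```python
# Stub sketch: stands in for a not-yet-integrated called component.
# Component names (tax_service_stub, invoice_total) are hypothetical.

def tax_service_stub(amount):
    """Replaces the real tax-calculation component during integration."""
    print("tax_service_stub called with", amount)  # "I have been called"
    return round(amount * 0.1, 2)                  # computed reply (simple rule)

def invoice_total(net_amount, tax_fn=tax_service_stub):
    """Component under integration test; calls the (stubbed) tax component."""
    return net_amount + tax_fn(net_amount)

# The interface can be exercised before the real tax component exists:
print(invoice_total(200.0))
```

Keeping the stub simple matters: its job is to let the caller's interface be tested, not to re-implement the missing component.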
Pros & cons of top-down approach
• Advantages:
• critical control structure tested first and most often
• can demonstrate system early (show working menus)
• Disadvantages:
• needs stubs
• detail left until last
• may be difficult to "see" detailed output (but should
have been tested in component test)
• may look more finished than it is
Bottom-up Integration
• Baselines:
  • baseline 0: component n
  • baseline 1: n + i
  • baseline 2: n + i + o
  • baseline 3: n + i + o + d
  • etc.
• Needs drivers to call the baseline configuration
• Also needs stubs for some baselines
(component hierarchy diagram: a at the top; b, c below it; then d, e, f, g; then h, i, j, k, l, m; with n, o at the lowest level)
Drivers
• Driver (Baan: dummy sessions): test harness: scaffolding
• specially written or general purpose
(commercial tools)
• invoke baseline
• send any data baseline expects
• receive any data baseline produces (print)
• each baseline has different requirements from
the test driving software
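A minimal driver can be sketched as follows; the baseline component format_report and its test data are hypothetical. The driver invokes the baseline, sends the data it expects, and receives what it produces:

```python
# Driver (test harness) sketch for bottom-up integration.
# The baseline component format_report and its test data are hypothetical.

def format_report(records):
    """Baseline: the lowest-level component being driven."""
    return "\n".join(f"{name}: {value}" for name, value in records)

def driver():
    """Invokes the baseline, feeds it data, receives and checks its output."""
    data = [("alpha", 1), ("beta", 2)]    # send data the baseline expects
    output = format_report(data)          # invoke the baseline
    print(output)                         # receive / display produced data
    assert output == "alpha: 1\nbeta: 2"  # check against the expected result

driver()
```

As the slide notes, each baseline needs its own variant of this scaffolding, which is why commercial test-harness tools exist.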
Pros & cons of bottom-up approach
• Advantages:
• lowest levels tested first and most thoroughly (but should
have been tested in unit testing)
• good for testing interfaces to external environment
(hardware, network)
• visibility of detail
• Disadvantages
• no working system until last baseline
• needs both drivers and stubs
• major control problems found last
Minimum Capability Integration (also called Functional)
• Baselines:
  • baseline 0: component a
  • baseline 1: a + b
  • baseline 2: a + b + d
  • baseline 3: a + b + d + i
  • etc.
• Needs stubs
• Shouldn't need drivers (if top-down)
(component hierarchy diagram: a at the top; b, c below it; then d, e, f, g; then h, i, j, k, l, m; with n, o at the lowest level)
Pros & cons of Minimum Capability
• Advantages:
• control level tested first and most often
• visibility of detail
• real working partial system earliest
• Disadvantages
• needs stubs
Thread Integration (also called functional)
• order of processing some event determines integration order
  • interrupt, user transaction
• minimum capability in time
• advantages:
  • critical processing first
  • early warning of performance problems
• disadvantages:
  • may need complex drivers and stubs
(component hierarchy diagram: a at the top; b, c below it; then d, e, f, g; then h, i, j, k, l, m; with n, o at the lowest level)
Integration Guidelines
• minimise support software needed
• integrate each component only once
• each baseline should produce an easily
verifiable result
• integrate small numbers of components at once
• one at a time for critical or fault-prone
components
• combine simple related components
Integration Planning
• integration should be planned in the
architectural design phase
• the integration order then determines the
build order
• components completed in time for their
baseline
• component development and integration
testing can be done in parallel - saves time
System testing
• last integration step
• functional
• functional requirements and requirements-based testing
• business process-based testing
• non-functional
• as important as functional requirements
• often poorly specified
• must be tested
• often done by independent test group
Functional system testing
• Functional requirements
• a requirement that specifies a function that a
system or system component must perform
(ANSI/IEEE Std 729-1983, Software Engineering
Terminology)
• Functional specification
• the document that describes in detail the
characteristics of the product with regard to its
intended capability (BS 4778 Part 2, BS 7925-1)
Requirements-based testing
• Uses specification of requirements as the basis for
identifying tests
• table of contents of the requirements spec
provides an initial test inventory of test conditions
• for each section / paragraph / topic / functional
area,
• risk analysis to identify most important / critical
• decide how deeply to test each functional area
Business process-based testing
• Expected user profiles
• what will be used most often?
• what is critical to the business?
• Business scenarios
• typical business transactions (birth to death)
• Use cases
• prepared cases based on real situations
Non-functional system testing
• different types of non-functional system tests:
  • usability
  • security
  • documentation
  • storage
  • volume
  • configuration / installation
  • reliability / qualities
  • back-up / recovery
  • performance, load
  • stress: verifies the stability and reliability of the system; this test mainly measures the system's robustness and error-handling capabilities under extremely heavy load conditions
Performance Tests
• Timing Tests
• response and service times
• database back-up times
• Capacity & Volume Tests
• maximum amount or processing rate
• number of records on the system
• graceful degradation
• Endurance Tests (24-hr operation?)
• robustness of the system
• memory allocation
Multi-User Tests
• Concurrency Tests
• small numbers, large benefits
• detect record locking problems
• Load Tests
• the measurement of system behaviour under realistic
multi-user load
• Stress Tests
• go beyond limits for the system - know what will happen
• particular relevance for e-commerce

Source: Sue Atkins, Magic Performance Management


Usability Tests
• messages tailored and meaningful to (real)
users?
• coherent and consistent interface?
• sufficient redundancy of critical information?
• within the "human envelope"? (7±2 choices)
• feedback (wait messages)?
• clear mappings (how to escape)?
Who should design / perform these tests?
Security Tests
• passwords
• encryption
• hardware permission devices
• levels of access to information
• authorisation
• covert channels
• physical security
Configuration and Installation
• Configuration Tests
• different hardware or software environment
• configuration of the system itself
• upgrade paths - may conflict
• Installation Tests
• distribution (CD, network, etc.) and timings
• physical aspects: electromagnetic fields, heat,
humidity, motion, chemicals, power supplies
• uninstall (removing installation)
Reliability / Qualities
• Reliability
• "system will be reliable" - how to test this?
• "2 failures per year over ten years"
• Mean Time Between Failures (MTBF)
• reliability growth models
• Other Qualities
• maintainability, portability, adaptability, etc.
Back-up and Recovery
• Back-ups
• computer functions
• manual procedures (where are tapes stored)
• Recovery
• real test of back-up
• manual procedures unfamiliar
• should be regularly rehearsed
• documentation should be detailed, clear and
thorough
Documentation Testing
• Documentation review
• check for accuracy against other documents
• gain consensus about content
• documentation exists, in right format
• Documentation tests
• is it usable? does it work?
• user manual
• maintenance documentation
Integration testing in the large

• Tests the completed system working in conjunction


with other systems, e.g.
• LAN / WAN, communications middleware
• other internal systems (billing, stock, personnel,
overnight batch, branch offices, other countries)
• external systems (stock exchange, news, suppliers)
• intranet, internet / www
• 3rd party packages
• electronic data interchange (EDI)
Approach
• Identify risks
• which areas missing or malfunctioning would be most
critical - test them first
• “Divide and conquer”
• test the outside first (at the interface to your system,
e.g. test a package on its own)
• test the connections one at a time first
(your system and one other)
• combine incrementally - safer than “big bang”
(non-incremental)
Planning considerations
• resources
• identify the resources that will be needed
(e.g. networks)
• co-operation
• plan co-operation with other organisations
(e.g. suppliers, technical support team)
• development plan
• integration (in the large) test plan could influence
development plan (e.g. conversion software needed
early on to exchange data formats)
User acceptance testing
• Final stage of validation
• customer (user) should perform or be closely
involved
• customer can perform any test they wish, usually
based on their business processes
• final user sign-off
• Approach
• mixture of scripted and unscripted testing
• ‘Model Office’ concept sometimes used
Why customer / user involvement
• Users know:
• what really happens in business situations
• complexity of business relationships
• how users would do their work using the system
• variants to standard tasks (e.g. country-specific)
• examples of real cases
• how to identify sensible work-arounds

Benefit: detailed understanding of the new system


User Acceptance testing
(diagram: roughly 80% of the system's function is provided by 20% of the code, and the remaining 20% of function by 80% of the code; acceptance testing is distributed over the function axis, while system testing is distributed over the code axis)
Contract acceptance testing
• Contract to supply a software system
• agreed at contract definition stage
• acceptance criteria defined and agreed
• may not have kept up to date with changes
• Contract acceptance testing is against the
contract and any documented agreed changes
• not what the users wish they had asked for!
• this system, not wish system
Alpha and Beta tests: similarities
• Testing by [potential] customers or representatives of
your market
• not suitable for bespoke software
• When software is stable
• Use the product in a realistic way in its operational
environment
• Give comments back on the product
• faults found
• how the product meets their expectations
• improvement / enhancement suggestions?
Alpha and Beta tests: differences
• Alpha testing
• simulated or actual operational testing at an in-house site
not otherwise involved with the software developers (i.e.
developers’ site)
• Beta testing
• operational testing at a site not otherwise involved with
the software developers (i.e. testers’ site, their own
location)
Software testing techniques under the language of V&V:

• Verification
  • "Are we building the product right?"
  • Which testing techniques are often used here?
• Validation
  • "Are we building the right product?"
  • Which testing techniques are often used here?
Black-Box testing technique
The tester’s dilemma
• We can’t test everything.
• No single approach, method, or technique is
sufficient to test a program adequately
• So, which subset of tests do we choose?
• …expose most defects
• …achieve high coverage
• …reduce exposure to risk
• …provide necessary information
Functional testing techniques
• Test design patterns designed to identify bugs
that negatively impact functionality or
behavior of software
• Derived from common mistakes or errors in
computer programming
• Techniques are useful in identifying specific
categories of bugs, but will not identify all
bugs or all types of bugs
Common functional testing techniques
• Error guessing / exploratory testing
• Equivalence partitioning
• Boundary value analysis
• Decision tables
• Combinatorial analysis
• State models
• Use cases / scenario testing
Lesson Roadmap
• Exploratory testing / error guessing
• Use intuition, experience, and knowledge to
identify bugs
• State benefits and limitations of exploratory
testing / error guessing
• Equivalence Class Partitioning (ECP)
Lesson 1 – Exploratory testing
• Exploratory testing is described as concurrent
learning, test design, and execution
• Dynamic tests based on the tester’s
experience and intuition executed during
runtime at the system level
• Effectiveness is dependent on the tester's skill,
intuition, and knowledge of the product
Exploratory Testing / Error Guessing
• When to use
• Learn product or features
• Broad scope – no distinct separate test
cases
• Helps identify “what if” scenarios
Exercise 1: Triangle problem
• The objective of this exercise is to:
• Test the triangle program against the stated requirements
• In this exercise you will:
• Define tests using your current experience, knowledge, and skills (exploratory testing)
• This is important because:
• This sets a baseline and allows you to evaluate yourself as
you learn how to apply functional and structural
techniques
Lesson Roadmap
• Exploratory testing / error guessing
• Equivalence Class Partitioning (ECP)
• Separate input and outputs into equivalent
domain spaces
• Define a subset of tests using data from
equivalent data sets
Basic principles
• Separate input or expected output conditions
into sets and test at least one value from each
set.
• Rationale – if each element in a class of values
has the exact same behavior, then the
program is likely to be constructed so that it
either succeeds or fails for all of the values in
that specific class.
Equivalence class partitioning
• In-depth analysis of input/output conditions
• 2 basic classes: Valid and Invalid
• Data grouped or partitioned by expected
results
• Primary testing focus
• Breadth of coverage (especially with
random data)
• Identifies edge cases
ECP Class subset Guidelines
• Range of values
• Numbers 1 – 10, characters A – Z, etc.
• Unique values in a group
• February 29 in a range of dates, tomato is a fruit
• Number of values
• Visa must have 14 digits
• Specific values
• Visa must start with 4, filename cannot have ‘\’
character
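The "range of values" guideline above can be sketched in code. Assuming a hypothetical field that accepts integers 1 to 10, we test one representative value from each class (class labels are illustrative):

```python
# ECP sketch for a hypothetical field accepting integers 1-10 (the "range
# of values" guideline). Class labels are illustrative.

def classify(value):
    """Map an input to the equivalence class it belongs to."""
    if not isinstance(value, int) or isinstance(value, bool):
        return "invalid: not an integer"
    if value < 1:
        return "invalid: below range"
    if value > 10:
        return "invalid: above range"
    return "valid: in range 1-10"

# One representative value per class is enough under the ECP rationale:
representatives = [
    (5, "valid: in range 1-10"),
    (0, "invalid: below range"),
    (11, "invalid: above range"),
    ("x", "invalid: not an integer"),
]
for value, expected in representatives:
    assert classify(value) == expected
```

If the program treats every member of a class the same way, any one representative stands for the whole class.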
Equivalence Partitioning
• First-level partitioning: valid vs. invalid test cases
• Partition valid and invalid test cases into equivalence classes
• Create a test case for at least one value randomly selected from each equivalence class
Equivalence class Definition
• Note that the examples so far focused on defining input
variables without considering the output variables.
Triangle Example
• For the “triangle problem,” we are interested in 4 questions:
• Is it a triangle?
• Is it an isosceles?
• Is it a scalene?
• Is it an equilateral?
• We may define the input test data by defining the
equivalence class through the 5 output groups:
• input sides <a, b, c> do not form a triangle
• input sides <a, b ,c> form an isosceles triangle
• input sides <a, b, c> form a scalene triangle
• input sides <a, b, c> form an equilateral triangle
• Error inputs
Triangle Equivalent Classes and Input Conditions
• Error Input
  • a <= 0 || b <= 0 || c <= 0
• Not a triangle
  • a + b <= c || b + c <= a || a + c <= b
• Equilateral
  • a == b && b == c
• Scalene
  • a != b && b != c && a != c
• Isosceles
  • (a == b && b != c) || (b == c && c != a) || (a == c && a != b)

* The question: "Can we continue the equivalence partitioning for the above partitions? And how?"
Triangle Equivalent Classes and Input Conditions
• Error Input
• a <= 0
• b <= 0
• c <= 0
• Invalid
• a + b <= c
• b + c <= a
• a + c <= b
• Equilateral
• a == b && b == c
• Scalene
• a != b && b != c && a != c
• Isosceles
• (a == b && b != c)
• (b == c && c != a)
• (a == c && a != b)
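The predicates above translate directly into a classifier, with one test value per output equivalence class (a sketch; the function name triangle_type is ours):

```python
# Classifier derived from the class predicates above; the function name
# triangle_type is ours. One representative per output equivalence class.

def triangle_type(a, b, c):
    if a <= 0 or b <= 0 or c <= 0:
        return "error input"
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b and b == c:
        return "equilateral"
    if a != b and b != c and a != c:
        return "scalene"
    return "isosceles"

assert triangle_type(0, 1, 1) == "error input"      # a <= 0
assert triangle_type(1, 2, 3) == "not a triangle"   # a + b == c
assert triangle_type(3, 3, 3) == "equilateral"
assert triangle_type(3, 4, 5) == "scalene"
assert triangle_type(3, 3, 4) == "isosceles"
```

Note that the order of checks matters: the error and not-a-triangle classes must be ruled out before the shape classes are tested.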
Evaluation for Exercise 1
Each “yes” gives you one point
• Do you have a test case for an equilateral triangle?
• Do you have a test case for an isosceles triangle? (must
be a triangle, not, e.g. (2,2,4))
• Do you have a test case for an admissible scalene
triangle (must be a real triangle, not, e.g. (1,2,3))
• Do you have at least three test cases for isosceles triangles, where all permutations of sides are considered? (e.g. (3,3,4), (3,4,3), (4,3,3))
• Did you state for each test case the expected result?
Evaluation for Exercise 1 (cont'd)
• Do you have a test case with one side zero?
• Do you have a test case with negative values?
• Do you have a test case where the sum of two sides
equals the third one? (e.g. (1,2,3))
• Do you have at least three test cases for such non-
triangles, where all permutations of sides are
considered? (e.g. (1,2,3), (1,3,2), (3,1,2))
• Do you have a test case where the sum of the two
smaller inputs is greater than the third one?
• Do you have at least three such test cases?
Evaluation for Exercise 1 (cont'd)
• Do you have the test case (0,0,0)?
• Do you have test cases with very large integers (maxint)?
• Do you have a test case with non-integer values? (e.g., real
numbers, hex values, strings,…)
• Do you have a test case where 2 or 4 inputs are provided?
Exercise (Lab) 1
Next Date Problem
Next Date program takes 3 inputs representing the month, day, and year between 1/1/1900 and 12/31/3000 and calculates the next calendar date. The program allows the user to input only positive integers.
Let's do:
1. Define equivalence classes and test conditions (of input data) using the equivalence partitioning technique;
2. Design a set of test cases based on the defined equivalence classes.
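As a starting point for step 1 (a sketch, not the full lab solution), one possible partition of the three inputs looks like this; the class names and sample values are illustrative:

```python
# A possible first-cut partition of the Next Date inputs (illustrative,
# not the full lab solution); class names and sample values are ours.

month_classes = {
    "31-day months": [1, 3, 5, 7, 8, 10, 12],
    "30-day months": [4, 6, 9, 11],
    "February": [2],
    "invalid": [0, 13],
}
day_classes = {
    "1-27 (always valid in a valid month)": [15],
    "28, 29, 30, 31 (validity depends on month/year)": [28, 29, 30, 31],
    "invalid": [0, 32],
}
year_classes = {
    "leap years": [2024, 2000],
    "century non-leap years": [1900],
    "ordinary non-leap years": [2023],
    "out of range": [1899, 3001],
}

def is_leap(year):
    """Leap-year rule that motivates the year partition."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap(2024) and is_leap(2000)
assert not is_leap(1900) and not is_leap(2023)
```

Test cases then combine one representative from each class, paying special attention to the month/year-dependent day classes (e.g. February 29).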
Lesson Key Points
• Relatively small number of test cases
• Provides a sense of complete coverage
• Helps reduce redundancy
• Special case situations
• Requires in-depth system knowledge to be
most effective (especially in regard to unique
values in a set)
Module summary
• Today we discussed:
• Exploratory testing / error guessing
approach to software testing
• Equivalence class partitioning technique to
identify sets of ‘equivalent data’ and create
a subset of tests based on data sets rather
than specific data
