SoftwareTesting Lect 1.1

This document discusses software testing basics including definitions of testing, objectives of testing, benefits of testing, bugs, myths and facts about testing, skills required for testers, need for testing, testing principles, basic concepts like errors, faults, failures and defects.

Uploaded by Kunal Ahire

SOFTWARE TESTING BASICS

Testing
 Testing is a process of identifying defects
 Develop test cases and test data
 A test case is a formal description of
• A starting state
• One or more events to which the software must
respond
• The expected response or ending state
 Test data is a set of starting states and events used
to test a module, group of modules, or entire system
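A test case and its test data can be sketched as a small data structure; the TestCase fields mirror the three parts above, while the counter module and the run() helper are made-up illustrations, not part of any standard framework:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A test case: a starting state, events, and the expected response."""
    starting_state: dict       # state of the system before the test
    events: list               # events to which the software must respond
    expected_response: dict    # expected response or ending state

# Hypothetical test data for a simple counter module.
case = TestCase(
    starting_state={"count": 0},
    events=["increment", "increment"],
    expected_response={"count": 2},
)

def run(case):
    """Apply the events to the starting state and compare with expectations."""
    state = dict(case.starting_state)
    for event in case.events:
        if event == "increment":
            state["count"] += 1
    return state == case.expected_response

print(run(case))   # True when the module's ending state matches the test case
```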
Software Testing

 Software testing is a popular risk management strategy. It is used to verify that functional requirements were met.

 The limitation of this approach, however, is that by the time testing occurs, it is too late to build quality into the product.
What is Software Testing?
Several definitions:

“Testing is the process of establishing confidence that a program or system does what it is supposed to.” – Hetzel, 1973

“Testing is the process of executing a program or system with the intent of finding errors.” – Myers, 1979

“Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.” – Hetzel, 1983
What is Software Testing?

- One of the most important software development phases.

- A software process based on well-defined software quality control and testing standards, testing methods, strategies, test criteria, and tools.

- Engineers perform all types of software testing activities to carry out a software test process.

- The last quality-checking point for software on its production line.
Testing Objectives
 Testing is a process of executing a program with the intention of
finding errors.

 Establishing confidence that a program does what it is supposed to do.

 Process of demonstrating that errors are not present.

 The measurement of Software Quality.

 Confirming that a program performs its intended functions correctly.

 Identifying the difference between Expected and Actual Result.

 Testing is a process of trying to discover every conceivable fault or weakness in a work product.
Benefits Of Software Testing
 The primary benefit of testing is that it improves quality: testing looks for defects and makes sure that they get fixed.

 The secondary benefit is that testing demonstrates that the software functions appear to be working according to the specifications.

 Goal Of Tester :-

 The goal of a software tester is to find defects, find them as early as possible, and make sure that they get fixed.

 Calculate the testing costs and ensure that management understands these costs.
Software Bug
 A fault in a program which causes the program to perform in an unintended manner.

 A software bug occurs when one or more of the following rules is true :-

 The software does not do something that the product specification says it should do.

 The software does something that the product specification says it should not do.
Myths and Facts about Testing
 Myth #1 :- Testing is a phase near the end of the development cycle.
 Fact #1 :- Testing is done throughout the development life cycle.

 Myth #2 :- Defects found means blaming the developers.
 Fact #2 :- Defects relate to all of software development: defining user requirements, internal structure and design, coding, as well as testing.

 Myth #3 :- Anyone can test software – no particular skill is required.
 Fact #3 :- Testing is a professional discipline requiring trained, skilled people (testers).

 Myth #4 :- Tests need only run once or twice.
 Fact #4 :- Tests need to be repeated many times. Tests need to be repeated after changes are made, as even minor modifications can lead to serious defects being introduced.
Skills Required In Tester

 General

 Testing Skills

 Test Planning

 Executing the test plan

 Test Analysis Reports
Skills Required In Tester
 A clear communicator.
 A defect report is no good if we can't understand it.

 Patient.
 Sometimes it takes a lot of back-and-forth to get to the root of a problem.
 And programmers have egos; they'll often try to push issues back to the tester.

 Passionate. 
 The best developers are the ones that really care about development and maybe
even get a little excited about it sometimes. Testing isn't that much different.
 Read a few posts from James Bach or Michael Bolton and you'll see that
there are passionate testers out there, however implausible that may sound to us
developers.

 Creative. 
 Really exercising a system requires one to try non-intuitive ways of accomplishing
tasks, to go outside the workflow that the program expects of them and do things
that normal users wouldn't do.
 Task-oriented people who receive a set of instructions and do the exact same
thing every time are no good for this job.
Skills Required In Tester
 Analytical.
 Just finding a defect isn't enough - a tester has to be able to figure out
how to reproduce it.
 If a report comes in as "intermittent" then there's about a 10% chance it'll
get solved.
 Most developers won't even look at a case without a reasonably concise
sequence of repro steps.
 Good testers have to be able to retrace their steps and narrow down the
field of possibilities so as to come up with the simplest possible sequence
of actions that trigger a bug.

 Not a programmer.
 Programmers never want to do any actual testing work.
 They'll spend all their time trying to write automated tests and not do what
really matters, which is to make sure the damn thing actually works the
way it is supposed to.
 Although there are exceptions to every rule, most programmers simply
find testing boring and will do the absolute minimum amount required.
Need Of Testing

 The purpose of testing is to identify implementation errors before the product is shipped.
 To verify that all requirements are implemented correctly.
 Post release defect fixing is expensive.
 Certain bugs are easier to find during testing.
 To gain a sufficient level of confidence for the system and
customer satisfaction as testing gives Risk information, Bug
information and Process information.
 Testing for Quality Assurance (Reliability, Availability, Productivity, Cost).
 To make software predictable in behavior.
 To reduce incompatibility and interoperability issues.
 To help marketability and retention of customers.
Testing Principles
 All tests should be traceable to customer requirements
 Tests should be planned long before testing begins
 The Pareto principle applies to software testing
 Testing should begin “in the small” and progress toward testing
“in the large”
 To be most effective, an independent third party should
conduct testing
 Assign the best personnel to the task
 Testing should not be planned under the assumption that no
errors will be found
 Keep the software static during test
 The probability of the existence of more errors in a module or
group of modules is directly proportional to the number already
found
 Tests must be repeatable and reusable.
Basic Concepts of Testing
Software Engineering Terminologies
 Error
 Fault
 Defects
 Failure
 Test Cases
 Test Suite
 Test Oracle
 Test Bed
 Software Quality
 Software Quality Assurance Group
 Reviews
Errors
 An error is a mistake, misconception or misunderstanding on
the part of a software developer.
 It might be a typographical error, a misreading of the specification, or a misunderstanding of what a subroutine does.
 The error may be :-
 An actual error in the code.
 Incorrect implementation of the requirements or functional
specification because of misunderstanding or incomplete
requirements or functional specification.
 Incorrect human action that produces erroneous step, process
or inaccurate results.
 User Interface errors :- Functionality, Communication,
Command structure, Missing commands, Performance, Output,
Calculation errors, errors in handling or interpreting data ,
Testing error.
Faults
 A fault occurs when a human error results in a mistake in some
software product.
 For e.g., a developer might misunderstand a user-interface requirement and therefore create a design that includes the misunderstanding.
 The design fault can also result in incorrect code, as well as
incorrect instructions in the user manual.
 A fault is the difference between an incorrect program and the
correct version.
 A single error can result in one or more faults.
 One fault can result in multiple changes to one product or
multiple changes to multiple products
Failure
 A fault(bug) can go undetected until a failure occurs, which is
when a user or tester perceives that the system is not
delivering the expected service.
 Failure can be defined as :-
 Deviation of the system from its expected behavior or service.
 The inability of a system or component to perform its required
functions within the specified performance requirements.
 Failure can be discovered both before and after system
delivery , as they can occur in testing as well as in operation.
 Faults represent problems that the developer sees, while
failures are problems that the user sees.
 Ideally, the life of a bug ends when it is uncovered in testing and fixed.
 For eg. Consider an ATM machine.
Defects
 Abnormal behavior of the software.

 Nonconformance to requirements or the functional / program specification.

 Errors in testing.

 Mistakes in correction.

 For e.g. :- Requirements and specification defects, design defects, coding defects or testing defects.
Quality Concepts

 Variation control is the heart of quality control.

 Quality of design
 refers to characteristics designers specify for the end product to
be constructed
 Quality of conformance
 degree to which design specifications are followed in
manufacturing the product
 Quality control
 series of inspections, reviews, and tests used to ensure
conformance of a work product (artifact) to its specifications
 Quality assurance
 auditing and reporting procedures used to provide management
with data needed to make proactive decisions

Quality Control versus SQA

 Quality Control (QC) is a set of activities carried out with the main
objective of withholding products from shipment if they do not qualify.

 Quality Assurance (QA) is meant to minimize the costs of quality by introducing a variety of activities throughout the development and maintenance processes in order to prevent the causes of errors, detect them, and correct them in the early stages of development. As a result, quality assurance substantially reduces the rate of non-qualifying products.
Why do we care about Quality?

 Development cost :-
- Specification : 6%
- Design : 5%
- Coding : 7%
- V&V (Testing) : 15%
- Maintenance : 67%

 Correction cost & source :-
- Req. & Specification : 56%
- Design : 24%
- Coding : 10%
- Other : 10%
Quality as Dealing with defects

 When people associate quality or high quality with a software system, it is an indication that few, if any, defects are expected to occur during its operation, or that when problems do occur, the negative impact is expected to be minimized.

 Key to the correctness aspect of software quality are the concepts of defect, failure, fault and error.

 The term “defect” refers to some problem with the software, either with its external or with its internal characteristics.
Causes of software defects
1. Faulty requirements definition
2. Client-developer communication failures
3. Deliberate deviations from software requirements
4. Logical design errors
5. Coding errors
6. Non-compliance with documentation and coding instructions
7. Shortcomings of the testing process
8. User interface and procedure errors
9. Documentation errors

Software errors, software faults and software failures

[Figure: In the software development process, a software error leads to a software fault, which leads to a software failure.]
Software errors, software faults and software failures
More precisely:

 An error can be a grammatical error in one or more of the code lines, or a logical error in carrying out one or more of the client’s requirements.

 Not all software errors become software faults. In some cases, the software error can cause improper functioning of the software.
 In many other cases, erroneous code lines will not affect the functionality of the software as a whole.

 A failure is said to occur whenever the external behaviour of a system does not conform to that prescribed in the system specification.
 A software fault becomes a software failure only when it is “activated”.
Error, faults, failures

© Aditya P. Mathur 2009


Types of Testing
Static Testing

Test Design Techniques

Static Testing

 Static testing, also called “Dry Run Testing”, refers to testing something which is STATIC – not running.
 For e.g., consider an analogy: suppose you want to purchase a vehicle. The static test can be :-
- Door locks and door retention components
- Seating systems
- Seat belt assembly
- Kicking the tires
- Checking the paint
 Static testing is a type of testing which requires only the source code of the product, not the binaries or executables.

Static Testing
 Static testing does not involve executing the programs, but
involves people going through the code to find out whether:-
- The code works according to the functional requirements
- The code has been written in accordance with the design
developed.
- The code for any functionality has been missed out and
- The code handles error properly

 Static testing may examine not only the code, but also the specification, design and the user documents.
 It is testing prior to deployment.
 Static testing is done by humans or with specialized tools.
Static Testing
 Static testing include :-
 Desk checking the code, code review, design walkthroughs,
Inspections to check requirements and design documents.
 Review, walkthroughs, Inspections are the method for detailed
examination of a product by systematically reading the content
on step-by-step basis to find defects.
 Running the syntax and type checkers as part of the
compilation process, or other code analysis tools.
 Syntax checking and manually reading the code to find errors
is the method of static testing.
 This type of testing is mostly done by the developer himself.
 Static testing is usually the first type of testing done on any system.
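The idea of running syntax checkers without executing the code can be sketched in Python; compile() here is a stand-in for any compiler front end or static-analysis tool, and the function name is made up:

```python
# A sketch of syntax checking without execution, using Python's built-in
# compile(): the source is parsed but never run.
def static_syntax_check(source: str) -> list:
    """Return a list of syntax errors found in source, without executing it."""
    errors = []
    try:
        compile(source, "<review>", "exec")   # parses only; code never runs
    except SyntaxError as e:
        errors.append(f"line {e.lineno}: {e.msg}")
    return errors

# The defective line is never executed; the fault is found statically.
print(static_syntax_check("if x = 1: pass"))   # non-empty: a syntax error is reported
print(static_syntax_check("x = 1"))            # [] -- no syntax errors
```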
Dynamic Testing

Dynamic Testing

 Testing that involves the execution of the software component or system; it refers to testing the software by executing it.
 Execution of the test object on a computer.
 Needs a test bed.
 For e.g., consider the same analogy of purchasing a vehicle :-
- Starting it up
- Listening to the engine, horn
- Driving down the road
 Testers generally do dynamic testing. All testing tools come under this category.
Dynamic testing Includes
 Feed the program with input and observe the behavior.
 Check a certain number of input and output values.

 Dynamic testing example :-

- Unit testing
- Integrated Testing
- System testing
- User Acceptance Testing
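The “feed the program with input and observe the behavior” idea, as a minimal sketch; the absolute() function is an assumed unit under test, not from the slides:

```python
# Assumed unit under test; in practice the test object comes from the system.
def absolute(n):
    return -n if n < 0 else n

# Feed the program with input and observe the behavior: check a certain
# number of input and output values.
test_data = [(-5, 5), (0, 0), (7, 7)]     # (input, expected output) pairs
for value, expected in test_data:
    actual = absolute(value)
    assert actual == expected, f"absolute({value}) gave {actual}, expected {expected}"
print("all dynamic checks passed")
```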
Test Bed
[Figure: A test bed. A test driver feeds test cases (Test Case 1 … Test Case n) to the test object through the Point of Control (PoC). Test stubs (Stub 1 … Stub k) stand in for modules the test object calls. Test output is observed at the Point of Observation (PoO) and compared against expected results. The run-time environment supplies analysis tools and monitors. PoC – Point of Control, PoO – Point of Observation.]
Incremental approach to execute tests

 Steps are:
 Determine conditions and preconditions for the test and the goals
that are to be achieved
 Specify individual test cases
 Determine how to execute the tests (usually chaining together
several test cases)
• Eg: group test cases in such a way that a whole sequence of test
cases is executed (test sequence or test scenario)
• Document it in test procedure specification
• Need test script, e.g., JUnit
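Chaining several test cases into a test sequence (test scenario) might be sketched like this; the shopping-cart steps and names are invented for illustration:

```python
# Each test case is one step: (name, action, expected state after the step).
# The scenario chains them so later cases run against the state left by earlier ones.
def run_scenario(scenario, state):
    for name, action, expected in scenario:
        action(state)
        assert state == expected, f"step {name!r} failed: got {state}"
    return "scenario passed"

# Hypothetical shopping-cart scenario.
def add_item(cart):
    cart["items"] += 1

def remove_item(cart):
    cart["items"] -= 1

scenario = [
    ("add first item",  add_item,    {"items": 1}),
    ("add second item", add_item,    {"items": 2}),
    ("remove one item", remove_item, {"items": 1}),
]
print(run_scenario(scenario, {"items": 0}))
```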

Techniques for Testing

 They are test case design techniques.
 Different techniques are :-
 Black Box Testing
 White Box or Glass Box or Open Box Testing
Black Box Testing Techniques(1)
 Test object is treated as a black box
 The inner structure and design of the test object is unknown
 Test cases are derived/design using the specification or the
requirements of the test object
 The behavior of the test object is watched from the outside (PoO is
outside the test object)
 It is not possible to control the operating sequence of the object other
than choosing the adequate input test data (PoC is situated outside of
test object)
 Used for higher levels of testing
 Any test design before the code is written (test-first programming, test-
driven development) is black box driven

Black Box testing
 Focus :- I/O behavior :- For any given input, predict the output.
If it matches with the actual output then the module passes the
test.
 In black box testing, software is considered as a box. The tester only knows what the software is supposed to do – he can’t look in the box to see how it operates, as shown in fig.
 If he gives certain input, he gets certain output.
 He doesn’t know HOW or WHY it happens.
Black Box Testing Can be defined as
 It is testing against functional specifications.

 Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements.

 It is not based on any knowledge of internal design or code.

 Tests are based on requirements and functionality.

 It will not test hidden functions and errors associated with them
will not be found.

 It tests valid and invalid inputs but can not possibly test all
inputs.
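Testing valid and invalid inputs purely against a specification, without looking inside the box, might look like the following; parse_age and its spec are hypothetical:

```python
# Specification (all the black-box tester sees): parse_age(s) returns the
# integer age for strings "0".."150" and raises ValueError otherwise.
# The implementation below is a stand-in the tester would not look at.
def parse_age(s: str) -> int:
    age = int(s)                      # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Valid inputs: expect the specified output.
assert parse_age("0") == 0
assert parse_age("42") == 42

# Invalid inputs: expect the specified rejection, not a crash or a wrong value.
for bad in ["-1", "200", "abc"]:
    try:
        parse_age(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except ValueError:
        pass
print("black-box checks passed")
```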
Black Box Testing Can be defined as
 Black box testing is so named as it IS NOT POSSIBLE to look
inside the box to determine the design and implementation
structures of the components you are testing.

 The author of the program who knows too much about the
program internal should not perform black box testing.

 Synonyms for black-box testing include :- behavioral testing, functional testing, opaque-box testing, concrete-box testing and closed-box testing.
Black Box Testing Techniques(2)

[Figure: PoC and PoO “outside” the test object – test input data enters through the PoC, and test output data is observed at the PoO; the inside of the test object is not visible.]
Black-Box Testing Types
 Static Black Box Testing :- Testing the specification
 Dynamic Black Box Testing :-
 Testing the software without having an insight into the details of
underlying code.
 It’s dynamic because the program is running. And it is black
box because you are testing it without knowing exactly how it
works.

 Disadvantages :-
- Cannot guarantee that the implementation is correct
- Possibility of missing logical errors
- Possibility of redundant testing
White Box Testing
 Focus :- (Coverage) Every statement in the component is
executed at least once.

 In white box testing, the tester has access to the program’s code and can examine it for clues to help him with his testing.
White Box Testing can be defined as
 Testing based on design and implementation structures.
 It deals with the internal logic and structure of the code .
 It is structural testing – tests that go in and test the actual program structure, which provides information about flow of control, for e.g. if-then-else statements, loops etc., as shown in fig.
 The tests written based on the white box testing strategy
incorporate coverage of the code written, branches, paths,
statements and internal logic.
 It will not test missing functions.
 It is good at catching following errors in implementation
- Initialization errors
- Bounds – checking and out-of-range problems
- Buffer overflow errors etc.
• White box testing is so named as it IS POSSIBLE to look inside
the box to determine the design and implementation structures of
the components you are testing.
• Synonyms for white box include : Structural testing, Clear box
testing, Glass box testing ,Open box testing
White Box Testing Types
 Static White Box Testing : It does not necessitate the execution
of the software . It is the process of carefully and methodically
examining the design and code.
 Dynamic White Box Testing : Dynamic analysis is what is
generally considered as “testing” i.e. involves running the
system.
 Statement Coverage : Testing performed where every
statement is executed at least once.
 Branch Coverage : Helps in validating all the branches in the
code and making sure that no branching leads to abnormal
behavior of the applications.
 Path Coverage : Testing all possible paths.
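The difference between statement and branch coverage can be seen on a function whose if has no else; normalize() is an illustrative example, not from the slides:

```python
# Illustrative function: the 'if' has no 'else', so statement coverage and
# branch coverage differ.
def normalize(n):
    result = n
    if n < 0:
        result = -n          # the only statement inside the 'if'
    return result

# This single test executes every statement (100% statement coverage) ...
assert normalize(-4) == 4

# ... but the false branch of the 'if' is only exercised by a second test,
# which full branch coverage requires.
assert normalize(4) == 4
print("both branches covered")
```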
Disadvantages :-

 Cannot guarantee that the specifications are fulfilled.

 As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.

 It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.
Test types and detected defects
Unit Testing

 The process of testing individual methods, classes, or components before they are integrated with other software.
 Two methods for isolated testing of units :-
 Driver : simulates the behavior of a method that sends a message to the method being tested.
 Stub : simulates the behavior of a method that has not yet been written.
UNIT Testing
 The primary goal of unit testing is to take the smallest piece (unit) of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves exactly as you expect.
 It focuses on the smallest testable unit of software design. This
unit can be software component or module.
 Hence Synonyms for unit testing are :- Component testing,
Module testing
 Unit testing is white-box oriented. It is used to verify control flow
and data flow of the code.
 It requires knowledge of the code hence is performed by the
developers before subjecting the code to more extensive code
coverage testing. This can happen by adding dummy code.
 Unit testing on the part of the component will not really help in
deciding the final working of the same component in the entire
system.
 Unit testing is normally performed by software developers
themselves or their peers.
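A unit test in the style of the xUnit frameworks listed later (JUnit, NUnit, PhpUnit), here sketched with Python's unittest as a stand-in; apply_discount is an assumed unit under test:

```python
import unittest

# Assumed unit under test: a single function, isolated from the rest of the system.
def apply_discount(price, percent):
    """Return price reduced by percent -- the smallest testable unit here."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_boundaries(self):
        # Software often fails at its boundaries, so test 0% and 100%.
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestApplyDiscount)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```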
Unit test Considerations
 Module Interface : Tests data flow across the module interface
to ensure that information properly flows in and out of the
program.
 Local Data Structures : Examines local data structures i.e. data
stored temporarily to ensure data maintains integrity during all
steps in an algorithm’s execution.
 Boundary Conditions : Are tested to ensure that the module
operates properly at boundaries established to limit. This is an
important task, as software often fails at its boundaries.
 For e.g. When the maximum or minimum allowable value is
encountered, or when the nth element of n dimensional array is
being processed.
 Independents paths : All Independent Paths are examined to
ensure that all statements in a module have been executed at
least once.
Unit Test Tools

 C++ test :- C/C++ unit testing tool, which automatically tests any C/C++ class, function, or component.

 JUnit :- implements unit tests in Java.

 NUnit :- a unit-testing framework for all .NET languages.

 PhpUnit :- a testing framework for PHP.

 Check :- a unit test framework for C.
Advantages of Unit Testing

 Much easier to make sure that small pieces of a system are working right than to debug the entire thing at once.

 Unit testing is also good for catching “rare” bugs that only occur when specific sets of inputs are fed to a module.

 In order to make unit testing possible, code needs to be modular. This means code is easier to reuse.

 The cost of fixing a defect detected during unit testing is lesser in comparison to that of defects detected at higher levels.

 Debugging is easy. When a test fails, only the latest changes need to be debugged.
Driver And Stub
 It is always a good idea to develop and test software in "pieces". But it may seem impossible, because it is hard to imagine how you can test one "piece" if the other "pieces" that it uses have not yet been developed (and vice versa). To solve this kind of difficult problem, we use stubs and drivers.

 In white-box testing, we must run the code with predetermined input and check to make sure that the code produces predetermined outputs. Often testers write stubs and drivers for white-box testing.
Driver for Testing:
 A driver is a piece of code that passes test cases to another piece of code. A test harness or test driver is supporting code and data used to provide an environment for testing part of a system in isolation. It can be called a software module which is used to invoke a module under test, provide test inputs, control and monitor execution, and report test results – or, most simplistically, a line of code that calls a method and passes that method a value.

 For example, if you wanted to move a fighter in the game, the driver code would be

 moveFighter(Fighter, LocationX, LocationY);

 This driver code would likely be called from the main method. A white-box test case would execute this driver line of code and check "fighter.getPosition()" to make sure the player is now on the expected cell on the board.
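The moveFighter driver above can be sketched as runnable code; the Fighter class and all names here are hypothetical stand-ins for the game code:

```python
# Hypothetical unit under test (the real game code is not shown in the slides).
class Fighter:
    def __init__(self):
        self.x, self.y = 0, 0

    def get_position(self):
        return (self.x, self.y)

def move_fighter(fighter, x, y):
    """The method being exercised by the driver."""
    fighter.x, fighter.y = x, y

def driver():
    """Driver: invokes the module under test, supplies inputs, checks results."""
    fighter = Fighter()
    move_fighter(fighter, 3, 5)                    # the 'driver line of code'
    assert fighter.get_position() == (3, 5), "fighter not on the expected cell"
    return "driver test passed"

print(driver())
```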
Stubs for Testing
 A Stub is a dummy procedure, module or unit that stands in for
an unfinished portion of a system.

 Four basic types of stubs for top-down testing are :-
1. Display a trace message
2. Display parameter value(s)
3. Return a value from a table
4. Return a table value selected by parameter

 A stub is a computer program which is used as a substitute for the body of a software module that is or will be defined elsewhere, or a dummy component or object used to simulate the behavior of a real component until that component has been developed.
Stubs for Testing
 Ultimately, the dummy method would be completed with the
proper program logic. However, developing the stub allows the
programmer to call a method in the code being developed,
even if the method does not yet have the desired behavior.

 Stubs and drivers are often viewed as throwaway code. However, they do not have to be thrown away: stubs can be "filled in" to form the actual method, and drivers can become automated test cases.
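A stub combining two of the four types listed earlier (a trace message, and a table value selected by parameter), with all names and rates made up for illustration:

```python
# The real module -- say, fetch_exchange_rate(currency) calling a remote
# service -- is not written yet. A stub stands in with canned table values.
def fetch_exchange_rate_stub(currency):
    """Stub: displays a trace message and returns a table value selected by parameter."""
    canned_rates = {"USD": 1.0, "EUR": 0.9, "INR": 83.0}       # made-up rates
    print(f"[stub] fetch_exchange_rate({currency!r}) called")  # trace message
    return canned_rates[currency]

# The caller under test can be exercised before the real module exists.
def convert(amount, currency, rate_source=fetch_exchange_rate_stub):
    return amount * rate_source(currency)

assert convert(10, "INR") == 830.0
print("caller tested against the stub")
```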
Integration Testing
 The integration testing focuses on finding defects which mainly
arise because of combining various components for testing.
 The main objective of integration testing is to take unit-tested components and build a program structure.
 Integration testing is a logical extension of unit testing.
 Integration testing works to expose defects or errors in the
interfaces and interaction between the integrated components
(Modules).
 Integration testing is done by developers / quality assurance teams. These members test both normal processing and exceptions.
 Defects can be
- Interface incompatibility
- Incorrect parameter values
- Resource problems
- Run-time exceptions
- Navigation problems and Data flow errors
Types of Integration Testing
Integration Testing
 Evaluates the behavior of a group of methods
or classes
 Identifies interface compatibility, unexpected
parameter values or state interaction, and run-time
exceptions
 System test
 Integration test of the behavior of an entire system
or independent subsystem
 Build and smoke test
 System test performed daily or several times a
week
Integration Strategy
 How low-level modules are assembled to
form higher-level program entities
 Strategy impacts on
 the form in which units test-cases are written
 the type of test tools to use
 the order of coding/testing units
 the cost of generating test cases
 the cost of locating and correcting detected
defects
Integration testing strategies

 Several different strategies can be used for integration testing.
 Comparison criteria:
 fault localization
 effort needed (for stubs and drivers)
 degree of testing of modules achieved
 possibility for parallel development
 Examples of strategies
 Big-bang
 Top-down
 Bottom-up
 Sandwich
Big Bang Integration
 Non-incremental strategy
 Unit test each module in isolation
 Integrate as a whole

[Figure: Big-bang integration – modules main, A, B, C, D, E, F are each unit tested in isolation (test main, test A, … test F), then combined in one step and tested together (test main, A, B, C, D, E, F).]
Big Bang Integration
 In this type, all components are combined at once to form a program.
 That is, test all components in isolation, then mix them all together and see how it works.
 This approach is great with smaller systems, but it can end up taking
a lot of time for larger, more complex systems.
 However this approach is not recommended as both drivers and
stubs are required.
Big Bang Integration
 Advantages
 Convenient for small systems
 Disadvantages
 Need driver and stubs for each module
 Integration testing can only begin when all
modules are ready
 Fault localization difficult
 Easy to miss interface faults
Top-down Integration
 It is an incremental approach to construction of a program structure.

 Modules / components are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).

 Modules / components subordinate to the main module are integrated in either :-
- Depth-first manner
- Breadth-first manner
Top-down Integration
 Incremental strategy :-
1. Start by including the highest-level modules in the test set. All other modules are replaced by stubs or mock objects.
2. Integrate (i.e. replace a stub by the real module) modules called by modules in the test set.
3. Repeat until all modules are in the test set.
[Figure: Module hierarchy – M1 at the top; M2, M3, M4 below it; M5, M6, M7 next; M8 at the bottom. Stubs are replaced by real modules level by level from the top.]
Top-down Integration
Depth First Manner :-

 Depth-first integration would integrate all components on a control path of the structure.
 For e.g., selecting the left-hand path, components M1, M2, M5 would be integrated first.
 Next, M8 and M6 would be integrated.
 Then the central and right-hand control paths are built.

Breadth First Manner :-

 In breadth-first integration, all components directly subordinate at each level are integrated.
 For e.g., components M1, M2, M3 and M4 would be integrated first.
 The next control level (M5, M6) would be integrated, and so on.
Based on Depth-first integration
Based on Breadth-first integration
Top-down Integration
 The integration order can be modified to :-
 include critical modules first
 leave modules that are not ready until later
[Figure: Modified order – test main; then test main, A, C; then test main, A, C, D, E, F; finally test main, A, B, C, D, E, F.]
Steps in Top-down Integration
 The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.
 Depending on the integration approach, subordinate stubs are replaced one at a time with actual components.
 Tests are conducted as each component is integrated.
 On completion of each set of tests, another stub is replaced with a real component.
 Re-testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built.
Top-down Integration
 Advantages
 Fault localization easier
 Few or no drivers needed
 Possibility to obtain an early prototype
 Different order of testing/implementation possible
 Major design flaws found first
• in logic modules on top of the hierarchy
 Disadvantages
 Need a lot of stubs / mock objects
 Potentially reusable modules (at the bottom of the hierarchy) can be
inadequately tested
Bottom-up Integration
 Incremental strategy
  Test low-level modules first, then
  the modules calling them, until the highest-level module is reached
[Figure: hierarchy with main on top, A-C, then D-F; tests of D, E and F feed combined tests of {D, E, A} and {C, F}, converging on {main, A, B, C, D, E, F}]
Bottom-up Integration
 Bottom-up integration is just the opposite of top-down
integration: the components for a new product
development become available in reverse order, starting
from the bottom.

 That is, this approach begins construction and testing with
components at the lowest levels in the program structure.

 As the components are integrated from the bottom up, the
processing required for the components subordinate to a
given level is always available, and the need for stubs is
eliminated.
Bottom-up Integration
 Low-level components are tested individually, by writing a
driver to coordinate the test case input and output.

 Drivers are then removed and the tested components are
integrated, moving upward in the program structure.

 Tests are conducted as each component is integrated.

 This is done repeatedly until the entire program structure is
built.
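A toy sketch of the driver idea just described, with invented component names: the lowest-level unit is exercised by a small throwaway driver first, then integrated into its (hypothetical) caller one level up.

```python
# Lowest-level component, implemented and tested first.
def tax(amount, rate=0.1):
    """Compute tax on an amount, rounded to cents."""
    return round(amount * rate, 2)

# A throwaway driver coordinates test-case input and output
# for the low-level unit while its callers don't exist yet.
def tax_driver():
    cases = [(100, 10.0), (0, 0.0), (19.99, 2.0)]
    for amount, expected in cases:
        assert tax(amount) == expected, (amount, expected)

tax_driver()

# Once tax() passes, the driver is discarded and the real caller
# one level up is integrated and tested against it.
def invoice_total(amount):
    return amount + tax(amount)

assert invoice_total(100) == 110.0
```

Each driver is removed as the component above it arrives, moving upward until `main` is reached.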
Bottom-up Integration
 Advantages
 Fault localization easier (than big-bang)
 No need for stubs / fewer mock objects
 Logic modules tested thoroughly
 Testing can be in parallel with implementation
 Disadvantages
  Need drivers
  The program as an entity does not exist until the last module is added.
  High-level modules (those that embody the solution logic) are tested last
(and least).
  Bad for functionally decomposed systems, as it tests the most important
modules last.
  No concept of an early skeletal system
  Interface errors are discovered late.
Sandwich Integration
 In sandwich integration testing, top-down and bottom-up integration
are started simultaneously and testing is built up from both sides.
 This approach is also called “Bi-directional integration
testing”.
 The system is viewed as having three layers
- A target layer in the middle
- A layer above the target
- A layer below the target
- Testing converges at the target layer

 A top-down approach is used for the upper layers.
 A bottom-up approach is used for the subordinate layers.
Sandwich Integration
 Combines top-down and bottom-up approaches
 Distinguish 3 layers
  logic (top) - tested top-down
  middle
  operational (bottom) - tested bottom-up
[Figure: the bottom-up tests (D, E, F; then {D, E, A} and {C, F}) and the top-down tests ({main}; then {main, A, B, C}) meet in a combined test of the whole hierarchy {main, A, B, C, D, E, F}]
Sandwich Integration
 Pros :-
- The top and bottom layers can be tested in parallel.
- Allows integration to begin early in the testing phase.

 Cons :-
- Does not test the individual subsystems thoroughly
before integration.
Function/thread integration
 Integrate modules according to the
threads/functions they belong to
[Figure: hierarchy with main on top, A-C, then D-F; the modules along one functional thread are integrated together]
Smoke Testing
 The term smoke testing originated in the hardware industry.
 After a piece of hardware or a hardware components(e.g.
transformer) was changed or repaired, the equipment was simply
powered up.
 If there was no smoke , the component passed the test.
 In software, the term smoke testing describes the process of
validating changed code to identify and fix defects in software before
the changes are checked into the source product.
 It is designed to confirm that the changed code functions as
expected.
 Before running a smoke test, a code review is conducted, which
focuses on the changes in the code.
 The tester must work with the developer who has written the code to
understand :
- What changes are made in the code ?
- How do the changes affect the functionality ?
- How do the changes affect the interdependencies of various
components ?
Smoke Testing
 Whenever a new software build is received, the smoke test is
run against the software, verifying that major functionality still
operates.

 Smoke testing consists of :-
  Identifying the basic functionality that the product must satisfy
  Designing test cases to ensure that these basic functions
work, and then packaging them into the smoke test suite
  Ensuring that every time a product is built, this suite is run
successfully
  If this suite fails, approaching the developers to identify the
changes and possibly change or roll back the changes to a state
where the test suite succeeds.

 A smoke test is a reduced version of regression testing.
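The build-gating idea above can be sketched as a tiny smoke suite. This is a hypothetical illustration: the "build" is just a table of flags, whereas in practice each check would exercise a deployed build of the product.

```python
def smoke_suite(build):
    """Run the basic must-work checks against a build; report the
    first breakage so the build can be rejected or rolled back."""
    checks = [
        ("app starts", build["starts"]),
        ("user can log in", build["login"]),
        ("main screen renders", build["main_screen"]),
    ]
    for name, passed in checks:
        if not passed:
            return (False, name)   # suite failed: approach the developers
    return (True, None)            # build passes the smoke test

good_build = {"starts": True, "login": True, "main_screen": True}
bad_build = {"starts": True, "login": False, "main_screen": True}

assert smoke_suite(good_build) == (True, None)
assert smoke_suite(bad_build) == (False, "user can log in")
```

Running this suite on every build is what makes it a (reduced) regression check rather than a one-off test.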
Smoke Testing
 McConnell describes the smoke test as :-

“ The smoke test should exercise the entire system from
end to end. It does not have to be exhaustive, but it
should be capable of exposing major problems ”
Regression Testing
 Whenever software is corrected, some aspect of the software
configuration is changed.
 Regression testing is the activity that helps to determine
whether the changed components have introduced any errors in
the unchanged components.
 In other words, before a new version of a software product is
released, the old test cases are run against the new version to
make sure that all the old capabilities still work.

Definition :-
 Retesting a previously tested program to ensure that faults
have not been introduced as a result of the changes made.
 Re-running previously conducted tests to ensure that
unchanged components function correctly.
Regression Testing
 Regression testing is a style of testing that focuses on retesting after
changes are made.
 In traditional regression testing, we reuse the same tests.
 In risk-oriented regression testing, we test the same areas, but we use
different tests.
 In integration testing, adding a new module or changing an existing
module impacts the system as follows :
  New data flow paths are established
  New I/O occurs
  New control logic is invoked

 These changes may cause problems with functions that previously
worked perfectly.
 Regression testing is the “ re-execution of the subset of tests that have
already been conducted, to ensure that changes have not propagated
unintended side effects ”.
Regression Approaches
 Manual testing :- By re-executing a subset of all test cases.

 Automated Capture/Playback tools :- Capture the operation of
an existing version of a system under test; the recorded
operations can then be played back automatically on a later version of
the system, and any differences in behavior are reported.

 Regression test suite :-
  A representative sample of tests that exercise all software
functions.
  Additional tests that focus on functions that are affected by the
change.
  Tests that focus on components that have been changed.
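The capture/playback approach above can be illustrated in miniature: previously recorded input/expected-output pairs are replayed against a new version, and any behavior differences are reported. The function and the recorded cases are invented for the example.

```python
# Recorded cases from testing the previous version: (input, expected output).
recorded_cases = [
    ({"qty": 2, "price": 5}, 10),
    ({"qty": 0, "price": 5}, 0),
    ({"qty": 3, "price": 4}, 12),
]

def order_total_v2(order):
    """New version after a hypothetical bug fix; regression tests
    check that the old behavior still holds."""
    return order["qty"] * order["price"]

def run_regression(func, cases):
    """Replay recorded cases and collect any behavior differences."""
    failures = []
    for inputs, expected in cases:
        actual = func(inputs)
        if actual != expected:
            failures.append((inputs, expected, actual))
    return failures

# An empty failure list means no unintended side effects were detected.
assert run_regression(order_total_v2, recorded_cases) == []
```

Commercial capture/playback tools do the same thing at the level of UI operations and screen output rather than function calls.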
Types of regression testing
 When test teams or customers begin using a product, they
report defects.
 These defects are examined by a developer, who makes
individual defect fixes.
 The developers then do appropriate unit testing and check the
defect fixes into a Configuration Management system.
 The source code for the complete product is then compiled, and
these defect fixes, along with the existing features, get merged
into the build.
 Thus a build is “an aggregation of the defect fixes and features that
are present in the product”.

 Types :-
 Regular Regression Testing
 Final Regression Testing
Types of regression testing
 Regular Regression Testing :-
  It is done between test cycles to ensure that the
defect fixes that have been made, and the functionality
that was working in the earlier test cycles,
continue to work.
Types of regression testing
 Final Regression Testing :-
  It is done to validate the final build before release.
  The Configuration Management engineer delivers the final
build and other contents exactly as they would go to the customer.
  The final regression test cycle is conducted for a specific
duration, which is mutually agreed between the
development and testing teams.
  A "final regression test" is done to validate the gold
master build, and "regression testing" is done to validate
the product and failed test cases between system test cycles.
Selecting test cases for regression testing
 It was found from industry data that a good number of the defects reported by
customers were due to last-minute bug fixes creating side effects; hence
selecting the test cases for regression testing is an art, and not an
easy one.
 
 The selection of test cases for regression testing
  Requires knowledge of the bug fixes and how they affect the system
 Includes the area of frequent defects
 Includes the area which has undergone many/recent code changes
 Includes the area which is highly visible to the users
 Includes the core features of the product which are mandatory requirements
of the customer
 
 Selection of test cases for regression testing depends more on the criticality
of the bug fixes than on the criticality of the defect itself. A minor defect can result
in a major side effect, and a bug fix for an extreme defect can have no, or
just a minor, side effect. So the test engineer needs to balance these
aspects when selecting the test cases for regression testing.
Concluding the results of a regression testing
 Regression testing uses only one build for testing (if not, it is
strongly recommended). It is expected that 100% of those test
cases pass using the same build. In situations where the pass %
is not 100, the test manager can look at the previous results of the
test case to conclude the expected result:

a. If the result of a particular test case was PASS using
the previous builds and FAIL in the current build, then the regression
failed. We need to get a new build and start the testing from
scratch after resetting the test cases.

b. If the result of a particular test case was a FAIL
using the previous builds and a PASS in the current build, then it
is reasonable to assume the bug fixes worked.
Concluding the results of a regression testing
c. If the result of a particular test case was a FAIL using the previous
builds and a FAIL in the current build, and if there are no bug fixes for
this particular test case, it may mean that the result of this test case
shouldn't be considered for the pass %. This may also mean that such
test cases shouldn't be selected for regression.

d. If the result of a particular test case is FAIL using the previous builds
but works with a documented workaround, then

a. if you are satisfied with the workaround, it should be
considered a PASS for both the system test cycle and the regression test
cycle.
b. If you are not satisfied with the workaround, it should
be considered a FAIL for the system test cycle but can be considered a
PASS for the regression test cycle.
Concluding the results of a regression testing
Current result from regression | Previous result(s) | Conclusion | Remarks
FAIL | PASS | FAIL | Need to improve the regression process and code reviews
PASS | FAIL | PASS | This is the expected result of a good regression, showing that bug fixes work properly
FAIL | FAIL | FAIL | Need to analyze why bug fixes are not working. “Is it a wrong fix?”
PASS (with workaround) | FAIL | Analyze the workaround and, if satisfied, mark the result as PASS | Workarounds also need a good review, as workarounds can also create side effects
PASS | PASS | PASS | This pattern of results
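The decision table above (ignoring the workaround case, which needs human judgment) can be expressed as a small lookup, here sketched as a hypothetical helper:

```python
def conclude(current, previous):
    """Conclude a regression test case's result from its current
    result and its result on the previous build(s)."""
    table = {
        ("FAIL", "PASS"): "FAIL",   # new build broke it: improve regression process
        ("PASS", "FAIL"): "PASS",   # the expected sign that a bug fix worked
        ("FAIL", "FAIL"): "FAIL",   # analyze why the fix is not working
        ("PASS", "PASS"): "PASS",
    }
    return table[(current, previous)]

assert conclude("PASS", "FAIL") == "PASS"
assert conclude("FAIL", "PASS") == "FAIL"
```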
Acceptance Testing
Acceptance Testing
 Acceptance testing is often the final step before rolling out the
application. It is performed to determine whether the system
meets user requirements or not.
 Usually the end users, who will be using the application, test
the application before ‘accepting’ it.
 This type of testing gives the end users the confidence that the
application being delivered to them meets their requirements.
 Acceptance tests can range from an informal “test drive” to
planned and systematically executed series of tests.
 With respect to the website or application produced, the clients
or end users “review” and “test” the application for any
nonconformity with the specifications or any updates which are not
included in the original specifications.
 The customer may write the acceptance test criteria and
request the organization to execute these, or the organization
may produce the acceptance testing criteria, which are to be
approved by the customer.
Acceptance Testing
 Acceptance Criteria :-
 Product acceptance :-
  During the requirements phase, each requirement is associated
with acceptance criteria.
  Whenever there are changes to requirements, the acceptance
criteria are accordingly modified and maintained.
  While accepting the product, end users execute all existing test
cases to determine whether they meet the requirements.
Acceptance Testing
Procedure acceptance :-
 Acceptance criteria can be defined based on the procedures
followed for delivery. Some examples of acceptance
criteria can be :
- User manuals, administration and troubleshooting
documentation should be part of the release.
- Along with the executable file(s), the source code of the
product should be delivered on a CD.
- A minimum of 10 to 15 employees should be trained on the
product usage prior to deployment.
Acceptance Testing

 Service Level Agreements :-
  Service Level Agreements can become part of the acceptance
criteria.
  A Service Level Agreement is a document or a contract signed
by two parties: the product organization and the customer.
  Example contract items related to defects can be :-
- All major defects that come up during the first six months of
deployment need to be fixed free of cost.
- All major defects are to be fixed within 48 hours of reporting.
Types of Acceptance Testing
 Alpha Test :-
  A customer conducts it at the developer's site.
  It is the first test of newly developed hardware or software
in a laboratory setting, in the presence of the developer.
  Errors and usage problems are recorded.
  “Alpha tests are conducted in a controlled environment”
  When the first round of bugs has been fixed, the product goes
into beta test.
Types of Acceptance Testing
 Beta Test :-
  It is conducted at one or more customer sites by the end users of the software.
  Generally, the developer is not present.
  It is a “live” application of the software in an environment that cannot be
controlled by the developer.
  The customer records all the problems (real or imagined) that are
encountered during beta testing and reports these to the developer at regular
intervals.
  As a result of the problems reported during beta tests, software engineers
make modifications and then release the software product to the entire
customer base.
Positive and Negative Testing
 Positive testing tries to prove that a given product does what it is
supposed to do.

 When a test case verifies the requirements of the product against a set of
expected outputs, it is called a “ positive test case ”.

 A product “delivering an error when it is expected to give an error” is
also positive testing.

 Negative testing is performed to ensure that the system is able to
handle inconsistent information.

 It should be applied in all types of testing, as it identifies how a system
responds to incorrect or inappropriate information being entered
into a text or numeric field.

 The purpose of negative testing is to try to break the system.
Positive and Negative Testing
 Negative testing covers scenarios for which the product is not
designed and coded.

 When a test case does not verify the requirements of the product
against a set of expected outputs, it is called a negative test case.

 A product “not delivering an error when it should” or “ delivering
an error when it should not ” is also negative testing.

 For e.g.,
if the program is supposed to give an error when the person
types “101” into a field that should be between “1” and “100”,
then it is a positive test if the error shows up. If, however, the
application does not give an error when the user types “101”,
then you have a negative test.
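The 1-to-100 field example above can be sketched as code. `validate()` is a hypothetical stand-in for the field's validation logic, not a real API:

```python
def validate(value):
    """Accept integers 1..100; anything else is an error."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return "error"
    return "ok" if 1 <= n <= 100 else "error"

# Positive tests: the product does what it should, including
# showing an error when it is expected to show one.
assert validate("50") == "ok"        # valid value is accepted
assert validate("101") == "error"    # the expected error is shown

# Negative tests: feed inconsistent input and try to break the system.
assert validate("") == "error"       # empty field
assert validate("abc") == "error"    # wrong data type
assert validate(None) == "error"     # missing value entirely
```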
Positive and Negative Testing

 To summarize, positive and negative testing can be defined as

 Positive Testing = (Not showing an error when not supposed to)
+ (Showing an error when supposed to)

 Negative Testing = (Showing an error when not supposed to)
+ (Not showing an error when supposed to)
Who Tests Software?
 Programmers
 Unit testing
 Testing buddies can test other’s programmer’s code
 Users
 Usability and acceptance testing
 Volunteers are frequently used to test beta versions
 Quality assurance personnel
 All testing types except unit and acceptance
 Develop test plans and identify needed changes
System Testing
 Once the entire system has been built, it has to be tested against
the “System Specification” to check if it delivers the features required.
 System testing verifies the entire product, after integrating all
software and hardware components, and validates it against the
original project requirements.
 A testing group performs system testing from the perspective of
the end user, on different configurations or setups.
 It can begin whenever the product has sufficient functionality to
execute some of the tests, or after unit and integration testing are
complete.
 System testing is a series of different tests whose primary purpose is
to fully exercise the computer-based system.
 System testing comprises two types :
  Functional system testing
  Non-functional system testing
Types of System Testing
Types of System Testing
 Functional testing involves testing a product’s functionality and
features.
 It is conducted for functional requirements.
 In other words,
 It is the process of validating an application or web site to
verify that each feature complies with its specifications and correctly
performs all its required functions.
Part II

Principles of Software Testing for
Testers
Module 0: About This Course
Course Objectives
 After completing this course, you will be a more
knowledgeable software tester. You will be able to
better:
 Understand and describe the basic concepts of
functional (black box) software testing.
 Identify a number of test styles and techniques and
assess their usefulness in your context.
 Understand the basic application of techniques used to
identify useful ideas for tests.
 Help determine the mission and communicate the status
of your testing with the rest of your project team.
 Characterize a good bug report, peer-review the reports
of your colleagues, and improve your own report writing.
 Understand where key testing concepts apply within the
context of the Rational Unified Process.
Course Outline
0 – About This Course
1 – Software Engineering Practices
2 – Core Concepts of Software Testing
3 – The RUP Testing Discipline
4 – Define Evaluation Mission
5 – Test and Evaluate
6 – Analyze Test Failure
7 – Achieve Acceptable Mission
8 – The RUP Workflow As Context
Principles of Software Testing for
Testers
Module 1: Software Engineering Practices
(Some things Testers should know about them)
Objectives
 Identify some common software
development problems.
 Identify six software engineering practices
for addressing common software
development problems.
 Discuss how a software engineering
process provides supporting context for
software engineering practices.
Symptoms of Software Development Problems
 User or business needs not met
 Requirements churn
 Modules don’t integrate
 Hard to maintain
 Late discovery of flaws
 Poor quality or poor user experience
 Poor performance under load
 No coordinated team effort
 Build-and-release issues
Trace Symptoms to Root Causes
Symptoms               Root Causes                   Software Engineering Practices
Needs not met          Incorrect requirements        Develop Iteratively
Requirements churn     Ambiguous communications      Manage Requirements
Modules don’t fit      Brittle architectures         Use Component Architectures
Hard to maintain       Overwhelming complexity       Model Visually (UML)
Late discovery         Undetected inconsistencies    Continuously Verify Quality
Poor quality           Insufficient testing          Manage Change
Poor performance       Subjective assessment
Colliding developers   Waterfall development
Build-and-release      Uncontrolled change
                       Insufficient automation
Software Engineering Practices Reinforce Each Other
Software Engineering Practices
 Develop Iteratively
- Ensures users are involved as requirements evolve
 Manage Requirements
- Validates architectural decisions early on
 Use Component Architectures
- Addresses complexity of design/implementation incrementally
 Model Visually (UML)
- Measures quality early and often
 Continuously Verify Quality
- Evolves baselines incrementally
 Manage Change
Principles of Software Testing for
Testers
Module 2: Core Concepts of Software
Testing
Objectives
 Introduce foundation topics of functional
testing
 Provide stakeholder-centric visions of
quality and defect
 Explain test ideas
 Introduce test matrices
Module 2 Content Outline
Definitions
 Defining functional testing
 Definitions of quality
 A pragmatic definition of defect
 Dimensions of quality
 Test ideas
 Test idea catalogs
 Test matrices
Functional Testing
 In this course, we adopt a common, broad
current meaning for functional testing. It is
 Black box
 Interested in any externally visible or
measurable attributes of the software other than
performance.
 In functional testing, we think of the
program as a collection of functions
 We test it in terms of its inputs and outputs.
How Some Experts Have Defined Quality
 Fitness for use (Dr. Joseph M. Juran)
 The totality of features and characteristics of a
product that bear on its ability to satisfy a given
need (American Society for Quality)
 Conformance with requirements (Philip Crosby)
 The total composite product and service
characteristics of marketing, engineering,
manufacturing and maintenance through which
the product and service in use will meet
expectations of the customer (Armand V.
Feigenbaum)
 Note absence of “conforms to
specifications.”
Quality As Satisfiers and Dissatisfiers
 Joseph Juran distinguishes between
Customer Satisfiers and Dissatisfiers as key
dimensions of quality:
 Customer Satisfiers
• the right features
• adequate instruction
 Dissatisfiers
• unreliable
• hard to use
• too slow
• incompatible with the customer’s equipment
A Working Definition of Quality

Quality is value to some person.
---- Gerald M. Weinberg
Change Requests and Quality
 A “defect” – in the eyes of a project
stakeholder – can include anything about
the program that causes the program to
have lower value.

 It’s appropriate to report any aspect of the


software that, in your opinion (or in the
opinion of a stakeholder whose interests
you advocate) causes the program to have
lower value.
Dimensions of Quality: FURPS
 Functionality
  e.g., Test the accurate workings of each usage scenario
 Usability
  e.g., Test the application from the perspective of convenience to the end user
 Reliability
  e.g., Test that the application behaves consistently and predictably
 Performance
  e.g., Test online response under average and peak loading
 Supportability
  e.g., Test the ability to maintain and support the application under production use
A Broader Definition of Dimensions of Quality
 Accessibility  Maintainability
 Capability  Performance
 Compatibility  Portability
 Concurrency
 Reliability
 Conformance to
standards  Scalability
 Efficiency  Security
 Installability and  Supportability
uninstallability  Testability
 Localizability  Usability

Collectively, these are often called Qualities of Service,


Nonfunctional Requirements, Attributes, or simply the -ilities
Test Ideas
 A test idea is a brief statement that
identifies a test that might be useful.
 A test idea differs from a test case, in that
the test idea contains no specification of the
test workings, only the essence of the idea
behind the test.
 Test ideas are generators for test cases:
potential test cases are derived from a test
ideas list.
 A key question for the tester or test analyst
is which ones are the ones worth trying.
Exercise 2.3: Brainstorm Test Ideas (1/2)
 We’re about to brainstorm, so let’s review…
 Ground Rules for Brainstorming
 The goal is to get lots of ideas. You brainstorm together
to discover categories of possible tests—good ideas
that you can refine later.
 There are more great ideas out there than you think.
 Don’t criticize others’ contributions.
 Jokes are OK, and are often valuable.
 Work later, alone or in a much smaller group, to
eliminate redundancy, cut bad ideas, and refine and
optimize the specific tests.
 Often, these meetings have a facilitator (who runs the
meeting) and a recorder (who writes good stuff onto
flipcharts). These two keep their opinions to themselves.
Exercise 2.3: Brainstorm Test Ideas (2/2)
 A field can accept integer values between
20 and 50.
 What tests should you try?
A Test Ideas List for Integer-Input Tests
 Common answers to the exercise would include:

Test        Why it’s interesting        Expected result
20          Smallest valid value        Accepts it
19          Smallest - 1                Reject, error msg
0           0 is always interesting     Reject, error msg
Blank       Empty field, what’s it do?  Reject? Ignore?
49          Valid value                 Accepts it
50          Largest valid value         Accepts it
51          Largest + 1                 Reject, error msg
-1          Negative number             Reject, error msg
4294967296  2^32, overflow integer?     Reject, error msg
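The test-ideas list above can be expressed as a parameterized check against a hypothetical field that accepts integers 20..50 (the `accepts` function is invented for the example):

```python
def accepts(value):
    """Hypothetical field validation: integers 20..50 only."""
    try:
        n = int(value)
    except (TypeError, ValueError):
        return False
    return 20 <= n <= 50

# Each tuple is (input, should the field accept it?).
test_ideas = [
    ("20", True),            # smallest valid value
    ("19", False),           # smallest - 1
    ("0", False),            # 0 is always interesting
    ("", False),             # blank / empty field
    ("49", True),            # valid value
    ("50", True),            # largest valid value
    ("51", False),           # largest + 1
    ("-1", False),           # negative number
    ("4294967296", False),   # 2^32 overflow probe (Python ints don't
                             # overflow, so it simply fails the range check)
]

for value, expected in test_ideas:
    assert accepts(value) == expected, value
```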
Discussion 2.4: Where Do Test Ideas Come From?
 Where would you derive Test Ideas Lists?
 Models
 Specifications
 Customer complaints
 Brainstorm sessions among colleagues
A Catalog of Test Ideas for Integer-Input tests
 Nothing  Non-digits
 Valid value  Wrong data type (e.g. decimal
 At LB of value into integer)
 At UB of value  Expressions
 At LB of value - 1  Space
 At UB of value + 1  Non-printing char (e.g.,
 Outside of LB of value Ctrl+char)
 Outside of UB of value  DOS filename reserved chars
 0 (e.g., "\ * . :")
 Negative  Upper ASCII (128-254)
 At LB number of digits or chars  Upper case chars
 At UB number of digits or chars  Lower case chars
 Empty field (clear the default  Modifiers (e.g., Ctrl, Alt, Shift-
value)
Ctrl, etc.)
 Outside of UB number of digits
or chars  Function key (F2, F3, F4, etc.)
The Test-Ideas Catalog
 A test-ideas catalog is a list of related test
ideas that are usable under many
circumstances.
 For example, the test ideas for numeric input
fields can be catalogued together and used for
any numeric input field.
 In many situations, these catalogs are
sufficient test documentation. That is, an
experienced tester can often proceed with
testing directly from these without creating
documented test cases.
Apply a Test Ideas Catalog Using a Test Matrix
[Matrix: each row is an input field (“Field name”); each column is a test idea from the catalog]
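A test matrix like the one sketched above can be kept as a simple grid of fields against catalog ideas; a tester marks cells as they are exercised. Field and idea names here are hypothetical.

```python
# Rows: input fields under test.  Columns: ideas from the catalog.
fields = ["quantity", "zip_code", "age"]
ideas = ["nothing", "at LB", "at UB", "LB - 1", "UB + 1", "non-digits"]

# Every (field, idea) cell starts out unexercised.
matrix = {f: {i: False for i in ideas} for f in fields}

# Mark cells as testing proceeds.
matrix["quantity"]["at LB"] = True
matrix["quantity"]["UB + 1"] = True

# The uncovered cells show what remains to be tested.
uncovered = [(f, i) for f in fields
             for i, done in matrix[f].items() if not done]
assert ("quantity", "at LB") not in uncovered
```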
Review: Core Concepts of Software Testing
 What is Quality?
 Who are the Stakeholders?
 What is a Defect?
 What are Dimensions of Quality?
 What are Test Ideas?
 Where are Test Ideas useful?
 Give some examples of a Test Ideas.
 Explain how a catalog of Test Ideas could
be applied to a Test Matrix.
Principles of Software Testing for
Testers
Module 4: Define Evaluation Mission
So? Purpose of Testing?
 The typical testing group has two key
priorities.
 Find the bugs (preferably in priority order).
 Assess the condition of the whole product
(as a user will see it).
 Sometimes, these conflict
 The mission of assessment is the underlying
reason for testing, from management’s
viewpoint. But if you aren’t hammering hard on
the program, you can miss key risks.
Missions of Test Groups Can Vary
 Find defects
 Maximize bug count
 Block premature product releases
 Help managers make ship / no-ship decisions
 Assess quality
 Minimize technical support costs
 Conform to regulations
 Minimize safety-related lawsuit risk
 Assess conformance to specification
 Find safe scenarios for use of the product (find ways to
get it to work, in spite of the bugs)
 Verify correctness of the product
 Assure quality
A Different Take on Mission: Public vs. Private Bugs
 A programmer’s public bug rate includes all
bugs left in the code at check-in.
 A programmer’s private bug rate includes
all the bugs that are produced, including the
ones fixed before check-in.
 Estimates of private bug rates have ranged
from 15 to 150 bugs per 100 statements.
 What does this tell us about our task?
Defining the Test Approach
 The test approach (or “testing strategy”)
specifies the techniques that will be used to
accomplish the test mission.
 The test approach also specifies how the
techniques will be used.
 A good test approach is:
 Diversified
 Risk-focused
 Product-specific
 Practical
 Defensible
Heuristics for Evaluating Testing Approach
 James Bach collected a series of heuristics
for evaluating your test approach. For
example, he says:
 Testing should be optimized to find important
problems fast, rather than attempting to find all
problems with equal urgency.
 Please note that these are heuristics - they
won't always be the best choice for your
context. But in different contexts, you'll find
different ones very useful.
What Test Documentation Should You Use?
 Test planning standards and templates
 Examples
 Some benefits and costs of using IEEE-829
standard based templates
 When are these appropriate?
 Thinking about your requirements for test
documentation
 Requirements considerations
 Questions to elicit information about test
documentation requirements for your project
Write a Purpose Statement for Test Documentation
 Try to describe your core documentation
requirements in one sentence that doesn’t
have more than three components.
 Examples:
 The test documentation set will primarily
support our efforts to find bugs in this version,
to delegate work, and to track status.
 The test documentation set will support ongoing
product and test maintenance over at least 10
years, will provide training material for new
group members, and will create archives
suitable for regulatory or litigation use.
Review: Define Evaluation Mission
 What is a Test Mission?
 What is your Test Mission?
 What makes a good Test Approach (Test
Strategy)?
 What is a Test Documentation Mission?
 What is your Test Documentation Goal?
Principles of Software Testing for
Testers
Module 5: Test & Evaluate
Test and Evaluate – Part One: Test
 In this module, we drill into
Test and Evaluate
 This addresses the “How?”
question:
 How will you test those
things?
Test and Evaluate – Part One: Test
 This module focuses
on the activity
Implement Test
 Earlier, we covered
Test-Idea Lists, which
are input here
 In the next module,
we’ll cover Analyze
Test Failures, the
second half of Test
and Evaluate
Review: Defining the Test Approach
 In Module 4, we covered Test Approach
 A good test approach is:
 Diversified
 Risk-focused
 Product-specific
 Practical
 Defensible
 The techniques you apply should follow
your test approach
Discussion Exercise 5.1: Test Techniques
 There are as many as 200 published testing
techniques. Many of the ideas are
overlapping, but there are common themes.
 Similar sounding terms often mean different
things, e.g.:
 User testing
 Usability testing
 User interface testing
 What are the differences among these
techniques?
Dimensions of Test Techniques
 Think of the testing you do in terms of five
dimensions:
 Testers: who does the testing.
 Coverage: what gets tested.
 Potential problems: why you're testing (what
risk you're testing for).
 Activities: how you test.
 Evaluation: how to tell whether the test passed
or failed.
 Test techniques often focus on one or two
of these, leaving the rest to the skill and
imagination of the tester.
Test Techniques—Dominant Test Approaches
 Of the 200+ published Functional Testing techniques,
there are ten basic themes.
 They capture the techniques in actual practice.
 In this course, we call them:
 Function testing
 Equivalence analysis
 Specification-based testing
 Risk-based testing
 Stress testing
 Regression testing
 Exploratory testing
 User testing
 Scenario testing
 Stochastic or Random testing
“So Which Technique Is the Best?”
 Each has strengths and weaknesses
 Think in terms of complement
 There is no “one true way”
 Mixing techniques can improve coverage
[Figure: Techniques A-H arranged in a circle around the five dimensions: Testers, Coverage, Potential problems, Activities, Evaluation]
Apply Techniques According to the LifeCycle
 Test Approach changes over the project
 Some techniques work well in early phases;
others in later ones
 Align the techniques to iteration objectives
[Diagram: as the project moves from Inception through Elaboration and Construction to Transition, testing shifts from a limited set of focused tests to many varied tests; from a few components of software under test to a large system under test; from a simple test environment to a complex one; and from a focus on architectural & requirement risks to a focus on deployment risks]
Module 5 Agenda
 Overview of the workflow: Test and Evaluate
 Defining test techniques
 Individual techniques
 Function testing
 Equivalence analysis
 Specification-based testing
 Risk-based testing
 Stress testing
 Regression testing
 Exploratory testing
 User testing
 Scenario testing
 Stochastic or Random testing
 Using techniques together
At a Glance: Function Testing
Tag line Black box unit testing
Objective Test each function thoroughly, one at a time.
Testers Any
Coverage Each function and user-visible variable
Potential problems A function does not work in isolation
Activities Whatever works
Evaluation Whatever works
Complexity Simple
Harshness Varies
SUT readiness Any stage
Strengths & Weaknesses: Function Testing
 Representative cases
 Spreadsheet, test each item in isolation.
 Database, test each report in isolation
 Strengths
 Thorough analysis of each item tested
 Easy to do as each function is implemented
 Blind spots
 Misses interactions
 Misses exploration of the benefits offered by the
program.
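As a small illustration (not from the course materials), function testing can be sketched as black-box unit tests that exercise one function at a time; `compute_discount` below is a hypothetical unit under test standing in for any user-visible function:

```python
# A minimal sketch of function testing: exercise a single function
# thoroughly and in isolation, ignoring its interactions with the
# rest of the system. The discount rules here are invented.

def compute_discount(order_total: float) -> float:
    """Return the discount rate for a given order total."""
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

def test_compute_discount_in_isolation() -> None:
    # Cover each user-visible behavior of this one function.
    assert compute_discount(0) == 0.0
    assert compute_discount(499.99) == 0.0
    assert compute_discount(500) == 0.05
    assert compute_discount(999.99) == 0.05
    assert compute_discount(1000) == 0.10

test_compute_discount_in_isolation()
```

Note the blind spot the slide mentions: every assertion here passes while saying nothing about how `compute_discount` behaves when combined with other functions.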
At a Glance: Equivalence Analysis (1/2)
Tag line Partitioning, boundary analysis, domain testing
Objective There are too many test cases to run. Use a stratified sampling strategy to select a few test cases from a huge population.
Testers Any
Coverage All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables
Potential problems Data, configuration, error handling
At a Glance: Equivalence Analysis (2/2)
Activities Divide the set of possible values of a field into subsets, pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a “best representative” for each subset, and to run tests with these representatives. Advanced approach: combine tests of several “best representatives”; there are several approaches to choosing an optimal small set of combinations.
Evaluation Determined by the data
Complexity Simple
Harshness Designed to discover harsh single-variable tests and harsh combinations of a few variables
SUT readiness Any stage
Strengths & Weaknesses: Equivalence Analysis
 Representative cases
 Equivalence analysis of a simple numeric field.
 Printer compatibility testing (multidimensional variable,
doesn’t map to a simple numeric field, but stratified
sampling is essential)
 Strengths
 Find highest probability errors with a relatively small set
of tests.
 Intuitively clear approach, generalizes well
 Blind spots
 Errors that are not at boundaries or in obvious special
cases.
 The actual sets of possible values are often unknowable.
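A minimal sketch of equivalence analysis, assuming a hypothetical numeric field that accepts 1..100: partition the values into valid and invalid subsets, then pick boundary and just-outside representatives for each:

```python
# Equivalence partitioning with boundary-value selection for a numeric
# field. The valid range [1, 100] and the `accepts` stand-in for the
# system under test are both illustrative assumptions.

def representatives(lo: int, hi: int) -> list[int]:
    """Pick boundary and just-outside representatives for [lo, hi]."""
    return [lo - 1,          # invalid: just below the valid partition
            lo,              # valid: lower boundary
            lo + 1,          # valid: just inside the lower boundary
            (lo + hi) // 2,  # valid: typical interior value
            hi - 1,          # valid: just inside the upper boundary
            hi,              # valid: upper boundary
            hi + 1]          # invalid: just above the valid partition

def accepts(value: int) -> bool:
    """Stand-in system under test: field accepts 1..100."""
    return 1 <= value <= 100

# Run only the chosen representatives instead of all possible values.
for v in representatives(1, 100):
    expected = 1 <= v <= 100
    assert accepts(v) == expected, f"field mishandled boundary value {v}"
```

Seven tests stand in for the whole population; the blind spot noted above remains: a bug at a non-boundary value such as 37 would slip through this sample.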
Optional Exercise 5.2: GUI Equivalence Analysis
 Pick an app that you know and some dialogs
 MS Word and its Print, Page setup, Font format dialogs
 Select a dialog
 Identify each field, and for each field
• What is the type of the field (integer, real, string, ...)?
• List the range of entries that are “valid” for the field
• Partition the field and identify boundary conditions
• List the entries that are almost too extreme and too
extreme for the field
• List a few test cases for the field and explain why the
values you chose are the most powerful representatives of
their sets (for showing a bug)
• Identify any constraints imposed on this field by other
fields
At a Glance: Specification-Based Testing
Tag line Verify every claim
Objective Check conformance with every statement in
every spec, requirements document, etc.
Testers Any
Coverage Documented reqts, features, etc.
Potential problems Mismatch of implementation to spec
Activities Write & execute tests based on the spec’s.
Review and manage docs & traceability
Evaluation Does behavior match the spec?
Complexity Depends on the spec
Harshness Depends on the spec
SUT readiness As soon as modules are available
Strengths & Weaknesses: Spec-Based Testing
 Representative cases
 Traceability matrix, tracks test cases associated with each
specification item.
 User documentation testing
 Strengths
 Critical defense against warranty claims, fraud charges,
loss of credibility with customers.
 Effective for managing scope / expectations of regulatory-driven testing
 Reduces support costs / customer complaints by ensuring
that no false or misleading representations are made to
customers.
 Blind spots
 Any issues not in the specs or treated badly in the specs
/documentation.
Traceability Tool for Specification-Based Testing
The Traceability Matrix
Stmt 1 Stmt 2 Stmt 3 Stmt 4 Stmt 5
Test 1 X X X
Test 2 X X
Test 3 X X X
Test 4 X X
Test 5 X X
Test 6 X X
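A traceability matrix like the one above can be kept as a simple mapping from test cases to the specification statements they cover; the tests, statement IDs, and coverage marks below are illustrative, not taken from the matrix on the slide:

```python
# A sketch of a traceability matrix as test-case -> covered-statements.
# Queries over it answer the two questions the matrix exists for:
# which spec items lack tests, and which tests touch a given item.

coverage = {
    "Test 1": {"Stmt 1", "Stmt 2", "Stmt 3"},
    "Test 2": {"Stmt 1", "Stmt 4"},
    "Test 3": {"Stmt 2", "Stmt 3"},
}

all_statements = {f"Stmt {i}" for i in range(1, 6)}
covered = set().union(*coverage.values())
uncovered = all_statements - covered           # spec items with no test yet
tests_for = {s: [t for t, stmts in coverage.items() if s in stmts]
             for s in sorted(all_statements)}  # reverse lookup per statement

print("Uncovered statements:", sorted(uncovered))
print("Tests touching Stmt 2:", tests_for["Stmt 2"])
```

In this invented data set the gap query reveals that Stmt 5 has no test, which is exactly the kind of hole a traceability matrix is meant to expose.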
Optional Exercise 5.5: What “Specs” Can You Use?
 Challenge:
 Getting information in the absence of a spec
 What substitutes are available?
 Example:
 The user manual – think of this as a commercial
warranty for what your product does.
 What other “specs” can you/should you be
using to test?
Exercise 5.5—Specification-Based Testing
 Here are some ideas for sources that you can consult when specifications are incomplete or incorrect.
 Software change memos that come with new
builds of the program
 User manual draft (and previous version’s
manual)
 Product literature
 Published style guide and UI standards
Definitions—Risk-Based Testing
 Three key meanings:
1. Find errors (risk-based approach to the technical
tasks of testing)
2. Manage the process of finding errors (risk-based
test management)
3. Manage the testing project and the risk posed by
(and to) testing in its relationship to the overall
project (risk-based project management)
 We’ll look primarily at risk-based testing (#1), proceeding later to risk-based test management.
 The project management risks are very
important, but out of scope for this class.
At a Glance: Risk-Based Testing
Tag line Find big bugs first
Objective Define, prioritize, refine tests in terms of
the relative risk of issues we could test for
Testers Any
Coverage By identified risk
Potential problems Identifiable risks
Activities Use qualities of service, risk heuristics and
bug patterns to identify risks
Evaluation Varies
Complexity Any
Harshness Harsh
SUT readiness Any stage
Strengths & Weaknesses: Risk-Based Testing
 Representative cases
 Equivalence class analysis, reformulated.
 Test in order of frequency of use.
 Stress tests, error handling tests, security tests.
 Sample from predicted-bugs list.
 Strengths
 Optimal prioritization (if we get the risk list right)
 High power tests
 Blind spots
 Risks not identified or that are surprisingly more likely.
 Some “risk-driven” testers seem to operate subjectively.
• How will I know what coverage I’ve reached?
• Do I know that I haven’t missed something critical?
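One common way to make risk-based prioritization concrete, sketched here with invented functional areas and scores, is to rate each identified risk for likelihood and impact and test the highest-exposure areas first:

```python
# A sketch of risk-based test prioritization. Each (area, likelihood,
# impact) triple is an illustrative estimate on a 1-5 scale; exposure
# is likelihood * impact, and testing proceeds in exposure order.

risks = [
    ("checkout payment", 5, 5),   # frequent changes, money on the line
    ("search relevance", 3, 2),
    ("wishlist sharing", 2, 1),
    ("order history",    2, 4),   # rare failures, but hard to recover
]

prioritized = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for area, likelihood, impact in prioritized:
    print(f"{area:18s} exposure={likelihood * impact}")
```

The ranking is only as good as the risk list and the estimates, which is the blind spot noted above: a risk you never identified gets exposure zero by default.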
Optional Exercise 5.6: Risk-Based Testing
 You are testing Amazon.com
(Or pick another familiar application)
 First brainstorm:
 What are the functional areas of the app?
 Then evaluate risks:
• What are some of the ways that each of these
could fail?
• How likely do you think they are to fail? Why?
• How serious would each of the failure types be?
At a Glance: Stress Testing
Tag line Overwhelm the product
Objective Learn what failure at extremes tells about changes needed in the program’s handling of normal cases
Testers Specialists
Coverage Limited
Potential problems Error handling weaknesses
Activities Specialized
Evaluation Varies
Complexity Varies
Harshness Extreme
SUT readiness Late stage
Strengths & Weaknesses: Stress Testing
 Representative cases
 Buffer overflow bugs
 High volumes of data, device connections, long
transaction chains
 Low memory conditions, device failures, viruses, other
crises
 Extreme load
 Strengths
 Expose weaknesses that will arise in the field.
 Expose security risks.
 Blind spots
 Weaknesses that are not made more visible by stress.
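A minimal stress-test sketch, assuming a hypothetical `parse_csv_line` as the unit under test: overwhelm it with extreme volume and check that it neither crashes nor silently drops data:

```python
# Stress testing in miniature: drive the unit under test far beyond
# normal load. `parse_csv_line` is an invented stand-in for any
# input-handling code.

def parse_csv_line(line: str) -> list[str]:
    """Naive CSV splitter (no quoting), the unit under stress."""
    return line.split(",")

# Extreme input: a single line carrying one million fields (~10 MB).
huge = ",".join("x" * 10 for _ in range(1_000_000))
fields = parse_csv_line(huge)

# Survived the volume: no exception, no truncation, no corruption.
assert len(fields) == 1_000_000
assert all(f == "x" * 10 for f in fields)
```

A real stress campaign would push further (low memory, device failures, sustained load), but the shape is the same: the failure mode at the extreme hints at weaknesses in the normal-case handling.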
At a Glance: Regression Testing
Tag line Automated testing after changes
Objective Detect unforeseen consequences of change
Testers Varies
Coverage Varies
Potential problems Side effects of changes; unsuccessful bug fixes
Activities Create automated test suites and run against
every (major) build
Complexity Varies
Evaluation Varies
Harshness Varies
SUT readiness For unit – early; for GUI - late
Strengths & Weaknesses—Regression Testing
 Representative cases
 Bug regression, old fix regression, general functional
regression
 Automated GUI regression test suites
 Strengths
 Cheap to execute
 Configuration testing
 Regulator friendly
 Blind spots
 “Immunization curve”
 Anything not covered in the regression suite
 Cost of maintaining the regression suite
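Bug regression can be sketched as a suite of permanent automated tests, one per fixed defect, re-run against every build; the bug IDs and the `normalize_name` function below are hypothetical:

```python
# A sketch of a bug-regression suite: each fixed defect earns a test
# that stays in the suite forever, detecting unforeseen consequences
# of later changes. The defects and the unit under test are invented.

def normalize_name(name: str) -> str:
    """Collapse whitespace and title-case a person's name."""
    return " ".join(name.split()).title()

def test_bug_1042_collapses_internal_whitespace() -> None:
    # Regression for a (hypothetical) fixed defect: doubled spaces
    # once survived normalization.
    assert normalize_name("ada   lovelace") == "Ada Lovelace"

def test_bug_1107_handles_empty_string() -> None:
    # Regression for a (hypothetical) crash on empty input.
    assert normalize_name("") == ""

REGRESSION_SUITE = [
    test_bug_1042_collapses_internal_whitespace,
    test_bug_1107_handles_empty_string,
]

for test in REGRESSION_SUITE:   # run against every (major) build
    test()
```

The “immunization curve” blind spot shows up directly in this structure: once these tests pass, re-running them finds nothing new unless a change re-breaks exactly these behaviors.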
At a Glance: Exploratory Testing
Tag line Simultaneous learning, planning, and testing
Objective Simultaneously learn about the product and about the test strategies to reveal the product and its defects
Testers Explorers
Coverage Hard to assess
Potential problems Everything unforeseen by planned
testing techniques
Activities Learn, plan, and test at the same time
Evaluation Varies
Complexity Varies
Harshness Varies
SUT readiness Medium to late: use cases must work
Strengths & Weaknesses: Exploratory Testing
 Representative cases
 Skilled exploratory testing of the full product
 Rapid testing & emergency testing (including thrown-over-the-wall test-it-today)
 Troubleshooting / follow-up testing of defects.
 Strengths
 Customer-focused, risk-focused
 Responsive to changing circumstances
 Finds bugs that are otherwise missed
 Blind spots
 The less we know, the more we risk missing.
 Limited by each tester’s weaknesses (can mitigate this with
careful management)
 This is skilled work, juniors aren’t very good at it.
At a Glance: User Testing
Tag line Strive for realism; let’s try real humans (for a change)
Objective Identify failures in the overall human/machine/software system.
Testers Users
Coverage Very hard to measure
Potential problems Items that will be missed by anyone
other than an actual user
Activities Directed by user
Evaluation User’s assessment, with guidance
Complexity Varies
Harshness Limited
SUT readiness Late; has to be fully operable
Strengths & Weaknesses—User Testing
 Representative cases
 Beta testing
 In-house lab using a stratified sample of target market
 Usability testing
 Strengths
 Expose design issues
 Find areas with high error rates
 Can be monitored with flight recorders
 Can use in-house tests focus on controversial areas
 Blind spots
 Coverage not assured
 Weak test cases
 Beta test technical results are mixed
 Must distinguish marketing betas from technical betas
At a Glance: Scenario Testing
Tag line Instantiation of a use case; do something useful, interesting, and complex
Objective Challenging cases to reflect real use
Testers Any
Coverage Whatever stories touch
Potential problems Complex interactions that happen in real use by experienced users
Activities Interview stakeholders & write screenplays,
then implement tests
Evaluation Any
Complexity High
Harshness Varies
SUT readiness Late. Requires stable, integrated functionality.
Strengths & Weaknesses: Scenario Testing
 Representative cases
 Use cases, or sequences involving combinations of use
cases.
 Appraise product against business rules, customer data,
competitors’ output
 Hans Buwalda’s “soap opera testing.”
 Strengths
 Complex, realistic events. Can handle (help with)
situations that are too complex to model.
 Exposes failures that occur (develop) over time
 Blind spots
 Single function failures can make this test inefficient.
 Must think carefully to achieve good coverage.
At a Glance: Stochastic or Random Testing (1/2)
Tag line Monkey testing; high-volume testing with new cases all the time
Objective Have the computer create, execute, and evaluate huge numbers of tests. The individual tests are not all that powerful, nor all that compelling; the power of the approach lies in the large number of tests. These broaden the sample, and they may test the program over a long period of time, giving us insight into longer-term issues.
At a Glance: Stochastic or Random Testing (2/2)
Testers Machines
Coverage Broad but shallow. Problems with
stateful apps.
Potential problems Crashes and exceptions
Activities Focus on test generation
Evaluation Generic, state-based
Complexity Complex to generate, but individual
tests are simple
Harshness Weak individual tests, but huge
numbers of them
SUT readiness Any
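A minimal sketch of stochastic testing, using an invented stand-in system under test and a generic, state-based oracle: each individual test is weak, and the power comes from running thousands of them:

```python
# "Monkey" testing in miniature: the computer generates, executes, and
# evaluates many random tests. `sut_sort` is a hypothetical stand-in
# for the system under test; the oracle checks only generic properties.
import random
from collections import Counter

def sut_sort(values: list[int]) -> list[int]:
    """Stand-in system under test."""
    return sorted(values)

random.seed(0)                      # reproducible run of random tests
for _ in range(10_000):             # many weak tests; power is in volume
    data = [random.randint(-1000, 1000)
            for _ in range(random.randint(0, 50))]
    out = sut_sort(data)
    # Generic evaluation: no exception was raised, the output is
    # ordered, and it is a permutation of the input.
    assert all(a <= b for a, b in zip(out, out[1:]))
    assert Counter(out) == Counter(data)
```

This mirrors the "broad but shallow" coverage in the table above: crashes and broken invariants are caught, while subtler wrong-but-plausible outputs need a stronger oracle.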
Combining Techniques (Revisited)
 A test approach should be diversified
 Applying opposite techniques can improve
coverage
 Often one technique can extend another
[Diagram: Techniques A–H arranged in a wheel around the five dimensions: Testers, Coverage, Potential problems, Activities, Evaluation]
Applying Opposite Techniques to Boost Coverage
Contrast these two techniques
Regression
 Inputs: old test cases and analyses leading to new test cases
 Outputs: archival test cases, preferably well documented, and bug reports
 Better for: reuse across multi-version products

Exploration
 Inputs: models or other analyses that yield new tests
 Outputs: scribbles and bug reports
 Better for: finding new bugs; scouting new areas, risks, or ideas
Applying Complementary Techniques Together
 Regression testing alone suffers fatigue
 The bugs get fixed and new runs add little info
 Symptom of weak coverage
 Combine automation w/ suitable variance
 E.g. Risk-based equivalence analysis
 Coverage of the combination can beat the sum of the parts
[Diagram: overlapping circles: Equivalence, Risk-based, Regression]
How To Adopt New Techniques
1. Answer these questions:
 What techniques do you use in your test approach
now?
 What is its greatest shortcoming?
 What one technique could you add to make the
greatest improvement, consistent with a good test
approach:
• Risk-focused?
• Product-specific?
• Practical?
• Defensible?
2. Apply that additional technique until proficient
3. Iterate