Test Levels and Types

The document discusses different types and levels of software testing. It covers component testing, integration testing, system testing, and acceptance testing. It also describes risk-based testing, functional testing, non-functional testing, structural testing, and regression testing. The key points are that different types of testing target different aspects of software and occur at different stages of development. Risk-based testing prioritizes tests based on the potential risks and costs of failures.


Test Levels and Test Types

Basic Phases and Generic Types of Testing

Software Quality Assurance


Telerik Software Academy
http://academy.telerik.com
The Lecturers
 Snejina Lazarova
Project Manager
BI & Reporting Team

 Dimo Mitev
QA Architect
Backend Services Team

2
Table of Contents
 Test Levels
 Component Testing
 Integration Testing
 System Testing
 Acceptance Testing

3
Table of Contents (2)
 Test Types
 Risk-Based Testing
 Functional Testing
 Non-functional Testing
 Structural Testing
 Testing Related to Changes:
Re-testing and Regression Testing
 Maintenance Testing

4
Component Testing
Short Review
Main Terms
 Component testing
 Testing separate components of the software
 Software units (components)
 Modules, units, programs, functions
 Classes – in Object Oriented Programming
 Respective tests are called:
 Module, unit, program or class tests

6
Units vs. Components
 Unit
 The smallest compilable component
 Component
 A unit is a component
 The integration of one or more components is a
component
 (“One” covers components that call
themselves recursively)

7
Test Objects
 Individual testing
 Components are tested individually
 Isolated from all other software components
 Isolation
 Prevents external influences on the
components
 Component test checks aspects internal to the
component
 Interaction with neighbors is not performed
8
Component Testing Helpers
 Stubs
 In component testing, called components are
replaced with stubs, simulators, or trusted
components
 Drivers
 Calling components are replaced with drivers or
trusted super-components

9
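The stub and driver roles above can be sketched in Python (a minimal illustration, not from these slides; `OrderProcessor` and the payment gateway are invented names):

```python
# Sketch of component testing in isolation: a stub replaces the called
# component, and the test code acts as the driver that calls the unit.
from unittest.mock import Mock

class OrderProcessor:
    """Component under test; it depends on a payment gateway it calls."""
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        # In a component test the called component (the gateway) is
        # replaced by a stub, so external influences are prevented
        approved = self.gateway.charge(amount)
        return "OK" if approved else "DECLINED"

# Stub: stands in for the called component
stub_gateway = Mock()
stub_gateway.charge.return_value = True

# Driver: the test code that calls the component under test
processor = OrderProcessor(stub_gateway)
print(processor.checkout(100))  # prints: OK
```

The stub lets the internal logic of `checkout` be checked without any real payment system being present.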
Integration Testing
Testing Components' Collaboration
Integration
 Composing units to form larger structural units
and subsystems
 Done by developers, testers, or special
integration teams
 Supposes that components are already tested
individually

11
Levels of Integration Testing
 Component integration testing
 Expose defects in the interfaces and interaction
between integrated components
 Also called
 “Integration test in the small”
 System integration testing
 Testing the integration of systems and packages
 Testing interfaces to external organizations
 Also called
 “Integration test in the large”
12
Off-the-shelf Products
 Standard, existing components used with
some modification
 Usually not subject of component testing
 Must be tested for integration

13
Why Integration Testing?
 After assembling the components, new faults
may occur
 Testing must confirm that all components
collaborate correctly
 The main goal - exposing faults
 In the interfaces
 In the interaction between integrated
components

14
Some Typical Problems
 Wrong interface formats
 Incompatible interface formats
 Wrong file formats
 Typical faults in data exchange
 Syntactically wrong or no data
 Different interpretation of received data
 Timing problems

15
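The "different interpretation of received data" fault can be illustrated with a short Python sketch (invented example, not from the slides): two components that each pass their own unit tests still fail when integrated, because they disagree on a date format.

```python
# Sketch of a component integration test exposing a data-format mismatch
# between two individually tested components (names are illustrative).
from datetime import datetime

def export_record():
    # Component A: serializes the date as day/month/year
    return {"shipped": "31/12/2024"}

def import_record(record):
    # Component B: assumes ISO format -- a different interpretation
    # of the exchanged data
    return datetime.strptime(record["shipped"], "%Y-%m-%d")

# Each component works in isolation, yet the integration fails:
try:
    import_record(export_record())
    print("integration OK")
except ValueError:
    print("interface fault: incompatible date formats")
```

Only a test that exercises the interface between the two components reveals this class of fault.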
Integration Approaches
 There are different approaches for integration
testing
 The Big Bang approach
 all components or systems are integrated
simultaneously
 The main disadvantage: difficult to trace the
cause of failures
 The incremental approach
 The main disadvantage: time-consuming

16
Incremental Approaches
 The Top-Down approach
 The high level logic and flow are tested first -
the low level components are tested later
 The Bottom-Up approach
 Opposite to the Top-Down approach
 The main disadvantage - the high level or the
most complex functionalities are tested late

17
System Testing
Comparing The System With Requirements
Why System Testing
 Previous tests were done against technical
specifications
 The system test
 Looks at the system from another perspective
 Of the customer
 Of the future user
 Many functions and system characteristics
result from the interaction of all system
components

19
Test Environment
 System testing requires a specific test
environment
 Hardware
 System software
 Device driver software
 Networks
 External systems
 Etc.

20
Test Environment (2)
 A common mistake is testing in the customer’s
operational environment
 Failures may cause damage to the system
 No control on the environment
 Parallel processes may influence the tests
 The test can hardly be reproduced

21
Common Problems
 Unclear or missing system requirements
 Missing specification of the system's correct
behavior
 Missed decisions
 Not reviewed and not approved requirements
 Project failure possible
 The implementation might turn out to go in
the wrong direction

22
Acceptance Testing
Involving the Customer Directly
The Main Idea
 The focus is on the customer's perspective and
judgment
 Especially for customer specific software
 The customer is actually involved
 The only test the customer can understand
 Might have the main responsibility
 Performed in a customer-like environment
 As similar as possible to the target environment
 New issues may occur
24
Forms of Acceptance Testing
 Typical aspects of acceptance testing:
 Contract fulfillment verification
 User acceptance testing
 Operational (acceptance) testing
 Field test (alpha and beta testing)

25
Contract Fulfillment Verification
 Testing according to the contract
 Is the development / service contract
fulfilled
 Is the software free of (major) deficiencies
 Acceptance criteria
 Determined in the development contract
 Any regulations that must be adhered to
 Governmental, legal, or safety regulations

26
User Acceptance Testing
 The client might not be the user
 Every user group must be involved
 Different user groups may have different
expectations
 Rejection even by a single user group may be
problematic

27
Acceptance In Advance
 Acceptance tests can be executed within lower
test levels
 During integration
 E.g. commercial off-the-shelf software
 During component testing
 For component’s usability
 Before system testing
 Using a prototype
 For new functionality

28
Operational (Acceptance) Testing
 Acceptance by the system administrators
 Testing backup/restore cycles
 Disaster recovery
 User management
 Maintenance tasks
 Security vulnerabilities

29
Field Testing
 Software may be run in many environments
 Not all variations can be represented in a test
 Testing with representative customers
 Alpha testing
 Carried out at the producer's location
 Beta testing
 Carried out at the customer's site

30
Test Types
Risk-Based Testing
Prioritization Of Tests Based On Risk And Cost
Risk
 Risk
 The possibility of a negative or undesirable
outcome or event
 Any problem that may occur
 Would decrease perceptions of product quality or
project success

33
Types of Risk
 Two main types of risk are concerned
 Product (quality) risks
 The primary effect of a potential problem is on
the product quality
 Project (planning) risks
 The primary effect is on the project success

34
Levels of Risk
 Not all risks are equal in importance
 Factors for classifying the level of risk:
 Likelihood of the problem occurring
 Arises from technical considerations
 E.g. programming languages used, bandwidth of
connections, etc.
 Impact of the problem in case it occurs
 Arises from business considerations
 E.g. financial loss, number of users affected, etc.

35
Levels of Risk - Chart

Risk is the product of two factors:
 Impact (damage) - related to use frequency
 Likelihood (probability of failure) - related to
lack of quality

36
Prioritization of Effort
 Effort is allocated proportionally to the level of
risk
 The more important risks are tested first

37
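The prioritization rule on this slide can be sketched in a few lines of Python (the feature names and the 1-5 scores are invented for illustration):

```python
# Minimal sketch: prioritizing test areas by risk = likelihood x impact,
# so the most important risks are tested first.
test_areas = [
    {"name": "payment",   "likelihood": 4, "impact": 5},
    {"name": "reporting", "likelihood": 2, "impact": 2},
    {"name": "login",     "likelihood": 3, "impact": 5},
]

# Compute the level of risk for each area
for area in test_areas:
    area["risk"] = area["likelihood"] * area["impact"]

# Allocate effort in descending order of risk
ordered = sorted(test_areas, key=lambda a: a["risk"], reverse=True)
print([a["name"] for a in ordered])  # ['payment', 'login', 'reporting']
```

Real risk assessments use richer scales and business input, but the ordering principle is the same.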
Product Risks:
What to Think About
 Which functions and attributes are critical (for
the success of the product)?
 How visible is a problem in a function or
attribute?
 (For customers, users, people outside)
 How often is a function used?
 Can we do without it?

38
Functional Testing
Verifying a System's Input-Output Behavior
Functional Testing
 Functional testing verifies the system's input–
output behavior
 Black box testing methods are used
 The test bases are the functional requirements

40
Functional Requirements
 They specify the behavior of the system
 “What” the system must be able to do
 Define constraints on the system

41
Requirements Specifications
 Functional requirements must be documented
 Requirements management system
 Text-based Software Requirements
Specification (SRS)

42
Software Requirements
Specifications (SRS)
Live Demo
Requirements-based Testing
 Requirements are used as the basis for testing
 At least one test case for each requirement
 Usually more than one is needed
 Mainly used in:
 System testing
 Acceptance testing

44
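The "at least one test case per requirement" rule can be sketched in Python (a hypothetical function and requirement IDs, invented for illustration):

```python
# Sketch of requirements-based testing: each requirement is traced to
# at least one test case. REQ-1 and REQ-2 are invented requirement IDs.
def apply_discount(price, percent):
    """Hypothetical function under test (covers REQ-1 and REQ-2)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")  # REQ-2: reject bad input
    return price * (1 - percent / 100)            # REQ-1: compute discount

# REQ-1: the system computes the discounted price
assert apply_discount(200, 25) == 150.0

# REQ-2: the system rejects invalid discount percentages
try:
    apply_discount(100, 150)
    raise AssertionError("REQ-2 violated: invalid input accepted")
except ValueError:
    pass

print("all requirement-based tests passed")
```

In practice each requirement usually needs several test cases (boundary values, error cases), not just one.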
Non-functional Testing
Testing Non-functional Software Characteristics
Testing the System Attributes
 “How well” or with what quality the system
should carry out its function
 Attributive characteristics:
 Reliability
 Usability
 Efficiency

46
Testability of Requirements
 Nonfunctional requirements are often not
clearly defined
 How would you test:
 “The system should be easy to operate”
 “The system should be fast”
 Requirements should be expressed in a
testable way
 Make sure every requirement is testable
 Make it early in the development process

47
Nonfunctional Tests
 Performance test
 Processing speed and response time
 Load test
 Behavior under increasing system load
 Number of simultaneous users
 Number of transactions
 Stress test
 Behavior when overloaded

48
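A performance test in the sense above can be sketched very simply in Python (the 0.5-second budget and the simulated work are arbitrary examples, not from the slides):

```python
# Rough sketch of a performance check: measure response time and
# compare it against an agreed budget.
import time

def handle_request():
    # Placeholder for the operation whose response time we measure
    return sum(range(10_000))

start = time.perf_counter()
handle_request()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 0.5  # example requirement: respond within 0.5 s
print("within budget:", elapsed < BUDGET_SECONDS)
```

Load and stress tests follow the same measurement idea but repeat the operation with many simulated users or beyond the specified limits.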
Nonfunctional Tests (2)
 Volume test
 Behavior dependent on the amount of data
 Security testing
 Against unauthorized access
 Denial-of-service attacks
 Stability
 Mean time between failures
 Failure rate with a given user profile
 Etc.
49
Nonfunctional Tests (3)
 Robustness test
 Response to erroneous situations
 Examination of exception handling and
recovery from errors
 Compatibility and data conversion
 Compatibility to given systems
 Import/export of data

50
Nonfunctional Tests (4)
 Different configurations of the system
 Back-to-back testing
 Usability test
 Ease of learning the system
 Ease and efficiency of operation
 Understandability of the system

51
Structural Testing
Testing the Software Structure / Architecture
Examining the Structure
 Often referred to as ‘white-box’ or ‘glass-box’
testing
 Uses information about the internal code
structure or architecture
 Tools can be used to measure the code
coverage of elements, such as statements or
decisions

53
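Decision (branch) coverage, one of the coverage measures mentioned above, can be illustrated with a short Python sketch (the function is invented for the example):

```python
# Sketch of decision (branch) coverage: together, the two tests below
# execute both outcomes of the single decision in this function.
def classify(age):
    if age >= 18:        # the decision under structural test
        return "adult"   # True branch
    return "minor"       # False branch

# Test 1 covers the True branch, test 2 the False branch:
assert classify(30) == "adult"
assert classify(10) == "minor"

# With both tests, statement and decision coverage of classify() is 100%
print("both branches covered")
```

Coverage tools automate this bookkeeping for real code bases instead of tracking branches by hand.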
Structure Testing Application
 Mostly used for:
 Component testing
 Integration testing
 Can also be applied at:
 System integration
 Acceptance testing

54
Testing Related to Changes:
Re-testing and
Regression Testing
Repeating Tests After Changes Are Made
Re-testing
 After a defect is detected and fixed, the
software should be re-tested
 To confirm that the original defect has been
successfully removed
 This is called confirmation testing

56
What is Regression Testing
 Retest of a previously tested program
 Needed after modifications of the program
 Testing for newly introduced faults
 As a result of the changes made to the system
 May be performed at all test levels

57
Tests Reusability
 Test cases used in regression testing are run
many times
 They have to be well documented and reusable
 Strong candidates for test automation

58
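A reusable, automated regression suite of the kind described above might look like this in Python's standard `unittest` framework (the function under test and its cases are illustrative only):

```python
# Minimal sketch of an automated regression suite: documented,
# reusable test cases that can be rerun after every change.
import unittest

def word_count(text):
    """Illustrative function under regression test."""
    return len(text.split())

class RegressionSuite(unittest.TestCase):
    """Each test case is named and repeatable -- ideal for automation."""
    def test_simple_sentence(self):
        self.assertEqual(word_count("testing is fun"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Automated run: the whole suite executes with one call
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all regression tests passed:", result.wasSuccessful())
```

Because the suite runs unattended, it can be repeated cheaply after every modification of the program.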
Volume of the Regression Test
 How extensive should a regression test be?
 There are a few levels of testing extent:
1. Defect retest (confirmation testing)
 Rerunning tests that have detected faults
2. Testing altered functionality
 Only changed or corrected parts

59
Volume of the Regression Test (2)
 There are a few levels of testing extent:
3. Testing new functionality
 Testing newly integrated program parts
4. Complete regression test
 Testing the whole system

60
Unexpected Side Effects
 The main source of trouble in software:
 Code complexity
 Altered or new code parts may affect
unchanged code
 Testing only the code that was changed is
not enough

61
Complete Regression Test
 The only way to be as sure as possible
 System environment changes
 Also require regression testing
 Could have effects on every part of the system
 Too time consuming and costly
 Not achievable at a reasonable cost
 Impact analysis of the changes is needed

62
Maintenance Testing
Testing New Versions of The Software
What Do We Maintain?
 Software does not wear out
 Some design faults already exist
 Bugs are about to be revealed
 A software project does not end with the first
deployment
 Once installed, it will often be used for years or
decades
 It will be changed, updated, and extended
many times

64
What Do We Maintain? (2)
 New versions
 Each time a correction is made - a new version
of the original product is created
 Testing the changes can be difficult
 Outdated or missing system specifications

65
Main Types Of Maintenance
 Adaptive maintenance
 Product is adapted to new operational
conditions
 Corrective maintenance
 Defects are eliminated

66
Common Reasons For
Maintenance
 The system is run under new operating
conditions
 Not predictable and not planned
 The customers express new wishes
 Rarely occurring special cases
 Not anticipated by design
 New methods and classes need to be written
 Rarely occurring crashes reported

67
Testing After Maintenance
 Anything new or changed should be tested
 Regression testing is required
 The rest of the software should be tested for
side effects
 What if the system is unchanged?
 Testing is needed even if only the environment
is changed

68
Test Levels and Test Types

Questions?
Exercises
1. Which of the following is a test type?
a) Component testing
b) Functional testing
c) System testing
d) Acceptance testing

70
Exercises (2)
2. Which of these is a functional test?
a) Measuring response time on an on-line booking system
b) Checking the effect of high volumes of traffic in a
call-center system
c) Checking the on-line bookings screen information and
the database contents against the information on the
letter to the customers
d) Checking how easy the system is to use

71
Exercises (3)
3. Which of the following is a true statement
regarding the process of fixing emergency
changes?
a) There is no time to test the change before it goes live, so only
the best developers should do this work and should not
involve testers as they slow down the process
b) Just run the retest of the defect actually fixed
c) Always run a full regression test of the whole system in case
other parts of the system have been adversely affected
d) Retest the changed area and then use risk assessment to
decide on a reasonable subset of the whole regression test to
run in case other parts of the system have been adversely
affected
72
Exercises (4)
4. Which of the following are characteristics of
regression testing?
a) Regression testing is run ONLY once
b) Regression testing is used after fixes have been
made
c) Regression testing is often automated
d) Regression tests need not be maintained
e) Regression testing is not needed when new
functionality is added

73
Exercises (5)
5. Non-functional testing includes:
a) Testing to see where the system does not function
correctly
b) Testing the quality attributes of the system including
reliability and usability
c) Gaining user approval for the system
d) Testing a system feature using only the software
required for that function

74
Exercises (6)
6. Where may functional testing be performed?
a) At system and acceptance testing levels only
b) At all test levels
c) At all levels above integration testing
d) At the acceptance testing level only

75
Exercises (7)
7. Which of the following is correct?
a) Impact analysis assesses the effect on the system
of a defect found in regression testing
b) Impact analysis assesses the effect of a new person
joining the regression test team
c) Impact analysis assesses whether or not a defect
found in regression testing has been fixed correctly
d) Impact analysis assesses the effect of a change to
the system to determine how much regression
testing to do

76
Exercises (8)
8. What is beta testing?
a) Testing performed by potential customers at the
developer's location
b) Testing performed by potential customers at their
own locations
c) Testing performed by product developers at the
customer's location
d) Testing performed by product developers at their
own locations

77
Exercises (9)
9. Which of the following is non-functional testing?
a) Performance testing
b) Unit testing
c) Regression testing
d) Sanity testing

78
Exercises (10)
10. What determines the level of risk?
a) The cost of dealing with an adverse event if it
occurs
b) The probability that an adverse event will occur
c) The amount of testing planned before release of a
system
d) The likelihood of an adverse event and the impact
of the event

79
Exercises (11)
11. The difference between re-testing and regression
testing is:
a) Re-testing is running a test again; regression testing
looks for unexpected side effects
b) Re-testing looks for unexpected side effects;
regression testing is repeating those tests
c) Re-testing is done after faults are fixed; regression
testing is done earlier
d) Re-testing uses different environments, regression
testing uses the same environment
e) Re-testing is done by developers, regression testing is
done by independent testers
80
Exercises (12)
12. Contract and regulation testing is part of
a) System testing
b) Acceptance testing
c) Integration testing
d) Smoke testing

81
Free Trainings @ Telerik Academy
 C# Programming @ Telerik Academy
 csharpfundamentals.telerik.com
 Telerik Software Academy
 academy.telerik.com
 Telerik Academy @ Facebook
 facebook.com/TelerikAcademy
 Telerik Software Academy Forums
 forums.academy.telerik.com
