Chapter 2: Testing Throughout the Software Development Lifecycle




Hela Lajmi
Assistant Professor
IEEE Membership Development Chair at Tunisia Section
IEEE Intelligent Transportation Systems Chair

[email protected]
Introduction

 All development projects have common activities: requirements analysis, design, coding, and test
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Part 2: Test levels

Part 3: Test types

Part 4: Maintenance Testing

Part 5: Quiz
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Learning Objectives:

LO-2.1.1 Explain the relationships between software development activities and test
activities in the software development lifecycle (K2)

LO-2.1.2 Identify reasons why software development lifecycle models must be adapted to
the context of project and product characteristics (K1)

LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model
(K1)
Waterfall model
Incremental model [V-Model: Test Levels]

Business Requirements   →  Acceptance Testing
Project Specification   →  Integration Testing in the Large
System Specification    →  System Testing
Design Specification    →  Integration Testing in the Small
Code                    →  Component Testing
Incremental model [V-Model: Late test design]

"We don't have time to design tests early": test design for every level (acceptance, integration in the large, system, integration in the small, component) is left until after the code is written.
Incremental model [V-Model: Early test design]

Tests for each level are designed as soon as the corresponding specification (business requirements, project specification, system specification, design specification) or the code is available, and are run once the code has been built.
Iterative model (Agile)
Iterative model (Agile)

Agile software development [ISTQB Glossary-P1]


A group of software development methodologies based on iterative incremental
development, where requirements and solutions evolve through collaboration
between self-organizing cross-functional teams.

Agile testing [ISTQB Glossary-P1]


Testing practice for a project using Agile software development methodologies,
incorporating techniques and methods, such as extreme programming (XP), treating
development as the customer of testing and emphasizing the test-first design
paradigm.
Early test design

 Test design finds faults
 Faults found early are cheaper to fix
 The most significant faults are found first
 Faults are prevented, not built in
 No additional effort is needed; test design is simply re-scheduled earlier
 Test design can trigger changes to (clarification of) the requirements

Early test design helps to build quality in and stops fault multiplication
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Part 2: Test levels

Learning Objectives:

LO-2.2.1 Compare the different test levels from the perspective of objectives, test basis, test
objects, typical defects and failures, and approaches and responsibilities (K2)
Test Levels

Test levels are characterized by the following attributes:

 Specific objectives
 Test basis, referenced to derive test cases
 Test object (i.e., what is being tested)
 Typical defects and failures
 Specific approaches and responsibilities
Test Levels

Acceptance Test
System Test
Integration Test
Component Testing (Unit)
Test Levels

 For every test level, a suitable test environment is required.

 In acceptance testing, a production-like test environment is used.

 In component testing, developers use their own development environment.
Testing Levels

Component (unit / code): test basis = low-level design (code); white box; performed by the developer

Integration (test strategies: top-down, bottom-up or big bang): test basis = high-/low-level design; white box; performed by the developer

System: test basis = functional specification; black box; performed by the tester

Acceptance: test basis = user requirements; black box; performed by the customer / user
 Alpha testing: pre-release, performed at the developing organization's site but not by the development team
 Beta testing: pre-release, performed by the customer at their own location


Alpha and Beta testing

Beta Testing [ISTQB Glossary-P1]

• A type of acceptance testing performed at an external site to the developer's test environment by roles outside the development organization.

Alpha Testing [ISTQB Glossary-P1]

• A type of acceptance testing performed in the developer's test environment by roles outside the development organization.
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Part 2: Test levels

Part 3: Test types


Learning Objectives:

LO-2.3.1 Compare functional, non-functional and white-box testing (K2)

LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)

LO-2.3.3 Recognize that functional, non-functional and white-box tests occur at any test
level (K1)
LO-2.3.4 Compare the purposes of confirmation testing and regression testing (K2)
Test Types

A test type is a group of test activities aimed at specific test objectives, targeting specific testing characteristics.
Test Types

Objectives include:

 Evaluating functional quality characteristics, such as completeness, correctness, and appropriateness
 Evaluating non-functional quality characteristics, such as reliability, performance efficiency, security, compatibility, and usability
 Evaluating whether the structure or architecture of the component or system is correct, complete, and as specified
Test Types

Objectives include:

 Evaluating the effects of changes, such as confirming that defects have been fixed (confirmation testing) and looking for unintended changes in behavior resulting from software or environment changes (regression testing)
Test Types

Test types:
 Functional testing (black box)
 Non-functional testing
 White-box testing
 Change-related testing
Test Types : Functional Testing (1/2)

 The functions are "what" the system should do.

 Functional requirements may be described in work products such as business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented.

 Functional tests should be performed at all test levels (e.g., tests for components may be based on a component specification).
Test Types : Functional Testing (2/2)

 Functional testing considers the behavior of the software, so black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system (see section 4.2).

Functional testing should cover, among others, the following testing types (a minimal black-box sketch follows this list):

 Unit Testing
 Integration Testing
 Interface Testing
 System Testing
 Regression Testing
 User Acceptance Testing (UAT)
 Smoke testing (also called confidence testing)
 Sanity testing
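As a hedged illustration of functional (black-box) testing, the tests below are derived only from a stated requirement, not from the implementation. The requirement, the Account class, and the three-attempt limit are illustrative assumptions:

```python
# Hypothetical requirement: "After 3 consecutive failed login attempts,
# the account is locked." Test cases are derived from this sentence alone
# (black-box), without looking at the code's internal structure.
class Account:
    def __init__(self) -> None:
        self.failed_attempts = 0
        self.locked = False

    def login(self, password_ok: bool) -> bool:
        if self.locked:
            return False
        if password_ok:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= 3:
            self.locked = True
        return False


def test_account_locks_after_three_failed_attempts():
    account = Account()
    for _ in range(3):
        assert account.login(password_ok=False) is False
    assert account.locked is True


def test_correct_password_before_limit_succeeds():
    account = Account()
    account.login(password_ok=False)
    assert account.login(password_ok=True) is True
```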
Test Types : Non-Functional Testing (1/3)

Non-functional testing is the testing of "how well" the system behaves.

 Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance efficiency or security.
 Refer to the ISO standard (ISO/IEC 25010) for a classification of software product quality characteristics.
 Non-functional testing can and often should be performed at all test levels, and done as early as possible.
 Black-box techniques (see section 4.2) may be used to derive test conditions and test cases for non-functional testing.
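A minimal sketch of testing one non-functional characteristic (performance efficiency): the test asserts a response-time budget for a hypothetical search operation. The function, the data volume, and the 200 ms budget are illustrative assumptions:

```python
import time


# Hypothetical operation whose performance efficiency is being checked.
def search(catalogue: list[str], term: str) -> list[str]:
    return [item for item in catalogue if term in item]


def test_search_meets_response_time_budget():
    catalogue = [f"product-{i}" for i in range(100_000)]
    start = time.perf_counter()
    search(catalogue, "product-99")
    elapsed = time.perf_counter() - start
    # Illustrative non-functional requirement: respond within 200 ms.
    assert elapsed < 0.2
```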
Test Types : Non-Functional Testing (2/3)

Example
Non-functional testing types include, for example:
 Availability Testing
 Compatibility testing
 Configuration Testing
 Documentation testing
 Endurance testing
 Interoperability Testing
 Installation Testing
 Load testing
 Localization testing and Internationalization testing
Test Types : Non-Functional Testing (3/3)

Non-functional testing types also include:

 Maintainability Testing
 Operational Readiness Testing
 Performance testing
 Recovery testing
 Reliability Testing
 Security testing
 Scalability testing
 Stress testing
 Usability testing
 Volume testing
Black-box test technique [ISTQB Glossary-P2]

• A test technique based on an analysis of the specification of a component or system.

• Synonyms: specification-based test technique, black-box test design technique
Test Types : White-Box Testing

 White-box testing derives tests based on the system's internal structure or implementation.
 Internal structure may include code, architecture, work flows, and/or data flows within the system (see section 4.3).
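A minimal white-box sketch: test cases are chosen by reading the code's internal structure so that every branch of the hypothetical classify_temperature function is executed. The function and the coverage goal are illustrative assumptions:

```python
# Unit whose internal structure (two decisions, four branch outcomes)
# drives the choice of white-box test cases.
def classify_temperature(celsius: float) -> str:
    if celsius < 0:
        return "freezing"
    if celsius > 30:
        return "hot"
    return "moderate"


# One test per branch outcome, derived from the code itself.
def test_freezing_branch():
    assert classify_temperature(-5) == "freezing"


def test_hot_branch():
    assert classify_temperature(35) == "hot"


def test_moderate_branch():
    assert classify_temperature(20) == "moderate"
```

Running such tests under a coverage tool (for example, `coverage run -m pytest` followed by `coverage report`) shows which statements and branches were actually exercised.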
Test Types : Change-Related Testing

 When changes are made to a system, either to correct a defect or because of new or changing functionality, testing should be done to confirm that the changes have corrected the defect or implemented the functionality correctly, and have not caused any unforeseen adverse consequences.
 Confirmation testing and regression testing are performed at all test levels.
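A hedged sketch of change-related testing, assuming a hypothetical defect fix in apply_vat: one test re-runs the failing scenario from the defect report (confirmation testing), while existing tests are re-executed unchanged to catch unintended side effects (regression testing):

```python
# Hypothetical fix: VAT amounts were previously not rounded, which caused
# a reported defect for values such as 19.99.
def apply_vat(net_amount: float, rate: float = 0.20) -> float:
    return round(net_amount * (1 + rate), 2)


# Confirmation test: repeats the exact scenario from the defect report
# to show the reported defect is gone.
def test_defect_rounding_is_correct():
    assert apply_vat(19.99) == 23.99


# Regression tests: existing tests re-executed unchanged to detect
# unintended side effects of the fix.
def test_zero_amount_unchanged():
    assert apply_vat(0) == 0.0


def test_standard_amount_unchanged():
    assert apply_vat(100) == 120.0
```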
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Part 2: Test levels

Part 3: Test types

Part 4: Maintenance Testing

LO-2.4.1 Summarize triggers for maintenance testing (K2)


LO-2.4.2 Describe the role of impact analysis in maintenance testing (K2)
Maintenance Testing

 Once deployed to production environments, software and systems need to be maintained.
 Changes of various sorts are almost inevitable in delivered software and systems, either to fix defects discovered in operational use, to add new functionality, or to delete or alter already-delivered functionality.
Triggers for Maintenance

 Modification, such as planned enhancements (e.g., release-based), corrective and emergency changes, changes of the operational environment (such as planned operating system or database upgrades), upgrades of COTS software, and patches for defects and vulnerabilities

 Migration, such as from one platform to another, which can require operational tests of the new environment as well as of the changed software, or tests of data conversion when data from another application will be migrated into the system being maintained

 Retirement, such as when an application reaches the end of its life
Impact Analysis for Maintenance

Impact analysis can be difficult if:

 Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
 Test cases are not documented or are out of date
 Bi-directional traceability between tests and the test basis has not been maintained (see the sketch below)
 Tool support is weak or non-existent
 The people involved do not have domain and/or system knowledge
 Insufficient attention has been paid to the software's maintainability during development
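A minimal sketch of how bi-directional traceability supports impact analysis: with a recorded mapping between requirements and test cases, the tests affected by a change can be selected for regression testing. The requirement and test-case identifiers are illustrative assumptions:

```python
# Illustrative traceability between requirements and the tests that cover them.
requirement_to_tests = {
    "REQ-017": ["TC-101", "TC-102"],
    "REQ-018": ["TC-103"],
    "REQ-019": ["TC-102", "TC-104"],
}


def tests_impacted_by(changed_requirements: set[str]) -> set[str]:
    """Select the test cases affected by a set of changed requirements."""
    impacted = set()
    for req in changed_requirements:
        impacted.update(requirement_to_tests.get(req, []))
    return impacted


# Example: a change request touches REQ-017 and REQ-019, so TC-101,
# TC-102 and TC-104 are candidates for re-execution.
print(sorted(tests_impacted_by({"REQ-017", "REQ-019"})))
```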
What to test in maintenance testing ?

 Test any new or changed code

 Impact analysis
 what could this change have an impact on?
 how important is a fault in the impacted area?
 test what has been affected, but how much?
 most important affected areas?
 areas most likely to be affected?
 whole system?

 The answer: “It depends”


Testing Types

 Functional testing (black box): tests functions / features; specification-based (black-box) test design techniques; applies at all test levels
 Non-functional testing: tests quality characteristics; black-box test design techniques; applies at all test levels
 Structural testing (code): tests coverage of the structure; white-box test design techniques; applies at component, integration and system levels
 Confirmation testing (re-testing): a bug is fixed and the related test cases need to be re-executed
 Regression testing: performed after modification of the software (new changes such as change requests), migration of the software, or retirement of the software; in maintenance testing, impact analysis determines the scope of regression testing
Outline and Learning Objectives

Part 1: Software Development Lifecycle Models

Part 2: Test levels

Part 3: Test types

Part 4: Maintenance Testing

Part 5: Quiz
Q 1 : Recognize that functional, non-functional and white-box tests occur at any test level

How can white-box testing be applied during acceptance testing? [Syllabus 2.3.5]

a) To check if large volumes of data can be transferred between integrated systems.
b) To check if all code statements and code decision paths have been executed.
c) To check if all work process flows have been covered.
d) To cover all web page navigations.
Q 2 : Compare the different test levels from the perspective of objective

Which of the following statements comparing component testing and system testing is TRUE? [Syllabus 2.2.1]

a) Component testing verifies the functionality of software modules, program objects, and classes that are separately testable, whereas system testing verifies interfaces between components and interactions between different parts of the system.
b) Test cases for component testing are usually derived from component specifications, design specifications, or data models, whereas test cases for system testing are usually derived from requirement specifications, or use cases.
c) Component testing only focuses on functional characteristics, whereas system testing focuses on functional and non-functional characteristics.
d) Component testing is the responsibility of the testers, whereas system testing typically is the responsibility of the users of the system.
Q 3 : Compare the purposes of confirmation testing and regression testing

Which one of the following is TRUE? [Syllabus 2.3.4]

a) The purpose of regression testing is to check if the correction has been successfully implemented, while the purpose of confirmation testing is to confirm that the correction has no side effects.
b) The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation testing is to check if the system is still working in a new environment.
c) The purpose of regression testing is to detect unintended side effects, while the purpose of confirmation testing is to check if the original defect has been fixed.
d) The purpose of regression testing is to check if the new functionality is working, while the purpose of confirmation testing is to check if the original defect has been fixed.
Q 4 : Explain the relationship between software development activities and test activities

Which one of the following is the BEST definition of an incremental development model? [Syllabus 2.1.1]

a) Defining requirements, designing software and testing are done in a series with added pieces.
b) A phase in the development process should begin when the previous phase is complete.
c) Testing is viewed as a separate phase which takes place after development has been completed.
d) Testing is added to development as an increment.
Q 5 : Summarize triggers for maintenance testing

Which of the following should NOT be a trigger for maintenance testing? [Syllabus 2.4.1]

a) Decision to test the maintainability of the software.
b) Decision to test the system after migration to a new operating platform.
c) Decision to test whether archived data can be retrieved.
d) Decision to test after "hot fixes".
Q 6 : Functional testing

Where may functional testing be performed?

A) At system and acceptance testing levels only.


B) At all test levels.
C) At all levels above integration testing.
D) At the acceptance testing level only.
Q 7: Non-Functional testing

A reliable system will be one that:

A) Is unlikely to be completed on schedule


B) Is unlikely to cause a failure
C) Is likely to be fault-free
D) Is likely to be liked by the users
Q 8: Exit criteria

What is the purpose of exit criteria?

A) To define when a test level is complete.


B) To determine when a test has completed.
C) To identify when a software system should be retired.
D) To determine whether a test has passed

What is the purpose of exit criteria?


A) Define when to stop testing
B) End of test level
C) When a set of tests has achieved a specific precondition
D) All of the above
Q 9: Testing Levels

What is the normal order of activities in which software testing is organized?


A. Unit, integration, system, validation
B. System, integration, unit, validation
C. Unit, integration, validation, system
D. None of the above
