Testing Throughout The Software Development Lifecycle. Group 2

The document outlines the roles and responsibilities of a software testing group, detailing various software development lifecycle models, test levels, and types. It emphasizes the importance of integrating testing throughout the development process and provides insights into maintenance testing and impact analysis. The agenda includes topics such as sequential and iterative models, component and system testing, and the significance of functional and non-functional testing.


GROUP 2

GROUP MEMBERS
1. Akeza Favorite 25227
2. Babra Frida Mulinda 23630
3. Mugisha Abdoullatif 24978
4. Regis Mfitumukiza 23475
5. Ishimwe Pacifique 25129
6. Seth Amen 25116
7. Ntabana Nelson 24562
8. Niyonkuru Regis 24592
9. Nzamwitakuze Fabrice 24855
10. Manzi Hervé 24605
11. MUGABO Ronald 24704
12. ISHIMWE Thierry Henry 25319
13. Mukiza Yvon 24109

AGENDA

I. Software Development Lifecycle Models
II. Test Levels
III. Test Types
IV. Maintenance Testing

I. SOFTWARE DEVELOPMENT LIFECYCLE MODELS

SOFTWARE DEVELOPMENT AND SOFTWARE TESTING

Testers must be acquainted with common lifecycle models to effectively
integrate testing throughout the development process.
Key characteristics of good testing include:
• Corresponding test activities for every development activity.
• Specific test objectives for each test level.
• Initiation of test analysis and design during the corresponding
development activity.
• Early involvement in requirements and design discussions, and
reviewing work products (e.g., requirements, design, user stories)
as soon as drafts are available.

COMMON SOFTWARE DEVELOPMENT
LIFECYCLE MODELS

Lifecycle models can be broadly categorized into:


• Sequential development models.
• Iterative and incremental development models.

SEQUENTIAL DEVELOPMENT MODELS

In sequential models, the development process flows linearly, with each phase
starting only after the previous one is complete, although in practice some
overlap may occur to provide early feedback.
Waterfall Model:
Development phases (e.g., requirements analysis, design, coding, testing) occur one
after another.
Testing activities commence after all development activities are completed.
V-Model:
Integrates testing throughout the development process, supporting early testing.
Each development phase has a corresponding test level.
Test levels are executed sequentially, but may overlap in some cases.
Sequential models deliver the complete set of features, but typically take
months or years to reach delivery.
ITERATIVE AND INCREMENTAL DEVELOPMENT MODELS

These models involve developing software in pieces, with each iteration
refining and adding features.
Incremental Development:
Features are added incrementally, ranging from small changes to larger
feature sets.
Example: Agile methodologies.
Iterative Development:
Features are developed, tested, and refined in cycles.
Each iteration delivers a working subset of the final product.

ITERATIVE AND INCREMENTAL DEVELOPMENT MODELS (CONT'D)

Examples include:
Rational Unified Process (RUP): Longer iterations (e.g., 2-3 months) with
larger feature increments.
Scrum: Short iterations (e.g., days to a few weeks) with small feature
increments.
Kanban: Flexible iterations, delivering single or grouped features.
Spiral: Iterative cycles with experimental increments, allowing for
significant rework.
These methods often involve overlapping test levels and continuous
delivery or deployment, emphasizing significant test automation.

SOFTWARE DEVELOPMENT LIFECYCLE
MODELS IN CONTEXT

Choosing and adapting lifecycle models depends on project and product
characteristics, including project goals, product type, business priorities,
and identified risks. For example:
A minor internal system vs. a safety-critical system like an
automobile’s brake control.
Organizational and cultural issues affecting team communication.
Models may be combined or reorganized as needed. For instance:
Integration of COTS Products: Interoperability testing at system
integration and acceptance test levels.
Combining Models: Using the V-model for backend systems and Agile
for frontend development.

Internet of Things (IoT) Systems
IoT systems, consisting of various objects like devices and services,
often apply separate lifecycle models for each object. This presents particular
development challenges and places greater emphasis on the later lifecycle
phases (e.g., operate, update, decommission).
Adapting Models to Context
Reasons for adapting models include:
• Differing product risks.
• Multiple business units in a project.
• Short time-to-market requiring merged test levels and integrated
test types.

II. TEST LEVELS
TEST LEVELS

Test levels are distinct phases in the testing process, each with
specific objectives and activities. They are organized and managed
to ensure comprehensive testing from individual components to
complete systems. The main test levels discussed are:
• Component testing
• Integration testing
• System testing
• Acceptance testing

INTEGRATION TESTING

Objectives of Integration Testing:
• Reducing risk
• Verifying interface behaviors
• Building confidence in interface quality
• Finding defects in interfaces or components
• Preventing defects from reaching higher test levels

Test Basis:
• Software and system design
• Sequence diagrams
• Interface and communication protocol specifications
• Use cases
• Architecture at component or system level
• Workflows
• External interface definitions

Test Objects:
• Subsystems
• Databases
• Infrastructure
• Interfaces
• APIs
• Microservices
Typical Defects and Failures:
• Incorrect data handling
• Interface mismatches
• Communication failures
• Incorrect assumptions about data

Approaches and Responsibilities:
• Component integration testing focuses on interactions between
components; it is often automated and run as part of continuous
integration (see the sketch after this list)
• System integration testing focuses on interactions between
systems and external organizations
• Should be incremental rather than "big bang"
• Developers typically handle component integration testing, while
system integration testing is usually the responsibility of testers

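To make the component integration focus above concrete, below is a minimal
sketch in Python with pytest. The `AccountService` and `AccountForm` classes,
their methods, and the field names are hypothetical, invented purely for
illustration; the point is that the tests exercise the interface between two
components (the data passed from a UI-level form to the business logic), not
either component in isolation.

```python
import pytest


# Hypothetical components: a UI-layer form object and a business-logic service.
class AccountService:
    def open_account(self, owner: str, deposit: float) -> dict:
        if deposit < 0:
            raise ValueError("deposit must be non-negative")
        return {"owner": owner, "balance": deposit}


class AccountForm:
    def __init__(self, service: AccountService):
        self.service = service

    def submit(self, raw_fields: dict) -> dict:
        # The interface under test: field names and types handed to the service.
        return self.service.open_account(
            owner=raw_fields["name"].strip(),
            deposit=float(raw_fields["deposit"]),
        )


def test_form_passes_captured_data_to_business_logic():
    # Component integration test: data crosses the interface intact.
    result = AccountForm(AccountService()).submit({"name": " Ada ", "deposit": "100.50"})
    assert result == {"owner": "Ada", "balance": 100.50}


def test_interface_rejects_invalid_deposit():
    # A defect caught here would be an interface mismatch or incorrect data handling.
    with pytest.raises(ValueError):
        AccountForm(AccountService()).submit({"name": "Ada", "deposit": "-1"})
```

In a continuous integration setup, such tests would run automatically on every
commit, which is what makes the incremental (rather than "big bang") approach
practical.
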
SYSTEM TESTING

Objectives of System Testing:


• Reducing risk
• Verifying system behaviors
• Validating system completeness
• Building confidence in the system's quality
• Finding defects
• Preventing defects from reaching production

Test Basis:
• System and software requirements
• Risk analysis reports
• Use cases
• Epics and user stories
• Behavioral models
• State diagrams
• Manuals

Test Objects:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration
Typical Defects and Failures:
• Incorrect calculations
• Unexpected behaviors
• Incorrect data/control flows
• End-to-end task failures
• Environmental issues

Approaches and Responsibilities:
• Focuses on end-to-end system behavior
• Uses appropriate techniques for the system aspect being tested
• Typically carried out by independent testers
• Early involvement in user story refinement or static testing is
crucial

ACCEPTANCE TESTING
Objectives of Acceptance Testing:
• Establishing system quality confidence
• Validating system completeness
• Verifying system behaviors
• Assessing system readiness for deployment
Forms of Acceptance Testing:
• User Acceptance Testing (UAT): Validates system fitness for user needs
in a real or simulated environment
• Operational Acceptance Testing (OAT): Tests operational aspects in a
simulated production environment
• Contractual and Regulatory Acceptance Testing: Ensures compliance
with contractual criteria and regulations
• Alpha and Beta Testing: Involves user feedback before market release

Test Basis:
• Business processes
• User/business requirements
• Regulations, legal contracts, and standards
• Use cases/user stories
• System requirements
• Documentation and procedures

Test Objects:
• System under test
• System configuration
• Business processes
• Recovery systems
• Operational processes
• Forms and reports

Typical Defects and Failures:
• Non-compliance with requirements
• Business rule implementation issues
• Non-functional failures
Approaches and Responsibilities:
• Often the responsibility of customers, business users, product
owners, or operators
• May occur at various stages in the development lifecycle, not just at
the end
• Can involve various forms depending on the iteration and
development stage

III. TEST TYPES
TEST TYPES
A test type is a group of test activities aimed at testing specific
characteristics of a software system or a part of a system, based on
specific test objectives. These objectives can include:
• Evaluating functional quality characteristics, such as completeness,
correctness, and appropriateness.
• Evaluating non-functional quality characteristics, such as reliability,
performance efficiency, security, compatibility, and usability.
• Evaluating whether the structure or architecture of the component or
system is correct, complete, and as specified.
• Evaluating the effects of changes, such as confirming that defects
have been fixed (confirmation testing) and looking for unintended
changes in behavior resulting from software or environment changes
(regression testing).

FUNCTIONAL TESTING

Functional testing of a system involves tests that evaluate the functions that the
system should perform.

• Functional tests should be performed at all test levels, though the focus is
different at each level.
• Functional testing considers the behavior of the software, so black-box
techniques may be used to derive test conditions and test cases for the
functionality of the component or system.
• The thoroughness of functional testing can be measured through functional
coverage, which is the extent to which some functionality has been exercised
by tests. It can be expressed as a percentage of the element type being
covered, such as requirements (a worked example follows below).
• Functional test design and execution may involve special skills or knowledge,
such as understanding the particular business problem the software solves.

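As a worked example of the coverage measure mentioned above (the numbers and
requirement IDs are invented): if the executed tests exercise 4 of 6 identified
requirements, functional coverage is 4 / 6 ≈ 66.7%. A minimal sketch of that
bookkeeping in Python:

```python
# Hypothetical traceability data: which requirements each executed test exercised.
TESTS_TO_REQUIREMENTS = {
    "test_login": {"REQ-001", "REQ-002"},
    "test_transfer": {"REQ-003"},
    "test_statement": {"REQ-002", "REQ-005"},
}
ALL_REQUIREMENTS = {f"REQ-{n:03d}" for n in range(1, 7)}  # REQ-001 .. REQ-006

covered = set().union(*TESTS_TO_REQUIREMENTS.values())
coverage = 100 * len(covered & ALL_REQUIREMENTS) / len(ALL_REQUIREMENTS)
print(f"Functional coverage: {coverage:.1f}%")  # 4 of 6 requirements -> 66.7%
```
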
NON-FUNCTIONAL TESTING
Non-functional testing of a system evaluates characteristics of systems
and software such as usability, performance efficiency, or security.
• Non-functional testing is the testing of "how well" the system
behaves and can be performed at all test levels.
• Black-box techniques may be used to derive test conditions and test
cases for non-functional testing.
• The thoroughness of non-functional testing can be measured
through non-functional coverage, which is the extent to which some
type of non-functional element has been exercised by tests.
• Non-functional test design and execution may involve special skills
or knowledge, such as understanding inherent weaknesses of a
design or technology or the particular user base.

WHITE-BOX TESTING
White-box testing derives tests based on the system’s internal
structure or implementation, which may include code, architecture,
workflows, and/or data flows within the system.
• The thoroughness of white-box testing can be measured through
structural coverage, which is the extent to which some type of
structural element has been exercised by tests.
• At the component testing level, code coverage is based on the
percentage of component code that has been tested and may be
measured in terms of different aspects of code, such as executable
statements or decision outcomes (illustrated below).
• White-box test design and execution may involve special skills or
knowledge, such as understanding how the code is built, how
data is stored, and how to use coverage tools and interpret their
results.

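A minimal, hypothetical illustration of the component-level coverage measures
referred to above. The `apply_overdraft_fee` function is invented; the first
test alone executes every statement (100% statement coverage) but exercises
only the True outcome of the decision, so a second test is needed for full
decision coverage.

```python
def apply_overdraft_fee(balance: float, fee: float = 30.0) -> float:
    if balance < 0:          # the only decision in this component
        balance -= fee       # executed only on the True outcome
    return balance


def test_fee_applied_when_overdrawn():
    # Executes every statement: 100% statement coverage on its own,
    # but covers only the True outcome of the decision.
    assert apply_overdraft_fee(-10.0) == -40.0


def test_no_fee_when_in_credit():
    # Adds the False outcome, bringing decision coverage to 100%.
    assert apply_overdraft_fee(50.0) == 50.0
```

A coverage tool such as coverage.py can report both measures (branch/decision
coverage has to be switched on explicitly), which is part of the special
knowledge the slide mentions.
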
CHANGE-RELATED TESTING
When changes are made to a system, either to correct a defect or
because of new or changing functionality, testing should be done to
confirm the changes and check for unforeseen adverse
consequences.
• Confirmation testing: After a defect is fixed, tests that failed due
to the defect should be re-executed on the new software version.
The purpose is to confirm whether the original defect has been
successfully fixed.
• Regression testing: This involves running tests to detect
unintended side-effects of changes made to the system. It is
performed at all test levels and is especially important in
iterative and incremental development lifecycles.

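A minimal sketch of the distinction in Python with pytest; the function, the
defect, and the `regression` marker are hypothetical. The first test is the one
that failed before the fix and is re-executed to confirm the fix; the marked
test is re-run to look for unintended side effects.

```python
import pytest


def monthly_interest(balance: float, annual_rate: float) -> float:
    # Hypothetical fixed defect: interest is now rounded to whole cents.
    return round(balance * annual_rate / 12, 2)


def test_interest_rounds_to_cents():
    # Confirmation test: this test failed before the defect was fixed.
    assert monthly_interest(1000.00, 0.05) == pytest.approx(4.17)


@pytest.mark.regression
def test_zero_balance_earns_no_interest():
    # Regression test: unrelated behavior that must not have changed.
    assert monthly_interest(0.00, 0.05) == pytest.approx(0.00)
```

In a real project the `regression` marker would be registered in the pytest
configuration, and running `pytest -m regression` would select the regression
suite after every change.
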
TEST TYPES AND TEST LEVELS
It is possible to perform any of the test types mentioned above at any test level.
Examples for a banking application include:
Functional tests:
• Component testing: Tests how a component should calculate compound
interest (see the sketch after this list).
• Component integration testing: Tests how account information captured at
the user interface is passed to the business logic.
• System testing: Tests how account holders can apply for a line of credit on
their checking accounts.
• System integration testing: Tests how the system uses an external
microservice to check an account holder’s credit score.
• Acceptance testing: Tests how the banker handles approving or declining a
credit application.

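A minimal sketch of the first example in the list above. The
`compound_interest` function and its signature are assumptions made for
illustration; the point is the level's focus: a single component's calculation
checked in isolation against its specification.

```python
import pytest


def compound_interest(principal: float, rate: float, years: int) -> float:
    """Hypothetical component: yearly compounding, result rounded to cents."""
    return round(principal * (1 + rate) ** years, 2)


def test_compound_interest_two_years():
    # 1000 at 5% for 2 years -> 1000 * 1.05**2 = 1102.50
    assert compound_interest(1000.0, 0.05, 2) == pytest.approx(1102.50)


def test_zero_years_returns_principal():
    assert compound_interest(1000.0, 0.05, 0) == pytest.approx(1000.0)
```
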
Non-functional tests:
• Component testing: Performance tests to evaluate the number of
CPU cycles required for a complex interest calculation (see the
sketch after this list).
• Component integration testing: Security tests for buffer overflow
vulnerabilities.
• System testing: Portability tests to check if the presentation layer
works on all supported browsers and mobile devices.
• System integration testing: Reliability tests to evaluate system
robustness if the credit score microservice fails.
• Acceptance testing: Usability tests to evaluate the accessibility of
the banker’s credit processing interface.

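A minimal sketch of the component-level performance test in the list above.
Counting CPU cycles normally requires a profiler, so this sketch substitutes a
wall-clock budget measured with `time.perf_counter`; the function and the
50-millisecond budget are invented for illustration.

```python
import time


def compound_interest(principal: float, rate: float, years: int) -> float:
    return round(principal * (1 + rate) ** years, 2)


def test_interest_calculation_meets_time_budget():
    # Non-functional (performance efficiency) check: 10,000 calculations
    # must finish within an assumed 50 ms budget.
    start = time.perf_counter()
    for _ in range(10_000):
        compound_interest(1000.0, 0.05, 30)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.050, f"took {elapsed:.3f}s, budget is 0.050s"
```

In practice such checks are usually run with dedicated performance testing
tools and generous margins, since wall-clock timings vary between environments.
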
White-box tests:
• Component testing: Tests designed for complete statement and
decision coverage.
• Component integration testing: Tests to exercise data passing
between screens and business logic.
• System testing: Tests to cover sequences of web pages during a
credit line application.
• System integration testing: Tests to exercise all possible inquiry
types sent to the credit score microservice.
• Acceptance testing: Tests to cover all supported financial data file
structures for bank-to-bank transfers.

Change-related tests:
• Component testing: Automated regression tests for each component
included in the continuous integration framework.
• Component integration testing: Tests to confirm fixes to interface-
related defects.
• System testing: Re-execution of all tests for a given workflow if any
screen in that workflow changes.
• System integration testing: Daily re-execution of tests interacting
with the credit scoring microservice.
• Acceptance testing: Re-execution of all previously failed tests after
a defect is fixed.

IV. MAINTENANCE TESTING
MAINTENANCE TESTING

Once deployed to production environments, software and systems need to be
maintained.
A maintenance release may require maintenance testing at multiple
test levels, using various test types, based on its scope. The scope
of maintenance testing depends on:
• The degree of risk of the change, for example, the degree to
which the changed area of software communicates with other
components or systems.
• The size of the existing system.
• The size of the change.

TRIGGERS FOR MAINTENANCE
There are several reasons why software maintenance, and thus
maintenance testing, takes place, both for planned and unplanned
changes. The triggers for maintenance can be classified as follows:
Modification: Planned enhancements (e.g., release-based), corrective and
emergency changes, changes of the operational environment (such as
planned operating system or database upgrades), upgrades of Commercial
Off-The-Shelf (COTS) software, and patches for defects and vulnerabilities.
Migration: Moving from one platform to another, which can require
operational tests of the new environment as well as the changed software,
or tests of data conversion when data from another application will be
migrated into the system being maintained.
Retirement: When an application reaches the end of its life, testing may be
required for data migration or archiving if long data retention periods are
needed. Testing restore/retrieve procedures after archiving for long
retention periods may also be necessary. Regression testing may be needed
to ensure that any functionality remaining in service still works.

IMPACT ANALYSIS FOR MAINTENANCE

Impact analysis evaluates the changes made for a maintenance release to
identify the intended consequences as well as expected
and possible side effects of a change, and to identify the areas in
the system that will be affected. Impact analysis can also help to
identify the impact of a change on existing tests. The side effects
and affected areas in the system need to be tested for regressions,
possibly after updating any existing tests affected by the change.
Impact analysis may be conducted before a change is made to help
decide if the change should be implemented, based on potential
consequences in other areas of the system.

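Where bi-directional traceability between the test basis and the tests has been
maintained, a very simple form of impact analysis can be automated. The sketch
below is hypothetical: a traceability map from requirement IDs to tests,
queried for the tests affected by a planned change.

```python
# Hypothetical traceability matrix: requirement ID -> tests that exercise it.
REQUIREMENT_TO_TESTS = {
    "REQ-010": ["test_login", "test_password_reset"],
    "REQ-011": ["test_transfer_limit"],
    "REQ-012": ["test_transfer_limit", "test_daily_statement"],
}


def impacted_tests(changed_requirements):
    """Return the tests to review and re-run for a maintenance release."""
    return {
        test
        for req in changed_requirements
        for test in REQUIREMENT_TO_TESTS.get(req, [])
    }


print(impacted_tests({"REQ-011", "REQ-012"}))
# e.g. {'test_transfer_limit', 'test_daily_statement'}
```
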
Impact analysis can be difficult if:
• Specifications (e.g., business requirements, user stories,
architecture) are out of date or missing.
• Test cases are not documented or are out of date.
• Bi-directional traceability between tests and the test basis has not
been maintained.
• Tool support is weak or non-existent.
• The people involved do not have domain and/or system knowledge.
• Insufficient attention has been paid to the software's
maintainability during development.

THANK YOU
