
Software Testing and Quality Assurance
Chapter 3: System Integration Testing

Introduction to Integration Testing
Integration testing is a phase in software testing where
individual modules are combined and tested as a group. The
purpose is to detect interface errors between modules before
conducting system testing.
 Why is Integration Testing Important?
 Modules are developed by different teams, leading to
potential communication and integration issues.
 Unit testing ensures each module works in isolation, but
integration testing verifies how they interact.
 Some modules are more error-prone than others, requiring
special attention.
 Objectives of Integration Testing
 Gradually integrate and test modules to ensure they work
correctly together.
 Identify and fix interface-related errors early.
 Ensure system stability before full-scale system testing.
Types of Interfaces and Common Errors

Types of Interfaces in Software Systems
Procedure Call Interface – One module calls a function from another.
Shared Memory Interface – Modules share a common memory block.
Message Passing Interface – Modules exchange messages, common in client-server architectures.
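As a rough illustration, the sketch below contrasts a procedure call interface with a message passing interface in Python; the module and function names are hypothetical, and the shared memory style is noted in a comment.

```python
import queue

# Procedure call interface: one module directly calls a function
# exported by another (hypothetical get_balance in an accounts module).
def get_balance(account_id: str) -> float:
    return 100.0  # placeholder balance

print(get_balance("A-42"))

# Message passing interface: modules exchange messages through a queue,
# as in client-server architectures. (A shared memory interface would
# instead have both modules read and write one common data block.)
requests: "queue.Queue[dict]" = queue.Queue()

def client_send(account_id: str) -> None:
    requests.put({"op": "get_balance", "account_id": account_id})

def server_receive() -> dict:
    return requests.get()

client_send("A-42")
print(server_receive())  # {'op': 'get_balance', 'account_id': 'A-42'}
```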
Common Interface Errors: Examples

Error Type             | Example
Misordered Parameters  | Function expecting (name, age) receives (age, name).
Changed Data Structure | A module updates an array size without informing dependent modules.
Synchronization Issues | Two processes try to access shared data simultaneously.
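A minimal sketch of the misordered-parameters row above, using hypothetical names; Python happily accepts the swapped call, so the fault only shows up when the modules are exercised together.

```python
def register(name: str, age: int) -> str:
    return f"{name} is {age} years old"

# Caller swaps the arguments: no crash, just wrong data flowing across
# the interface -- the kind of defect integration testing should expose.
print(register(30, "Alice"))            # "30 is Alice years old"

# Passing by keyword makes the interface contract explicit and safe:
print(register(name="Alice", age=30))   # "Alice is 30 years old"
```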
Common Interface Errors

Interface errors arise when modules do not interact correctly. Below are common types of interface errors:
 Construction Errors – Misuse of interface specifications (e.g., incorrect #include statements in C programs).
 Inadequate Functionality – One module assumes another provides a function that it does not.
 Location of Functionality – Confusion over where a function should be implemented.
 Changes in Functionality – Modifying one module without adjusting dependent modules.
 Added Functionality – New features added without proper integration planning.
 Misuse of Interface – Incorrect parameter passing (wrong type, order, or number of parameters).
 Misunderstanding of Interface – Caller assumes incorrect conditions about the called module.
 Data Structure Alteration – Changes in data structures without updating dependent modules.
 Inadequate Error Processing – Errors returned by a module are not handled properly by the caller.
 Additions to Error Processing – Changes in error handling require modifications in multiple modules.
 Inadequate Postprocessing – Failure to release resources (e.g., memory deallocation).
 Inadequate Interface Support – Mismatched data expectations (e.g., Celsius vs. Fahrenheit values); see the sketch after this list.
 Initialization/Value Errors – Uninitialized variables or pointers leading to unexpected behavior.
 Violation of Data Constraints – Breaking predefined relationships among data items.
 Timing/Performance Problems – Issues like race conditions due to improper synchronization.
 Coordination of Changes – Failing to communicate updates across modules.
 Hardware/Software Interface Errors – Software does not handle hardware interactions correctly, causing data loss or miscommunication.
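To make the inadequate-interface-support item concrete, here is a minimal sketch of the Celsius vs. Fahrenheit mismatch; the function and variable names are invented for illustration.

```python
def check_overheat(temp_celsius: float) -> bool:
    """Callee contract: temperature must be in degrees Celsius."""
    return temp_celsius > 90.0

sensor_reading_f = 100.0                     # sensor reports Fahrenheit
print(check_overheat(sensor_reading_f))      # True: a spurious alarm

# Converting at the interface boundary honors the callee's contract:
def f_to_c(fahrenheit: float) -> float:
    return (fahrenheit - 32.0) * 5.0 / 9.0

print(check_overheat(f_to_c(sensor_reading_f)))  # False (about 37.8 °C)
```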
Granularity of System Integration Testing

Granularity of system integration testing refers to the level of detail at which system integration testing is performed. Different levels of integration testing focus on different aspects of the system's interaction, from small module combinations to full system-wide integration.
Levels of System Integration Testing

Intrasystem Testing
This is a low-level integration test that combines different modules within the same system.
The goal is to ensure that individual components function correctly together before integrating the entire system.
Example: In a client-server system, the client and server are tested separately before testing their interactions.

Intersystem Testing
A high-level integration test where independently tested systems are connected and tested as a whole.
The focus is on verifying that different systems can work together, rather than on detailed functionality testing.
Example: Integrating a call control system with a billing system in a telecom network.

Pairwise Testing
A mid-level integration test where only two interconnected systems are tested at a time.
It assumes other systems in the environment function correctly and focuses on interactions between the two selected systems.
Example: Testing communication between a radio node and an element management system in a wireless network.
Why is Granularity Important?
It helps identify and resolve integration issues early in development.
It makes debugging easier by isolating errors at specific levels.
It ensures smoother final system integration with minimal unexpected failures.
Integration Testing Approaches

1. Top-Down Integration Testing
Starts with the top-level module and gradually integrates submodules.
Uses stubs as placeholders for lower modules until they are developed.
Advantages:
Detects high-level design flaws early.
Helps with early prototype validation.
Disadvantages:
Lower-level modules are tested later, which can delay bug detection.
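A minimal stub sketch for top-down integration, with hypothetical module names: the top-level checkout module is real, while a stub stands in for an unfinished tax module and returns a canned answer.

```python
def tax_stub(amount: float) -> float:
    """Stub: placeholder for the unfinished tax module."""
    return 10.0  # canned response, not real tax logic

def checkout_total(amount: float, tax_fn=tax_stub) -> float:
    """Real top-level module under test; the lower module is injected."""
    return amount + tax_fn(amount)

# The top-level logic can be exercised before the tax module exists:
assert checkout_total(100.0) == 110.0
```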
2. Bottom-Up Integration Testing
Begins by testing the lower-level modules first.
Uses drivers to simulate higher-level modules.
Advantages:
Detects foundational issues early.
Simplifies debugging, as lower-level components are stable first.
Disadvantages:
High-level system interactions are validated late.
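Conversely, a minimal driver sketch for bottom-up integration (names again hypothetical): the low-level parsing module is real, and a throwaway driver plays the missing higher-level caller, feeding inputs and checking outputs.

```python
def parse_record(line: str) -> dict:
    """Real low-level module under test."""
    name, age = line.strip().split(",")
    return {"name": name, "age": int(age)}

def driver() -> None:
    """Driver: simulates the calls the higher-level module will make."""
    cases = [("Alice,30", {"name": "Alice", "age": 30}),
             ("Bob,25",   {"name": "Bob",   "age": 25})]
    for line, expected in cases:
        assert parse_record(line) == expected

driver()
print("low-level module verified")
```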
3. Sandwich Integration Testing
Combines top-down and bottom-up approaches.
Tests upper and lower modules first, then integrates the middle ones.
Advantages:
Faster testing cycles.
Effective for large, layered systems.
4. Big Bang Integration Testing
All modules are integrated at once.
Risky for large systems due to difficulty in identifying errors.
Advantages:
Requires less planning.
Suitable for small, independent modules.
Disadvantages:
Debugging is complex due to multiple simultaneous failures.
Comparison Between Top-Down and Bottom-Up Approaches

Criterion                 | Top-Down Approach                                                    | Bottom-Up Approach
Integration Flow          | Starts with the main module and integrates submodules step by step. | Starts with submodules and integrates them upwards until reaching the main module.
Testing Aids              | Uses stubs to simulate missing submodules.                          | Uses drivers to simulate missing higher-level modules.
Error Detection           | Detects high-level design issues early.                             | Detects low-level module errors early but integration issues later.
Test Case Design          | More complex, as it requires designing stub responses manually.     | Simpler, as drivers mimic actual module behavior.
Reusability of Test Cases | Test cases can be reused for system-level validation.               | Most test cases cannot be reused once integration is complete.
System-Level Observation  | Observes system behavior early, but only in a limited way.          | System behavior is observed at the final stages.
Time and Cost             | Slower in detecting lower-module issues.                            | Faster in verifying individual modules.
Hardware Design Verification Tests

Hardware design verification tests are a set of tests conducted to ensure that a hardware design meets its intended specifications and functions correctly before mass production or deployment. These tests are crucial for detecting design flaws, verifying compliance with standards, and ensuring reliability under different conditions.
Key Aspects of Hardware Design Verification Tests:
1- Functional Verification – Ensures the hardware performs as expected based on design specifications.
2- Timing Analysis – Verifies that the design meets required timing constraints (e.g., clock cycles, signal propagation).
3- Power Analysis – Checks power consumption to optimize efficiency and ensure compliance with power limits.
4- Signal Integrity Testing – Ensures signals are transmitted correctly without excessive noise or distortion.
5- Environmental Testing – Evaluates the hardware's performance under different conditions such as temperature, humidity, and vibration.
6- Compliance Testing – Ensures the hardware adheres to industry standards and regulations.
7- Reliability & Stress Testing – Assesses long-term performance and resilience under extreme conditions.

These tests are typically conducted using simulation tools, prototype testing, and automated verification techniques to minimize design flaws before final production.
Hardware and Software Compatibility Matrix

What is a Hardware and Software Compatibility Matrix?
A Hardware and Software Compatibility Matrix is a document that outlines the compatibility between different versions of hardware and multiple software versions. It serves as an official record for ensuring that a product operates correctly when changes are made to its components.
When updates or modifications are made to hardware or software, an Engineering Change Order (ECO) is created. This is an official document describing the changes made to hardware or software.
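A compatibility matrix can be kept as a plain document or as machine-checkable data. Below is a minimal sketch in Python with invented version labels, showing the kind of lookup the matrix supports.

```python
# Each verified hardware release maps to the software versions it has
# been tested against (all version labels are hypothetical).
COMPATIBILITY = {
    "board-rev-A": {"sw-1.0", "sw-1.1"},
    "board-rev-B": {"sw-1.1", "sw-2.0"},
}

def is_compatible(hardware: str, software: str) -> bool:
    return software in COMPATIBILITY.get(hardware, set())

assert is_compatible("board-rev-B", "sw-2.0")
assert not is_compatible("board-rev-A", "sw-2.0")
```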
Hardware ECO Process (Engineering Change Order)

 When is this process used?
 When a hardware set needs a new release or a specific component update, the ECO process is implemented to ensure compatibility with other components.

Process Steps:
 Issuing a Design Change Notice: A notice is sent specifying potential changes and their impact on the software.
 Reviewing the Changes: The changes are reviewed by the Change Control Board (CCB) to prevent issues.
 Releasing the Engineering Change Order (ECO): Upon approval, the compatibility matrix is updated based on the new changes.
 Conducting Integration Testing: The changes are tested to ensure software compatibility with the updated hardware.
Software ECO Process (Engineering Change Order)

 When is this process used?
 When a team needs to release a new software version and ensure its compatibility with hardware components.

 Process Steps:
 Issuing a Software Build with Release Notes: Information about new changes is provided.
 System Testing: Updates are tested by the system testing team.
 Readiness Review: A meeting is held to discuss the new release's readiness.
 Releasing the Engineering Change Order (ECO): The changes are documented and approved.
 Updating the Compatibility Matrix: Official documentation is updated to ensure hardware compatibility.
 Performing New Integration Testing: The new version is verified for compatibility before the final release.

 Conclusion
 The Hardware and Software Compatibility Matrix is used to document compatibility between different component versions.
 Engineering Change Orders (ECOs) are formal processes to ensure compatibility before releasing updates.
 There are two main processes: Hardware ECO and Software ECO, each with its own steps for verifying compatibility before release.
System Integration Testing (SIT) Plan

A System Integration Testing (SIT) plan is a structured approach to testing how different system components work together. It ensures that different modules, subsystems, or external systems integrate smoothly before full system deployment.

What is the SIT Plan?
SIT requires a controlled environment, clear communication between developers and testers, and careful decision-making.
It involves planning, designing tests, and executing them over weeks or months.

Structure of the SIT Plan (Table 7.3 Overview)
A framework for SIT includes the following key sections:

1. Scope of Testing
 Defines what will be tested, including system functionality, performance, and other characteristics.

2. Integration Structure
 Different phases of integration testing (e.g., functional, end-to-end, endurance testing).
 Identifies which modules or subsystems will be integrated at each phase.
 Specifies the testing schedule (e.g., daily, weekly builds).
 Describes the test environment (hardware, simulators, software tools, etc.).

3. Criteria for Each Integration Test Phase
 Entry Criteria → What conditions must be met before testing starts?
 Exit Criteria → What conditions must be met before testing is considered successful?
 Integration Techniques → How the system components will be integrated (e.g., top-down or bottom-up).
 Test Configuration Setup → Ensures the system is ready for testing.
4. Test Specifications
Each test phase includes:
Test Case ID (unique identifier for each test).
Input Data (what data is used for testing).
Initial Condition (the system's state before the test starts).
Expected Results (what the test should produce).
Test Procedure (step-by-step execution and result interpretation).
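As a sketch only, one such test specification could be recorded as structured data; every field value below is invented to mirror the fields just listed.

```python
# Hypothetical SIT test specification following the fields above.
test_spec = {
    "test_case_id": "SIT-001",
    "input_data": {"account_id": "A-42", "amount": 100.0},
    "initial_condition": "Billing and accounts modules integrated; "
                         "database empty",
    "expected_results": "Invoice created with a total of 110.0",
    "test_procedure": [
        "Submit the input data through the billing interface.",
        "Query the accounts module for the generated invoice.",
        "Compare the invoice total against the expected result.",
    ],
}
print(test_spec["test_case_id"])
```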
5. Actual Test Results
Compares expected results with actual results.
Records problems, failures, or unusual system behavior.
6. References
Any supporting documents, standards, or guidelines used.
7. Appendix
Additional details, such as test logs or explanations of technical terms.
Off-the-Shelf (OTS) Component Integration

1. What is Off-the-Shelf Component Integration?
Instead of building software from scratch, organizations purchase ready-made software components from third-party vendors and integrate them into their systems.
This approach helps reduce costs and development time.
A major challenge in integrating different components is compatibility issues, as different vendors develop components independently.
2. Key Elements for Successful Integration
 Researchers have identified three key techniques that help integrate different software components smoothly:
 Wrappers
 A wrapper is a piece of code that isolates a component from other components (see the sketch after this list).
 It can also restrict how the component is used to avoid compatibility issues.
 Glue Components
 These are software elements that connect and unify different components.
 They help ensure smooth communication between software modules.
 Tailoring
 Tailoring adds new functionalities to a component without modifying its original code.
 Example: Using scripting to extend an application’s capabilities dynamically.
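Here is a minimal wrapper sketch in Python (the vendor component and its methods are invented): the rest of the system talks only to the wrapper, which both isolates the component and restricts how it can be used.

```python
class VendorPayment:
    """Stand-in for a purchased third-party component."""
    def pay(self, amount, currency):
        return f"paid {amount} {currency}"

class PaymentWrapper:
    """Wrapper: isolates VendorPayment from the rest of the system."""
    def __init__(self) -> None:
        self._inner = VendorPayment()

    def pay_usd(self, amount: float) -> str:
        # Restricts usage: only positive USD amounts reach the component.
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._inner.pay(amount, "USD")

print(PaymentWrapper().pay_usd(25.0))  # paid 25.0 USD
```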
3. Role of Adapters in Integration
 Adapters help solve compatibility issues between different systems.
 They act as interfaces that allow components to communicate properly.
 Adapters are especially useful for resolving syntax mismatches during integration.
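A minimal adapter sketch (all names hypothetical): the caller expects get_temp_c(), the component only offers read_fahrenheit(), and the adapter resolves the syntax mismatch between them.

```python
class LegacySensor:
    """Component with an interface the caller cannot use directly."""
    def read_fahrenheit(self) -> float:
        return 98.6

class SensorAdapter:
    """Adapter: exposes the interface the caller expects."""
    def __init__(self, sensor: LegacySensor) -> None:
        self._sensor = sensor

    def get_temp_c(self) -> float:
        return (self._sensor.read_fahrenheit() - 32.0) * 5.0 / 9.0

print(SensorAdapter(LegacySensor()).get_temp_c())  # 37.0
```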
Off-the-Shelf (OTS) Component Testing

1. What is OTS Component Testing?
Organizations test Off-the-Shelf (OTS) components before purchasing them to ensure compatibility and quality. There are two main types of testing:
Acceptance Testing – Checking if the component meets specific criteria before purchase.
Integration Testing – Ensuring the component works well with other system components after purchase.
A common issue in integration is insufficient acceptance testing, which can lead to problems during debugging and system failures.
2. Challenges in Integrating OTS Components
According to research by Basili and Boehm, OTS integration is difficult due to:
No access to the source code – Buyers cannot modify the component.
Vendor control – Only the vendor can update or fix the component.
Complex installation requirements – The component may require additional setup.
Built-in Testing (BIT)

What is Built-in Testing?
Built-in Testing (BIT) is a self-testing mechanism integrated into software components to help detect, diagnose, and handle faults during runtime. It improves the reliability of components when they are reused in new applications.
A software component can generate its own test cases or include features that allow users to conduct tests on demand. This capability is called "self-testability," a key part of the BIT methodology that makes testing and maintenance more efficient.
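A minimal self-testability sketch (the component and its invariants are invented): the component ships with a test method that users can invoke on demand at runtime.

```python
class Counter:
    """Hypothetical reusable component with built-in testing."""
    def __init__(self) -> None:
        self._value = 0

    def increment(self) -> int:
        self._value += 1
        return self._value

    def self_test(self) -> bool:
        """Built-in test: exercises a fresh instance and checks it."""
        probe = Counter()
        return probe.increment() == 1 and probe.increment() == 2

component = Counter()
assert component.self_test()  # invoked on demand, e.g. after deployment
```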