
Software Testing and Quality Assurance

Chapter 3: Levels of Testing


Brainstorming Questions

• What are the key benefits of a multi-level testing approach?
• What are the different types of testing levels, and how do they relate to each other?
• What are the goals of performance testing?
• How can you test software for security vulnerabilities?
• How can you involve end-users in the acceptance testing process?
The Need for Levels of Testing
Execution-based software testing, especially for large systems, is usually carried out at different levels. Here are some reasons:
• Comprehensive coverage: each level focuses on different aspects of the software, ensuring that all potential defects are identified.
• Early defect detection: identifying defects earlier in the development process saves time and resources.
• Improved quality: a well-tested software product is more likely to be reliable and meet user expectations.
• Risk mitigation: by testing at various levels, organizations can mitigate the risks associated with software failures.
Testing Levels

Figure 5.1: Testing levels


Unit Testing
• A unit is the smallest testable part of software.
• In procedural programming, a unit may be an individual program, function, procedure, etc.
• In OOP, the smallest unit is a method.
• Unit testing is often neglected, but it is in fact the most important level of testing.

Figure 5.2: Unit testing


Continued...
Method
• Unit testing is performed using the White Box Testing method.
When is it performed?
• Unit testing is the first level of testing and is performed prior to Integration Testing.
Who performs it?
• Unit testing is normally performed by software developers themselves or by their peers.
• In rare cases it may also be performed by independent software testers.

Figure 5.3: Unit testing



Continued...
• Best Practices
  - Test-Driven Development (TDD)
  - High Code Coverage
  - Maintainable Tests
  - Test Automation
• Testing Frameworks
  - Java: JUnit
  - .NET: NUnit
  - Python: pytest (see the sketch below)
  - JavaScript: Jest
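As a minimal sketch of what a unit test looks like in one of these frameworks, here is a hypothetical pytest example; the `add` function is invented purely for illustration:

```python
# test_calculator.py -- minimal pytest unit test sketch.
# `add` is a hypothetical unit under test, not from this chapter.

def add(a, b):
    """Unit under test: returns the sum of two numbers."""
    return a + b

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-2, -3) == -5
```

Running `pytest test_calculator.py` discovers and executes both tests automatically.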



Unit Test Planning
• A unit test plan, though often informal, is valuable. It can be a standalone document or part of a larger test plan.
• Key Phases:
  - Phase 1: Approach and Risk
    Identify risks; define test case design techniques; determine data validation and recording methods; establish completeness criteria and termination conditions.
  - Phase 2: Identify Unit Features
    Review specifications and design details to identify testable features: functions, performance, states, control structures, messages, data flow, I/O characteristics.
  - Phase 3: Plan Refinement
    Add details to the approach, resource, and scheduling sections.

Unit Test Design
• Create test cases with:
  - Input data
  - Expected outputs
  - Test procedures
• Organize test cases in a tabular or network format (see the parametrized sketch below), including:
  - Object ID
  - Test Case ID
  - Purpose
  - Test Case Steps
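One way to express such a test case table in an automated suite is pytest's parametrize feature; the unit under test and the case IDs below are hypothetical, chosen only to illustrate the tabular layout:

```python
import pytest

# Hypothetical unit under test.
def classify_age(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"

# Each row is one test case: (test case ID, input data, expected output).
CASES = [
    ("TC-01", 0, "minor"),
    ("TC-02", 17, "minor"),
    ("TC-03", 18, "adult"),
    ("TC-04", 65, "adult"),
]

@pytest.mark.parametrize("case_id, age, expected", CASES)
def test_classify_age(case_id, age, expected):
    assert classify_age(age) == expected
```

Each row runs as an independent test, and the case ID appears in pytest's generated test name.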

Test Harness
• A test harness is a framework used to test software components.
• It includes drivers that call the component under test and stubs that simulate the modules it interacts with (a sketch follows the figure).

Figure 5.4: Test harness
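A minimal hand-written harness might look like the following sketch; the invoice function, stub, and driver are all hypothetical:

```python
# Test harness sketch: a driver exercises the unit under test, while a
# stub stands in for the tax module the unit normally calls.

def fake_tax_rate(region):
    """Stub: simulates the real tax service with a fixed answer."""
    return 0.10

def compute_invoice(amount, tax_rate_lookup, region="EU"):
    """Hypothetical unit under test: total = amount plus regional tax."""
    return amount * (1 + tax_rate_lookup(region))

def driver():
    """Driver: feeds input to the unit and checks the output."""
    result = compute_invoice(100.0, fake_tax_rate)
    assert abs(result - 110.0) < 1e-9
    print("harness check passed:", result)

if __name__ == "__main__":
    driver()
```

Passing the stub in as a parameter keeps the unit testable in isolation from the real tax module.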

Running the Unit Tests and Recording Results
• Unit tests can begin when:
  - Units are available from developers.
  - Test cases are designed and reviewed.
  - The test harness and supporting tools are ready.
• Besides defects in the unit itself, potential causes for test failures include:
  - Incorrect test case specification.
  - Execution errors.
  - Environment faults.
  - Unit design flaws.

Continued…
• The test summary report should document the causes of any test failures.

Figure 5.5: Summary worksheet for unit test results

Integration Testing
• Integration testing is a software testing method that combines individual units and tests them as a group to verify their interactions and interfaces.
• It ensures that different parts of the system work together seamlessly (a small example follows).
Analogy
• During the manufacture of a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge, and the ballpoint are produced and unit tested separately. When two or more units are ready, they are assembled and integration testing is performed: for example, checking whether the cap fits the body.
Method
• Any of the Black Box, White Box, and Grey Box Testing methods can be used.
• The integration testing strategy determines the order in which subsystems are tested and integrated.
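As an illustrative sketch (not from the chapter), here two hypothetical unit-tested classes are combined and exercised through their shared interface:

```python
# Integration test sketch: a parser and a repository, each assumed to
# have passed unit testing, are now tested together as a group.

class OrderParser:
    def parse(self, line):
        sku, qty = line.split(",")
        return {"sku": sku.strip(), "qty": int(qty)}

class OrderRepository:
    def __init__(self):
        self._orders = []

    def save(self, order):
        self._orders.append(order)

    def count(self):
        return len(self._orders)

def test_parser_and_repository_integrate():
    parser, repo = OrderParser(), OrderRepository()
    repo.save(parser.parse("ABC-1, 3"))  # data crosses the interface
    assert repo.count() == 1
```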

Integration Testing Strategy
• Top Down is an approach to integration testing where top-level units are tested first and lower-level units are tested step by step after that. This approach is taken when top-down development is followed.
• Bottom Up is an approach to integration testing where bottom-level units are tested first and upper-level units are tested step by step after that. This approach is taken when bottom-up development is followed.

Figure 5.6: Integration testing
Continued...
• Big Bang is an approach to integration testing where all or most of the units are combined and tested in one go. This approach is taken when the testing team receives the entire software in a bundle.
• Sandwich/Hybrid is an approach to integration testing that combines the Top Down and Bottom Up approaches.

Figure 5.8: Integration testing
Continued...
• An integration test plan includes:
  - Dependencies: clusters this cluster relies on.
  - Functionality: a description of what the cluster does.
  - Classes: a list of classes within the cluster.
  - Test Cases: a set of tests for the cluster.
• The integration testing strategy depends on various factors, including:
  - Project complexity
  - Module dependencies
  - Team structure and skills
  - Time constraints
System Testing
• System testing involves testing the entire system to ensure it meets all requirements.
• It involves creating a detailed test plan, designing test cases, and executing them to identify and fix defects.
• This process is crucial for verifying both functional behaviour and non-functional attributes such as performance, security, and usability.
• It is resource-intensive, since both functional behaviour and quality attributes must be verified.
• It is crucial for detecting external interface defects and complex issues.
• It is performed by a dedicated team.
• It is followed by user acceptance testing (alpha/beta).
Continued…

Figure 5.10: Types of System Tests

Functional Testing
• Goal: test the functionality of the system.
• Functional tests verify system behaviour against the requirements.
• Example: a personal finance system is tested for account setup, entry management, and reporting (sketched below).
• Client expectation: functional tests are essential for acceptance.
• Black box testing: the focus is on external behaviour.
• Test cases are based on the requirements and key functions (use cases).
• Test case reuse: unit test cases can be reused, but new ones are also needed.
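A hedged sketch of a black-box functional test for the personal finance example; `FinanceApp` is a hypothetical facade, and only requirements-level behaviour is checked, never internal structure:

```python
# Functional (black-box) test sketch for the personal finance example.

class FinanceApp:
    """Hypothetical system under test, shown only to make the test run."""
    def __init__(self):
        self._accounts = {}

    def create_account(self, name):
        self._accounts[name] = []

    def add_entry(self, name, amount):
        self._accounts[name].append(amount)

    def balance(self, name):
        return sum(self._accounts[name])

def test_account_setup_entries_and_reporting():
    app = FinanceApp()
    app.create_account("checking")            # requirement: account setup
    app.add_entry("checking", 250.0)          # requirement: entry management
    app.add_entry("checking", -75.0)
    assert app.balance("checking") == 175.0   # requirement: reporting
```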

Performance Testing
• Performance testing aims to evaluate a system's response to various workloads and conditions. It involves:
  - Stress testing: simulating heavy loads to assess the system's behaviour under stress.
  - Volume testing: testing the system's performance with large volumes of data.
  - Endurance testing: assessing the system's ability to handle sustained workloads over time.
• Goals (a timing sketch follows):
  - Identify performance bottlenecks.
  - Optimize resource allocation.
  - Predict future performance.
  - Ensure the system meets non-functional requirements.
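A minimal sketch of measuring response time as the workload grows; `process` is a hypothetical stand-in for whatever operation is under test:

```python
import time

def process(records):
    """Hypothetical workload: stand-in for the operation under test."""
    return sorted(records)

# Measure response time at increasing data volumes.
for size in (1_000, 10_000, 100_000):
    data = list(range(size, 0, -1))
    start = time.perf_counter()
    process(data)
    elapsed = time.perf_counter() - start
    print(f"{size:>7} records: {elapsed * 1000:.1f} ms")
```

Plotting elapsed time against volume is one simple way to spot where performance starts to degrade.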

Stress Testing
• Stress testing subjects a system to a load that exceeds its expected capacity.
• Purpose: identify the system's breaking point and assess behaviour under extreme conditions.
• Example: an operating system designed for 10 interrupts/second might be tested with 20 interrupts/second (a sketch in the same spirit follows).
• Goal: push the system to its limits to identify bottlenecks, degradation, and failures.
• Importance: ensures reliability under peak workloads, boosting user confidence.
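In the same spirit, a rough sketch that fires twice a component's assumed rated load as concurrent requests and counts failures; the handler and the rating are hypothetical:

```python
import threading

# Hypothetical component rated for ~10 requests/second; the stress test
# fires 20 concurrent requests to probe behaviour past that limit.

def handle_request(results, i):
    try:
        results[i] = "ok"  # stand-in for real request handling
    except Exception:
        results[i] = "failed"

results = {}
threads = [threading.Thread(target=handle_request, args=(results, i))
           for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

failures = sum(1 for v in results.values() if v != "ok")
print(f"{len(results)} requests sent, {failures} failures under stress")
```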

Configuration Testing
• Verifies system behaviour when hardware or software components are changed.
• Ensures system stability and operability after modifications.
• Evaluates system performance and availability under different hardware setups.
• Critical for specialized software such as real-time or embedded systems.
• Addresses the user need for hardware flexibility and interchangeability (a parametrized sketch follows).
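One possible sketch: run the same start-up check across a hypothetical configuration matrix; the configurations and the `start_system` stand-in are invented for illustration:

```python
import pytest

# Hypothetical configuration matrix: the same behaviour is verified
# under each supported setup.
CONFIGS = [
    {"os": "linux", "db": "postgres"},
    {"os": "linux", "db": "sqlite"},
    {"os": "windows", "db": "postgres"},
]

def start_system(config):
    """Stand-in for bringing the system up under a given configuration."""
    return {"status": "up", **config}

@pytest.mark.parametrize("config", CONFIGS)
def test_system_starts_under_each_configuration(config):
    assert start_system(config)["status"] == "up"
```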

Security Testing
• Protects software systems from vulnerabilities.
• Ensures the safety of user data.
• Meets the increasing reliance on internet-based applications.
• Safeguards user data (confidentiality, integrity, availability).
• The role of security testing is to evaluate a system's ability to withstand threats (an input-validation sketch follows).
• Users and clients must communicate security requirements clearly to developers and testers; effective communication ensures proper implementation.
• Security measures should be incorporated early in the development process.
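A small illustrative negative test, assuming a hypothetical `is_safe_username` validator; it checks that classic hostile inputs are rejected before they can reach deeper layers:

```python
import re

# Security test sketch: hostile inputs must be rejected at validation.

def is_safe_username(value):
    """Hypothetical validator: allow only short alphanumeric names."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{1,32}", value))

HOSTILE_INPUTS = [
    "admin'--",                   # SQL injection attempt
    "<script>alert(1)</script>",  # cross-site scripting attempt
    "../../etc/passwd",           # path traversal attempt
]

def test_hostile_inputs_are_rejected():
    for payload in HOSTILE_INPUTS:
        assert not is_safe_username(payload)
```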

Other Types of Performance Testing
• Volume testing: test what happens when large amounts of data are handled.
• Compatibility testing: test backward compatibility with existing systems.
• Timing testing: evaluate response times and the time to perform a function.
• Environmental testing: test tolerances for heat, humidity, and motion.
• Quality testing: test reliability, maintainability, and availability.
• Recovery testing: test the system's response to the presence of errors or loss of data.
• Human factors testing: test with end users.
Acceptance Testing
• Goal: demonstrate that the system is ready for operational use.
• Test selection: the client chooses the tests, often based on integration testing.
• The client performs the testing, using the requirements and the user manual.
• Test reusability: system test cases may be applicable, but real-world evaluation is also needed.
• Post-acceptance feedback: the client identifies fulfilled and unfulfilled requirements.
• Client approval: if satisfied, the client approves installation.
• Installation and retesting: developers set up the system and retest if needed.
Alpha and Beta Testing
Alpha test:
• Alpha testing is conducted by the client in the developer's environment.
• Developers can directly observe and address problems.
• Key areas tested include usability, functionality, and content accuracy.
• It aims to identify and fix issues such as typos, broken links, and unclear instructions before release.
Beta test:
• A pre-release stage where the software is deployed in a real-world environment.
• No developer presence.
• Assesses performance under actual usage conditions.
• Users provide feedback on:
  - Typographical errors
  - Confusing application flow
  - Crashes or unexpected behavior
Information Needed at Different Levels of Testing

Figure 5.11: Testing levels


System Testing

Figure 5.12: System testing


Regression Testing
• Testing to ensure existing functionality remains intact after software changes.
• Purpose: verify that modifications have not introduced new bugs or impacted previous features.
• Importance: maintains previous functionality in multi-release projects.
• Key Practices:
  - Re-test affected components.
  - Keep old test cases and procedures.
  - Regression testing can be manual or automated (a pytest sketch follows).
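A minimal sketch of the automated route, using a hypothetical custom pytest marker so the guard tests can be rerun after every change:

```python
import pytest

# Regression suite sketch: tests that guard existing behaviour are tagged
# so the whole set can be rerun after each change with:
#   pytest -m regression
# (the custom "regression" marker would be registered in pytest.ini)

def legacy_discount(price):
    """Hypothetical existing behaviour that must not change."""
    return round(price * 0.9, 2)

@pytest.mark.regression
def test_discount_unchanged_after_change():
    assert legacy_discount(100.0) == 90.0
```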

Regression Testing Levels
• Unit level: test individual units of code after changes.
• Re-integration: test interactions between changed units and the rest of the system.
• Function level: test specific functionalities after changes.
• System level: test the entire system after changes.
Requirements:
• Change information: details of the modifications.
• Updated documentation: updated requirements, design, and user manuals.
• Testing process: plan, design, execution, and evaluation.
• Testing methods: techniques used (e.g., test cases, automation).
• Criteria: pass/fail conditions for test cases.
Continued…
The regression testing process flows through the following steps:
1. Software change analysis
2. Software change impact analysis
3. Define regression testing strategy
4. Build regression test suite
5. Run regression tests at different levels
6. Report retest results

Thank you!
Questions?
