Software Testing Lecture Notes


DR. R.A.N.M. ARTS AND SCIENCE COLLEGE


(Co-Education), Rangampalayam, Erode - 638009
Affiliated to Bharathiar University
Accredited with B+ by NAAC

LECTURE HANDOUTS

B.SC – Computer Science

Semester V

Skill Based Subject 3: Software Testing

Total Lecture Hours - 75

Course Faculty: Yuvaraj S

Course Objectives:

The main objectives of this course are to:

1. Study fundamental concepts in software testing.

2. Discuss various software testing issues and solutions in unit, integration, and system testing.

3. Expose students to advanced software testing topics, such as object-oriented software testing methods.

4. Introduce different software testing techniques and strategies, and enable students to apply specific automated unit testing methods to projects.

UNIT 1 – Software Development Life Cycle Models (15 Hours)

Software Development Life Cycle models: Phases of a Software Project – Quality, Quality Assurance, Quality Control – Testing, Verification and Validation – Process Model to Represent Different Phases – Life Cycle Models. White-Box Testing: Static Testing – Structural Testing – Challenges in White-Box Testing.
Software Development Life Cycle Models:

The Software Development Life Cycle (SDLC) is a systematic process for planning, creating, testing, and deploying software. Various models define the stages of this cycle. Some common SDLC models include:

1. Waterfall Model:

 Sequential approach with distinct phases.

 Progression to the next phase only after completion of the previous one.

2. Iterative Model:

 Repeating cycles of development and refinement.

 Allows for flexibility and refinement during the development process.

3. Agile Model:

 Incremental and iterative approach.

 Emphasizes adaptability to changing requirements and customer feedback.

4. Spiral Model:

 Combines aspects of both waterfall and iterative models.

 Iterative cycles with risk assessment and planning in each iteration.

Phases of a Software Project:

1. Initiation:

 Identifying the need for software.

 Defining project scope, objectives, and feasibility analysis.

2. Planning:

 Creating a detailed project plan.

 Resource allocation, scheduling, and risk management.


3. Execution:

 Implementation of the project plan.

 Coding, testing, and integration of software components.

4. Monitoring and Controlling:

 Tracking project performance.

 Making adjustments to ensure that project goals are met.

5. Closing:

 Finalizing all project activities.

 Handing over deliverables and obtaining customer feedback.

Quality, Quality Assurance, and Quality Control:

1. Quality:

 Meeting customer expectations and requirements.

 A measure of excellence or superiority.

2. Quality Assurance (QA):

 The systematic process to ensure quality in the development process.

 Includes process audits, code reviews, and adherence to standards.

3. Quality Control (QC):

 Ensuring the quality of the product itself.

 Involves testing, inspections, and validation processes.

Testing, Verification, and Validation:

1. Testing:

 Systematically evaluating a system or component.

 Identifying defects or ensuring that the system works as intended.


2. Verification:

 Evaluation of work products during development.

 Ensures that the product is being built according to the requirements.

3. Validation:

 Evaluation of the final product to ensure it meets the customer's needs.

 Confirms that the product satisfies the specified requirements.

Process Models to Represent Different Phases:

1. Unified Process (UP):

 Iterative development with focus on use cases.

 Emphasizes collaboration and component-based architecture.

2. Capability Maturity Model Integration (CMMI):

 Framework for process improvement.

 Helps organizations improve their processes and systems.

White-Box Testing:

1. Static Testing:

 Examining the code without executing it.

 Includes reviews, inspections, and walkthroughs.

2. Structural Testing:

 Testing the structure or internal logic of the software.

 Examples include statement coverage and branch coverage.
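
To make the structural-testing idea concrete, here is a minimal sketch (the discount function and test values are illustrative assumptions, not from the syllabus) showing test cases chosen so that every statement and both outcomes of each branch are exercised:

```python
import pytest

# Hypothetical function used only to illustrate statement and branch coverage.
def apply_discount(price: float, is_member: bool) -> float:
    """Members get 10% off; orders over 100 get an extra 5% off."""
    if price < 0:
        raise ValueError("price must be non-negative")
    discount = 0.0
    if is_member:            # branch 1: member / non-member
        discount += 0.10
    if price > 100:          # branch 2: large / small order
        discount += 0.05
    return round(price * (1 - discount), 2)

# Together these tests execute every statement and both outcomes of each branch.
def test_non_member_small_order():
    assert apply_discount(50, is_member=False) == 50.0

def test_member_large_order():
    assert apply_discount(200, is_member=True) == 170.0

def test_negative_price_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(-1, is_member=False)
```

Assuming the pytest and coverage.py packages are installed, running `coverage run --branch -m pytest` followed by `coverage report -m` would show whether any statement or branch outcome remains untested.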

Challenges in White-Box Testing:

1. Complexity:

 Dealing with intricate code structures.

 Ensuring all code paths are tested.


2. Resource Intensive:

 Requires a deep understanding of the code.

 Time-consuming compared to black-box testing.

3. Maintenance Issues:

 Challenges in updating or modifying the code.

 Can be more difficult when the code base is large.


UNIT 2 – Black Box Testing (15 Hours)

Black-Box Testing: What is Black-Box Testing? – Why Black-Box Testing? – When to do Black-Box Testing? – How to do Black-Box Testing? – Challenges in White-Box Testing – Integration Testing: Integration Testing as a Type of Testing – Integration Testing as a Phase of Testing – Scenario Testing – Defect Bash.

Black-Box Testing:

What is Black-Box Testing?


Black-Box Testing is a software testing methodology that focuses on assessing
the functionality of a system without delving into its internal code or structure.
In other words, the tester is oblivious to the internal workings of the application
and treats it as a "black box" where inputs are fed, and outputs are observed.
The primary goal is to evaluate the software's compliance with specified
requirements and ensure that it functions as expected from the end user's
perspective.

Why Black-Box Testing?

1. User-Centric Approach: Black-Box Testing mirrors the end user's experience, ensuring that the software meets user expectations. This method is vital for validating the software's functionality and user interface.
2. Independent Testing: Since testers don't need knowledge of the internal
code, Black-Box Testing allows for independent testing by individuals or
teams without direct involvement in the software development process.
3. Requirement Verification: It helps in verifying whether the software
meets the specified requirements and adheres to the defined
specifications. This is crucial for ensuring that the software aligns with
user needs and business objectives.
4. Error Localization: Black-Box Testing is effective in identifying errors,
anomalies, or unexpected behaviors that may occur during real-world
usage. This aids in improving the overall reliability of the software.
When to do Black-Box Testing?

1. Functional Testing: Black-Box Testing is ideal for functional testing, where the primary focus is on the software's functionality and features. It ensures that the application performs as intended.
2. System Testing: During the system testing phase of software
development, Black-Box Testing is employed to evaluate the integrated
system's overall functionality.
3. Acceptance Testing: It is widely used in acceptance testing to verify
whether the software meets user requirements and is ready for
deployment.

How to do Black-Box Testing?

1. Define Test Cases: Develop comprehensive test cases based on the software requirements, specifications, and user expectations. These test cases will serve as a guide for executing the testing process.
2. Input-Output Analysis: Feed various inputs into the system and analyze
the corresponding outputs. The focus is on understanding how the
software responds to different inputs without knowing the internal logic.
3. Equivalence Partitioning: Group input data into partitions or classes
that are expected to exhibit similar behaviors. This helps in minimizing
the number of test cases while maximizing coverage.
4. Boundary Value Analysis: Test the software's behavior at the boundaries of permissible input values. This helps in uncovering potential issues related to edge cases (a small pytest sketch illustrating techniques 3 and 4 follows this list).
5. Random Testing: Conduct random testing by inputting data without a
predefined pattern. This helps in identifying unforeseen issues that might
arise during real-world usage.
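
The sketch below makes techniques 3 and 4 concrete with a hypothetical eligibility rule (valid ages 18 to 60); the rule, its limits, and the test values are assumptions chosen purely for illustration:

```python
import pytest

# Hypothetical black-box rule under test: an applicant is eligible if 18 <= age <= 60.
def is_eligible(age: int) -> bool:
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per partition is usually enough.
@pytest.mark.parametrize("age, expected", [
    (10, False),   # partition: below the valid range
    (35, True),    # partition: inside the valid range
    (75, False),   # partition: above the valid range
])
def test_equivalence_partitions(age, expected):
    assert is_eligible(age) == expected

# Boundary value analysis: test on and immediately around each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False), (18, True),   # lower boundary
    (60, True), (61, False),   # upper boundary
])
def test_boundary_values(age, expected):
    assert is_eligible(age) == expected
```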
In conclusion, Black-Box Testing is an indispensable part of the software
testing lifecycle, ensuring that software meets user requirements and
functions seamlessly in diverse scenarios. Its emphasis on the end user's
perspective and independence from internal code intricacies make it a
valuable testing methodology in the realm of software quality assurance.

Challenges in White Box Testing:

1. Code Complexity:
 Challenge: White Box Testing requires an in-depth understanding of the
internal code structure. In cases of complex codebases, understanding
every pathway and interaction can be challenging.
 Solution: Testers and developers need to collaborate closely to ensure
comprehensive test coverage. Documentation and code comments can
also assist in understanding intricate code.

2. Code Changes and Maintenance:


 Challenge: Frequent changes to the codebase, especially in agile
development, can pose challenges in keeping white box tests up-to-date.
 Solution: Establishing a robust version control system and adopting
continuous integration practices can help streamline the process of
updating and maintaining tests with code changes.
3. Testing All Paths:
 Challenge: Achieving complete path coverage in complex applications
can be practically impossible due to the sheer number of possible paths.
 Solution: Prioritize testing critical paths and focus on boundary
conditions. Techniques such as path analysis and code coverage tools
can aid in identifying untested areas.
4. Security Concerns:
 Challenge: White Box Testing may not adequately address security
vulnerabilities, as it often focuses on functional aspects rather than
potential exploits.
 Solution: Combine White Box Testing with security testing
methodologies like penetration testing to identify and address security
weaknesses.
5. Skill Dependency:
 Challenge: Effective White Box Testing requires skilled testers with a
deep understanding of programming languages and software
architecture.
 Solution: Invest in training programs for testers to enhance their
programming and code analysis skills. Collaboration with developers can
bridge the knowledge gap.

Integration Testing: Uniting Components for Seamless Functionality


Integration Testing as a Type of Testing:

1. Definition:
 Integration Testing is a software testing phase where individual
components or modules are combined and tested as a group to ensure
they function seamlessly together. The primary goal is to identify and
address issues related to the interaction between integrated
components.

2. Types of Integration Testing:


 Top-Down Integration Testing: Starts with testing the higher-level modules and gradually incorporates lower-level modules. Stubs are used for simulating lower-level modules (a minimal stub sketch appears at the end of this subsection).

 Bottom-Up Integration Testing: Begins with testing the lower-level modules and progressively integrates higher-level modules. Drivers are used to simulate higher-level modules.
3. Benefits:
 Early Issue Identification: Integration Testing helps identify interface
issues and integration problems early in the development process,
preventing them from escalating into more significant problems.
 Improved Collaboration: It encourages collaboration between
development teams working on different modules, ensuring that the
integrated system functions cohesively.
 Increased Confidence: Successful integration testing instills confidence in
the reliability and performance of the overall system, making it ready for
subsequent testing phases and deployment.
4. Strategies:
 Big Bang Integration: All components are integrated simultaneously, and
the entire system is tested as a whole. This approach is suitable for small
projects or when component interfaces are well-defined.
 Incremental Integration: Components are integrated and tested
incrementally, one at a time. This allows for the detection of issues early
in the development process.
5. Challenges:
 Complexity: As the number of components increases, the complexity of
integration testing grows, making it challenging to cover all possible
interactions.
 Dependency Management: Ensuring that all dependencies between
modules are properly handled can be complex, especially in large and
interconnected systems.
 Environment Setup: Creating a realistic testing environment that mirrors
the production environment can be time-consuming and resource-
intensive.
6. Tools:
 JUnit: A widely used testing framework for Java applications that
supports integration testing.
 PyTest: A testing framework for Python that facilitates integration
testing.
In summary, Integration Testing plays a pivotal role in ensuring that different
components of a software system work seamlessly together. Despite its
challenges, effective integration testing is crucial for building robust and
reliable software systems.
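
As a minimal sketch of the stub idea used in top-down integration testing, the example below tests a hypothetical higher-level place_order function while the not-yet-integrated payment module is replaced by a stub that returns canned responses; all names are invented for illustration:

```python
# Higher-level module under test; it depends on a lower-level payment gateway.
def place_order(amount: float, payment_gateway) -> str:
    """Charge the customer and return an order status."""
    if payment_gateway.charge(amount):
        return "CONFIRMED"
    return "PAYMENT_FAILED"

# Stub standing in for the real (not yet integrated) lower-level payment module.
class PaymentGatewayStub:
    def __init__(self, succeed: bool):
        self.succeed = succeed
        self.charged_amounts = []        # record calls so tests can inspect the interface

    def charge(self, amount: float) -> bool:
        self.charged_amounts.append(amount)
        return self.succeed              # canned response instead of a real payment

def test_order_confirmed_when_payment_succeeds():
    stub = PaymentGatewayStub(succeed=True)
    assert place_order(99.0, stub) == "CONFIRMED"
    assert stub.charged_amounts == [99.0]

def test_order_rejected_when_payment_fails():
    stub = PaymentGatewayStub(succeed=False)
    assert place_order(99.0, stub) == "PAYMENT_FAILED"
```

In bottom-up integration the roles reverse: a small driver (test code) calls the real lower-level module directly while the higher-level modules are still missing.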

Integration Testing as a Phase of Testing:

1. Definition:

 Integration Testing is a crucial phase in the software testing life cycle where individual components or modules are combined and tested together to ensure they interact seamlessly. The primary objective is to identify and rectify issues related to the interfaces and interactions between integrated components.
2. Importance:

 Early Issue Detection: Integration Testing helps identify problems in the interaction between components early in the development process, reducing the likelihood of more complex issues arising later.

 System Reliability: By testing the integration of different modules, this phase ensures that the system functions as a cohesive unit, meeting the specified requirements.

 Collaboration: It encourages collaboration between development teams working on different modules, fostering communication and coordination.

3. Types of Integration Testing:

 Top-Down Integration Testing: Focuses on testing higher-level modules first, gradually incorporating lower-level modules. Stubs simulate the behavior of lower-level modules.

 Bottom-Up Integration Testing: Begins with testing lower-level modules and progressively integrates higher-level modules. Drivers simulate the behavior of higher-level modules.

 Big Bang Integration Testing: All components are integrated simultaneously, and the entire system is tested as a whole.

 Incremental Integration Testing: Components are integrated and tested incrementally, one at a time.

4. Challenges:

 Complexity: As the number of components increases, so does the complexity of integration testing, making it challenging to cover all possible interactions.

 Dependency Management: Ensuring that all dependencies between modules are properly handled can be complex, especially in large and interconnected systems.

 Environment Setup: Creating a realistic testing environment that mirrors the production environment can be time-consuming and resource-intensive.

Scenario Testing:

1. Definition:

 Scenario Testing is a software testing technique that involves creating and executing real-world scenarios to validate the application's functionality. It goes beyond individual test cases and focuses on testing end-to-end user interactions with the software.

2. Key Aspects:

 User-Centric: Scenarios are designed to mimic real-world usage, ensuring that the application meets user expectations and requirements.

 End-to-End Testing: It involves testing the complete workflow of a particular feature or functionality, including all possible paths and interactions.

 Data Variation: Scenarios often incorporate different sets of data to simulate diverse user inputs and usage patterns.

3. Benefits:

 Comprehensive Testing: Scenario testing ensures comprehensive coverage by considering various aspects of user interactions and workflows.

 User Satisfaction: By focusing on real-world scenarios, this testing method helps ensure that the software is user-friendly and satisfies user expectations.

 Issue Identification: It can reveal issues that may not be apparent in individual test cases, such as integration issues or unexpected interactions between features.

4. Steps in Scenario Testing:

 Identify Scenarios: Define scenarios based on user stories, business requirements, and expected user interactions.

 Create Test Data: Prepare relevant test data that reflects different
scenarios and user inputs.
 Execute Scenarios: Run the scenarios, observing how the system behaves
in each situation, and identify any deviations from expected behavior.

 Document Results: Document the results, including any issues or unexpected behaviors encountered during scenario execution.
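
As a hedged illustration of these steps, the sketch below walks a hypothetical in-memory shopping cart through a complete add, remove, and checkout workflow rather than checking a single function in isolation; the cart class and values are invented for the example:

```python
# A tiny in-memory cart, invented purely to illustrate an end-to-end scenario test.
class ShoppingCart:
    def __init__(self):
        self.quantities = {}
        self.prices = {}

    def add(self, name: str, price: float, qty: int = 1):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.quantities[name] = self.quantities.get(name, 0) + qty
        self.prices[name] = price

    def remove(self, name: str):
        self.quantities.pop(name, None)

    def total(self) -> float:
        return round(sum(self.prices[n] * q for n, q in self.quantities.items()), 2)

    def checkout(self) -> str:
        if not self.quantities:
            raise ValueError("cannot check out an empty cart")
        return "ORDER_PLACED"

def test_purchase_scenario():
    """Scenario: a user adds two products, changes their mind about one, then checks out."""
    cart = ShoppingCart()
    cart.add("notebook", 3.50, qty=2)          # step 1: add items
    cart.add("pen", 1.25)
    cart.remove("pen")                          # step 2: remove an item
    assert cart.total() == 7.00                 # step 3: verify the running total
    assert cart.checkout() == "ORDER_PLACED"    # step 4: complete the workflow
```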

Defect Bash:

1. Definition:

 Defect Bash, also known as Bug Bash or Bug Bash Testing, is an informal
and collaborative testing event where stakeholders, including
developers, testers, and sometimes end-users, come together to identify
and address software defects.

2. Objectives:

 Intensive Testing: The primary goal is to conduct intensive testing in a short period, often with a specific focus on finding and fixing defects.

 Collaboration: Encourages collaboration and communication between different stakeholders, fostering a sense of shared responsibility for software quality.

3. Key Features:

 Time-Bound: Defect Bashes are usually time-bound events, ranging from a few hours to a few days, during which participants actively explore the software for defects.

 Diverse Testers: Involves participants with diverse roles, including developers, testers, and sometimes end-users, to bring different perspectives to the testing process.

4. Process:

 Defect Identification: Participants actively explore the software, identify defects, and report them in a centralized system.

 Defect Prioritization: Once defects are identified, they are prioritized based on severity and potential impact on the software.

 Resolution: Development and testing teams collaborate to address and fix the identified defects.
5. Benefits:

 Rapid Issue Identification: Defect Bashes help rapidly identify and address defects, particularly those that might not be uncovered in regular testing cycles.

 Team Building: Collaborative testing events foster team building and communication, breaking down silos between different roles in the software development process.

 User Feedback: If end-users are involved, Defect Bashes provide an opportunity for them to provide direct feedback on the software.

UNIT 3 – System and Acceptance Testing (15 Hours)

System and Acceptance Testing: System Testing Overview – Why System Testing is Done? – Functional versus Non-functional Testing – Functional Testing – Non-functional Testing – Acceptance Testing – Summary of Testing Phases.

System Testing Overview:

1. Definition:

 System Testing is a comprehensive testing phase in the software development life cycle where the complete and integrated software system is tested. The primary goal is to ensure that the software functions as intended in the specified environment and meets the defined requirements.

2. Key Characteristics:

 End-to-End Testing: System testing involves testing the entire software system from end to end, including all integrated components and interactions.

 External Interfaces: It assesses how the system interacts with external elements, such as databases, hardware, networks, and other software applications.

 System Behavior: The focus is on evaluating the system's behavior and performance under various conditions to ensure it aligns with user expectations.

3. Types of System Testing:

 Functional Testing: Ensures that the system functions according to specified requirements.

 Performance Testing: Evaluates the system's responsiveness, scalability, and overall performance.

 Security Testing: Identifies and addresses vulnerabilities and ensures the system is secure against unauthorized access.

 Usability Testing: Assesses the user-friendliness of the system and its adherence to user experience design principles.

 Compatibility Testing: Ensures the system works seamlessly across different platforms, browsers, and devices.

4. Test Environment:

 Realistic Environment: System testing is conducted in an environment that closely mirrors the production environment to simulate real-world conditions.

 Test Data: It involves using realistic and diverse test data to assess the
system's behavior in various scenarios.

5. Testing Levels:

 Alpha Testing: Internal testing conducted by the development team before releasing the software to a selected group of users.

 Beta Testing: External testing involving a group of end-users who evaluate the software in a real-world environment.

 Acceptance Testing: Ensures that the software meets user acceptance criteria and is ready for deployment.

Why System Testing is Done?

1. Validation of Requirements:

 Ensuring Compliance: System testing verifies that the software complies with the specified requirements outlined in the software requirements specification (SRS) or other relevant documents.

2. Comprehensive Testing:

 End-to-End Evaluation: System testing assesses the entire software system, providing a holistic view of its functionality and behavior.

3. Integration Verification:

 Integrated Functionality: Validates that all integrated components and modules work seamlessly together, identifying and addressing any integration issues.
4. Defect Identification:

 Early Issue Detection: System testing helps identify defects and issues
early in the development process, reducing the likelihood of critical
issues surfacing later.

5. Performance Assessment:

 Scalability and Responsiveness: Evaluates the system's performance under varying conditions, assessing factors like scalability, responsiveness, and resource utilization.

6. Security Assurance:

 Vulnerability Identification: Security testing within the system testing phase helps identify and address vulnerabilities, ensuring the software is secure against potential threats.

7. User Satisfaction:

 Usability and User Experience: Ensures that the software is user-friendly, meets design expectations, and provides a positive user experience.

8. Compliance and Standards:

 Adherence to Standards: Verifies that the software complies with industry standards, regulations, and any legal requirements applicable to the domain.

9. Risk Mitigation:

 Risk Assessment: System testing allows for the identification and mitigation of potential risks associated with the software's functionality, performance, and security.

10. Readiness for Deployment:

 Deployment Assurance: Confirms that the software is ready for deployment by ensuring it meets the necessary quality standards and user acceptance criteria.

Functional Testing:

1. Definition:
 Functional Testing is a type of software testing that focuses on verifying
that the software functions according to the specified requirements. It
involves testing the application's features and functionality to ensure
they meet the intended user expectations.

2. Key Aspects:

 User Requirements: Functional testing primarily revolves around validating whether the software satisfies the user's specified requirements.

 Test Cases: Test cases for functional testing are derived from the
software requirements and cover various scenarios to assess the
functionality comprehensively.

3. Types of Functional Testing:

 Unit Testing: Focuses on testing individual units or components of the software in isolation to ensure they perform as intended.

 Integration Testing: Evaluates the interaction and cooperation between different components or modules to verify the seamless integration of the entire system.

 System Testing: Tests the entire system's functionality in an integrated environment to ensure it meets specified requirements.

 Acceptance Testing: Ensures that the software satisfies user acceptance criteria and is ready for deployment.

4. Common Functional Testing Techniques:

 Black-Box Testing: Focuses on assessing the software's functionality without examining its internal code or structure.

 White-Box Testing: Involves testing the software with knowledge of its internal code, structure, and logic.

 User Acceptance Testing (UAT): Conducted by end-users to validate that the software meets their business needs and requirements.

Non-functional Testing:

1. Definition:
 Non-functional Testing is a type of software testing that assesses the
non-functional aspects of a system, such as performance, security,
usability, and scalability. Unlike functional testing, non-functional testing
is not concerned with specific features but with how the system
performs under various conditions.

2. Key Aspects:

 Performance: Non-functional testing evaluates how the system performs in terms of response time, scalability, and resource utilization.

 Security: Focuses on identifying and addressing vulnerabilities in the software to ensure it is secure against unauthorized access and other security threats.

3. Types of Non-functional Testing:

 Performance Testing: Assesses the system's responsiveness, speed, and overall performance under different conditions, including load testing, stress testing, and scalability testing.

 Security Testing: Identifies and addresses security vulnerabilities to ensure the system is protected against unauthorized access, data breaches, and other security threats.

 Usability Testing: Evaluates the software's user-friendliness, accessibility, and overall user experience.

 Reliability Testing: Examines the software's reliability and stability under various conditions to ensure it operates consistently without unexpected failures.

4. Common Non-functional Testing Techniques:

 Load Testing: Assesses the system's performance under expected load conditions to ensure it can handle the anticipated user activity.

 Stress Testing: Tests the system's behavior under extreme conditions to identify its breaking point and assess how it recovers from failure.

 Usability Testing: Involves assessing the user interface and overall user
experience to ensure the software is easy to use and meets user
expectations.
 Security Penetration Testing: Simulates real-world cyber attacks to
identify and rectify potential security vulnerabilities.

Functional Testing vs. Non-functional Testing:

1. Focus:

 Functional Testing: Focuses on validating specific features and functionalities of the software according to user requirements.

 Non-functional Testing: Focuses on assessing the non-functional aspects of the software, such as performance, security, and usability.

2. What is Tested:

 Functional Testing: Tests what the system does.

 Non-functional Testing: Tests how well the system performs.

3. Test Objectives:

 Functional Testing: Ensures that the software meets user expectations and specified requirements in terms of features and functionality.

 Non-functional Testing: Assesses the performance, security, reliability, and other non-functional aspects of the software.

4. Examples:

 Functional Testing: Unit testing, integration testing, system testing, acceptance testing.

 Non-functional Testing: Performance testing, security testing, usability testing, reliability testing.

5. Test Cases:

 Functional Testing: Test cases are derived from user requirements and
focus on specific features.

 Non-functional Testing: Test cases assess aspects like response time, scalability, security measures, and overall system performance.

In conclusion, both functional and non-functional testing are integral components of the software testing process, each addressing distinct aspects of software quality. While functional testing ensures that the software meets specified requirements and functions as intended, non-functional testing focuses on broader aspects such as performance, security, and usability to ensure a well-rounded and high-quality software system.

Acceptance Testing:

1. Definition:

 Acceptance Testing is the final phase of the software testing process, where the software is evaluated to determine whether it meets specified requirements and is ready for deployment. It involves validating the software's functionality from the end user's perspective and ensuring that it aligns with business objectives.

2. Key Aspects:

 User Involvement: Acceptance testing often involves end-users who assess the software to confirm that it satisfies their business needs.

 Criteria for Success: Success in acceptance testing is based on meeting predefined acceptance criteria, which are established in collaboration with stakeholders.

3. Types of Acceptance Testing:

 User Acceptance Testing (UAT): Conducted by end-users to validate that the software meets their business needs and is ready for production deployment.

 Operational Acceptance Testing (OAT): Focuses on assessing whether the software is operationally ready, including considerations for system maintenance, backups, and recovery.

4. Approaches to Acceptance Testing:

 Alpha Testing: Internal acceptance testing conducted by the development team before releasing the software to a selected group of users.

 Beta Testing: External acceptance testing involving a group of end-users who evaluate the software in a real-world environment.

5. Benefits:

 User Satisfaction: Ensures that the software meets user expectations and is aligned with business requirements, enhancing overall user satisfaction.

 Quality Assurance: Provides a final check on the software's quality before deployment, reducing the risk of critical issues arising in a live environment.

 Risk Mitigation: Helps identify and address any remaining defects or issues that may impact the software's performance or functionality.

Summary of Testing Phases:

1. Unit Testing:

 Focus: Individual components or modules.

 Objective: Validate that each unit of the software performs as intended.

2. Integration Testing:

 Focus: Interaction between integrated components.

 Objective: Identify and address issues related to the integration of different modules.

3. System Testing:

 Focus: Entire software system.

 Objective: Evaluate the system's functionality, performance, and behavior in an integrated environment.

4. Acceptance Testing:

 Focus: Confirming that the software meets specified requirements.

 Objective: Validate the software from the end user's perspective and
ensure it aligns with business objectives.

5. Types of Acceptance Testing:

 User Acceptance Testing (UAT): Involves end-users and validates the software against their business needs.

 Operational Acceptance Testing (OAT): Focuses on the operational readiness of the software.

6. Beta Testing:

 Approach: External acceptance testing involving a group of end-users.

 Objective: Evaluate the software's performance in a real-world environment before full deployment.

7. Regression Testing:

 Focus: Ensuring that new changes do not adversely affect existing functionalities.

 Objective: Detect and fix any unintended side effects of software changes.

8. Performance Testing:

 Focus: Assessing the system's responsiveness, scalability, and overall performance.

 Objective: Identify performance bottlenecks and ensure the software can handle expected user loads.

9. Security Testing:

 Focus: Identifying and addressing security vulnerabilities.

 Objective: Ensure the software is secure against unauthorized access, data breaches, and other security threats.

10. Usability Testing:

 Focus: Assessing the user interface and overall user experience.

 Objective: Ensure the software is user-friendly and meets design expectations.

11. Summary:

 Testing Phases: Begin with unit testing, progress to integration and system testing, and culminate in acceptance testing to ensure the software meets user requirements and is ready for deployment.

 Iterative Process: Testing is often an iterative process, with regression testing performed throughout the development life cycle to maintain software quality.

 Collaboration: Effective communication and collaboration between development and testing teams are essential for successful testing and software delivery.

UNIT 4 – Performance Testing (15 Hours)

Factors Governing Performance Testing – Methodology of Performance Testing – Tools for Performance Testing – Process for Performance Testing – Challenges. Regression Testing: What is Regression Testing? – Types of Regression Testing – When to do Regression Testing – How to do Regression Testing – Best Practices in Regression Testing.

Factors Governing Performance Testing:

1. Scalability:

 Definition: The ability of the system to handle an increasing amount of workload or users.

 Importance: Ensures that the system can scale with growing demand without a significant drop in performance.

2. Response Time:

 Definition: The time taken by the system to respond to a user request.

 Importance: A critical metric, as users expect prompt responses for a positive user experience.

3. Throughput:

 Definition: The number of transactions or requests processed by the system in a given time period.

 Importance: Indicates the system's capacity to handle a certain volume of transactions effectively.

4. Concurrency:

 Definition: The ability of the system to handle multiple users or transactions simultaneously.

 Importance: Important for applications with a large user base or high transaction volumes.

5. Resource Utilization:

 Definition: The efficient use of system resources such as CPU, memory, and disk.

 Importance: Assessing how well the system utilizes resources under different workloads.

6. Reliability:

 Definition: The ability of the system to consistently deliver acceptable performance.

 Importance: Ensures that the system performs reliably under normal and peak conditions.

Methodology of Performance Testing:

1. Requirements Analysis:

 Define Objectives: Understand the performance goals and objectives for the application.

2. Test Planning:

 Define Scenarios: Identify and create performance test scenarios based on user behavior and system requirements.

3. Test Design:

 Create Scripts: Develop performance test scripts based on the identified scenarios.

 Parameterization: Use parameterization to simulate different user inputs and conditions.

4. Test Execution:

 Run Tests: Execute the performance tests using the predefined scripts.

 Monitor Metrics: Monitor key performance metrics like response time, throughput, and resource utilization.

5. Analysis and Tuning:

 Analyze Results: Analyze test results to identify performance bottlenecks and areas for improvement.

 Optimization: Implement changes to optimize performance, such as code optimizations or infrastructure adjustments.

6. Reporting:

 Generate Reports: Create comprehensive performance test reports highlighting key findings and recommendations.
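
The following sketch shows the kind of measurement this methodology describes: it fires a fixed number of concurrent requests and reports average response time and throughput. The URL, request counts, and the use of the requests package are assumptions made for illustration; real projects would normally use a dedicated tool such as those listed in the next section.

```python
# Minimal response-time / throughput probe. Assumes the `requests` package is
# installed and that URL points at a test environment you are permitted to load.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/health"    # hypothetical endpoint
TOTAL_REQUESTS = 50
CONCURRENT_USERS = 5                     # crude simulation of concurrent load

def timed_request(_):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    return time.perf_counter() - start, response.status_code

if __name__ == "__main__":
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(timed_request, range(TOTAL_REQUESTS)))
    wall_time = time.perf_counter() - wall_start

    latencies = [elapsed for elapsed, _ in results]
    errors = sum(1 for _, status in results if status >= 400)

    print(f"average response time: {sum(latencies) / len(latencies):.3f} s")
    print(f"max response time:     {max(latencies):.3f} s")
    print(f"throughput:            {len(results) / wall_time:.1f} requests/s")
    print(f"error responses:       {errors}")
```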

Tools for Performance Testing:

1. Apache JMeter:

 Type: Open-source tool.

 Features: Supports load testing, performance testing, and functional testing.

2. LoadRunner:

 Type: Commercial tool.

 Features: Provides performance testing for a wide range of applications and protocols.

3. Gatling:

 Type: Open-source tool.

 Features: Scala-based tool for load testing web applications.

4. Apache Benchmark (ab):

 Type: Command-line tool.

 Features: Simple and effective tool for basic performance testing.

5. NeoLoad:

 Type: Commercial tool.

 Features: Offers performance testing for web and mobile applications.


Process for Performance Testing:

1. Identify Testing Environment:

 Define Environment: Set up a testing environment that mirrors the production environment.

2. Identify Performance Acceptance Criteria:

 Define Criteria: Establish performance criteria and goals based on business requirements.

3. Plan and Design Performance Tests:

 Create Scenarios: Design performance test scenarios that mimic real-world user behavior.

 Define Metrics: Identify key performance metrics to be measured.

4. Configure Test Environment:

 Prepare Resources: Ensure that the necessary hardware, software, and network resources are configured for testing.

5. Implement Test Design:

 Develop Scripts: Create performance test scripts based on the designed scenarios.

 Parameterize Scripts: Use parameterization to simulate variations in user inputs.

6. Execute Tests:

 Run Tests: Execute performance tests and monitor the system's performance under different conditions.

7. Analyze, Optimize, and Retest:

 Analyze Results: Analyze test results to identify performance bottlenecks.

 Optimize: Implement optimizations to address identified issues.

 Retest: Conduct retesting to validate performance improvements.

8. Report and Monitor:

 Generate Reports: Prepare comprehensive performance test reports.

 Continuous Monitoring: Implement continuous monitoring for ongoing performance assessment.

Challenges in Performance Testing:

1. Realistic Simulation:

 Challenge: Simulating real-world scenarios accurately can be challenging, leading to potentially inaccurate results.

2. Dynamic Environments:

 Challenge: Dynamic and evolving environments may impact test results, especially in agile development.

3. Scalability Testing:

 Challenge: Testing for scalability requires significant resources and may be complex to implement effectively.

4. Resource Constraints:

 Challenge: Limited resources, such as hardware or network infrastructure, may affect the accuracy of performance tests.

5. Data Privacy and Security:

 Challenge: Testing with real data may raise privacy and security concerns, limiting access to actual production data.

6. Tool Selection:

 Challenge: Choosing the right performance testing tool to match the application's technology, protocols, and budget can itself be difficult.

Regression Testing: Ensuring Continuous Software Quality

1. What is Regression Testing?

Regression Testing is a type of software testing that aims to confirm that recent changes to the codebase, such as bug fixes, enhancements, or new feature additions, do not adversely affect existing functionalities. The goal is to ensure that the modified code works well with the existing codebase without introducing new defects.

2. Types of Regression Testing:

 Selective Regression Testing:

 Scope: Focuses on testing specific areas of the application affected by recent changes.

 Complete Regression Testing:

 Scope: Involves testing the entire application, regardless of the areas where changes have occurred.

 Unit Regression Testing:

 Scope: Tests changes made to individual units or modules.

 Partial Regression Testing:

 Scope: Involves testing a subset of test cases that are likely to be affected by recent changes.

3. When to do Regression Testing:

 After Code Changes:

 Scenario: Whenever there are changes to the codebase, such as bug fixes, feature enhancements, or new functionality.

 After Integration:

 Scenario: Following the integration of new modules or components into the existing system.

 Before Releases:

 Scenario: As a pre-release validation to ensure that new changes do not compromise the stability of the software.

4. How to do Regression Testing:

 Manual Regression Testing:


 Process: Testers manually execute a set of predefined test cases to
validate that recent changes haven't adversely impacted existing
functionalities.

 Automated Regression Testing:

 Process: Automated test scripts are developed to quickly and efficiently execute a suite of test cases, providing rapid feedback on whether recent changes have introduced defects (a minimal pytest sketch appears at the end of this section).

5. Best Practices in Regression Testing:

 Maintain a Comprehensive Test Suite:

 Practice: Build and maintain a comprehensive test suite that covers critical and frequently used functionalities of the application.

 Automate Repetitive Tests:

 Practice: Automate repetitive and time-consuming test cases to speed up the regression testing process and increase efficiency.

 Version Control:

 Practice: Use version control systems to manage code changes and track modifications to the codebase over time.

 Regular Execution:

 Practice: Perform regression testing regularly, especially after significant code changes or before releasing new versions of the software.

 Prioritize Test Cases:

 Practice: Prioritize test cases based on the impact of recent changes, focusing on critical functionalities first.

 Record and Reuse Test Cases:

 Practice: Record and reuse test cases to ensure consistency and accuracy in test execution.

 Collaboration with Developers:

 Practice: Foster collaboration between testers and developers to understand the scope of changes and identify potential areas of impact.

 Continuous Integration:

 Practice: Implement continuous integration practices to automatically trigger regression tests whenever there are code changes.

 Capture and Analyze Metrics:

 Practice: Capture and analyze metrics related to regression testing, such as test execution time, defect detection rate, and test coverage.

 Regression Testing in Agile:

 Practice: Integrate regression testing seamlessly into agile development cycles, conducting frequent, small-scale tests with each iteration.

Regression Testing is a critical aspect of software testing that ensures the stability and reliability of a software application throughout its development lifecycle. By adopting best practices and choosing the right testing approach, teams can maintain high software quality even in the face of continuous changes and updates.
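
As a minimal sketch of the automated approach described under "How to do Regression Testing", the snippet below tags tests with a custom pytest marker so the regression suite can be re-run automatically after every code change, for example from a continuous integration job. The marker name, function, and test data are assumptions made for illustration.

```python
# test_regression_suite.py -- illustrative regression tests selected by a marker.
# Register the custom marker once, e.g. in pytest.ini:
#   [pytest]
#   markers =
#       regression: tests re-run after every code change
import pytest

def normalize_username(name: str) -> str:
    """Existing behaviour that must keep working after future changes."""
    return name.strip().lower()

@pytest.mark.regression
def test_username_is_lowercased():
    assert normalize_username("Alice") == "alice"

@pytest.mark.regression
def test_surrounding_whitespace_is_removed():
    # Added after a past bug fix; kept in the suite so the defect cannot silently return.
    assert normalize_username("  bob  ") == "bob"
```

Running `pytest -m regression` executes only the marked tests; a CI pipeline can trigger that command on every commit, which matches the continuous integration practice listed above.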
UNIT 5 – Test Planning, Management, Execution and Reporting (15 Hours)

Test Planning, Management, Execution and Reporting: Test Planning – Test Management – Test Process – Test Reporting – Best Practices. Test Metrics and Measurements: Project Metrics – Progress Metrics – Productivity Metrics – Release Metrics.

Test Planning:

1. Definition:

 Test Planning is a critical phase in the software testing process where the overall testing strategy, scope, resources, schedule, and deliverables are defined. It involves creating a detailed plan that guides the testing team throughout the software development life cycle.

2. Key Components of Test Planning:

 Objectives: Clearly define the goals and objectives of the testing effort.

 Scope: Identify the features, functionalities, and areas of the application


to be tested.

 Resources: Allocate human resources, tools, and infrastructure required


for testing.

 Schedule: Create a timeline that outlines when testing activities will be


performed.

 Test Environment: Specify the testing environment, including hardware,


software, and network configurations.

3. Importance:

 Early Identification of Risks: Test planning allows for the early


identification of potential risks and challenges in the testing process.

 Resource Allocation: Efficiently allocate resources and manage the


testing effort to meet project deadlines.

 Communication: Ensure clear communication between stakeholders


about the testing strategy and expectations.
Test Management:

1. Definition:

 Test Management involves the planning, monitoring, and control of the


testing activities throughout the software development life cycle. It
includes overseeing the test planning process, resource allocation, test
execution, and reporting.

2. Key Responsibilities of Test Managers:

 Planning: Develop and oversee the test plan, ensuring alignment with
project goals.

 Resource Allocation: Allocate testing resources, including personnel,


tools, and environments.

 Monitoring: Continuously monitor testing progress, identifying and


addressing issues as they arise.

 Reporting: Provide stakeholders with accurate and timely reports on


testing status, progress, and results.

 Risk Management: Identify and manage risks related to testing activities.

3. Test Management Tools:

 Jira: An agile project management tool widely used for test management
and issue tracking.

 TestRail: Test management software that allows for test case


management, execution, and reporting.

 HP ALM (Application Lifecycle Management): An integrated tool for


managing the complete application lifecycle, including testing.

Test Process:

1. Definition:

 Test Process refers to the systematic set of activities and tasks


performed to ensure the quality of a software application. It
encompasses various phases, from test planning to test execution,
defect tracking, and reporting.
2. Phases of the Test Process:

 Test Planning: Define the testing strategy, scope, and resources.

 Test Design: Create test cases and test scripts based on requirements.

 Test Execution: Run test cases, record results, and identify defects.

 Defect Tracking: Log and manage defects found during testing.

 Test Reporting: Generate reports on testing progress, results, and overall


quality.

3. Iterative Nature:

 Continuous Improvement: The test process is often iterative, with


feedback from each phase informing improvements in subsequent
cycles.

 Adaptability: The process should be adaptable to changes in


requirements, scope, or project timelines.

Test Reporting:

1. Purpose:

 Test Reporting involves providing detailed information about the testing activities, progress, and results to stakeholders. It helps in making informed decisions and ensuring transparency in the testing process.

2. Key Elements of Test Reports:

 Testing Progress: Status of testing activities, including completed and pending tasks.

 Defect Metrics: Number of defects found, resolved, and outstanding.

 Test Coverage: Percentage of features or requirements tested.

 Pass/Fail Results: Summary of test case execution results.

3. Types of Test Reports:

 Daily/Weekly Status Reports: Provide a snapshot of testing progress and any critical issues.

 Defect Summary Reports: Detail the status of defects, including open, closed, and pending items.

 Test Execution Summary Reports: Summarize the results of test case execution.
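
The short sketch below, using only the standard library and invented result data, shows how the key elements of a test report (pass/fail counts, pass rate, open defects) can be summarised programmatically; in practice these figures would usually come from a test management tool such as those named earlier.

```python
from collections import Counter

# Invented execution results; real numbers would come from a test management tool.
test_results = [
    {"case": "TC-001", "status": "passed"},
    {"case": "TC-002", "status": "passed"},
    {"case": "TC-003", "status": "failed"},
    {"case": "TC-004", "status": "pending"},
]
defects = [
    {"id": "BUG-17", "state": "open"},
    {"id": "BUG-18", "state": "closed"},
]

status_counts = Counter(result["status"] for result in test_results)
executed = status_counts["passed"] + status_counts["failed"]
pass_rate = 100 * status_counts["passed"] / executed if executed else 0.0
open_defects = sum(1 for defect in defects if defect["state"] == "open")

print("Test Execution Summary")
print(f"  passed:  {status_counts['passed']}")
print(f"  failed:  {status_counts['failed']}")
print(f"  pending: {status_counts['pending']}")
print(f"  pass rate (of executed): {pass_rate:.0f}%")
print(f"  open defects: {open_defects}")
```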

Best Practices:

1. Collaborative Planning:

 Practice: Involve all stakeholders in the test planning process to ensure


alignment with project goals.

2. Continuous Communication:

 Practice: Maintain open and continuous communication between testing


teams, development teams, and other project stakeholders.

3. Test Automation Strategy:

 Practice: Develop a strategic approach to test automation, focusing on


critical and repetitive test cases.

4. Risk-Based Testing:

 Practice: Prioritize testing efforts based on the potential impact and


likelihood of occurrence of identified risks.

5. Traceability:

 Practice: Establish traceability between requirements, test cases, and


defects to ensure comprehensive test coverage.

6. Test Environment Management:

 Practice: Effectively manage test environments to mirror production as


closely as possible for realistic testing.

7. Continuous Learning:

 Practice: Encourage a culture of continuous learning and improvement


within the testing team.
8. Metrics for Improvement:

 Practice: Define and track key metrics related to testing processes to


identify areas for improvement.

Effective Test Planning, Test Management, Test Process, Test Reporting, and
adherence to best practices are crucial for ensuring the success of software
testing efforts. These activities collectively contribute to the delivery of high-
quality software that meets user expectations and project requirements.

Test Metrics and Measurements: Project Metrics, Progress Metrics,


Productivity Metrics, Release Metrics

1. Project Metrics:

Definition:

 Project Metrics in the context of testing involve quantitative measures that assess the overall progress, effectiveness, and quality of the testing project.

Key Metrics:

 Test Case Execution Progress:

 Definition: Percentage of test cases executed compared to the total planned.

 Importance: Indicates the progress of test case execution and helps in estimating the time required to complete testing.

 Defect Density:

 Definition: Number of defects identified per unit of code size (e.g., defects per KLOC - thousand lines of code).

 Importance: Provides insights into the quality of the codebase and helps identify defect-prone areas.

 Test Coverage:

 Definition: Percentage of features, requirements, or code covered by testing.

 Importance: Evaluates the thoroughness of testing and identifies areas that may need additional testing.

 Requirements Traceability:

 Definition: Percentage of requirements linked to corresponding test cases.

 Importance: Ensures that all requirements are covered by test cases, enhancing traceability and completeness.
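
A brief worked sketch of these project metrics, with all figures invented: defect density is expressed per KLOC, and the same percentage helper serves execution progress, coverage, and traceability.

```python
# Illustrative calculations for the project metrics above (all numbers invented).

def defect_density(defects_found: int, lines_of_code: int) -> float:
    """Defects per KLOC (thousand lines of code)."""
    return defects_found / (lines_of_code / 1000)

def percent(covered: int, total: int) -> float:
    """Generic percentage helper for coverage and progress metrics."""
    return 100 * covered / total

# 45 defects in a 30,000-line codebase -> 1.5 defects per KLOC.
print(f"defect density:        {defect_density(45, 30_000):.1f} defects/KLOC")
# 180 of 240 planned test cases executed -> 75% execution progress.
print(f"execution progress:    {percent(180, 240):.0f}%")
# 96 of 120 requirements traced to at least one test case -> 80% traceability.
print(f"requirements coverage: {percent(96, 120):.0f}%")
```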

2. Progress Metrics:

Definition:

 Progress Metrics focus on assessing the advancement of testing


activities during a specific phase or throughout the entire testing
process.

Key Metrics:

 Test Case Execution Status:

 Definition: Indicates the number of test cases passed, failed, and


pending.

 Importance: Provides real-time visibility into the current state of


test case execution.

 Defect Closure Rate:

 Definition: Percentage of defects closed compared to the total


reported.

 Importance: Reflects the efficiency of defect resolution efforts and


the overall project health.

 Test Execution Velocity:

 Definition: Rate at which test cases are executed over a specific


period.

 Importance: Helps assess the pace of testing and identifies


potential bottlenecks.

 Regression Test Progress:


 Definition: Status of regression testing in terms of completed and
pending tasks.

 Importance: Ensures that regression testing is on track and aligned


with project timelines.

3. Productivity Metrics:

Definition:

 Productivity Metrics measure the efficiency and effectiveness of the


testing team in terms of producing high-quality results.

Key Metrics:

 Test Case Productivity:

 Definition: Number of test cases created or executed per tester


per unit of time.

 Importance: Measures the efficiency of testers and helps in


resource planning.

 Defect Fix-to-Test Cycle Time:

 Definition: Duration from defect identification to confirmation of


the fix.

 Importance: Assesses the efficiency of the defect resolution


process.

 Automated Test Coverage:

 Definition: Percentage of test cases automated compared to the


total.

 Importance: Reflects the level of test automation and its impact


on overall testing efficiency.

 Testing Effort vs. Defect Discovery:

 Definition: Relationship between testing effort and the number of


defects discovered.

 Importance: Helps optimize testing efforts by identifying areas


where defects are frequently found.
4. Release Metrics:

Definition:

 Release Metrics assess the overall quality of the software product as it


progresses toward release.

Key Metrics:

 Release Readiness Index:

 Definition: Aggregated measure combining various metrics to


assess the readiness for release.

 Importance: Provides a holistic view of the software's


preparedness for deployment.

 Escaped Defects:

 Definition: Number of defects identified by users or customers


post-release.

 Importance: Measures the effectiveness of testing in preventing


defects from reaching end-users.

 Post-Release Defect Density:

 Definition: Number of defects reported by users per unit of time


after release.

 Importance: Helps monitor the stability of the software in a real-


world environment.

 Customer Satisfaction Metrics:

 Definition: Surveys or feedback mechanisms to assess user


satisfaction with the released product.

 Importance: Provides insights into user experience and overall


product satisfaction.
TEXT BOOKS

1. Software Testing Principles and Practices,Srinivasan Desikan & Gopal


swamy Ramesh,2006,Pearson Education.(UNIT-I:2.1-2.5,3.1-3.4 UNIT-
II:4.1-4.4,5.1-5.5UNIT- III:6.1-6.7 UNIT -IV:7.1-7.6,8.1-8.5 UNIT-V:15.1-
15.6,17.4-17.7)

2. Limaye M.G.,“Software Testing Principles,Techniques and Tools”,Second


Reprint,TMH Publishers, 2010.

3. Aditya P.Mathur, “Foundations of Software Testing”,2nd Edition,Pearson


Education,2013.

REFERENCE BOOKS

1. Effective Methods of Software Testing, William.E.Perry, 3rd Ed, Wiley


India.
2. Software Testing, Renu Rajani, Pradeep Oak,2007,TMH.

E LEARNING

www.utest.com

www.udemy.com

www.testing.googleblog.com

www.stickyminds.com

www.satisfice.com

www.techtarget.com

www.seleniumeasy.com
