
Software Testing

Important Materials

Test Case:
A test case is a set of actions, conditions, and inputs that are used to verify whether a specific
feature or functionality of a software application is working as intended. It includes a series
of steps that the tester needs to perform, along with the expected outcome for each step. Test
cases help ensure that the software meets its requirements and functions correctly.

Parameters to be used:

 Test Case ID: A unique identifier for the test case.
 Description: A detailed explanation of the test case, including its purpose and the expected outcome.
 Test Data: Specific data inputs required for executing the test case, such as usernames, passwords, or other data relevant to the scenario.
 Expected Result: The expected outcome of the test steps, based on requirements or specifications.
 Actual Result: The actual outcome after executing the test steps, filled in during test execution.
 Pass/Fail Criteria: Indicates whether the test case has passed or failed based on the comparison of expected and actual results.
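The parameters above map directly onto an automated check. A minimal sketch in Python (the `login` function and its credentials are hypothetical stand-ins, not a real system):

```python
# Hypothetical system under test: a stub login function, for illustration only.
def login(username: str, password: str) -> bool:
    """Return True when the supplied credentials match the stored ones."""
    return username == "demo_user" and password == "Secret123"

# Test Case ID: TC_LOGIN_001
# Description: verify that a registered user can log in with valid credentials.
def run_test_case() -> str:
    test_data = {"username": "demo_user", "password": "Secret123"}   # Test Data
    expected_result = True                                           # Expected Result
    actual_result = login(**test_data)                               # Actual Result
    # Pass/Fail Criteria: compare expected against actual.
    return "Pass" if actual_result == expected_result else "Fail"

print(run_test_case())  # Pass
```

Each field of the written test case (ID, data, expected result, pass/fail rule) appears as a named element of the code, which is what a tool like Selenium or pytest automates at scale.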
Template:

Test Case ID | Description | Test Data | Expected Result | Actual Result | Status
 Test case example: ticket booking system
 Test case example: login system
 Test case example: calculator system
 Test case example: GUI-based testing
Test Plan:

A test plan is a comprehensive document that outlines the scope, objectives, resources, and
approach for software testing activities within a project. It serves as a blueprint that guides
the testing process, ensuring that testing is systematic, thorough, and aligned with the
project’s requirements and goals.

Key parameters of a Test Plan:

1. Test Plan ID: A unique identifier for the test plan document.
2. Introduction: An overview of the test plan, including its purpose and objectives.
3. Scope of Testing: Defines what features, functionalities, or modules will be tested
and what will not be tested (in-scope and out-of-scope items).
4. Test Objectives: The goals of the testing process, such as identifying defects,
verifying functionality, ensuring performance, etc.
5. Test Items: Lists the software modules, components, or features that are subject to
testing.
6. Test Criteria:
o Suspension Criteria: Conditions under which testing will be suspended, such
as unresolved critical defects.
o Exit Criteria: Conditions that must be met to consider the testing phase
complete, such as meeting all test objectives or having a certain percentage of
test cases pass.
7. Test Deliverables: Artifacts that will be produced during and after testing, such as
test cases, test scripts, test results, and defect reports.
8. Test Approach/Strategy: The overall methodology for testing, including the types of
testing (e.g., functional, regression, performance), testing levels (e.g., unit,
integration, system), and tools to be used.
9. Test Environment: Details of the hardware, software, network configurations, and
any other environmental factors required for testing.
10. Roles and Responsibilities: Defines the roles of team members involved in the
testing process, such as testers, test managers, and developers, along with their
specific responsibilities.
11. Resource Requirements: The human resources, software, hardware, and other tools
needed to execute the testing activities.
12. Schedule and Milestones: A timeline for the testing activities, including key
milestones such as the start and end dates of testing phases, test case preparation, test
execution, and test completion.
13. Risk and Contingencies: Potential risks that could impact the testing process (e.g.,
resource availability, tight deadlines) and the mitigation strategies to handle them.
14. Defect Management: The process for identifying, reporting, tracking, and resolving
defects found during testing.
15. Approval and Sign-off: Signatures from stakeholders to confirm that the test plan has
been reviewed and approved for execution.

Test Plan Example on testing web application ticket booking

1. Test Plan ID

TP_TBS_001

2. Introduction

The test plan outlines the testing strategy, scope, and objectives for the Ticket Booking
System. It is designed to ensure that the system functions correctly, meets specified
requirements, and provides a seamless user experience.

3. Scope of Testing

In-Scope:

 User Registration and Login
 Ticket Search and Booking
 Payment Processing
 Ticket Cancellation and Refund
 User Account Management
 Booking History and Notifications

Out-of-Scope:
 Third-party API integrations (e.g., Payment Gateway)
 External system compatibility testing

4. Test Objectives

 Validate that users can successfully book tickets.
 Ensure correct payment and refund processing.
 Verify user account management and booking history functionality.
 Identify and resolve any defects before deployment.

5. Test Items

 User Interface
 Backend functionalities related to ticket booking and management
 Database interactions
 Security and data validation mechanisms

6. Test Criteria

 Suspension Criteria: Testing will be suspended if critical defects are found that
block test case execution, such as the inability to access the system or a major
functionality failure (e.g., ticket booking not working).
 Exit Criteria: Testing will be considered complete when all test cases have been
executed with at least 95% passing, critical defects are resolved, and remaining
defects are of low priority.
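The exit criteria above can be checked mechanically at the end of a cycle. A small sketch (the counts passed in are illustrative, not from this plan):

```python
def exit_criteria_met(passed: int, executed: int,
                      open_critical_defects: int,
                      threshold: float = 0.95) -> bool:
    """Exit criteria: at least `threshold` of executed test cases pass
    and no critical defects remain open."""
    pass_rate = passed / executed
    return pass_rate >= threshold and open_critical_defects == 0

# Illustrative numbers: 115 of 120 cases passed, all critical defects resolved.
print(exit_criteria_met(115, 120, 0))   # 115/120 ≈ 0.958 → True
print(exit_criteria_met(110, 120, 0))   # 110/120 ≈ 0.917 → False
```

Encoding the criteria this way makes the suspend/exit decision auditable rather than a judgment call at the end of the cycle.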

7. Test Deliverables

 Test Plan Document
 Test Cases and Test Scripts
 Test Execution Report
 Defect Reports
 Test Summary Report
8. Test Approach/Strategy

 Functional Testing: Validate each function of the system against requirements, including login, search, booking, and cancellation.
 Regression Testing: Ensure new changes do not adversely affect existing
functionality.
 Usability Testing: Verify the ease of use and intuitive navigation of the user interface.
 Performance Testing: Check system performance under varying loads, such as
multiple concurrent bookings.
 Security Testing: Validate user authentication, authorization, and data protection
measures.

9. Test Environment

 Hardware: Test Server, User Machines
 Software: Windows 10, Chrome/Firefox, Test Management Tool (e.g., JIRA, TestRail)
 Network: Stable Internet Connection

10. Roles and Responsibilities

 Test Manager: Coordinates overall testing activities, ensures resources, and resolves
issues.
 Test Lead: Manages test case preparation and execution, reports progress.
 Testers: Execute test cases, log defects, and perform re-testing.
 Developers: Resolve reported defects and provide necessary support.

11. Resource Requirements

 Human Resources: 1 Test Manager, 1 Test Lead, 3 Testers
 Software: Test Management Tools (e.g., TestRail, JIRA), Automation Tool (e.g., Selenium)
 Hardware: Test servers, user workstations
12. Schedule and Milestones

Milestone | Start Date | End Date
Test Plan Preparation | 2024-09-25 | 2024-09-26
Test Case Design | 2024-09-26 | 2024-09-29
Test Environment Setup | 2024-09-29 | 2024-09-30
Test Execution | 2024-10-01 | 2024-10-10
Defect Reporting and Re-testing | 2024-10-03 | 2024-10-12
Test Summary Report Preparation | 2024-10-13 | 2024-10-14

13. Risk and Contingencies

 Risk: Delays in test environment setup may impact the schedule.
o Mitigation: Prepare backup environments and ensure early resource allocation.
 Risk: High-priority defects may delay test execution.
o Mitigation: Prioritize defect resolution and allocate additional resources if
needed.

14. Defect Management

 Defect Logging: All defects will be logged in the test management tool with detailed
information.
 Defect Triage: Defects will be reviewed daily, prioritized, and assigned for
resolution.
 Defect Retesting: After a defect is marked as resolved, it will be re-tested to confirm
the fix.

15. Approval and Sign-off

 Test Manager: ___________________________ Date: ___________
 Project Manager: _________________________ Date: ___________
 Client/Stakeholder: _______________________ Date: ___________
Test Summary Report

A Test Summary Report is a formal document that provides a detailed overview of the
testing activities and results conducted during a testing phase or cycle. It serves as a
comprehensive summary of what was tested, how it was tested, the results obtained, and the
overall quality of the software under test. The report is used to communicate the status of
testing to stakeholders, including project managers, clients, and team members, and helps in
making informed decisions about the software's readiness for release.

Key parameters of a Test Summary Report:

1. Report ID and Title
o A unique identifier and title for the report, such as "TSR_TicketBookingSystem_001."
2. Introduction
o An overview of the testing scope, objectives, and the software components
tested.
3. Test Summary
o A brief description of the testing activities carried out, including types of
testing performed (e.g., functional, regression, performance).
4. Test Environment
o Details about the environment where the testing was conducted, including
hardware, software, and network configurations.
5. Test Coverage
o Information on the extent of testing, including:
 Number of test cases executed.
 Number of test cases passed, failed, blocked, or skipped.
 Coverage of requirements and functionalities tested.
6. Test Results
o A detailed summary of test results, often presented in tabular form, showing
the status of each test case and any associated defects.
7. Defect Summary
o A summary of defects identified during testing, categorized by severity
(critical, high, medium, low) and status (open, fixed, closed).
o Information on the total number of defects raised, fixed, and deferred.
8. Major Issues
o A description of any major issues or blockers encountered during testing,
along with their impact on the project and mitigation steps.
9. Deviation from Test Plan
o Any deviations from the original test plan, such as changes in scope, timelines,
or resources.
10. Recommendations
o Suggestions for improving the software's quality, additional testing needed, or
other actions before the software can be released.
11. Conclusion
o A final assessment of the testing process, including whether the software
meets the quality standards and is ready for release.
12. Approval and Sign-off
o Signatures of the stakeholders involved, such as the Test Manager, Project
Manager, and Client, indicating their agreement with the contents of the
report.

Test summary report example:

Report ID: TSR_TBS_001

Title: Test Summary Report for Ticket Booking System

1. Introduction

This report summarizes the testing activities and results for the Ticket Booking System. The
primary objective was to validate the functionality, performance, and usability of the system
to ensure a seamless ticket booking experience for users.
2. Test Summary

 Testing Period: 2024-09-01 to 2024-09-15
 Testing Types Conducted: Functional Testing, Regression Testing, Usability Testing, Performance Testing.
 Modules Tested:
o User Registration and Login
o Ticket Search and Booking
o Payment Gateway Integration
o Ticket Cancellation and Refund
o Booking History and Notifications

3. Test Environment

 Hardware: Windows Server, Client Machines
 Software: Windows 10, Chrome Browser, MySQL Database
 Tools: Selenium (for automation), JIRA (for defect tracking)

4. Test Coverage

 Total Test Cases Executed: 120
o Passed: 110
o Failed: 5
o Blocked: 2
o Skipped: 3
 Coverage: 95% of functional requirements covered.

5. Test Results

Test Case ID | Description | Result | Defect ID (if any)
TC_REG_001 | Verify user registration | Passed | N/A
TC_LOGIN_002 | Verify user login functionality | Passed | N/A
TC_BOOK_003 | Verify ticket booking process | Failed | DEF_001
TC_PAY_004 | Verify payment processing | Passed | N/A
TC_CANCEL_005 | Verify ticket cancellation | Blocked | DEF_002
TC_HISTORY_006 | Verify booking history display | Passed | N/A

6. Defect Summary

7. Major Issues

 DEF_001: The system fails to validate seat numbers properly during the booking
process, resulting in a booking failure. This is a high-priority issue that needs to be
addressed before the system can go live.
 DEF_002: Ticket cancellation requests are not being correctly updated in the
database, causing discrepancies in the user’s booking history.

8. Deviation from Test Plan

 The "Mobile App Testing" was deferred due to a delay in the development of the
mobile application. This was not part of the initial scope but will be covered in a
separate test cycle.

9. Recommendations

 Fix the high-priority defects (DEF_001 and DEF_002) before proceeding with the
production release.
 Perform an additional round of regression testing after the defects are resolved.
 Consider usability testing for the mobile version once development is complete.
10. Conclusion

The Ticket Booking System has passed the majority of test cases and meets the expected
quality standards for most functionalities. However, critical issues identified in booking and
cancellation processes must be resolved before deployment. Once these issues are addressed,
the system should be ready for release.

11. Approval and Sign-off

Defect Report:

A Defect Report, also known as a Bug Report or Issue Report, is a formal document that
provides detailed information about a defect or issue found during the software testing
process. It serves as a communication tool between testers and developers, helping to ensure
that defects are tracked, understood, and resolved effectively.

Key parameters of a Defect Report:

1. Defect ID: A unique identifier assigned to the defect for tracking purposes.
2. Title/Summary: A brief, descriptive title that summarizes the defect.
3. Description: A detailed explanation of the defect, including what is wrong, how it
was discovered, and the expected vs. actual behavior of the system.
4. Severity: An assessment of the defect's impact on the system and its users, typically
classified as Critical, High, Medium, or Low.
5. Priority: Indicates the urgency of fixing the defect, often classified as High, Medium,
or Low. This may differ from severity, as a critical defect may have a low priority if it
is not frequently encountered.
6. Environment: Information about the environment in which the defect was found,
including hardware, software, and network configurations.
7. Steps to Reproduce: A clear, step-by-step guide on how to replicate the defect. This
should include all necessary actions, inputs, and expected outcomes.
8. Actual Result: A description of what happened when the defect was encountered,
including any error messages or incorrect behavior observed.
9. Expected Result: A description of what should have happened if the defect did not
exist, based on requirements or specifications.
10. Attachments: Screenshots, logs, or other relevant files that provide additional context
or evidence of the defect.
11. Status: The current status of the defect (e.g., Open, In Progress, Resolved, Closed).
12. Assigned To: The name of the developer or team responsible for fixing the defect.
13. Date Reported: The date when the defect was reported.
14. Date Resolved: The date when the defect was fixed (if applicable).
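The fourteen fields above can be captured in a small record type so that every report is complete and uniformly structured. A sketch using a Python dataclass (the field names and sample values are chosen here for illustration):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DefectReport:
    defect_id: str                        # 1. Defect ID
    title: str                            # 2. Title/Summary
    description: str                      # 3. Description
    severity: str                         # 4. Critical / High / Medium / Low
    priority: str                         # 5. High / Medium / Low
    environment: str                      # 6. Environment
    steps_to_reproduce: List[str]         # 7. Steps to Reproduce
    actual_result: str                    # 8. Actual Result
    expected_result: str                  # 9. Expected Result
    attachments: List[str] = field(default_factory=list)  # 10. Attachments
    status: str = "Open"                  # 11. Status (new defects start Open)
    assigned_to: Optional[str] = None     # 12. Assigned To
    date_reported: Optional[str] = None   # 13. Date Reported
    date_resolved: Optional[str] = None   # 14. Date Resolved

defect = DefectReport(
    defect_id="DEF_001",
    title="Booking fails with invalid seat numbers",
    description="Booking is rejected when seat number is 0 or negative.",
    severity="High", priority="High",
    environment="Windows 10, Chrome 95",
    steps_to_reproduce=["Log in", "Search tickets", "Enter seat 0", "Book Now"],
    actual_result="Error shown; booking fails.",
    expected_result="Seat number validated before submission.",
    date_reported="2024-09-10",
)
print(defect.status)  # Open
```

Defect-tracking tools such as JIRA store essentially this record per issue; the defaults mirror the lifecycle (a new report is Open, unassigned, unresolved).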

Defect report example:

Defect ID: DEF_001

Title/Summary: Booking fails with invalid seat numbers

Description:

When a user attempts to book a ticket using an invalid seat number (such as zero or a
negative number), the system throws an error and fails to process the booking. This issue
prevents users from completing their reservations and can lead to frustration and loss of sales.

Severity: High

Priority: High

Environment:

 Operating System: Windows 10
 Browser: Google Chrome 95
 Application Version: Ticket Booking System v1.0

Steps to Reproduce:

1. Log in to the Ticket Booking System.
2. Search for available tickets for a specific event.
3. Select a ticket and enter an invalid seat number (e.g., 0).
4. Click on the "Book Now" button.

Actual Result:

The system displays an error message: "Invalid seat number. Please enter a valid seat number." The booking process fails, and no tickets are booked.

Expected Result:

The system should validate the seat number before submission. If an invalid seat number is
entered, the user should be prompted with a message indicating that the seat number must be
positive and within the available range.
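The expected behaviour described above amounts to a validation routine that runs before the booking is submitted. A minimal sketch (the capacity of 100 seats is an assumption for illustration; the real range would come from the event's seat map):

```python
TOTAL_SEATS = 100  # assumed capacity, for illustration only

def validate_seat_number(seat: int) -> str:
    """Validate a seat number before submission: it must be positive
    and within the available range, per the expected result above."""
    if seat < 1 or seat > TOTAL_SEATS:
        return f"Seat number must be between 1 and {TOTAL_SEATS}."
    return "OK"

print(validate_seat_number(0))    # rejected: below the valid range
print(validate_seat_number(42))   # OK
```

Running this check client-side, before the booking request is sent, is what closes the gap between the actual and expected results in this report.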

Attachments:

 Screenshot of the error message (attached).
 Log file snippet showing the error.

Status: Open

Assigned To: John Doe

Date Reported: 2024-09-10

Date Resolved: N/A

Defect Report example for login system

Defect ID: DEF_002

Title/Summary: User unable to log in with valid credentials

Description:

Users are unable to log in to the system even when they enter valid credentials (username and
password). The system displays an error message indicating incorrect login details, leading to
frustration among users and potential loss of access to the application.

Severity: Critical

Priority: High

Environment:

 Operating System: macOS Monterey
 Browser: Safari 15
 Application Version: Login System v1.2

Steps to Reproduce:

1. Navigate to the Login page of the application.
2. Enter a valid username (e.g., [email protected]).
3. Enter a valid password (e.g., Password123).
4. Click on the "Log In" button.

Actual Result:

The system displays an error message: "Login failed. Please check your username and password." The user remains on the login page without access to their account.

Expected Result:

The system should authenticate the user with valid credentials and redirect them to the
dashboard or homepage of the application.
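The expected behaviour can be sketched as a credential check followed by a redirect decision. The user store below is a hypothetical stand-in (a real system would compare against a salted password hash, never plain text):

```python
# Hypothetical in-memory user store, for illustration only.
USERS = {"user@example.com": "Password123"}

def authenticate(username: str, password: str) -> str:
    """Return the page a login attempt should land on."""
    if USERS.get(username) == password:
        return "/dashboard"        # valid credentials: redirect to the dashboard
    return "/login?error=1"        # invalid credentials: stay on the login page

print(authenticate("user@example.com", "Password123"))  # /dashboard
```

The defect above means a request equivalent to the first call is taking the second branch; reproducing that in a test pins the failure to the authentication step rather than the UI.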

Attachments:

 Screenshot of the error message (attached).
 Log file snippet showing the authentication attempt.

Status: Open

Assigned To: Jane Smith

Date Reported: 2024-09-11

Date Resolved: N/A


Questions for practice:

1. Functional Testing: Create a test case for the "Add to Cart" functionality in an e-
commerce application. Include details such as test data, expected results, and status.
2. Boundary Testing: Write a test case to validate the input for a "Search" feature that
accepts a maximum of 50 characters. What are the test data inputs you would use?
3. Usability Testing: Develop a test case for evaluating the user interface of a mobile
banking application. What aspects would you assess, and what criteria would you use
to determine usability?
4. Performance Testing: Formulate a test case for measuring the response time of a
website during peak traffic. What specific metrics would you track, and what
thresholds would you define?
5. Prepare any 8 test cases for IRCTC.
6. Prepare a test plan for testing the Zomato ordering system.
7. Prepare a defect report for a Netflix login defect.
