
SET 1

Q.1 Any 4
1. Write test case specifications.
A test case specification is a detailed document that outlines the specific conditions, inputs, steps, and expected results for a test case, used to verify whether a particular functionality or feature of a system works as intended. It includes the following (a brief sample follows the list):
1. The purpose of the test.
2. Items being tested, along with their version/release numbers as appropriate.
3. Environment that needs to be set up for running the test case.
4. Input data to be used for the test case.
5. Steps to be followed to execute the test.
6. The expected results that are considered to be the “correct result”.
7. A step to compare the actual results produced with the expected results.
8. Any relationship between this test and other tests.
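For illustration, a minimal sample specification (all values hypothetical):
Purpose: Verify login with valid credentials.
Item under test: Login module, v2.1.
Environment: Chrome on Windows 10 with a seeded test database.
Input data: Username "user01", password "Pass@123".
Steps: Open the login page, enter the credentials, click Login.
Expected result: The user lands on the dashboard.
Comparison: The actual landing page is checked against the expected dashboard URL.
Related tests: Depends on TC-Registration-01.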

2. Elaborate the terms metrics and measurement.


● A metric is a measurement of the degree to which any attribute belongs to a system, product,
or process. For example, the number of errors per person-hour would be a metric.
Thus, software measurement gives rise to software metrics.
Types of metrics: 1. Product metrics, 2. Project metrics, 3. Process metrics.
● A measurement is an indication of the size, quantity, amount, or dimension of a particular
attribute of a product or process. For example, the number of errors in a system is a
measurement.
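For illustration, a tiny Python sketch computing one such product metric, defect density (the counts are hypothetical):

# Defect density: defects found per thousand lines of code (KLOC).
defects_found = 45      # hypothetical total defects reported
size_kloc = 12.5        # hypothetical code size in KLOC
defect_density = defects_found / size_kloc
print(f"Defect density: {defect_density:.2f} defects/KLOC")  # prints 3.60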

3. State the classification of defects.


1. Requirement/Specification Defects: Requirement-related defects arise in a product when one
fails to understand what the customer requires. These defects may be due to a customer gap,
where the customer is unable to define their requirements, or a producer gap, where the
development team is not able to build the product as per the requirements.
2. Design Defects: Design defects occur when system components, interactions between
system components, interactions with outside software/hardware, or users are incorrectly
designed. Design defects generally relate to how the design was created or used while
building the product.
3. Coding Defects: These defects arise when variables are not initialized properly, variables are
not declared correctly, or the database is not created properly. Code also needs adequate
commenting to keep it readable and maintainable in the future.
4. Testing Defects: These encompass incorrect, incomplete, missing, or inappropriate test
cases and test procedures.
4. Give any four differences between manual and automated testing.
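1. Execution: Manual testing is carried out by a human tester; automated testing is carried out by tools and scripts.
2. Speed: Manual testing is slow and labor-intensive; automated testing runs much faster once scripts are ready.
3. Reliability: Manual testing is prone to human error; automated testing gives consistent, repeatable results.
4. Suitability: Manual testing suits exploratory and usability testing; automated testing suits regression and repetitive, large-scale testing.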

5. State the need of test deliverables and a test plan for test planning.
● Need for Test Deliverables for Test Planning:
1. Provides documentation of testing activities and results.
2. Enables tracking of testing progress and coverage.
3. Ensures quality control and accountability.
4. Facilitates communication among stakeholders.
5. Serves as evidence of completed testing for audit and compliance.

● Need for a Test Plan for Test Planning:


1. Defines testing scope, objectives, and approach.
2. Helps allocate resources effectively and manage time.
3. Aligns team members on testing responsibilities.
4. Identifies risks and mitigation strategies.
5. Ensures systematic verification of all requirements.
6. Write the need for software measurement.
● Need for software measurement:
1. To establish the quality of the current product or process.
2. To predict future qualities of the product or process.
3. To improve the quality of a product or process.
4. To determine the state of the project in relation to budget and schedule.

Q2 Any 3

1. Explain people management in test planning.


People management is an integral part of project management. It requires the ability to hire,
motivate, and retain the right people. Effective people management is key to successful test
planning, as it ensures that the testing process is efficient, thorough, and effective. All members
need to work together, follow the applied test processes, and deliver the work within the
specified schedule.
Activities performed by Test People Management
1. Initiate the test planning activities for test case designing.
2. Encourage the team to conduct review meetings and incorporate meeting comments.
3. Monitor the test progress, check available resources and re-balance or re-allocate them as
required.
4. Check for any delays in the schedule, discuss and resolve testers' issues, and prepare a plan
to resolve risks, if any.
5. Communicate timely status to the stakeholders and management.
6. Bridge any gaps between the testing team and the management.

OR
(introduction remains the same)
Activities performed by Test People Management
1. Team Composition: Assemble a diverse team with complementary skills and experiences.
Consider factors like technical expertise, domain knowledge, and testing methodologies.
2. Clear Objectives: Communicate the objectives of the testing phase clearly to your team.
Ensure everyone understands the goals, scope, and expected outcomes of the testing effort.
3. Assigning Roles and Responsibilities: Clearly define roles and responsibilities within the
testing team. Assign tasks based on individual strengths and expertise, while also providing
opportunities for skill development.
4. Setting Expectations: Establish clear expectations regarding timelines, quality standards, and
reporting mechanisms. Ensure everyone understands their individual and collective
responsibilities.
5. Effective Communication: Foster open and transparent communication within the team.
Encourage regular updates, discussions, and feedback sessions to address any issues or
challenges promptly.
6. Risk Management: Identify potential risks and challenges early in the planning phase. Work
with your team to develop mitigation strategies and contingency plans to address any
unforeseen issues during testing.

2. Prepare a defect report after executing test cases for withdrawal of the amount from
the ATM machine.

(Follow the standard defect report format here; the attributes remain the same, only the values change.)
Changes:
Project name: ATM Simulator
Module: Withdrawal
Title: ATM cash withdrawal defect
Description: No option to withdraw an amount in excess of 3000.
Resolution comment: Only a limited set of predefined amount options was offered in cash
withdrawal; hence an option to enter a custom withdrawal amount was added, fixing the defect.
Retest comment: Successful withdrawal of an amount in excess of 3000.
3. Calculate effort variance and schedule variance if actual effort = 110, planned effort = 100,
actual calendar days = 310, planned calendar days = 300.
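Using the standard variance formulas:
Effort Variance = ((Actual Effort - Planned Effort) / Planned Effort) × 100
= ((110 - 100) / 100) × 100 = 10%
Schedule Variance = ((Actual Calendar Days - Planned Calendar Days) / Planned Calendar Days) × 100
= ((310 - 300) / 300) × 100 ≈ 3.33%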

4. Explain the defect management process with a suitable diagram.

1. Defect Prevention:
Using techniques, methods, and standard processes to reduce the chance of defects.
2. Deliverable Baseline:
Setting milestones where deliverables are marked as complete and ready for the next stage.
Changes are controlled after this point, and errors are only considered defects after the baseline
is set.
3. Defect Discovery:
Finding and reporting defects for the development team to acknowledge. A defect is considered
discovered only when documented and confirmed by the responsible team.
4. Defect Resolution:
The development team prioritizes, schedules, fixes defects, and documents the fixes. The tester
is informed to verify the resolution.
5. Process Improvement:
Defects highlight issues in the development process. Fixing these processes leads to better
products with fewer defects.
6. Management Reporting:
Analyzing and reporting defect data helps management with risk management, process
improvements, and project oversight.
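The process can be shown as a simple flow (management reporting draws on data from all stages):

Defect Prevention → Deliverable Baseline → Defect Discovery → Defect Resolution → Process Improvement → Management Reporting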

5. Explain in detail how to prepare a test plan with a suitable example.


A test plan is a document that outlines the approach, resources, schedule, and scope of testing
activities within a project. It serves as a roadmap for the testing process, ensuring clarity and
alignment among team members and stakeholders.

1. Introduction
○ Brief overview of the project and goals of the test plan.
○ Example: "This test plan is for a banking mobile application. The purpose is to
verify functionality, security, and performance before the production launch."
2. Scope of Testing
○ Defines the features to be tested and what is out of scope.
○ Example: “Testing will cover login, account balance check, fund transfer, and
transaction history. Features like bill payments and account settings are out of
scope.”
3. Test Objectives
○ Clearly state what the testing aims to accomplish.
○ Example: “Ensure all critical functionalities operate as expected, identify and fix
critical bugs, and ensure compliance with security standards.”
4. Test Strategy
○ Specifies the approach for testing (e.g., functional, performance, regression).
○ Example: “Functional testing will validate core features; regression testing will
ensure new changes do not disrupt existing features.”
5. Resources and Roles
○ Defines team members, their roles, and the required tools or environments.
○ Example: “Team will consist of a test manager, two functional testers, and one
performance tester. Tools required include JIRA for defect tracking and Selenium
for automation.”
6. Test Schedule and Milestones
○ Details the timeline, including key milestones and deadlines.
○ Example: “Test execution to start on 1st Nov, regression testing to begin on 15th
Nov, and final testing report to be completed by 25th Nov.”
7. Risk Management
○ Lists potential risks (e.g., delays in development) and plans to mitigate them.
○ Example: “Risk of feature delays due to development challenges will be mitigated
by having a buffer week for testing.”
8. Entry and Exit Criteria
○ Establishes conditions to begin (entry) and conclude (exit) testing.
○ Example: “Testing begins when the development team delivers a stable build.
Testing concludes when all critical defects are resolved and pass rate is 95%.”
9. Deliverables
○ Specifies expected outputs like test cases, defect reports, and test summary
reports.

Example: For a banking app, the plan might specify critical test cases for transaction accuracy,
security, and compatibility with different devices.

– The format given by ma'am is as below:

1. Introduction
● This section provides a high-level overview of the project and the purpose
of the test plan.
● Example: "This test plan is for an online shopping application. The primary
goal is to validate the functionality, security, and performance of core
features such as product search, cart, checkout, and payment before
launch."
2. Test Items
● Lists specific components or modules of the application to be tested.
● Example: "Test items include modules for user authentication, product
search, shopping cart management, order placement, and payment
processing."
3. Features to be Tested
● Specifies the features or functionalities included in the testing scope
based on project requirements.
● Example: "Features to be tested include user login and registration,
product search, cart functionality, checkout process, payment gateway
integration, and order tracking."
4. Approach
● Outlines the methods and strategies for testing, such as manual or
automated testing and black-box or white-box testing.
● Example: "A combination of manual and automated testing will be used.
Functional testing will be performed manually, while regression testing will
utilize automated scripts with Selenium."
5. Quality Objectives
● Defines the quality benchmarks that the software must meet in terms of
performance, reliability, and usability.
● Example: "Quality objectives include achieving 99.9% uptime, a maximum
page load time of 3 seconds, and zero high-priority defects at the time of
release."
6. Item Pass/Fail Criteria
● Sets the criteria to determine if a test case has passed or failed based on
expected results.
● Example: "A test case passes if the actual outcome matches the expected
result and the feature works as specified. A fail occurs if there is any
deviation from the expected outcome or a critical error that impacts the
user experience."
7. Suspension Criteria
● Defines conditions under which testing should be paused temporarily.
● Example: "Testing will be suspended if the application experiences
repeated server failures, critical defects in the checkout module, or
database connection issues."
8. Resumption Criteria
● Lists the conditions required to resume testing after a suspension.
● Example: "Testing will resume once server stability is restored, critical
defects are resolved, and the application passes smoke testing."
9. Test Deliverables
● Identifies the documents and artifacts produced during and after the
testing process.
● Example: "Deliverables include test cases, test execution reports, defect
logs, a test summary report, and a final test closure report."
10. Test Tasks
● Outlines specific tasks to be completed as part of the testing process,
such as preparing test cases, executing tests, logging defects, and
creating reports.
● Example: "Tasks include creating functional test cases, executing smoke
tests, performing regression tests, documenting defects, and preparing the
final test summary report."
11. Environmental Needs
● Specifies the hardware, software, and network requirements for the test
environment.
● Example: "Testing will be conducted on Windows and macOS operating
systems with browsers Chrome, Firefox, and Safari. The test environment
requires a stable internet connection, access to the staging server, and a
test database."
12. Responsibilities
● Defines the roles and responsibilities of each team member involved in
testing.
● Example: "The test lead will oversee the testing process and report
progress to stakeholders. Test engineers will create and execute test
cases, while the automation engineer will develop and maintain automated
test scripts."
13. Testing Types and Objectives
● Lists the types of testing to be performed (e.g., functional, performance,
security) and their specific objectives.
● Example: "Functional testing will validate core features work as expected,
performance testing will measure response times and load capacity, and
security testing will verify data protection during transactions."
14. Staffing & Training Needs
● Identifies the necessary team members, their skills, and any required
training.
● Example: "The testing team includes one test lead, two functional testers,
and one performance tester. New testers will undergo training on the
Selenium automation tool and JIRA defect management to ensure smooth
test execution."
15. Schedule
● Provides a timeline for test activities, key milestones, and deadlines.
● Example: "Test execution will start on 1st December, regression testing on
10th December, and the final report will be submitted by 25th December.
Milestones include completing functional testing by 15th December and
performance testing by 20th December."
16. Risks and Contingencies
● Identifies potential risks during testing and outlines mitigation strategies.
● Example: "Potential risks include delayed code delivery, hardware
malfunctions, and resource unavailability. Mitigation strategies involve
allocating extra buffer time in the schedule, ensuring backup devices, and
having standby testers."

(Skipping this question for obvious reasons.)


SET 2

Q1. Any 4

1. Write, in short, the standards included in test management.


Internal standards are the rules and guidelines a company sets for how work should be done.
They ensure everyone follows the same methods, helping keep quality high and work consistent
across the team.
Examples:
1. Naming and storage conventions for test artifacts.
2. Document standards
3. Test coding standards
4. Test reporting standards
External standards are rules and guidelines set by outside organizations that a company’s
product or service must follow. They are visible to others and ensure the product meets certain
quality or safety requirements set by external authorities.
Types of External standards are:
1. Customer standards
2. National Standards
3. International Standards

2. Describe criteria for Selecting Testing Tools in short.


Selection Criteria for Testing Tool:
1. Meeting Requirements: The tool should align with the project’s needs to ensure efficient and
effective testing. Choosing an unsuitable tool can lead to wasted time and reduced
effectiveness.
2. Technology Expectations: The tool must be compatible with the current technology and allow
for easy modifications without excessive costs or vendor dependency.
3. Training/Skills: Proper training is essential for all users of the tool. Without adequate skills,
the tool may not be used to its full potential.
4. Management Aspects: The tool should be affordable and not require significant upgrades.
Consider the overall cost-benefit before making a final decision.

3. State any 4 attributes of defects.


Attributes of a defect:
1) Defect ID: Identifies the defect, since many defects may be identified in a system, e.g., D1, D2, etc.
2) Defect Name: A name that explains the defect in brief. It must be short but descriptive, e.g., login error.
3) Project Name: Indicates the project in which the defect is found, e.g., library management system.
4) Severity: Declared as per the test plan, e.g., high, medium, or low.
5) Priority: Defined based on how the project schedules defects for fixing, e.g., high, moderate, or low.
4. Describe any four limitations of manual testing.
Limitations of Manual Testing:
1. Manual testing is slow and costly.
2. It is very labor-intensive; tests take a long time to complete.
3. Lack of training is a common problem.
4. Differences in GUI object sizes and color combinations are not easy to detect manually.
5. It is not suitable for large-scale or time-bound projects.
6. Batch testing is not possible; human interaction is mandatory for each test execution.
7. Comparing large amounts of data is impractical.

5. Write a short note on the Test incident report.


A test incident report is created when an unexpected result or behavior is observed during
testing. It is a communication that happens throughout the testing cycle as and when defects are
encountered. A test incident report is essentially an entry made in the defect repository. Key
attributes include:

● Incident ID: A unique identifier for tracking.


● Summary: Brief description of the issue.
● Steps to Reproduce: Detailed steps to recreate the issue.
● Expected vs. Actual Results: Comparison of expected and observed outcomes.
● Severity and Priority: Indicates the impact and urgency.
● Status: Current state of the incident, such as Open, In Progress, or Closed.
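For illustration, a sample incident entry (values hypothetical, reusing the ATM example):
Incident ID: INC-101
Summary: Withdrawal fails for amounts above 3000.
Steps to Reproduce: Log in at the ATM, select Withdrawal, enter 5000, confirm.
Expected vs. Actual Results: Expected the amount to be dispensed and the balance updated; actually got an "Invalid amount" error.
Severity and Priority: High / High
Status: Open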

6. State the need for an automated testing tool.


Need for Automation Tools:
1. An automated testing tool can play back pre-recorded and predefined actions, compare the
results to the expected behavior, and report the success or failure of these tests to a test
engineer.
2. Once automated tests are created, they can easily be repeated, and they can be extended
to perform tasks impossible with manual testing.
3. Automated Software Testing Saves Time and Money.
4. Software tests must be repeated often during development cycles to ensure quality.
5. They can even be run on multiple computers with different configurations.
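For illustration, a minimal playback-style automated check in Python with Selenium; the URL and element IDs are hypothetical:

# Automated check: log in and verify the greeting text.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")                # hypothetical page
driver.find_element(By.ID, "username").send_keys("testuser")
driver.find_element(By.ID, "password").send_keys("secret")
driver.find_element(By.ID, "login-btn").click()

expected = "Welcome, testuser"
actual = driver.find_element(By.ID, "greeting").text   # hypothetical element
print("PASS" if actual == expected else f"FAIL: got '{actual}'")
driver.quit()

Once written, such a script can be re-run on every build and on multiple machine configurations without human interaction.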
Q2 Any 3

1. Describe the contents of the "Test Summary Report" used in test reporting, with a suitable example.
A Test Summary Report provides a comprehensive overview of the testing phase, summarizing
test results, highlighting major defects, and making recommendations about the product’s
release. Attributes include:

1. Report Identifier
○ A unique ID for tracking the report.
○ Example: "TSR-BankingApp-2024-Q1"
2. Overview and Description
○ Briefly outlines the project and scope of testing.
○ Example: "This report summarizes the functional and performance testing
conducted on the banking mobile application.”
3. Test Objectives
○ States what the testing aimed to achieve.
○ Example: "To verify that all critical functionalities operate as expected and ensure
high-priority bugs are resolved.”
4. Test Execution Summary
○ Provides details on total test cases executed, passed, and failed.
○ Example: "150 test cases executed, 135 passed, 15 failed.”
5. Defect Summary
○ Lists critical and major defects found, including their current status.
○ Example: "Two high-severity defects were identified in the fund transfer module;
one is resolved, one is open.”
6. Test Results
○ Detailed results for each test phase, such as functional, regression, and
performance testing.
○ Example: "Regression testing showed no critical failures; performance testing
met required benchmarks."
7. Deviations
○ Describes any deviations from the test plan, such as schedule changes.
○ Example: "Testing schedule was extended by two days due to feature changes in
the transaction module."
8. Conclusion and Recommendation
○ Final assessment of product readiness.
○ Example: "The application is stable and meets all functional requirements.
Recommended for release.”
2. Design any two test cases for a chatting application and prepare a defect report.
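One possible answer (all values illustrative), following the test case and defect report formats used earlier:

Test Case 1 (TC01): Send a text message. Steps: log in, open a contact's chat, type "Hello", tap Send. Expected result: the message appears in the chat window with a "delivered" status.
Test Case 2 (TC02): Send an empty message. Steps: log in, open a contact's chat, leave the input blank, tap Send. Expected result: the Send button stays disabled and no message is sent.

Defect report (for TC02 failing):
Project name: Chat Application
Module: Messaging
Title: Empty message gets sent
Description: Tapping Send with a blank input posts an empty message.
Severity/Priority: Low / Medium
Status: Open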
3. How to select a testing tool? Explain in detail.

Industry experts have suggested the following four major criteria for selecting testing tools.
1) Meeting requirements.
2) Technology expectations.
3) Training / skills.
4) Management aspects.

1. Meeting Requirements: The tool should align with the project’s needs to ensure efficient and
effective testing. Choosing an unsuitable tool can lead to wasted time and reduced
effectiveness. Furthermore, it should facilitate seamless collaboration among team members,
enhancing communication and improving project outcomes.
2. Technology Expectations: The tool must be compatible with the current technology and allow
for easy modifications without excessive costs or vendor dependency. Additionally, it should
integrate smoothly with existing systems to minimize disruption & enhance overall productivity.
3. Training/Skills: Proper training is essential for all users of the tool. Without adequate skills,
the tool may not be used to its full potential. Ongoing support and resources should also be
provided to help users stay updated on best practices and new features.
4. Management Aspects: The tool should be affordable and not require significant upgrades.
Consider the overall cost-benefit before making a final decision. It should also provide clear
analytics and reporting features to help management make informed decisions about resource
allocation and project progress.

4. Explain different types of defect classification.

1) Severity Wise
1. Major: A defect, which will cause an observable product failure or departure from
requirements.
2. Minor: A defect that will not cause a failure in execution of the product.
3. Fatal: A defect that will cause the system to crash or close abruptly or affect other
applications.
2) Status Wise:
1. Open: The defect is acknowledged and needs to be addressed.
2. Closed: The defect has been fixed and verified.
3. Deferred: The defect is postponed for future evaluation.
4. Canceled: The defect will not be fixed and is disregarded.
3) Work product wise:
1. SSD: A defect from System Study document
2. FSD: A defect from Functional Specification document
3. ADS: A defect from Architectural Design Document
4. DDS: A defect from Detailed Design document
5. Source code: A defect from Source code
6. Test Plan/ Test Cases: A defect from Test Plan/ Test Cases
7. User Documentation: A defect from User manuals, Operating manuals
4) Error Wise:
1. Comments: Inadequate, incorrect, misleading, or missing comments in the source code.
2. Data Error: Incorrect data population/update in the database.
3. Database Error: Error in the database schema/design.
4. Incorrect Design: Wrong or inaccurate design.
5. Navigation Error: Navigation not coded correctly in the source code.
6. System Error: Hardware and operating system related errors, e.g., memory leaks.

5. Explain test infrastructure management with its components.
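In brief, test infrastructure management deals with managing the support elements needed for testing. The usual textbook treatment identifies three components:
1. Test Case Database (TCDB): stores test cases, their mapping to features/requirements, and their execution history.
2. Defect Repository: stores details of all defects, their status, and the test cases associated with them.
3. Software Configuration Management (SCM) Repository and Tool: maintains versions of test artifacts such as test plans, test cases, and scripts, and controls changes to them.
These components must stay linked: a defect points back to the test case that found it, and both are version-controlled in the SCM repository.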
