
Scheme- I

Sample Question Paper

Program Name : Diploma in Computer Engineering Group


Program Code : CO / CM /CW

Semester : Fifth
Course Title : Software Testing
Course Code : 22518
Time: 3 Hrs.
Marks : 70

Instructions:
(1) All questions are compulsory.
(2) Illustrate your answers with sketches wherever necessary.
(3) Figures to the right indicate full marks.
(4) Assume suitable data if necessary.
(5) Preferably, write the answers in sequential order.

Q.1) Attempt any FIVE of the following. 10 Marks


a) Define software Quality Assurance and software Quality Control.
Software Quality Assurance (QA):
 It is a procedure that focuses on providing assurance that the quality requested will be achieved.
 QA aims to prevent defects.
 It is a method to manage quality (Verification).
 It does not involve executing the program.
 It is a preventive technique and a proactive measure.

Software Quality Control (QC):
 It is a procedure that focuses on fulfilling the quality requested.
 QC aims to identify and fix defects.
 It is a method to verify quality (Validation).
 It always involves executing the program.
 It is a corrective technique and a reactive measure.

b) State any two examples of Security testing.

 Penetration Testing (Pen Test): It's like hiring someone to try breaking into
your system, just like a hacker would. This helps you find weaknesses before real
hackers do.
 Vulnerability Scanning: This is when you use a tool to check your system for
any known security issues, like old software or weak settings that could be
dangerous.

c) Enlist any four benefits of a Test Plan.


 Clear Objectives: A test plan defines what needs to be tested, helping everyone
understand the goals of testing.
 Organizes testing activities and resources effectively.
 Identifies risks and prepares mitigation strategies.
 Enhances communication and coordination among teams.
d) State any four basic principles of writing good test cases.
 Clarity: Test cases should be simple, clear, and easy to understand.
 Coverage: Ensure all functional aspects of the application are tested.
 Reusability: Test cases should be reusable for future testing cycles.
 Traceability: Connect each test case to the requirement it is supposed to check.
e) Enlist different types of defect classification.
 Functional Defects: Issues where the software doesn't behave as expected.
 Performance Defects: Problems related to slow response times or system
crashes.
 Security Defects: Vulnerabilities that expose the system to threats.
 Usability Defects: Issues that affect the user experience, making the software
hard to use.
f) Write any four limitations of Manual Testing.
1. Time-consuming: Manual testing takes longer compared to automated testing.
2. Prone to Human Error: Testers might overlook issues or make mistakes.
3. Not Suitable for Large-Scale Testing: It is hard to manually test large applications or systems repeatedly.
4. Lack of Reusability: Manual test cases cannot be easily reused, unlike automated scripts.
g) Define following terms-Failure, Error, Defect and Bug.

Failure: A failure is when something does not work as expected. In the context of software testing, a failure occurs when the software does not perform as intended during execution.

Error: An error is when something goes wrong or does not work properly. In software testing, an error is a mistake in the software, introduced by a human, that causes it to behave unexpectedly or incorrectly.

Defect: A defect is a problem or flaw in something, such as software, that causes it to not work correctly or to produce wrong results.

Bug: A bug refers to any fault, flaw, or defect in the software that can lead to malfunctions or unexpected behavior. Bugs can manifest in various forms, such as incorrect calculations, system crashes, user interface issues, or incorrect outputs.
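A minimal Python sketch (illustrative only, not part of the original answer) of how these terms relate: the developer's mistake (error) leaves a flaw in the code (defect/bug), and the wrong output observed when the program runs is the failure.

```python
def average(numbers):
    # Error/defect: the developer divides by a hard-coded 2 instead of len(numbers).
    return sum(numbers) / 2

# Failure: the observed behavior differs from the expected behavior at run time.
result = average([10, 20, 30])
print("expected 20.0, got", result)   # prints 30.0, so a test would report a failure
```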

3
Q.2) Attempt any THREE of the following. 12 Marks
a) Describe the roles and responsibilities of a Test Leader.
b) Differentiate between Drivers and Stub (any four points).

1. Stubs are used in Top-Down Integration Testing, whereas drivers are used in Bottom-Up Integration Testing.

2. Stubs are basically known as “called programs”, while drivers are the “calling programs”.

3. Stubs are used in the unavailability of low-level modules, while drivers are mainly used in place of high-level modules (and in some situations for low-level modules as well).

4. Stubs are used when testing a component that is called or used by another component that is not yet developed, whereas drivers are used when testing a component that calls or depends on another component that is not yet developed.

5. A stub is a basic implementation of the missing component that simulates its behavior to allow testing of the component that depends on it, whereas a driver is a simplified stand-in for the calling component that enables testing of the component being developed.
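For illustration only (a hedged Python sketch, not part of the original answer; the module and function names are hypothetical), a stub stands in for a missing lower-level module, while a driver stands in for a missing higher-level caller:

```python
# Hypothetical top-down case: the high-level billing code is ready, but the
# low-level tax module is not, so a stub simulates it.
def get_tax_rate_stub(state):
    """Stub: returns a fixed value in place of the unfinished tax module."""
    return 0.05

def calculate_invoice(amount, state, get_tax_rate=get_tax_rate_stub):
    """High-level module under test, wired to the stub."""
    return amount * (1 + get_tax_rate(state))

# Hypothetical bottom-up case: the low-level tax module is ready, but its
# high-level caller is not, so a driver calls it directly with test inputs.
def get_tax_rate(state):
    """Low-level module under test."""
    return 0.05 if state == "MH" else 0.10

def driver():
    """Driver: a simple stand-in for the missing high-level caller."""
    assert get_tax_rate("MH") == 0.05
    assert abs(calculate_invoice(100, "MH") - 105.0) < 1e-9
    print("stub/driver checks passed")

if __name__ == "__main__":
    driver()
```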

c) Describe different types of attributes of a Test Plan.


A test plan is a document that describes the scope, approach, resources and
schedule required for conducting testing activities.
1. Test Objectives: Clearly defined goals of what the testing aims to achieve, such as verifying functionality, performance, or security.
2. Scope Management: Clearly describes what will be tested and what will not be tested.
3. Test Strategy: Describes the overall approach to testing, including methodologies (e.g., manual or automated testing) and types of testing to be performed (e.g., functional, regression, performance).
4. Test Schedule: A timeline that specifies when testing activities will take place, including milestones and deadlines.
5. Test Deliverables: Lists the expected outputs from the testing process, such as test cases, test scripts, defect reports, and final test summary reports.
6. Risk Assessment: Identifies potential risks that could affect testing and outlines strategies for mitigating those risks.
7. Setting up Criteria for Testing: There must be clear entry and exit criteria for the different phases of testing, and the test strategies for the various features and feature combinations determine how those features and combinations will be tested.

d) State the Advantages and Disadvantages of using testing tools.


Advantages:
 Faster Execution: Automated testing tools can run large test suites much faster than manual testing.
 Reusability: Automated test scripts can be reused across builds and regression cycles.
 Wider Coverage: Tools make it practical to execute more tests and cover more scenarios and configurations.
 Consistency and Accuracy: Tests run the same way every time, reducing human error.

Disadvantages:
 High Cost: Some testing tools can be very expensive to buy and maintain.
 Training Needed: Team members may need time to learn how to use the tools
properly.
 Limited Use: Not every tool works for all types of testing or software, which can
leave some areas untested.
 Ongoing Updates: Automated tests need to be regularly updated as the software
changes, which takes time.
 Too Much Dependence: Relying too much on tools can lead to ignoring
important manual testing tasks.

Q.3) Attempt any THREE of the following. 12 Marks


a) State the process of Black box testing with a labeled diagram. List any four techniques of
black box testing.

Process of Black Box Testing: The tester first analyzes the requirement/specification documents (requirement analysis), then designs test cases and test data purely from those specifications without looking at the internal code, executes the test cases on the application, compares the actual outputs with the expected outputs, and reports any mismatches as defects for fixing and retesting.

Four techniques of black box testing:
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Decision Table Testing
4. State Transition Testing


b) Describe the Test Case Specification and list its parameters.
A Test Case Specification is a detailed document that outlines the specific steps, inputs, expected results, and conditions needed to verify whether a feature or functionality of an application works correctly. It serves as a guideline for testers to execute test cases consistently and thoroughly.

Key Elements (Parameters) of a Test Case Specification:
1. Test Case ID: A unique identifier assigned to each test case for easy reference and tracking.
2. Test Description: A brief description outlining the specific feature or functionality being tested.
3. Preconditions: Conditions that must be met before executing the test (e.g., user must be logged in).
4. Input Data: The specific data or values required to execute the test case (e.g., username, password).
5. Test Steps: A sequence of actions to be performed by the tester to execute the test (e.g., enter username and password, click login).
6. Expected Result: The outcome expected after performing the test steps.
7. Actual Result: The outcome observed when the test case is executed.
8. Status: Indicates whether the test case passed or failed based on the comparison between the expected and actual results.
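As an illustrative sketch only (not part of the original paper; the field values are hypothetical), a single test case specification could be recorded as structured data like this:

```python
# Hypothetical test case specification for a login feature, recorded as a dict.
test_case = {
    "test_case_id": "TC_LOGIN_001",
    "description": "Verify login with valid username and password",
    "preconditions": ["User account exists", "Login page is open"],
    "input_data": {"username": "user@example.com", "password": "Valid@123"},
    "test_steps": [
        "Enter the username",
        "Enter the password",
        "Click the Login button",
    ],
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # filled in during execution
    "status": None,          # "Pass" or "Fail" after comparing results
}
```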

c) Draw Defect Management Process. State the working of each phase.

The defect management process in software testing is a structured
approach to identify, track, manage, and resolve defects (bugs) in a
software application. The primary goal is to ensure that the software
meets the required quality standards and functions as intended.
1. Defect identification – Defects are identified through various
testing activities, such as unit testing, integration testing, and user
acceptance testing.
2. Defect logging – Defects are logged in a defect tracking system,
along with details such as severity, status, reproducibility and
priority.
3. Defect triage – The triage process involves evaluating the defects
to determine their priority and the resources required to resolve
them.
4. Defect assignment – Defects are assigned to developers or testers
for resolution, based on their expertise and availability.

5. Defect Resolution and Verification: The defect is fixed and then
verified by the tester to ensure it’s correctly resolved without
introducing new issues.
6. Defect Closure and Reporting: After verification, the defect is
closed, and its status is updated in the tracking system. Regular
reports are generated to provide visibility into the overall defect
status and resolution progress.
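As a small sketch (not from the original paper; the field names and values are illustrative), a defect logged during the defect logging phase might carry details like these:

```python
# Hypothetical defect record as it might be entered in a defect tracking system.
defect = {
    "defect_id": "DEF-101",
    "summary": "Login button unresponsive on mobile view",
    "severity": "Major",
    "priority": "High",
    "status": "New",              # New -> Assigned -> Fixed -> Verified -> Closed
    "reproducibility": "Always",
    "reported_by": "tester01",
    "assigned_to": None,          # set during defect triage/assignment
    "steps_to_reproduce": [
        "Open the site on a 360px-wide screen",
        "Enter valid credentials",
        "Tap the Login button",
    ],
}
```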

d) State any four points of comparison between Static analysis tools and Dynamic
analysis tools.
1. Definition:
   o Static Testing Tools: Analyze the code without executing it.
   o Dynamic Testing Tools: Analyze the software by executing it.

2. Purpose:
   o Static Testing Tools: To identify potential defects, vulnerabilities, and code quality issues before the software is run.
   o Dynamic Testing Tools: To identify issues that occur during execution, such as functional errors and performance problems.

3. Examples:
   o Static Testing Tools: Lint tools (check for syntax errors and coding standards).
   o Dynamic Testing Tools: Automated test scripts (execute test cases and compare actual results with expected outcomes).

4. When Used:
   o Static Testing Tools: During the development phase, before the software is executed.
   o Dynamic Testing Tools: During and after the software development phase.

5. Benefits:
   o Static Testing Tools: Can detect issues early in the development cycle.
   o Dynamic Testing Tools: Can find issues that occur only during runtime.

6. Limitations:
   o Static Testing Tools: Cannot find runtime or integration issues.
   o Dynamic Testing Tools: Require the software to be executed and may not detect certain issues that static analysis can identify.
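A small Python sketch (illustrative only, not from the original paper) of the difference: the first issue is visible to a static/lint tool without running the code, while the second only surfaces when the code is executed with particular data:

```python
import os  # unused import: a lint (static analysis) tool flags this without running the code

def divide(total, count):
    # A static tool sees nothing wrong here, but executing divide(10, 0)
    # raises ZeroDivisionError at run time, which only dynamic testing finds.
    return total / count

def test_divide():
    assert divide(10, 2) == 5        # passes when executed
    divide(10, 0)                    # fails when executed: ZeroDivisionError
```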

Q.4) Attempt any THREE of the following. 12 Marks


a) Describe Integration Testing.
Integration testing involves combining individual software modules and
testing them as a group to uncover any issues that arise from their interactions.
There are different types of integration testing:

1. Top-Down Integration Testing: In this approach, testing starts from the top module (usually the main module) and gradually moves down to lower-level modules. Stubs are used to simulate the behavior of lower-level modules that are not yet integrated.

2. Bottom-Up Integration Testing: This method begins testing from the lowest-level modules, gradually moving up to higher-level modules. Drivers are used to simulate the behavior of higher-level modules that are not yet integrated.

3. Big Bang Integration Testing: In this type, all modules are integrated simultaneously to test the entire system at once. This approach is quick but can make it challenging to isolate and fix issues.

4. Sandwich Integration Testing: Also known as hybrid integration testing, this method combines elements of both top-down and bottom-up approaches. It involves testing both from the top down and from the bottom up, meeting in the middle.
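As an illustrative sketch only (the module and function names are hypothetical, not from the original paper), a top-down integration test might stub an unfinished lower-level module using unittest.mock:

```python
from unittest import mock

# Hypothetical high-level function that depends on a lower-level payment module.
def place_order(item, charge_card):
    if charge_card(item["price"]):
        return "ORDER CONFIRMED"
    return "PAYMENT FAILED"

def test_place_order_with_stubbed_payment():
    # The real payment module is not integrated yet, so a stub stands in for it.
    charge_card_stub = mock.Mock(return_value=True)
    assert place_order({"price": 250}, charge_card_stub) == "ORDER CONFIRMED"
    charge_card_stub.assert_called_once_with(250)

test_place_order_with_stubbed_payment()
print("top-down integration check passed")
```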

b) State the testing approaches that are considered during Client- Server Testing.
Client-Server Testing is a type of software testing model used to test the
interaction between two components: the client (which requests services) and the
server (which provides services). This type of testing ensures that the
communication, functionality, and performance of both the client and server are
working as expected.
Key Components:
1. Client: The front-end component that sends requests to the server for services
or data. This could be a web browser, a desktop application, or a mobile app.

2. Server: The back-end component that processes the requests from the client
and sends the appropriate responses.

3. Database: Often, servers interact with a database to fetch, update, or delete information based on the client's request.

Client-Server Testing Process:

1. Functional Testing: Ensuring that the client can send correct requests and the server returns valid responses.
   Example: Testing login functionality, file uploads, and API responses.

2. Load Testing: Testing the system under heavy loads to ensure it handles multiple client requests simultaneously without failure.
   Example: Simulating hundreds or thousands of clients to see how the system performs under peak load.

3. Security Testing: Ensuring secure communication between the client and server, protecting data integrity and confidentiality.
   Example: Testing login authentication mechanisms and ensuring sensitive data like passwords are encrypted.

4. Performance Testing: Measuring the performance of the client-server system, such as response time and throughput.
   Example: Checking how fast the server responds to requests when multiple clients are interacting with it.
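A minimal sketch (illustrative only; the URL and response fields are hypothetical, not from the original paper) of a functional client-server check, where the test acts as the client and verifies the server's response:

```python
import requests

def test_login_endpoint():
    # Hypothetical login endpoint: the client sends a request and the test
    # verifies that the server returns a valid response.
    response = requests.post(
        "https://staging.example.com/api/login",
        json={"username": "user@example.com", "password": "Valid@123"},
        timeout=10,
    )
    assert response.status_code == 200
    assert "token" in response.json()   # hypothetical field in the server's reply
```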

c) Explain the Test Management with Test Infrastructure management and Test People
Management.
Test Management:
Test management involves organizing and controlling the testing process to ensure the software meets quality standards. Key components include:
1. Test Planning: Defining the scope, strategy, resources, and timeline for testing.
2. Test Execution: Running test cases and tracking the results to identify defects.
3. Defect Management: Logging, prioritizing, and tracking defects to ensure they are resolved.
4. Test Reporting: Summarizing testing progress and outcomes to stakeholders.

Test Infrastructure Management refers to the management of the hardware, software, and environment necessary to conduct testing efficiently. It ensures that the testing environment is properly set up, maintained, and supports the testing activities.

 Test Environment:
 Description: The setup of hardware and software where tests are executed. This
includes servers, computers, and network configurations that mimic the
production environment.
 Example: A staging server that simulates the live environment.
 Test Data:
 Description: Data used during testing to validate the functionality of the
software. This includes input data and expected results.
 Example: User accounts and transaction records used to test login and payment
features.
 Test Tools:
 Description: Software applications that help design, execute, and manage tests.
This includes automation tools and defect tracking systems.
 Example: Selenium for automated testing, JIRA for bug tracking.
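As a brief illustration of such a test tool (a hedged sketch, not part of the original answer; the URL and element IDs are hypothetical), a Selenium script automating a login check on a staging environment might look like:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical automated UI check run against a staging environment.
driver = webdriver.Chrome()
try:
    driver.get("https://staging.example.com/login")
    driver.find_element(By.ID, "username").send_keys("user@example.com")
    driver.find_element(By.ID, "password").send_keys("Valid@123")
    driver.find_element(By.ID, "login-button").click()
    assert "Dashboard" in driver.title   # expected result of the test case
finally:
    driver.quit()
```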

Test People Management:
Test people management focuses on effectively managing the human resources involved in testing. Key aspects include:
1. Role Assignment: Clearly defining roles such as test leads, engineers, and automation testers.
2. Skill Development: Providing training and learning opportunities to enhance the team's skills.
3. Motivation and Communication: Keeping the team motivated and ensuring clear communication.
4. Resource Allocation: Assigning tasks based on team members' expertise to optimize efficiency.

d) Enlist and describe the criteria for selecting testing tools.
Project Requirements and Compatibility:

 Description: The testing tool must align with the specific needs of the
project, such as the type of application (web, mobile, desktop) and the
technologies used (programming languages, frameworks, etc.).

 Example: If you're testing a web application built with Angular, you would
choose a tool like Selenium, which supports automated testing of web apps
across different browsers.

Ease of Use and Learning Curve:

 Description: The tool should be user-friendly and easy to learn, especially if the testing team is new to it. A steep learning curve can delay the testing process.

 Example: Postman is widely adopted for API testing because of its intuitive interface and ease of use, allowing even beginners to quickly get started.

Budget:

 Description: Consider the cost of the testing tool and whether it fits within the project's budget constraints. Some tools may have licensing fees, while others are open source. It is crucial to balance the cost with the tool's capabilities and the project's requirements.

 Example: Selenium is an open-source testing framework widely used for automating web browsers. It is free to use, making it a cost-effective choice for projects with budget constraints.

Integration Capabilities:

 Description: Check if the testing tool can seamlessly integrate with other tools and technologies used in the project, such as bug tracking systems, continuous integration tools, or test management platforms.

 Example: Selenium integrating with Jenkins for continuous integration and JIRA for bug tracking.

e) Explain the following concepts related to Web Applications:

(i) Load Testing
(ii) Stress Testing
(i) Load Testing:
Description: Load testing is a type of performance testing that evaluates how a system
behaves under a specific, expected load. The goal is to determine whether the system
can handle the anticipated number of users or transactions without performance
degradation. It helps ensure that the system can operate efficiently under normal and
peak conditions.
Example: Imagine a website for an online store that expects around 10,000 users during
a big sale event. A load test would simulate 10,000 users accessing the site at the same
time, placing items in their carts, and checking out. The test checks whether the website
can handle this load without slowing down, crashing, or causing errors.
(ii) Stress Testing:
Description: Stress testing goes beyond load testing by evaluating how a system
behaves under extreme or unpredictable conditions, often pushing it beyond its normal
operational capacity. The purpose is to identify the breaking point and to see how the
system recovers after failure. It helps determine the system's robustness and stability
under stress.
Example: Consider the same online store website. A stress test would simulate 50,000
users accessing the site simultaneously, far exceeding the expected load. The test would
monitor how the website handles this excessive traffic, whether it crashes, and how it
recovers once the traffic decreases. This helps in understanding the system's limits and
its ability to handle unexpected surges in traffic.
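A scaled-down sketch (illustrative only; the URL and user counts are hypothetical, and real load/stress tests normally use dedicated tools such as JMeter or Locust) of simulating many concurrent clients against a web application:

```python
import concurrent.futures
import requests

URL = "https://staging.example.com/products"   # hypothetical endpoint under test
SIMULATED_USERS = 100                           # scaled-down "expected load"

def one_user_visit(_):
    response = requests.get(URL, timeout=10)
    return response.status_code, response.elapsed.total_seconds()

# Fire the simulated users concurrently and summarize the results.
with concurrent.futures.ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
    results = list(pool.map(one_user_visit, range(SIMULATED_USERS)))

errors = sum(1 for status, _ in results if status != 200)
slowest = max(t for _, t in results)
print(f"errors: {errors}, slowest response: {slowest:.2f}s")

# For a stress test, keep raising SIMULATED_USERS well beyond the expected load
# until errors appear, then observe how the system recovers.
```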

Q.5) Attempt any TWO of the following. 12 Marks
a) Design test cases for the data fields from the Admission form of your institute.
(Data fields are: Name, SSC percentage, Aadhaar no., Address, Mobile no.)
b) With respect to GUI testing, write test cases for the Amazon login form.
c) Elaborate the concept of Software Metrics. Describe product and process metrics with suitable examples.

Q.6) Attempt any TWO of the following. 12 Marks


a) Design test cases for MSBTE Online Exam form filling. (Any six valid test cases.)
b) Prepare a Test Plan along with the Test Cases for the MS Word option ‘Save As’.
Test Cases should be at least six.
c) Design any three test cases for a railway reservation form and prepare a defect report for it.
Scheme- I

Sample Test Paper - I

Program Name : Diploma in Computer Engineering Group


Program Code : CO / CM /CW

Semester : Fifth
Course Title : Software Testing
Course Code : 22518
Time: 1 Hour
Marks : 20

Instructions:
(1) All questions are compulsory.
(2) Illustrate your answers with sketches wherever necessary.
(3) Figures to the right indicate full marks.
(4) Assume suitable data if necessary.
(5) Preferably, write the answers in sequential order.

Q.1 Attempt any FOUR. 08 Marks


a) List the objectives of software testing.

b) Design any four boundary value test cases for a textbox which accepts numbers from 1999.
c) Define Static testing and Dynamic testing.
d) Describe the need of stubs and drivers in Unit testing.
e) Define Load testing and Stress testing.
f) Define Unit Testing.

Q.2 Attempt any THREE. 12 Marks


a) Differentiate between Verification and Validation.
b) Apply equivalence partitioning on an application which displays a result on the basis of the
percentage obtained in an exam.
c) Explain Top Down integration testing.
d) With respect to GUI testing, write any four test cases for the Flipkart login form.
e) Explain the need of Regression Testing.

Scheme- I

Sample Test Paper - II

Program Name : Diploma in Computer Engineering Group


Program Code : CO / CM /CW

Course Code : 22518
Semester : Fifth
Course Title : Software Testing
Time: 1 Hour
Marks : 20

Instructions:
(1) All questions are compulsory.
(2) Illustrate your answers with sketches wherever necessary.
(3) Figures to the right indicate full marks.
(4) Assume suitable data if necessary.
(5) Preferably, write the answers in sequential order.

Q.1 Attempt any FOUR. 08 Marks

a) Define Test Plan.
b) List the basic steps of the Fundamental Test Process.
c) Enlist different types of defect classification.
d) State five general activities of defect prevention.
e) List the benefits of automation testing.
f) Define software metric and measurement.

Q.2 Attempt any THREE. 12 Marks


a) Prepare a test plan along with test cases for the Notepad copy-paste option.
b) Design any four Test cases for User Login Form.
c) Design any two test cases for a simple calculator application and prepare a defect report.
d) Differentiate between manual testing & automation testing.
e) Enlist factors considered for selecting a testing tool for test automation.
