Manual Testing Interview Questions_RM-PART-02

The document discusses various aspects of software testing, including the impossibility of achieving 100% test coverage, the distinctions between test drivers and stubs, and the importance of agile testing. It outlines key challenges in software testing, types of functional testing, and metrics like defect detection percentage and defect removal efficiency. Additionally, it differentiates between quality assurance, quality control, and software testing, while highlighting essential qualities for QA leads and factors for choosing automated testing over manual testing.

111. Is it possible to achieve 100% testing coverage? How would you ensure it?
It is generally considered impossible to test any product 100%. However, you can follow the steps below to come closer; a rough exit-criteria sketch in code follows the list.

 Set a hard limit on the following factors:
o Percentage of test cases passed
o Number of bugs found
 Set a red flag if:
o The test budget is depleted
o Deadlines are breached
 Set a green flag if:
o The entire functionality is covered by test cases
o All critical and major bugs have a ‘CLOSED’ status
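
Putting these criteria together, a minimal exit-criteria sketch in Python; the thresholds and names are hypothetical, not from the source:

# Illustrative exit-criteria check; the 95% pass-rate threshold is an assumption.
def exit_status(pass_rate, open_critical_bugs, budget_left, deadline_missed,
                functionality_covered):
    if budget_left <= 0 or deadline_missed:
        return "red"    # red flag: budget depleted or deadline breached
    if functionality_covered and open_critical_bugs == 0 and pass_rate >= 0.95:
        return "green"  # green flag: full coverage, critical/major bugs closed
    return "continue"   # neither flag raised: keep testing

print(exit_status(pass_rate=0.97, open_critical_bugs=0, budget_left=5,
                  deadline_missed=False, functionality_covered=True))  # green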

112. What is the difference between a test driver and a test stub?
The test driver is a section of code that calls a software component under test. It is
useful in testing that follows the bottom-up approach.

The test stub is a dummy program that integrates with an application to complete its
functionality. It is relevant for testing that uses the top-down approach.

For example:

1. Let’s assume a scenario where we have to test the interface between Modules A and B, and only Module A has been developed. We can test Module A if we have either the real Module B or a dummy module for it. In this case, the dummy module is called the test stub.
2. Now, Module B can’t send or receive data directly from Module A. In such a scenario, we have to move data from one module to another using an external piece of code called the test driver.
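
As a rough illustration, a minimal Python sketch of the scenario above; the module logic and names are hypothetical:

# Module A is developed; Module B is not, so a dummy stands in for it.
def module_b_stub(data):
    """Test stub: dummy stand-in for the unfinished Module B (top-down)."""
    return {"status": "ok", "echo": data}  # canned response, no real logic

def module_a(payload, module_b=module_b_stub):
    """Component under test; calls Module B through an injectable reference."""
    result = module_b(payload)
    return result["status"] == "ok"

# Test driver: external code that calls the component under test and feeds
# it test data (bottom-up).
if __name__ == "__main__":
    assert module_a({"id": 1}) is True
    print("Module A works against the Module B stub")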

113. What is agile testing and why is it important?
Agile testing is a software testing process that evaluates software from the customers’ point of view. It is favorable because it does not require the development team to complete coding before QA can start; instead, coding and testing go hand in hand. However, it may require continuous customer interaction.

114. What do you know about data flow testing?
Data flow testing is one of the white-box testing techniques. It emphasizes designing test cases that cover the control flow paths around variable definitions and their uses in the modules. It expects test cases to have the following attributes (see the example after this list):

1. The input to the module
2. The control flow path for testing
3. A pair of an appropriate variable definition and its use
4. The expected outcome of the test case
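
For illustration, a small hypothetical Python example covering the definition-use pairs of a single variable:

# Hypothetical module: 'discount' is defined on two paths and used once.
def final_price(price, is_member):
    discount = 0.0                     # definition 1 of 'discount'
    if is_member:
        discount = 0.1                 # definition 2 of 'discount'
    return price * (1 - discount)      # use of 'discount'

# Data-flow test cases: each exercises a different definition-use path.
assert final_price(100, is_member=False) == 100.0  # def 1 -> use
assert final_price(100, is_member=True) == 90.0    # def 2 -> use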

115. How will you overcome the challenges faced due to the
unavailability of proper documentation for testing?
If the standard documents like System Requirement Specification or Feature
Description Document are not available, then QAs may have to rely on the following
references, if available.

 Screenshots
 A previous version of the application
 Wireframes

Another reliable approach is to discuss the application with the developer and the business analyst. This helps resolve doubts and opens a channel for clarifying the requirements. The emails exchanged during these discussions can also serve as a testing reference.

Smoke testing is yet another option that helps verify the main functionality of the application and reveals some very basic bugs. If none of these work, we can test the application based on our previous experience.

115. Is there any difference between retesting and regression testing?
The differences between retesting and regression testing are as follows:

 We perform retesting to verify defect fixes, whereas regression testing assures that a bug fix does not break other parts of the application.
 Regression test cases verify the functionality of some or all modules.
 Regression testing re-executes test cases that previously passed, whereas retesting executes test cases that are in a failed state.
 Retesting has a higher priority than regression testing, but in some cases both are executed in parallel.

117. As per your understanding, list down the key challenges of software testing.
Following are some of the key challenges of software testing:

 The lack of standard documents for understanding the application
 Lack of skilled testers
 Understanding the requirements: testers need good listening and comprehension skills to communicate with customers about the application requirements
 The decision-making ability to analyze when to stop testing
 Ability to work under time constraints
 Ability to decide which tests to execute first
 Testing the entire application using an optimized number of test cases

118. What are the different types of functional testing?
Functional testing covers the following types of validation techniques:

 Unit testing
 Smoke testing
 UAT
 Sanity testing
 Interface testing
 Integration testing
 System testing
 Regression testing

119. What are functional test cases and non-functional test cases?

 Functional testing: It tests the ‘functionality’ of a software application, i.e., the behavior of the software under test. Based on the client’s requirements, a document called a software specification or requirement specification is used as a guide for testing the application.

 Non-functional testing: When an application works as per the user’s expectations, smoothly and efficiently under any condition, it is regarded as a reliable application. Testing these quality parameters is called non-functional testing.

120. What do you understand by STLC?
The software testing life cycle (STLC) organizes test execution in a planned and systematic manner. In the STLC model, many activities occur to improve the quality of the product.

The STLC model lays down the following steps:

1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Environment Setup
5. Test Execution
6. Test Cycle Closure

121. In software testing, what does a fault mean?
A fault is a condition that causes the software to fail while performing its intended function.

122. Difference between Bug, Defect, and Error.
A slip in coding is called an error. An error spotted by a manual tester becomes a defect. A defect that the development team accepts is known as a bug. If the built code misses the requirements, it is a functional failure.

123. How do severity and priority relate to each other?
Severity: It represents the gravity/depth of a bug and describes it from the application’s point of view.

Priority: It specifies which bug should be fixed first and reflects the user’s point of view.

124. List the different types of severity.
The criticality of a bug can be low, medium, or high depending on the context:

 User interface defects – Low
 Boundary-related defects – Medium
 Error handling defects – Medium
 Calculation defects – High
 Misinterpreted data – High
 Hardware failures – High
 Compatibility issues – High
 Control flow defects – High
 Load conditions – High

125. What do you mean by defect detection percentage in software testing?
Defect detection percentage (DDP) is a type of testing metric. It indicates the effectiveness of a testing process by measuring the ratio of defects discovered before the release to the total defects, including those reported by customers after the release.

For example, let’s say the QA team detected 70 defects during the testing cycle and the customer reported 20 more after the release. Then, DDP would be: 70/(70 + 20) = 77.8%

126. What does defect removal efficiency mean in software testing?
Defect removal efficiency (DRE) is one of the testing metrics. It is an indicator of the development team’s efficiency in fixing issues before the release.

It is measured as the ratio of defects fixed to the total number of issues discovered.

For example, let’s say there were 75 defects discovered during the test cycle, and 62 of them had been fixed by the development team at the time of measurement. The DRE would be: 62/75 ≈ 82.7%
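
Both metrics are simple ratios; a minimal Python sketch reproducing the two worked examples above:

def ddp(found_in_testing, found_after_release):
    # Share of all defects that were caught before the release.
    return found_in_testing / (found_in_testing + found_after_release) * 100

def dre(fixed, discovered):
    # Share of discovered defects the development team fixed.
    return fixed / discovered * 100

print(f"DDP = {ddp(70, 20):.1f}%")  # DDP = 77.8%
print(f"DRE = {dre(62, 75):.1f}%")  # DRE = 82.7%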

127. What is the average age of a defect in software testing?
Defect age is the time elapsed between the day the tester discovered a defect and the day the developer fixed it.

While estimating the age of a defect, consider the following points:

 The day of birth of a defect is the day it got assigned to and accepted by the development team.
 Issues that got dropped are out of scope.
 Age can be measured in either hours or days.
 The end time is the day the defect got verified and closed, not just the day the development team fixed it.
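
A small Python sketch of the computation; the dates are hypothetical:

from datetime import date

assigned_on = date(2024, 3, 1)   # day the defect was assigned and accepted
closed_on = date(2024, 3, 11)    # day the fix was verified and closed

defect_age = (closed_on - assigned_on).days
print(f"Defect age: {defect_age} days")  # Defect age: 10 days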

128. How do you perform automated testing in your environment?
Automation testing is the process of executing tests automatically, reducing human intervention to a great extent. We use test automation tools like QTP, Selenium, and WinRunner. These tools help speed up testing tasks by allowing you to create test scripts that verify the application automatically and generate test reports.
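
For instance, a minimal Selenium script in Python might look like the sketch below; the URL and the expected text are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Minimal automated check: open a page, verify an element, and report.
driver = webdriver.Chrome()            # assumes a local Chrome setup
try:
    driver.get("https://example.com")  # placeholder URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert "Example" in heading.text   # the verification step of the script
    print("PASS: expected heading found")
finally:
    driver.quit()                      # always release the browser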

129. Is there any difference between quality assurance, quality control, and software testing? If so, what is it?
Quality Assurance (QA) refers to the planned and systematic way of monitoring the quality of the process followed to produce a quality product. QA tracks the test reports and modifies the process to meet expectations.

Quality Control (QC) is concerned with the quality of the product. QC not only finds defects but also suggests improvements. Thus, a process that is set by QA is implemented by QC. QC is the responsibility of the testing team.

Software testing is the process of ensuring that the product developed by the developers meets the users’ requirements. The aim of testing is to find bugs and make sure they get fixed. Thus, it helps maintain the quality of the product delivered to the customer.

130. Tell me about some of the essential qualities an experienced QA or Test Lead must possess.
A QA or Test Lead should have the following qualities:

1. Well-versed in software testing processes
2. Ability to accelerate teamwork to increase productivity
3. Ability to improve coordination between QA and Dev engineers
4. Ability to provide ideas to refine QA processes
5. Skill to conduct RCA meetings and draw conclusions
6. Excellent written and interpersonal communication skills
7. Ability to learn fast and to groom the team members

140. What is a Silk Test and why should you use it?
Here are some facts about the Silk Test tool:

1. Silk Test is developed for performing regression and functionality testing of an application.
2. It is used for testing Windows-based, Java, web, and traditional client/server applications.
3. Silk Test helps in preparing the test plan and managing it, and it provides direct access to the database and validation of fields.

141. On the basis of which factors would you consider choosing automated testing over manual testing?
Choosing automated testing over manual testing depends on the following factors:

1. Tests require periodic execution.
2. Tests include repetitive steps.
3. Tests execute in a standard runtime environment.
4. Automation is expected to take less time.
5. Automation increases reusability.
6. Automation reports are available for every execution.
7. Small releases like service packs include minor bug fixes; in such cases, executing the regression tests is sufficient for validation.

142. Tell me the key elements to consider while writing a bug report.
An ideal bug report should consist of the following key points:

 A unique ID
 Defect description: a short description of the bug
 Steps to reproduce: the detailed test steps to emulate the issue, along with the test data and the time the error occurred
 Environment: any system settings that could help in reproducing the issue
 Module/section of the application in which the error occurred
 Severity
 Screenshots
 Responsible QA: the point of contact in case you want to follow up on the issue
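
These key points map naturally onto a structured record; a hypothetical Python sketch:

from dataclasses import dataclass, field

@dataclass
class BugReport:
    # Fields mirror the key points above; the values below are illustrative.
    bug_id: str
    description: str
    steps_to_reproduce: list = field(default_factory=list)
    environment: str = ""
    module: str = ""
    severity: str = "Medium"
    screenshots: list = field(default_factory=list)
    responsible_qa: str = ""

report = BugReport(
    bug_id="BUG-101",
    description="Login button unresponsive on second click",
    steps_to_reproduce=["Open the login page", "Click 'Login' twice"],
    environment="Chrome 120 / Windows 11",
    module="Authentication",
    severity="High",
    responsible_qa="qa.lead@example.com",
)
print(report.bug_id, report.severity)  # BUG-101 High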

143. Is there any difference between bug leakage and bug release?
Bug leakage: Bug leakage occurs when a bug that the testing team missed during testing is discovered by the end user/customer. It is a defect that exists in the application, goes undetected by the tester, and is eventually found by the customer/end user.

Bug release: A bug release is when a particular version of the software is released with a set of known bugs. These bugs are usually of low severity/priority. It is done when a software company can afford the existence of bugs in the released software but not the time/cost of fixing them in that particular version.

144. What is the difference between performance testing and monkey testing?
Performance testing checks the speed, scalability, and/or stability characteristics of a system. Performance is identified with achieving the response time, throughput, and resource-utilization levels that meet the performance objectives for a project or a product.

Monkey testing is a technique in software testing where the tester exercises the application by providing random inputs and checking its behavior (or trying to crash the application).
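
A toy monkey-testing harness in Python, feeding random strings to a hypothetical function and recording unexpected crashes:

import random
import string

def parse_quantity(text):
    """Hypothetical function under test."""
    return int(text.strip())

random.seed(0)  # reproducible run
crashes = 0
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 10)))
    try:
        parse_quantity(junk)
    except ValueError:
        pass            # expected rejection of malformed input
    except Exception as exc:
        crashes += 1    # anything else is a potential bug
        print(f"Crash on {junk!r}: {exc!r}")
print(f"{crashes} unexpected crashes out of 1000 random inputs")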
