Manual Testing Interview Questions
Ans: Negative testing ensures that your application can gracefully handle invalid input or
unexpected user behavior. For example, if a user tries to type a letter in a numeric field, the
correct behavior in this case would be to display the “Incorrect data type, please enter a
number” message.
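As a sketch, this is how such a negative check might look as an automated test, assuming a
hypothetical validate_age() helper that rejects non-numeric input:

```python
import pytest

def validate_age(value: str) -> int:
    # Hypothetical field validator: accepts only digits.
    if not value.isdigit():
        raise ValueError("Incorrect data type, please enter a number")
    return int(value)

def test_letter_in_numeric_field_is_rejected():
    # Negative test: invalid input must produce the expected error,
    # not a crash or silent acceptance.
    with pytest.raises(ValueError, match="please enter a number"):
        validate_age("abc")
```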
What is End to End Testing?
Ans: End to end testing (E2E testing) refers to a software testing method that involves
testing an application's workflow from beginning to end. This method aims to replicate real
user scenarios so that the system can be validated for integration and data integrity.
Ans: Integration test cases or scenarios focus mainly on the interfaces between the modules,
integrated links, and data transfer between the modules. Because the modules/components are
already unit tested, their functionality and the other testing aspects have already been
covered.
Ans: Testing performed with a plan and a documented set of test cases that outline the
methodology and test objectives. Test documentation can be developed from requirements,
design, equivalence partitioning, domain coverage, error guessing, etc.
4a) What is the difference between a test case and a test scenario?
Ans: A test case is a detailed set of conditions, inputs, and actions that are executed during
testing to verify specific functionality or behavior of the software application. A test scenario,
on the other hand, is a high-level description of a test case or a set of related test cases. It
outlines the objective of the test and provides context for understanding the test case.
Ans: Usability Testing, also known as User Experience (UX) Testing, is a testing method for
measuring how easy and user-friendly a software application is. A small set of target
end-users use the software application to expose usability defects. Usability testing mainly
focuses on the user's ease of using the application, the flexibility of the application in
handling controls, and the ability of the application to meet its objectives.
5a) What is a test case template, and what information does it include?
Ans: A test case template is a standardized format for documenting test cases. It typically
includes information such as:
Test case ID: A unique identifier for the test case.
Test case description: Description of the test case objective and purpose.
Preconditions: Conditions that must be met before the test case can be executed.
Test steps: The ordered actions to be performed during execution.
Test data: The input data required to execute the steps.
Expected result: The behavior the application should exhibit if it works correctly.
Actual result / Status: What actually happened, and whether the test passed, failed, or was
blocked.
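For illustration, a hypothetical login test case expressed in this template as a Python
dictionary (the field names and values are assumptions, not a standard):

```python
test_case = {
    "id": "TC-001",
    "description": "Verify login with valid credentials",
    "preconditions": "User account exists; login page is reachable",
    "steps": [
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    "test_data": {"username": "demo_user", "password": "s3cret"},  # hypothetical
    "expected_result": "User is redirected to the dashboard",
    "actual_result": None,   # filled in during execution
    "status": "Not Run",     # Pass / Fail / Blocked after execution
}
```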
5b) Explain the difference between smoke testing and sanity testing.
Ans: Smoke testing is a type of testing performed to ensure that the critical functionalities
of the application are working correctly before deeper testing is conducted. It is often
called a subset of acceptance testing. Sanity testing, on the other hand, is performed to
verify that the defects fixed in the previous cycle work as expected and that no new defects
have been introduced. It is often called a subset of regression testing.
Ans: Regression testing is the process of retesting the modified parts of the software to ensure
that the existing functionalities are not affected by the changes. It is important because
software is constantly evolving, and changes made in one part of the application may have
unintended consequences in other parts. Regression testing helps ensure that these changes do
not introduce new defects or break existing functionalities.
Ans: Test cases can be prioritized based on factors such as criticality of the functionality,
frequency of use, complexity of the test case, risk associated with the feature, and
dependencies between test cases. High-priority test cases that cover critical functionalities and
have a high risk of failure should be executed first.
6) Which model is called the iterative and incremental model?
Ans: The Agile model.
Ans:
b) Planning and estimating the highest-value deliverables for the next release.
c) Ensuring that a Scrum team lives by the practices, rules, and values of Scrum (achieving
specific milestones through accurate forecasting, providing deliverables in each iteration,
and tracking and removing any impediments).
Ans: A release note refers to the technical documentation produced and distributed alongside
the launch of a new software product or a product update (e.g., recent changes, feature
enhancements, or bug fixes). It very briefly describes a new product or succinctly details specific
changes included in a product update.
Ans: A Traceability Matrix is a document that maps and traces user requirements with test
cases. It ensures that all requirements have corresponding test cases, and vice versa, helping in
ensuring comprehensive test coverage and requirements traceability throughout the software
development lifecycle.
Requirement Identification: The first step involves identifying all the requirements of the
software, which may include functional requirements, non-functional requirements, and any
other specifications.
Test Case Creation: Once the requirements are identified, test cases are created to validate
each requirement. Test cases define the steps to be executed, expected results, and any test
data required.
Mapping: The Traceability Matrix then maps each requirement to one or more test cases.
Similarly, each test case is mapped back to the corresponding requirement(s).
Verification: The Traceability Matrix is used to verify that all requirements have corresponding
test cases, and all test cases are linked to the appropriate requirements. Any gaps or
inconsistencies can be identified and addressed.
Change Management: As the project progresses, changes may occur in requirements or test
cases. The Traceability Matrix helps in managing these changes by identifying the impact on
other elements. If a requirement changes, the corresponding test cases must be updated
accordingly, and vice versa.
Benefits of a Traceability Matrix include:
Providing transparency and visibility into the relationship between requirements and test
cases.
Assisting in risk management by identifying areas of the system that are not adequately
tested.
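A toy Python sketch of the mapping and verification steps described above, with made-up
requirement and test case IDs:

```python
# Requirement -> test cases mapping (IDs are illustrative).
traceability = {
    "REQ-1": ["TC-101", "TC-102"],
    "REQ-2": ["TC-103"],
    "REQ-3": [],  # a coverage gap: no test case yet
}

# Verification: flag requirements without test coverage.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements missing test cases:", uncovered)  # ['REQ-3']

# Reverse mapping (test case -> requirements) supports impact
# analysis when a test case or requirement changes.
reverse = {}
for req, cases in traceability.items():
    for tc in cases:
        reverse.setdefault(tc, []).append(req)
print(reverse)  # {'TC-101': ['REQ-1'], 'TC-102': ['REQ-1'], 'TC-103': ['REQ-2']}
```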
Ans:
1. Logical Thinking
2. Discipline
3. Good Observer
4. Imagination
5. Prioritize Tests
6. Curiosity
10) Difference between retesting and regression testing?
Ans: Regression testing ensures that changes have not affected the unchanged parts of the
application. Retesting is done to make sure that the test cases which failed in the last
execution pass after the defects are fixed. Regression testing is not carried out for specific
defect fixes; retesting is carried out specifically for defect fixes.
10a) What is the difference between ad-hoc testing and exploratory testing?
Ans: Ad-hoc testing is informal testing performed without any predefined test cases or test
plans. Testers randomly explore the application, executing tests based on their intuition
and experience. Exploratory testing, on the other hand, is a structured approach to testing
where testers explore the application systematically while simultaneously designing and
executing test cases. It combines manual testing with test case design and execution in real-
time.
Ans: A/B testing, also known as split testing, is a method used in marketing, product
development, and web optimization to compare two or more versions of a webpage, email,
advertisement, or other digital asset to determine which one performs better in terms of a
predefined goal or metric.
Hypothesis: The testing process starts with a hypothesis or a question about how a change to a
particular element (such as a headline, button color, or call-to-action) could impact user
behavior or outcomes (such as conversion rate, click-through rate, or engagement).
Variations: Two or more versions of the asset (such as Version A and Version B) are created,
with each version containing a specific variation of the element being tested. These variations
can be minor or significant changes, depending on the hypothesis being tested.
Randomization: Visitors or users are randomly divided into groups, with each group exposed to
only one version of the asset. This randomization helps ensure that the results of the test are
not biased by factors such as user demographics or behavior.
Measurement: Key metrics or KPIs (Key Performance Indicators) are defined to measure the
performance of each version. These metrics could include conversion rate, click-through rate,
bounce rate, time on page, or any other relevant metric based on the goal of the test.
Analysis: Once the test has run for a sufficient period (typically until statistical significance is
reached), the results are analyzed to determine which version performed better in terms of the
defined metrics. Statistical analysis tools are often used to determine whether the differences
observed between the variations are statistically significant.
Conclusion: Based on the analysis, conclusions are drawn about which variation is the winner,
and whether the hypothesis was supported or not. The winning variation is then implemented
as the default version, and further iterations of testing may be conducted to continue
optimizing the asset.
A/B testing is widely used in digital marketing, website optimization, and product development
to make data-driven decisions and continuously improve the effectiveness of digital assets and
user experiences. It allows businesses to test hypotheses, iterate on designs, and optimize
conversion rates based on empirical evidence rather than relying on intuition or guesswork.
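As a sketch of the analysis step, a two-proportion z-test on made-up conversion counts using
the statsmodels library (the figures and the p < 0.05 threshold are illustrative):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors for versions A and B.
conversions = [120, 150]  # A, B
visitors = [2400, 2380]   # A, B

# Two-sided test of H0: both versions convert at the same rate.
z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Difference is statistically significant")
else:
    print("No significant difference detected")
```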
Ans: Performance Testing is the process of analyzing the quality and capability of a product. It is
a testing method performed to determine the system performance in terms of speed, reliability
and stability under varying workload. Performance testing is also known as Perf Testing.
Types are
a) Stress Testing : This test pushes an application beyond normal load conditions to determine
which components fail first. Stress testing attempts to find the breaking point of the application
and is used to evaluate the robustness of the application’s data processing capabilities and
response to high volumes of traffic.
b) Spike Testing : This testing evaluates the ability of the application to handle sudden volume
increases. It is done by suddenly increasing the load generated by a very large number of users.
The goal is to determine whether performance will suffer, the system will fail, or it will be able
to handle dramatic changes in load.
c) Load Testing : The purpose of load testing is to evaluate the application’s performance under
increasingly high numbers of users. Load, or increasing numbers of users are applied to the
application under test and the results are measured to validate the requirements are met. This
load can be the expected concurrent number of users on the application performing a specific
number of transactions within the set duration.
d) Endurance testing : Endurance testing evaluates the performance of the system under load
over time. It is executed by applying varying loads to the application under test for an extended
period of time to validate that the performance requirements related to production loads and
durations of those loads are met. Endurance testing can be considered a component of load
testing and is also known as soak testing.
e) Volume testing: Also known as flood testing, this testing is used to evaluate the application’s
ability to handle large volumes of data.
f) Scalability Testing: This testing is used to determine your application’s ability to handle
increasing amounts of load and processing. It involves measuring attributes including response
time, throughput, hits and requests per second, transaction processing speed, CPU usage,
Network usage and more. Results of this testing can be used in the planning and design phases
of development which reduces costs and mitigates the potential for performance issues.
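A minimal load-test sketch using the Locust framework; the endpoints and task weights are
placeholders. Ramping the simulated user count gradually gives a load test, while pushing it
far beyond expected traffic turns the same script into a stress or spike test:

```python
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Simulated users wait 1-5 seconds between requests,
    # approximating real think time.
    wait_time = between(1, 5)

    @task(3)
    def browse_home(self):
        self.client.get("/")  # placeholder endpoint

    @task(1)
    def view_product(self):
        self.client.get("/products/1")  # placeholder endpoint

# Run with, e.g.: locust -f perf_test.py --host https://staging.example.com
```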
Ans: Severity means how severely a defect affects the functionality. Priority means how
quickly the defect has to be fixed. Severity is related to the quality standard; priority is
related to the scheduling of the fix.
Answer: A test execution report is a document that provides an overview of the test execution
activities performed during a testing cycle. It includes information such as the number of test
cases executed, passed, failed, and blocked, along with details of defects identified and their
status. The test execution report serves as a summary of the testing efforts and helps
stakeholders assess the quality and readiness of the software for release.
Ans: Test data is the input data used in test cases to execute and verify the behavior of the
software application. It includes both valid and invalid data that represent different scenarios
and conditions. Test data is important in testing because it helps ensure thorough coverage of
the application's functionality, identify defects, and validate the behavior of the system under
various conditions.
12c) How do you prioritize test cases when there is limited time for testing?
Answer: When prioritizing test cases with limited time for testing, I would prioritize based on:
Criticality: Test cases covering critical functionalities or high-risk areas would be given top
priority.
Impact: Test cases that have the potential to cause significant impact or business risk if they fail
would be prioritized.
Frequency of use: Test cases covering functionalities that are used frequently by end-users
would be prioritized.
Dependencies: Test cases that have dependencies on other test cases or functionalities would
be prioritized accordingly.
Answer: A test environment is a setup that mimics the production environment and is used for
testing the software application. It includes hardware, software, network configurations, and
other resources required for testing. The test environment is important because it provides a
controlled environment for testing, isolating testing activities from the production
environment. It helps ensure that testing does not impact the stability or performance of the
production system and allows testers to validate the software under realistic conditions.
Ans: The Big Bang approach is an integration testing approach in which all the modules are
integrated together at once and then tested. It is suitable for smaller systems but risky for
large systems, and it requires a good amount of documentation for testing.
Ans:
1. Testing shows the presence of defects
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing is context dependent
7. Absence-of-errors fallacy
Ans: Repeating the same test cases again and again will not find new bugs, so it is necessary
to review the test cases regularly and to add or update them to find new bugs.
Answer:
Repetitive tasks: Manual testing can be time-consuming and repetitive, especially for regression
testing.
Human error: Testers may overlook defects or make mistakes during test execution.
Limited coverage: Manual testing may not cover all possible scenarios, leading to gaps in test
coverage.
Resource constraints: Manual testing requires skilled testers and may be resource-intensive in
terms of time and effort.
Answer: The defect life cycle consists of several stages, including identification, logging,
prioritization, assignment, fixing, retesting, closure, and verification. When a defect is identified
during testing, it is logged in a defect tracking tool with detailed information about the issue,
including steps to reproduce, severity, and priority. The defect is then prioritized based on its
impact on the application and assigned to the appropriate developer for resolution. After the
defect is fixed, it is retested to verify that the issue has been resolved. Once the defect is
verified, it is closed, and the status is updated in the defect tracking tool.
15c) What is the difference between functional testing and non-functional testing?
Answer: Functional testing verifies the behavior of the software application against the
functional requirements. It ensures that the software performs the functions it is supposed to
perform. Non-functional testing, on the other hand, verifies the attributes of the software such
as performance, reliability, usability, security, and scalability.
Ans: Smoke testing is a type of testing done to assure that the critical functionalities of
the program are working fine. Sanity testing is done to check that the bugs have been fixed
after the build. Smoke testing is also called a subset of acceptance testing; sanity testing
is also called a subset of regression testing.
16a) What are the different techniques for test case design?
Answer: Test case design techniques include equivalence partitioning, boundary value analysis,
decision table testing, state transition testing, pairwise testing, and use case testing. Each
technique has its own approach to generating test cases based on specific criteria.
Answer: Equivalence partitioning is a test case design technique that divides the input domain
into classes of data from which test cases can be derived. It helps reduce the number of test
cases while still providing good test coverage. Boundary value analysis, on the other hand, is a
test case design technique that focuses on testing the boundaries of input ranges. It helps
identify errors at the boundaries of input domains where the behavior of the software may
change.
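A short sketch applying both techniques to a hypothetical rule that accepts ages 18 to 60
inclusive:

```python
import pytest

def is_valid_age(age: int) -> bool:
    # Hypothetical rule under test: valid ages are 18..60 inclusive.
    return 18 <= age <= 60

# Equivalence partitioning: one representative value per class.
@pytest.mark.parametrize("age,expected", [
    (10, False),  # invalid partition: below the range
    (35, True),   # valid partition
    (75, False),  # invalid partition: above the range
])
def test_equivalence_partitions(age, expected):
    assert is_valid_age(age) == expected

# Boundary value analysis: values at and adjacent to each boundary.
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True), (19, True),  # lower boundary
    (59, True), (60, True), (61, False),  # upper boundary
])
def test_boundaries(age, expected):
    assert is_valid_age(age) == expected
```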
Ans:
1. Lack of Communication.
2. Missing Documentation.
3. Diversity in Testing Environments.
4. Inadequate Testing.
Ans: Waterfall is a linear sequential life cycle model, whereas Agile is a continuous
iteration of development and testing in the software development process.
The Agile methodology is known for its flexibility, whereas Waterfall is a structured software
development methodology.
Agile follows an incremental approach, whereas Waterfall is a sequential design process.
Agile allows changes in project requirements during development, whereas Waterfall has no
scope for changing the requirements once project development starts.
Resource allocation: Roles, responsibilities, and staffing requirements for the testing team.
Risks and mitigation strategies: Potential risks to testing and actions to mitigate them.
Ans:
Stage 1: Project Planning
Stage 2: Requirements Gathering and Analysis
Stage 3: Design
Stage 4: Coding / Development
Stage 5: Testing
Stage 6: Deployment
Stage 7: Maintenance
Ans:
1. Requirement Analysis
2. Test Planning
3. Test Case Development
4. Test Environment Setup
5. Test Execution
6. Test Closure
Ans:
Load Testing: Load testing is the process that simulates actual user load on an application or
website. It checks how the application behaves during normal and high loads. This type of
testing is applied when a development project nears completion.
Stress Testing: Stress testing is a type of testing that determines the stability and
robustness of the system. It is a non-functional testing technique. This technique uses an
auto-generated simulation model to check hypothetical scenarios.
Ans:
A defect is a deviation from expected software behavior. In other words, if a website or app is
functioning differently from what users would expect from it, that particular variation would be
considered a defect.
1. Arithmetic Defects.
2. Logical Defects.
3. Syntax Defects.
4. Multithreading Defects.
5. Interface Defects.
6. Performance Defects.
Ans: Soak testing, also known as endurance testing or longevity testing, is a type of
performance testing that evaluates the system's behavior under sustained load over an
extended period. The main objective of soak testing is to identify performance issues such as
memory leaks, resource exhaustion, degradation of system performance, and other issues that
may occur after prolonged usage.
During soak testing, the system is subjected to a continuous load for an extended duration,
typically ranging from several hours to several days or even weeks, depending on the
requirements and objectives of the test. The load applied may vary depending on the specific
use case or expected workload patterns.
Ans: Integration testing is a type of testing where individual software modules are combined
and tested as a group to ensure that they work together as expected. Integration testing can be
categorized into two main types: incremental integration testing and non-incremental
integration testing.
In incremental integration testing, new modules are integrated with existing ones one at a time,
and each integration is followed by testing to ensure that the newly integrated components
work correctly with the existing ones.
The main advantage of incremental integration testing is that it allows defects to be identified
and fixed early in the development process, as integration occurs incrementally. It also helps
manage the complexity of integration by breaking it down into smaller, more manageable
steps.
Non-incremental integration testing, also known as big bang integration testing, is an approach
where all modules or components are integrated and tested together as a complete system in a
single step.
In non-incremental integration testing, all modules are combined and tested simultaneously,
without any prior incremental integration.
The main advantage of non-incremental integration testing is that it allows for comprehensive
testing of the entire system in a relatively short period. However, it can be challenging to
identify and isolate defects when multiple modules are integrated simultaneously, making it
difficult to pinpoint the root cause of issues.
Ans: Web security testing is the process of identifying vulnerabilities and weaknesses in web
applications and systems to ensure that they are adequately protected against potential
security threats and attacks. It involves evaluating the security controls, configurations, and
defenses implemented within web applications and their underlying infrastructure to mitigate
risks and protect sensitive data from unauthorized access, manipulation, or disclosure.
Assessing Security Controls: It involves assessing the effectiveness of security controls, such as
authentication, authorization, encryption, input validation, session management, and access
controls, in mitigating potential security risks and threats.
Testing for Common Vulnerabilities: Web security testing typically focuses on testing for
common security vulnerabilities and weaknesses, such as SQL injection, cross-site scripting
(XSS), cross-site request forgery (CSRF), insecure direct object references, security
misconfigurations, and sensitive data exposure.
Analyzing Attack Surfaces: It involves analyzing the attack surfaces and potential entry points
that attackers could exploit to gain unauthorized access to web applications or sensitive data,
including client-side and server-side vulnerabilities.
Evaluating Security Architecture: Web security testing includes evaluating the overall security
architecture and design of web applications, including network architecture, infrastructure
configuration, secure coding practices, and compliance with security standards and best
practices.
Testing for Compliance: It may involve testing web applications for compliance with security
standards, regulations, and industry best practices, such as OWASP Top 10, PCI DSS, GDPR,
HIPAA, and ISO 27001, to ensure that they meet regulatory and compliance requirements.
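A very small illustration of one such check: probing a hypothetical search endpoint for
reflected XSS by asserting that a script payload is not echoed back unescaped. Real security
testing relies on dedicated tools such as OWASP ZAP; this is only a sketch:

```python
import requests

PAYLOAD = "<script>alert(1)</script>"

def test_search_does_not_reflect_script_payload():
    # Placeholder endpoint; in practice this targets a test environment.
    resp = requests.get(
        "https://app.example.com/search",
        params={"q": PAYLOAD},
        timeout=10,
    )
    # If the raw payload appears unescaped in the response body,
    # the page is likely vulnerable to reflected XSS.
    assert PAYLOAD not in resp.text
```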
Ans: The defect or bug life cycle, also known as the bug workflow or defect workflow, describes
the various stages that a defect or bug goes through from its identification to its resolution and
closure. The defect life cycle typically consists of several key stages, each with its associated
activities and statuses. While specific terminology and workflows may vary between
organizations or projects, the following are common stages in the defect life cycle:
New: The defect is identified and reported by a tester, developer, or end-user. It is assigned a
unique identifier and enters the system for tracking and management.
Open: The defect is reviewed and verified by a designated person, such as a QA lead or product
owner. If the defect is valid and reproducible, it is marked as "open" and assigned to the
appropriate developer or team for further investigation and resolution.
In Progress: The assigned developer or team begins working on fixing the defect. They analyze
the root cause, develop a solution, and implement the necessary changes to address the issue.
Fixed: Once the developer has implemented the fix, they mark the defect as "fixed." The fix
undergoes internal testing or validation to ensure that it resolves the issue effectively without
introducing new defects or regressions.
Ready for Testing: After the fix has been verified internally, the defect is marked as "ready for
testing." It is assigned back to the testing team or assigned tester for retesting to confirm that
the issue has been resolved satisfactorily.
Reopen: If the tester discovers that the defect still persists or if new issues arise as a result of
the fix, they reopen the defect, providing details on the observed behavior and any additional
information that may help the developer understand the problem.
Retest: The developer addresses the issues reported during retesting and implements any
necessary corrective actions. The defect is then marked as "retested" and undergoes another
round of validation to ensure that the fix is effective.
Closed: Once the defect has been successfully retested and verified, it is marked as "closed."
The defect is considered resolved, and no further action is required. A closing note or resolution
details may be provided to document the outcome of the defect resolution process.
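The workflow above can be modeled as a small state machine. A toy sketch, where the allowed
transitions are an assumption of a typical tracker rather than a universal rule:

```python
# Allowed status transitions (illustrative; trackers vary).
TRANSITIONS = {
    "New": {"Open"},
    "Open": {"In Progress"},
    "In Progress": {"Fixed"},
    "Fixed": {"Ready for Testing"},
    "Ready for Testing": {"Closed", "Reopen"},
    "Reopen": {"In Progress"},
    "Closed": set(),  # terminal state
}

def move(current: str, new: str) -> str:
    if new not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new}")
    return new

status = "New"
for step in ["Open", "In Progress", "Fixed", "Ready for Testing", "Closed"]:
    status = move(status, step)
print(status)  # Closed
```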
Ans: Compatibility testing is a type of software testing that evaluates the compatibility of a
software application or system across different environments, platforms, devices, browsers,
and configurations. The goal of compatibility testing is to ensure that the software functions
correctly and delivers a consistent user experience across various combinations of hardware,
software, and network environments.
Ans: Recovery testing is a type of software testing that evaluates how well
system can recover from crashes, hardware failures, software failures, or other unexpected
events. The primary goal of recovery testing is to verify that the system can resume normal
operation and restore data integrity after a failure or disruption.
Ans: Bug leakage refers to defects that are discovered by users or stakeholders after the
software release due to shortcomings in the testing process, while bug release refers to the
intentional decision to include known defects in a software release due to various factors.
Ans: API testing can be performed manually or automated using specialized API testing tools
and frameworks, such as Postman, SoapUI, REST Assured, Karate, or pytest. Automated API
testing helps streamline testing efforts, improve test coverage, and facilitate continuous
integration and delivery (CI/CD) pipelines in agile and DevOps environments. Overall, API
testing is essential for ensuring the reliability, performance, and security of APIs and the
applications that rely on them.
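A minimal API check using Python's requests library against a placeholder endpoint (the URL
and response fields are assumptions for illustration):

```python
import requests

def test_get_user_returns_expected_shape():
    # Placeholder endpoint for illustration.
    resp = requests.get("https://api.example.com/users/1", timeout=10)

    # Contract checks: status code, content type, and body fields.
    assert resp.status_code == 200
    assert resp.headers.get("Content-Type", "").startswith("application/json")

    body = resp.json()
    assert body.get("id") == 1
    assert "email" in body  # assumed field, for illustration
```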
Ans: 1) Black Box Testing 2) White Box Testing 3) Unit Testing 4) Module Testing
5) Integration Testing 6) System Testing 7) UAT (User Acceptance Testing).
Ans: Exploratory testing is a dynamic and flexible approach to software testing that emphasizes
learning, discovery, and experimentation. Unlike traditional scripted testing, where test cases
are predefined and executed based on detailed test plans, exploratory testing involves
simultaneous test design, execution, and evaluation in an exploratory and improvisational
manner.
Ans: Ad hoc testing is an informal and unstructured approach to software testing that focuses
on exploring the software application or system without predefined test plans, scripts, or
documentation. Unlike formal testing methodologies, where test cases are meticulously
designed and executed based on predefined requirements and specifications, ad hoc testing
involves spontaneous and improvisational testing activities driven by testers' intuition,
experience, and domain knowledge.
Ans: In Agile methodologies, there are several key roles or actors involved in the software
development process. These roles work collaboratively to deliver high-quality software
products in iterative and incremental cycles. The specific roles may vary depending on the Agile
framework or methodology being used, such as Scrum, Kanban, Extreme Programming (XP), or
Lean Software Development. However, some common roles in Agile include:
Product Owner:
The Product Owner represents the stakeholders and is responsible for maximizing the value of
the product by prioritizing and managing the product backlog. They define the features,
functionalities, and requirements of the product, communicate the vision and goals to the
development team, and make decisions about what gets built and in what order.
Scrum Master:
The Scrum Master is a servant-leader and facilitator responsible for ensuring that the Scrum
framework is understood and followed by the Scrum Team. They remove impediments,
facilitate meetings and ceremonies (such as Sprint Planning, Daily Standups, Sprint Review, and
Sprint Retrospective), coach the team on Agile practices, and foster a culture of continuous
improvement.
Development Team:
The Development Team consists of cross-functional individuals who are responsible for
delivering potentially shippable increments of product functionality at the end of each Sprint.
The team members collaborate closely to design, develop, test, and deliver software
increments, and they are self-organizing and empowered to make decisions about how to
accomplish their work.
Stakeholders:
Stakeholders are individuals or groups with an interest or stake in the success of the project or
product. They may include customers, end users, sponsors, executives, managers, and other
relevant parties. Stakeholders provide feedback, prioritize requirements, and participate in
reviews and demonstrations to ensure that the product meets their needs and expectations.
Development Team Members:
Development Team members are individuals with specific skills and expertise required to
deliver the product increments. They may include software developers, testers, designers,
analysts, architects, and other technical specialists. Development Team members collaborate
closely to implement the product backlog items and deliver value to the stakeholders.
QA Team:
The QA Team is responsible for ensuring the quality of the software product by defining and
implementing testing strategies, creating test plans and test cases, executing tests, identifying
defects, and verifying fixes. QA Team members collaborate with the Development Team to
ensure that quality is built into the product from the outset.
Customers / End Users:
Customers or end users are the ultimate beneficiaries of the software product. They provide
feedback, validate requirements, and use the product to achieve their goals and objectives.
Customers or end users play a crucial role in guiding the development process and ensuring
that the product meets their needs and expectations.
Ans: In Agile methodologies, various types of meetings, also known as ceremonies, are
conducted at different stages of the development process to facilitate collaboration,
communication, and alignment among team members and stakeholders. These meetings
provide opportunities to plan, review progress, make decisions, and address issues in a
structured and iterative manner. Some common types of meetings in Agile include:
Sprint Planning is a meeting held at the beginning of each Sprint to plan the work to be done
during the Sprint. The Product Owner presents the items from the Product Backlog, and the
Development Team collaborates to select the items they can commit to completing within the
Sprint. The team discusses the requirements, estimates the effort required, and defines the
Sprint Goal and Sprint Backlog.
The Daily Standup, also known as the Daily Scrum, is a short, time-boxed meeting held every
day during the Sprint to synchronize and coordinate the work of the Development Team. Team
members share updates on what they worked on yesterday, what they plan to work on today,
and any impediments or blockers they are facing. The Daily Standup helps identify issues early,
maintain transparency, and foster collaboration.
The Sprint Review is held at the end of each Sprint to review and demonstrate the completed
work to stakeholders. The Development Team showcases the increment of functionality
delivered during the Sprint, and stakeholders provide feedback, ask questions, and discuss
potential changes or improvements. The Sprint Review helps validate progress, gather
feedback, and inform planning for the next Sprint.
The Sprint Retrospective is held at the end of each Sprint, after the Sprint Review, for the
team to reflect on how the Sprint went with regard to people, process, and tools, discuss what
went well and what could be improved, and agree on concrete improvements for the next Sprint.
Backlog Refinement, also known as Backlog Grooming or Story Refinement, is a meeting held
regularly to review and refine items in the Product Backlog. The Product Owner and
Development Team collaborate to clarify requirements, estimate effort, prioritize items, and
ensure that the backlog is ready for Sprint Planning. Backlog Refinement helps maintain a
healthy and well-prepared backlog.
Release Planning is a meeting held at the beginning of a new release or product increment to
plan and prioritize the upcoming work. The Product Owner presents the release goals and
priorities, and the team discusses the scope, timeline, dependencies, and risks. Release
Planning helps align stakeholders, set expectations, and define the roadmap for future
iterations.
In addition to the formal ceremonies, Agile teams may hold impromptu or ad hoc meetings as
needed to address specific issues, resolve conflicts, make decisions, or collaborate on urgent
tasks. These meetings are informal and may occur spontaneously based on the team's needs
and priorities.
Ans: In Agile methodologies, a backlog refers to a prioritized list of work items, requirements,
features, or user stories that need to be completed to deliver a product increment. The backlog
serves as a dynamic repository of all the work that needs to be done, organized based on its
relative importance, value, and urgency.
Product Backlog: The Product Backlog is a comprehensive list of all the work items required to
build and enhance the product. It represents the entire scope of work from which the
Development Team selects items to work on during each Sprint. The Product Backlog is owned
and managed by the Product Owner, who is responsible for prioritizing items based on business
value, stakeholder needs, and market demands.
User Stories or Items: The items in the backlog are typically expressed as user stories, features,
or requirements that capture specific functionalities, behaviors, or outcomes desired by users
or stakeholders. Each item in the backlog represents a small, independent unit of work that can
be completed within a single Sprint.
Prioritization: The backlog is prioritized based on the value it delivers to the product and its
stakeholders. The Product Owner collaborates with stakeholders to prioritize backlog items
based on factors such as business value, customer feedback, market trends, technical
dependencies, and risk mitigation.
Estimation: Backlog items may be estimated in terms of effort, complexity, or size to help the
team understand the relative size and effort required to complete each item. Estimation
techniques such as story points, ideal days, or t-shirt sizing may be used to provide a rough
estimate of the work involved.
Evolution: The backlog evolves over time as new requirements emerge, priorities change, and
stakeholders provide feedback. The Product Owner continuously updates and refines the
backlog based on changing business needs, market conditions, and customer feedback,
ensuring that it remains relevant and aligned with the product vision and goals.
Ans: a) Unit Testing b) Module Testing c) Integration Testing d) System Testing
e) Acceptance Testing
Ans: a) Understanding the requirements b) Writing the test scenarios and test cases
c) Conducting the tests d) Logging good bug reports e) Reporting the results
Ans: A test case is a document which consists of a set of conditions or actions which are
performed on the software application in order to verify the expected functionality of a
feature.
Ans: A test harness, also known as a test framework or testing framework, is a software tool or
infrastructure that provides a structured and automated environment for testing software
components, modules, or systems. Test harnesses facilitate the creation, execution, and
management of test cases and help streamline the testing process by automating repetitive
tasks, providing reporting capabilities, and supporting various testing activities.
Test Case Management: A test harness allows testers to define, organize, and manage test
cases, test suites, and test scenarios. Test cases specify the input data, expected results, and
execution conditions for testing specific functionalities or features of the software under test
(SUT).
Test Execution: A test harness provides mechanisms for executing test cases and test suites
against the SUT. It automates the process of running tests, capturing test results, and verifying
the behavior of the SUT against expected outcomes.
Test Data Management: Test harnesses enable testers to create, manage, and manipulate test
data required for testing the SUT. They support techniques such as data-driven testing, where
test cases are executed with different sets of input data to validate various scenarios and edge
cases.
Setup and Teardown: Test harnesses support setup and teardown procedures to prepare the
testing environment before executing test cases and clean up resources after testing is
complete. Setup procedures initialize the test environment, configure test dependencies, and
prepare the SUT for testing, while teardown procedures clean up resources, reset state, and
restore the environment to its original state.
Test Automation: Test harnesses facilitate test automation by providing frameworks, libraries,
and tools for writing, organizing, and executing automated test scripts. They support various
programming languages, scripting languages, and automation tools, allowing testers to create
reusable and maintainable test automation suites.
Reporting and Analysis: A test harness generates comprehensive test reports and metrics to
assess the quality and coverage of the testing effort. Test reports provide insights into test
execution status, test case results, defects found, and test coverage metrics, enabling
stakeholders to make informed decisions about the readiness of the SUT for release.
Integration: Test harnesses integrate with other development and testing tools, such as version
control systems, continuous integration (CI) servers, defect tracking systems, and test
management platforms. Integration with these tools streamlines the testing workflow,
enhances collaboration, and facilitates the adoption of DevOps practices.
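A stripped-down illustration of several of these responsibilities (setup/teardown,
data-driven execution, and pass/fail reporting) using Python's built-in unittest as the
harness:

```python
import unittest

class CalculatorTests(unittest.TestCase):
    def setUp(self):
        # Setup: prepare test data/environment before each test.
        self.cases = [(2, 3, 5), (0, 0, 0), (-1, 1, 0)]

    def tearDown(self):
        # Teardown: release resources after each test.
        self.cases = None

    def test_addition(self):
        # Data-driven execution: one logical test over several input sets.
        for a, b, expected in self.cases:
            with self.subTest(a=a, b=b):
                self.assertEqual(a + b, expected)

if __name__ == "__main__":
    # The runner executes the suite and reports pass/fail results.
    unittest.main(verbosity=2)
```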
Ans: Black box testing, also known as specification-based testing, analyses the functionality
of a software application without knowledge of the internal structure/design of the item.
Common techniques include:
a) Equivalence partitioning
b) Boundary value analysis
c) Decision table testing
d) State transition testing
Ans: White box testing, also known as structure-based testing, requires a profound knowledge
of the code, as it involves testing the structural parts of the application. Coverage
techniques include:
a)Statement Coverage
b) Decision Coverage
c) Condition Coverage
d) Multiple condition coverage
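A toy illustration of how these coverage criteria differ (the functions are contrived for the
example):

```python
def classify(x: int) -> str:
    result = "small"
    if x > 10:  # decision with a single condition
        result = "large"
    return result

# Statement coverage: classify(20) alone executes every statement,
# because the if-body runs; the False branch is never exercised.
assert classify(20) == "large"

# Decision (branch) coverage additionally needs the False outcome:
assert classify(5) == "small"

def grant_access(is_admin: bool, has_token: bool) -> bool:
    # Compound decision: condition coverage needs each individual
    # condition to be both True and False across the tests; multiple
    # condition coverage needs all four True/False combinations.
    return is_admin and has_token

assert grant_access(True, True) is True
assert grant_access(False, True) is False
```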
Ans: The SDLC and STLC are complementary processes that work together to deliver high-
quality software products. While the SDLC encompasses the entire software development
process, from planning to deployment, the STLC focuses specifically on testing activities within
the development lifecycle. Both processes are essential for ensuring that software products
meet stakeholder requirements, quality standards, and business objectives.
Ans: Test metrics are quantitative measures used to assess and evaluate various aspects of the
testing process and the quality of the software product. These metrics provide valuable insights
into the effectiveness, efficiency, and progress of testing activities, helping stakeholders make
informed decisions and improvements. Here are some common types of test metrics:
Coverage Metrics:
Code Coverage: Measures the percentage of code lines, branches, or paths exercised by the
executed tests. It includes metrics such as statement coverage, branch coverage, and path
coverage.
Functional Coverage: Measures the coverage of functional aspects or features of the software
application tested against the total number of functional areas. It helps assess the
thoroughness of testing in addressing functional requirements.
Defect Metrics:
Defect Density: Calculates the number of defects discovered per unit of size or effort, such as
defects per line of code, defects per test case, or defects per function point.
Defect Age: Measures the time elapsed between defect discovery and defect closure. It helps
track the aging of defects and identify bottlenecks in the defect resolution process.
Defect Leakage: Measures the percentage of defects discovered by users or customers after the
software release. It indicates the effectiveness of testing in identifying and preventing defects.
Execution Metrics:
Test Execution Progress: Tracks the progress of test execution over time, including the number
of test cases executed, passed, failed, and pending. It helps monitor testing progress and
identify potential delays or issues.
Test Execution Time: Measures the time taken to execute test cases or test suites. It helps
assess the efficiency of test execution and identify opportunities for optimization.
Test Cycle Time: Measures the elapsed time between test case creation and test case
execution. It helps assess the efficiency of the testing process and identify opportunities for
reducing cycle time.
Quality Metrics:
Defect Removal Efficiency (DRE): Calculates the percentage of defects removed during testing
compared to the total number of defects. It measures the effectiveness of testing in identifying
and removing defects before software release.
Mean Time to Failure (MTTF): Measures the average time between the start of testing and the
occurrence of the first failure. It helps assess the reliability and stability of the software under
test.
Mean Time Between Failures (MTBF): Measures the average time between consecutive failures
during testing. It helps assess the reliability and robustness of the software over time.
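A worked example of three of the formulas above, on made-up figures:

```python
# Illustrative figures for one release.
defects_in_testing = 45
defects_after_release = 5
lines_of_code = 10_000

# Defect density: defects per KLOC (thousand lines of code).
defect_density = defects_in_testing / (lines_of_code / 1000)
print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 4.5

# Defect removal efficiency: share of all defects caught before release.
dre = defects_in_testing / (defects_in_testing + defects_after_release) * 100
print(f"DRE: {dre:.0f}%")  # 90%

# Defect leakage: share of defects that escaped to production.
leakage = defects_after_release / (defects_in_testing + defects_after_release) * 100
print(f"Defect leakage: {leakage:.0f}%")  # 10%
```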
Resource Metrics:
Test Effort: Measures the amount of time, effort, and resources allocated to testing activities. It
helps assess the cost-effectiveness of testing and resource utilization.
Test Environment Availability: Measures the availability and accessibility of test environments,
including hardware, software, tools, and data. It helps ensure that testing activities can be
conducted effectively and efficiently.
Customer Satisfaction Metrics:
User Satisfaction: Collects feedback from users or customers about their satisfaction with the
software product's quality, usability, and performance. It helps identify areas for improvement
and prioritize enhancement efforts.
Net Promoter Score (NPS): Measures the likelihood of users or customers to recommend the
software product to others. It helps gauge overall satisfaction and loyalty to the product.