
MDM QUESTION BANK

CH1. INTRODUCTION TO SOFTWARE ENGINEERING


Q.1) What is software engineering? Give its importance.
Ans: Software Engineering is the branch of computer science that deals with the systematic design, development, testing, and
maintenance of software applications by applying engineering principles. It involves using structured processes and
methodologies to ensure the production of high-quality software that meets user needs within cost and time constraints.
Importance of Software Engineering:
1. Systematic Development: Software engineering uses a systematic approach, ensuring that the development process is well-
organized, consistent, and efficient, which helps reduce errors and improve quality.
2. Quality Assurance: It ensures that software is reliable, secure, and performs well. Techniques like testing, code reviews, and
quality standards help in building robust software.
3. Scalability and Maintenance: Software engineering practices facilitate easy updates and maintenance of software, which is
crucial for keeping up with user needs and technology changes.
4. Cost and Time Efficiency: By using proper planning and structured methodologies, software engineering reduces development time and lowers the costs associated with fixing issues post-deployment.
5. Meeting User Requirements: It ensures that the end product meets the specific needs of users and stakeholders by employing thorough requirement analysis and feedback incorporation.

Q.2) What is software process and software project?


Ans: Software Process: A software process is a structured set of activities and tasks involved in the development and maintenance
of software. It acts as a roadmap for software development, outlining the steps to be followed from the initial concept to the
deployment and maintenance of the software. The main phases typically include:
1. Requirements Analysis: Understanding and documenting what the software needs to do.
2. Design: Planning the software structure and architecture.
3. Implementation (Coding): Writing the actual code to build the software.
4. Testing: Checking for errors and verifying that the software meets the requirements.
5. Deployment: Releasing the software to users.
6. Maintenance: Updating and improving the software as needed after release.
Common software processes include models like the Waterfall model, Agile methodology, and the Spiral model.
Software Project: A software project is a specific, goal-oriented effort to develop a software product. It encompasses all activities
from the initial planning to the final delivery and maintenance of the software. A software project involves:
1. Project Planning: Defining the scope, resources, budget, and timeline.
2. Execution: Implementing the project plan through team coordination and task management.
3. Monitoring and Control: Tracking progress, ensuring quality, and making necessary adjustments.
4. Completion: Finalizing the product and delivering it to the customer or end-users.
Software projects often require a dedicated team, including developers, project managers, testers, and designers, to work
collaboratively to achieve the desired outcome within the set time and budget constraints.

Q.3) Explain in detail component software process.


Ans: Component-Based Software Process (CBSP) focuses on building software by integrating reusable components, enhancing
development speed and maintainability. Here’s a concise overview:
Key Stages of CBSP:
1. Requirement Analysis: Identify the required functionalities and assess the feasibility of using existing components.
2. Component Selection: Search for and evaluate components based on compatibility, cost, and reliability.
3. System Design: Plan the architecture, defining how components will interact and integrate.
4. Customization and Development: Customize or develop additional components if necessary.
5. Integration: Assemble components and ensure smooth interaction.
6. Testing: Conduct unit and integration tests to validate the assembled system.
7. Deployment and Maintenance: Deploy the system and maintain it, updating components as needed.
Benefits:
Reusability: Speeds up development and reduces costs.
Modularity: Makes systems easier to maintain and upgrade.
Quality: Leveraging well-tested components improves overall reliability.
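To make the idea concrete, here is a minimal, hypothetical sketch of component-based assembly (all class names are invented for illustration): two reusable components are developed or acquired independently, integrated behind a small application class, and a quick check confirms that the assembled pieces work together.

```python
class EmailValidator:
    """Reusable component: checks that an address looks like an email."""
    def is_valid(self, address: str) -> bool:
        return "@" in address and "." in address.split("@")[-1]

class InMemoryUserStore:
    """Reusable component: stores user records; could be swapped for a database-backed one."""
    def __init__(self):
        self._users = []
    def add(self, address: str) -> None:
        self._users.append(address)
    def count(self) -> int:
        return len(self._users)

class RegistrationService:
    """Application layer that integrates the two components."""
    def __init__(self, validator: EmailValidator, store: InMemoryUserStore):
        self._validator = validator
        self._store = store
    def register(self, address: str) -> bool:
        if not self._validator.is_valid(address):
            return False
        self._store.add(address)
        return True

# Integration check: the assembled components interact as expected.
service = RegistrationService(EmailValidator(), InMemoryUserStore())
assert service.register("user@example.com") is True
assert service.register("not-an-email") is False
```

In a real CBSP project the components would more often come from a component library or a third-party supplier than be written in place, but the assembly and integration-check steps are the same.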

Q.4) What is SDLC?


Ans: SDLC (Software Development Life Cycle) is a systematic process for developing software through a series of well-defined
phases. It ensures that software is built efficiently and meets user requirements while maintaining quality. The main phases of
SDLC include:
1. Requirement Analysis: Gathering and analysing user needs to create a requirements document.
2. Planning: Determining the project’s scope, resources, cost, and timeline while assessing risks.
3. Design: Creating a detailed blueprint of the software architecture and data flow.
4. Implementation (Coding): Writing the actual code based on the design specifications.
5. Testing: Conducting tests to identify and fix any issues, ensuring quality.
6. Deployment: Releasing the software to users in the production environment.
7. Maintenance: Monitoring and updating the software as needed post-deployment.
Importance of SDLC:
Structured Development: Ensures a step-by-step approach, leading to better project management.
Quality Assurance: Integrates testing and validation at each stage for a reliable product.
Risk Management: Identifies and mitigates risks early, reducing potential project failures.
Cost and Time Efficiency: Helps control costs and time by outlining tasks and resources.

Q.5) Explain – 1. Waterfall model 2. V-model 3. Spiral model 4. Prototyping model 5. RAD 6. Iterative model.
Ans: There are several types of development process models (also known as software development life cycle (SDLC) models) that
define the approach to planning, creating, testing, and deploying software systems. Each model has its own characteristics,
advantages, and use cases depending on the project's needs.
Here are the types of development process models:
1. Waterfall Model
Description: The Waterfall model is a linear and sequential approach to software development. It divides the process into distinct
phases, where each phase must be completed before moving to the next.
Phases: Requirement analysis, System design, Implementation, Integration, Testing, Deployment, and Maintenance.
Advantages: Simple and easy to understand; well-suited for projects with clear, fixed requirements.
Disadvantages: Inflexible to changes during development; late testing may result in higher costs for fixing defects.
2. V-Model (Verification and Validation Model)
Description: The V-Model is an extension of the Waterfall model where each development phase is directly associated with a
testing phase. This emphasizes verification and validation at every stage.
Phases: The process follows a V shape, with the left side representing development phases (requirement analysis, design, coding)
and the right side representing corresponding testing phases (unit testing, integration testing, system testing).
Advantages: Early detection of defects ensures that each phase has corresponding testing.
Disadvantages: Rigid and inflexible; changes in requirements can be difficult to accommodate once development starts.
3. Spiral Model
Description: The Spiral model combines elements of both iterative development and the Waterfall model. It focuses on risk
assessment and iterative development, making it suitable for large, complex, and high-risk projects.
Phases: The model follows a spiral of four major phases: Planning, Risk Analysis, Engineering, and Evaluation. Each phase
iterates multiple times.
Advantages: Focuses on risk management, allows for frequent reassessment, and is flexible to changes.
Disadvantages: Expensive and complex; can lead to scope creep if not managed carefully.
4. Prototyping Model
Description: The Prototyping model involves building a prototype (an early approximation of a system or component) that is
refined based on user feedback.
Phases: Requirements gathering, building a prototype, user feedback, refining the prototype, and finally delivering the system.
Advantages: Provides early visualization of the system, helps in gathering user requirements, and allows quick changes based on
feedback.
Disadvantages: Prototypes may not always reflect the final system architecture, leading to issues in scalability and maintainability.
5. RAD (Rapid Application Development) Model
Description: RAD is a software development approach that emphasizes rapid prototyping, short development cycles, and heavy user involvement, with iteration driven by continuous user feedback.
Phases: Requirements planning, user design, construction, and cutover. It often uses tools and techniques like component-based
development and time-boxing.
Advantages: Faster development cycles, user involvement ensures better user satisfaction.
Disadvantages: Requires skilled developers; can lead to scope creep due to rapid changes.
6. Iterative Model
Description: The Iterative model is a process where software is developed in small, manageable segments or iterations, with each
iteration improving on the previous one.
Phases: Planning, design, development, and testing occur in cycles, with each cycle refining the product until the final version is
achieved.
Advantages: Allows for flexibility and continuous improvement, accommodating changes during development.
Disadvantages: Can lead to scope creep if not managed properly; may lack clear documentation of each iteration.
Conclusion:
Each development process model has its own strengths and weaknesses, and choosing the right one depends on the project's size,
complexity, timeline, and flexibility required. Models like Waterfall and V-Model are best for projects with well-defined
requirements, while models like Agile and Spiral are suited for projects that require frequent changes and iterations.

Q.6) What is project management process?


Ans: Project Management Process refers to a series of structured phases used to plan, execute, and control a project from start to
finish. The main goal is to ensure that the project meets its objectives within the given constraints of time, budget, and scope. The
process is typically broken down into five main phases:
1) Initiation:
- Purpose: Define the project at a high level and determine its feasibility.
- Key Activities: Creating a project charter, identifying stakeholders, and conducting a feasibility study.
- Outcome: Approval to proceed with the project.
2) Planning:
- Purpose: Develop a detailed roadmap for the project.
- Key Activities: Setting project objectives, defining scope, creating schedules, budgeting, and risk management planning.
- Outcome: A comprehensive project plan that serves as a guide for the project execution.
3) Execution:
- Purpose: Implement the project plan and create the project deliverables.
- Key Activities: Coordinating team members, managing resources, and ensuring tasks are completed as per the plan.
- Outcome: Progress toward the project's completion.
4) Monitoring and Controlling:
- Purpose: Track project progress and make adjustments as needed.
- Key Activities: Measuring performance using KPIs, identifying variances from the plan, and taking corrective actions.
- Outcome: Ensured alignment with project goals and maintained quality.
5) Closure:
- Purpose: Finalize the project and evaluate its success.
- Key Activities: Completing final deliverables, obtaining client approval, releasing resources, and documenting lessons learned.
- Outcome: A formally closed project and a project report for future reference.
Importance of the Project Management Process:
o Ensures Efficient Use of Resources: Optimizes the use of time, budget, and human resources.
o Risk Management: Identifies and mitigates risks early.
o Improved Communication: Enhances coordination among stakeholders.
o Quality Assurance: Helps maintain the quality of deliverables.
o Achieves Objectives: Ensures the project meets its intended goals and delivers value.

Q.7) Explain in detail Agile development XP.


Ans: Agile Development is an iterative and flexible approach to software development that emphasizes collaboration, customer
feedback, and responding to changes in requirements. One of the most prominent Agile methodologies is Extreme Programming
(XP), which focuses on improving software quality and adaptability through continuous feedback and frequent releases.
Key Principles of XP:
1. Communication: Continuous collaboration between developers and customers to ensure the software meets the client's needs.
2. Simplicity: Focus on simple, clear designs that solve the current problem without unnecessary complexity.
3. Feedback: Frequent testing and feedback from customers to adjust the development process and improve the software.
4. Courage: Developers are encouraged to make bold decisions, such as refactoring code or adopting new practices, to improve
software quality.
5. Respect: Encourages respect among team members, ensuring that everyone's contributions are valued.
6. Continuous Improvement: The team constantly reviews its processes and practices to improve efficiency and quality.
Key Practices of XP:
1. Test-Driven Development (TDD): Writing tests before coding helps ensure that the code meets its requirements and catches defects early (a short sketch of this practice follows the list below).
2. Pair Programming: Two developers work together at one computer, improving code quality and fostering collaboration.
3. Continuous Integration: Frequent integration of code into the main codebase, allowing for early detection of integration issues.
4. Small Releases: Software is delivered in small, frequent releases to quickly gain customer feedback.
5. Refactoring: Regularly improving the internal structure of the code without changing its behavior, making it cleaner and easier
to maintain.
6. On-Site Customer: Having a customer representative available for immediate feedback ensures that the product aligns with the
customer’s needs.
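As a minimal sketch of the TDD practice above (the pytest framework and all names here are assumptions for illustration, not part of the question bank): the test is written first and fails, then just enough code is written to make it pass, and the code is refactored with the tests acting as a safety net.

```python
# test_cart.py -- Step 1: write the failing test first (pytest assumed).
from cart import apply_discount

def test_discount_is_applied_to_total():
    # A 10% discount on a 200.00 order should give 180.00.
    assert apply_discount(total=200.00, percent=10) == 180.00

def test_zero_discount_leaves_total_unchanged():
    assert apply_discount(total=50.00, percent=0) == 50.00
```

```python
# cart.py -- Step 2: write just enough code to make the tests pass, then refactor.
def apply_discount(total: float, percent: float) -> float:
    """Return the total after applying a percentage discount."""
    return round(total * (1 - percent / 100), 2)
```

Running pytest after every small change gives the rapid feedback loop that XP's continuous integration practice also depends on.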
Advantages:
- High-quality code through practices like TDD and pair programming.
- Frequent feedback leads to better alignment with customer needs.
- Increased adaptability to changes in requirements and technologies.
Challenges:
- Requires experienced developers who are familiar with XP practices.
- The on-site customer requirement may not always be feasible for all projects.
- May be difficult to scale for large teams or complex projects.
In summary, XP is a powerful Agile methodology that fosters collaboration, adaptability, and high-quality code through its focus
on communication, simplicity, and continuous improvement. It's most effective for small to medium-sized projects with frequent
customer interaction.

CH2. INTRODUCTION TO SOFTWARE TESTING


Q.1) Explain in brief software testing and testing process.
Ans: Software Testing is the process of evaluating and verifying that a software application or system works as intended and
meets specified requirements. The goal of software testing is to identify bugs or defects in the software and ensure the software is
of high quality, reliable, and performs well in the intended environment.
Types of Software Testing:
1. Manual Testing: Performed by testers who manually check the software for issues.
2. Automated Testing: Uses tools and scripts to automatically execute tests, improving efficiency and coverage.
3. Functional Testing: Verifies that the software functions according to specified requirements (e.g., unit testing, integration
testing).
4. Non-Functional Testing: Focuses on aspects like performance, usability, and security (e.g., load testing, security testing).
Software Testing Process:
1. Requirement Analysis:
Understanding the requirements of the software to create test cases that align with those requirements.
2. Test Planning:
Creating a detailed test plan, which outlines the scope, approach, resources, schedule, and deliverables for testing.
Defines the types of tests to be performed, tools to be used, and roles and responsibilities.
3. Test Design:
Creating test cases and test scripts based on the requirements and specifications.
Defines inputs, expected outputs, and test conditions.
4. Test Execution:
Running the tests in the defined environment to check if the software behaves as expected.
Logging defects and issues found during testing.
5. Defect Reporting and Tracking:
Identifying and documenting bugs or defects found during testing.
Defects are reported to the development team for fixing, and their status is tracked.
6. Test Closure:
After tests are complete, results are reviewed, and a final report is created.
Evaluates the testing process, assesses coverage, and ensures that all test cases were executed.
Importance of Software Testing:

- Ensures the quality, functionality, and performance of the software.
- Helps identify defects early, reducing the cost of fixing them.
- Improves user satisfaction by delivering a reliable, bug-free product.

Q.2) What is importance of selection of good test cases?


Ans: 1. Effective Coverage: Well-chosen test cases ensure comprehensive coverage of the software’s functionality, including
edge cases and complex scenarios, reducing the chance of bugs being missed.
They help test the system under various conditions, ensuring that all possible paths in the software are verified.
2. Efficiency: Good test cases prioritize critical features and high-risk areas of the software, ensuring that the most important
aspects are tested first.
Well-designed test cases reduce redundant testing and improve the overall efficiency of the testing process, saving time and
resources.
3. Identifying Critical Defects: Test cases that are well-selected focus on areas where defects are most likely to occur, increasing
the chances of discovering significant issues early in the development process.
Good test cases can help identify critical defects that could impact the software’s functionality or performance, ensuring higher-
quality products.
4. Cost-Effective: Selecting the right test cases reduces the need for rework and re-testing, leading to lower costs. Detecting and
fixing defects early (with the help of well-chosen test cases) is more cost-effective than addressing them later in the development
cycle or post-release.
5. Ensures Comprehensive Quality: A good set of test cases ensures that the software meets the specified requirements, both
functional and non-functional (e.g., security, performance, usability).
They help evaluate not only the correctness of the software but also its performance, usability, and reliability.
6. Reduces Risk: Effective test cases reduce the risk of the software failing in critical situations or at scale by testing scenarios
that the software will encounter in real-world use.
It ensures that any potential issues that might disrupt the user experience or cause system downtime are identified and mitigated.
7. Improves Customer Satisfaction: By ensuring that the software behaves as expected and is free from defects, good test cases
help ensure a higher-quality product, which leads to improved customer satisfaction.
Well-tested software is more reliable, stable, and secure, which enhances user confidence in the product.
8. Facilitates Maintenance: Good test cases help create a solid foundation for regression testing, ensuring that new features or
bug fixes don’t negatively impact existing functionality.
A well-structured test suite makes it easier to perform future testing and maintain the software over time.
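The points above can be illustrated with a small, hypothetical example (the is_adult function, its limits, and the pytest framework are assumptions): a handful of boundary and edge-case values exercises the decision logic far more effectively than many arbitrary mid-range inputs.

```python
import pytest  # assumed test framework

def is_adult(age: int) -> bool:
    """Hypothetical function under test: valid adult ages are 18 to 120."""
    return 18 <= age <= 120

# A compact, well-chosen set: boundaries (17/18 and 120/121) plus edge cases,
# rather than dozens of mid-range values that all exercise the same path.
@pytest.mark.parametrize("age, expected", [
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (120, True),   # upper boundary
    (121, False),  # just above the upper boundary
    (0, False),    # edge case
    (-5, False),   # invalid input
])
def test_is_adult_boundaries(age, expected):
    assert is_adult(age) == expected
```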

Q.3) Explain measurement of testing.


Ans: Measurement of Testing refers to the process of collecting and analyzing data related to the software testing activities. The
goal is to evaluate the effectiveness, efficiency, and quality of the testing process. Measurement helps in tracking progress,
identifying improvement areas, and making informed decisions about the software’s readiness for release.
Key Metrics in Testing:
1. Test Coverage: Measures how much of the software has been tested, including code coverage and requirement coverage.
2. Defect Density: Refers to the number of defects found per unit of code (e.g., per 1000 lines of code), helping identify problem
areas in the software.
3. Defect Discovery Rate: Measures the number of defects found over a specific period, reflecting the efficiency of the testing
process.
4. Test Execution Progress: Tracks the number of test cases passed, failed, or blocked during testing, providing insight into the
progress and quality of testing.
5. Defect Resolution Time: Measures the average time taken to fix a defect, from discovery to re-testing, reflecting the
responsiveness of the team.
Importance of Measurement:

- Improves Testing Efficiency: Helps identify bottlenecks and optimize resources.
- Enhances Software Quality: Helps track defect trends, ensuring better quality control.
- Informs Decision Making: Provides data for assessing the software's readiness for release and potential risks.
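A small worked example of the defect density and execution-progress metrics above, using made-up figures:

```python
# Hypothetical figures for one test cycle (illustration only).
defects_found = 30
lines_of_code = 15_000
tests_passed, tests_failed, tests_blocked = 180, 15, 5

# Defect density: defects per thousand lines of code (KLOC).
defect_density = defects_found / (lines_of_code / 1000)   # 30 / 15 = 2.0 defects per KLOC

# Test execution progress: share of executed test cases that passed.
total_executed = tests_passed + tests_failed + tests_blocked
pass_rate = tests_passed / total_executed * 100            # 180 / 200 = 90.0 %

print(f"Defect density: {defect_density:.1f} defects/KLOC")
print(f"Pass rate: {pass_rate:.1f}%")
```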

Q.4) What is implemental testing approach?


Ans: Implemental Testing Approach is not a widely recognized term in standard software engineering methodologies. However, it
might refer to the implementation-focused testing process or incremental testing approach, where testing is performed alongside
the development process, with an emphasis on testing smaller parts of the software as they are being built. This approach ensures
that defects are identified and resolved early, as opposed to waiting until the full system is complete.
Key Aspects of Implemental/Incremental Testing:
1. Early Testing of Components: Testing is done on smaller, individual modules or components of the software as they are
implemented, rather than waiting for the entire system to be developed.
This can include unit testing or integration testing of smaller pieces of functionality.
2. Continuous Feedback: Provides immediate feedback to developers, enabling them to fix defects in real-time while the
software is still in development.
This iterative process helps in identifying bugs early, reducing the risk of defects accumulating late in the development cycle.
3. Improves Quality Control: By testing in increments, issues can be caught and resolved before they propagate through the rest
of the system, thus maintaining a higher quality of the codebase.
4. Parallel Development and Testing: Development and testing happen concurrently, speeding up the overall process. While
developers build features, testers test components or modules of the software in parallel.
5. Facilitates Integration: As components are tested and validated incrementally, they can be integrated and tested within the
overall system more smoothly.
6. Risk Mitigation: By addressing issues incrementally, risks associated with undetected defects are minimized, especially in
complex systems.
Benefits:

- Early detection of defects.
- Easier debugging and fixing of issues in smaller components.
- Improved efficiency in the development process.
- Reduced costs associated with fixing defects later in the project.

Q.5) What is basic terminology related to software testing?


Ans: Basic Terminology Related to Software Testing:
1. Software Testing: The process of evaluating and verifying that a software application or system works as intended, identifying
defects or bugs, and ensuring it meets the specified requirements.
2. Test Case: A set of conditions, inputs, and expected results used to determine if the software behaves as expected. It includes the
steps to execute, the data to be used, and the expected outcome.
3. Test Suite: A collection of test cases that are designed to test a particular feature, functionality, or aspect of the software
application.
4. Test Plan: A document that outlines the strategy, objectives, scope, and approach to testing, including resources, schedules, and
deliverables.
5. Bug/Defect: An issue or flaw in the software that causes it to behave incorrectly or not according to the specified requirements.
6. Test Execution: The process of running the test cases on the software to verify that it functions as intended and to detect defects.
7. Test Environment: The hardware, software, and network configurations in which the software is tested to replicate the actual
operating conditions.
8. Regression Testing: Testing conducted to ensure that changes or fixes made to the software do not negatively affect the existing
functionality.
9. Acceptance Testing: A type of testing performed to verify if the software meets the business requirements and is ready for
deployment. It is often conducted by the customer or end-user.
10. Unit Testing: A type of testing that focuses on verifying the functionality of individual units or components of the software
(often performed by developers).
11. Integration Testing: Testing the interaction between multiple units or modules to ensure they work together as expected.
12. System Testing: Testing the complete, integrated system to verify that it meets the specified requirements.
13. Alpha Testing: The first phase of testing where the software is tested by the internal development team before it is released to
external testers or users.
14. Beta Testing: The second phase of testing where the software is released to a select group of external users to identify bugs
before the final release.
15. Smoke Testing: A quick, preliminary test to check the basic functionality of the software, ensuring that it is stable enough for
further testing.
16. Performance Testing: Testing the software to evaluate its speed, responsiveness, stability, and scalability under various
conditions.
17. Usability Testing: Testing the software to ensure it is user-friendly, intuitive, and meets the needs of end-users.
18. Test Automation: The use of specialized software tools to automatically execute tests, reducing manual intervention and
improving efficiency.
19. Defect Life Cycle: The journey of a defect from its discovery to its resolution, including identification, reporting, fixing, and
verification of the fix.
20. Test Result: The outcome of executing a test case, indicating whether the software passed or failed the test based on the
expected results.
21. Bug Tracking: The process of logging, managing, and tracking defects found during testing to ensure they are fixed and re-
tested.
22. Priority: The importance of fixing a defect based on its impact on the software, determining how soon it should be addressed.
23. Severity: The degree of impact a defect has on the functionality of the software, indicating how critical it is to the system's
operation.
24. Test Coverage: The measure of the extent to which the software has been tested, including code coverage, requirement
coverage, and functional coverage.
25. Test Metrics: Quantitative measures used to assess the testing process, such as defect density, test pass rate, and defect
discovery rate.

Q.6) What is STLC?


Ans: STLC (Software Testing Life Cycle) is a structured process followed by testers to ensure the quality of the software through
systematic testing. It defines the sequence of activities that are carried out during the testing phase of the software development
lifecycle. Each phase in STLC has specific objectives and deliverables that contribute to the overall quality assurance process.
Phases of STLC:
1. Requirement Analysis:
Objective: Understand and analyze the testing requirements based on the software's functional and non-functional specifications.
Activities: Review requirement documents, identify testable requirements, and define the scope of testing.
Deliverables: Test Plan, Test Strategy.
2. Test Planning:
Objective: Develop a comprehensive plan outlining the approach to testing, resources needed, timelines, and risk factors.
Activities: Define the scope of testing, testing objectives, test environment, resource allocation, and schedule.
Deliverables: Test Plan, Test Strategy, Resource allocation, and Test Environment setup.
3. Test Design:
Objective: Create detailed test cases and test scripts based on the requirements and test plan.
Activities: Design test cases, prepare test data, and create test scripts for automated testing (if applicable).
Deliverables: Test Cases, Test Scripts, and Test Data.
4. Test Environment Setup:
Objective: Prepare the testing environment in which the software will be tested.
Activities: Set up hardware, software, and network configurations needed for testing, and ensure they match the production
environment as closely as possible.
Deliverables: Test Environment.
5. Test Execution:
Objective: Execute the test cases on the software to identify defects or bugs.
Activities: Run the test cases, log defects, and record results for each test case.
Deliverables: Test Execution Logs, Defect Reports, and Test Case Results.
6. Defect Reporting and Tracking:
Objective: Identify, log, and track defects found during test execution.
Activities: Report defects, track their status, and work with developers to fix and verify the defects.
Deliverables: Defect Logs, Defect Tracking Reports.
7. Test Closure:
Objective: Conclude the testing process after achieving test objectives and evaluating the overall testing process.
Activities: Finalize testing activities, assess if the software meets requirements, and prepare test summary reports.
Deliverables: Test Summary Report, Test Closure Report, Lessons Learned.
Importance of STLC:

- Ensures Consistency: Provides a structured framework for testing that ensures all necessary activities are performed.
- Improves Quality: Helps in identifying defects early in the development process, improving the quality of the software.
- Provides Traceability: Links testing activities with project requirements, ensuring that all aspects of the software are tested and validated.
- Enhances Communication: Improves collaboration among stakeholders, including testers, developers, and managers, by defining clear deliverables at each stage.

Q.7) Write principles of testing.


Ans: The principles of testing provide guidelines that ensure the software testing process is effective, efficient, and reliable. These
principles help testers focus on the most important aspects of the software and ensure that the testing process is structured and
aligned with project goals.
Principles of Testing:
1. Testing shows the presence of defects, not their absence: Testing can confirm that defects are present in the software, but it
cannot guarantee that there are no defects. Even if a product passes all tests, some defects may remain undetected.
The goal is to identify as many defects as possible, not to prove that there are no defects.
2. Exhaustive testing is not possible: It is impractical to test every possible input, path, or scenario within the software due to
time and resource constraints.
Instead, a focused approach using risk-based testing and test prioritization is often more effective in ensuring high-quality
software.
3. Early testing: Testing should begin as early as possible in the software development life cycle (SDLC), ideally from the
requirement gathering or design phase.
Early testing helps in identifying defects early, reducing the cost of fixing them, and improving overall quality.
4. Defects cluster together: A small number of modules or components often contain most of the defects (the Pareto principle,
80/20 rule).
Focusing on high-risk areas and critical functionality can help uncover the majority of defects.
5. Testing is context-dependent: The level and type of testing should be adapted based on the context, such as the type of
software (web, mobile, enterprise), the criticality of the software, and the project’s requirements.
For example, a banking application requires more rigorous testing than a simple mobile game.
6. Absence of errors is a fallacy: Even if the software is defect-free, it may not meet the user’s needs or business requirements.
The focus should be on ensuring that the software meets the specified requirements and delivers value to the end-users, not just on
finding bugs.
7. Early involvement of testers: Testers should be involved from the early stages of the SDLC, including requirement analysis,
design, and planning.
Early involvement helps identify potential issues in the design or requirements and allows testers to develop effective test cases.
8. Testing should be independent: Testers should ideally be independent of the development team to avoid biases in the testing
process.
Independent testing helps ensure objective evaluation of the software and reduces the risk of overlooking defects.
9. Continuous improvement: The testing process should be continuously improved based on feedback and lessons learned from
previous projects.
Implementing new tools, techniques, and practices helps enhance the efficiency and effectiveness of testing over time.
10. Defect prevention over defect detection: Testing should focus on preventing defects rather than only detecting them. This
includes adopting practices like code reviews, design inspections, and static analysis to identify issues before they occur.
By preventing defects early in the development process, the overall quality of the software improves and the cost of fixing defects
decreases.

Q.8) What are the limitations of testing?


Ans: 1. Exhaustive Testing is Impractical: It is not feasible to test all possible inputs, paths, and combinations due to the vast
number of possible scenarios in complex systems. Testing focuses on the most critical areas, but exhaustive coverage is not
achievable.
2. Undetected Defects: Despite thorough testing, some defects may remain undetected, especially if they occur under rare
conditions or edge cases that are not considered during testing.
3. Inability to Prove Bug-Free Software: Testing can only demonstrate the presence of defects, not prove their complete absence.
Even if software passes all tests, it does not guarantee that no defects exist.
4. Time and Resource Constraints: Limited time, budget, and resources may prevent comprehensive testing. As a result, some
parts of the software may not be adequately tested, leading to potential risks in the final product.
5. Human Error and Subjectivity: Testers may make mistakes in designing test cases or miss important scenarios. Manual
testing is prone to human error, which can lead to inaccurate results or missed defects.

CH3. SOFTWARE VERIFICATION AND VALIDATION


Q.1) Write difference between verification and validation.
Ans:

Verification vs. Validation:
1. Verification is the process of evaluating a system or component to determine whether it meets the specified requirements and design specifications. It is about ensuring that "you built the system right."
   Validation is the process of evaluating a system or component during or at the end of the development process to ensure it meets the intended use and user needs. It is about ensuring that "you built the right system."
2. Verification aims to check that the product is being developed correctly according to the design and requirements.
   Validation ensures that the final product fulfills its intended purpose and functions as expected in real-world conditions.
3. Verification involves activities such as inspections, reviews, walkthroughs, and desk-checking; these are often done through static testing (without executing the code).
   Validation involves activities such as functional testing, integration testing, system testing, and user acceptance testing; these are typically dynamic testing methods (with code execution).
4. Verification is performed during the development phase to check interim work products and ensure that they meet specified requirements.
   Validation is performed after the verification process, typically towards the end of the development cycle or after the product is complete.
5. Verification helps identify errors early in the development phase, reducing the cost of fixing issues.
   Validation confirms that the product works as intended for the end user, ensuring that the final output meets the user's needs and expectations.
6. Example (verification): Reviewing design documents to ensure that all requirements are included and correctly defined.
   Example (validation): Conducting user acceptance testing to check if the final software provides the desired outcomes for the user.

Q.2) Write differences between QA and QC.


Ans:

Quality Assurance (QA) vs. Quality Control (QC):
1. QA is a process-oriented approach focused on ensuring that quality is built into the development processes. It involves establishing systematic activities and procedures to prevent defects and improve the process of creating a product. The goal of QA is to ensure that the right processes are followed to achieve high quality.
   QC is a product-oriented approach that involves inspecting and testing the final product to identify defects and ensure that the product meets quality standards. It is about finding and fixing defects in the finished product.
2. QA aims to improve and ensure the quality of the processes used to create a product. It is proactive and focuses on preventing defects before they occur.
   QC aims to identify and fix defects in the final product. It is reactive and focuses on detecting defects after the product has been developed.
3. QA activities include process audits, process checklists, training, documentation reviews, and the implementation of quality management systems.
   QC activities include product inspections, testing, and reviews to verify that the product meets specifications.
4. QA is the responsibility of everyone involved in the development process, as it involves designing and implementing processes that ensure quality.
   QC is typically the responsibility of a designated team that performs tests and inspections on the product to identify any defects.
5. QA is a preventive approach focused on improving processes and methodologies to prevent quality issues.
   QC is a corrective approach focused on identifying and fixing quality issues in the product.
6. QA activities are conducted throughout the development cycle and are integrated into the entire process from start to finish.
   QC activities are usually conducted after the product has been developed, during or after the manufacturing or development phase.
7. Example (QA): Establishing a set of standards and procedures for software development to ensure that each stage meets quality benchmarks.
   Example (QC): Conducting tests on the final software to ensure it performs as expected and is free of bugs.

Q.3) What are the limitations of V&V?


Ans: 1. High Cost and Resource-Intensive: V&V activities can be expensive, particularly for complex systems, due to the need
for specialized tools, skilled personnel, and extensive testing procedures. This can be a burden for smaller organizations with
limited budgets.
2. Time-Consuming: Thorough verification and validation take significant time, which can lead to project delays. This is
especially true when V&V involves detailed inspections, comprehensive testing, and iterative review cycles.
3. Incomplete Coverage: It is often impractical to test every possible scenario or condition due to time and resource constraints.
This can result in incomplete test coverage and the possibility of undetected defects, especially in highly complex systems.
4. Dependence on Initial Requirements: The effectiveness of V&V depends heavily on the quality and completeness of the
initial requirements. If the requirements are unclear, incomplete, or prone to change, verification may be challenging, and
validation may not yield accurate results.
5. Limited Scope for Subjective Aspects: V&V is better suited for objective and quantifiable requirements. It may not fully
address subjective or non-functional aspects of a product, such as user experience, aesthetics, or customer satisfaction, which are
harder to verify and validate formally.
6. Risk of Over-Reliance on Testing: Validation is often performed through testing, which has its own limitations. Testing can
show the presence of defects but not their absence, leading to a false sense of security if defects remain undetected outside of
tested scenarios.
7. Dynamic Changes and Agile Challenges: In agile and iterative development processes, requirements and features evolve
frequently. This can make it difficult to implement traditional V&V processes effectively, as they often rely on stable requirements
and structured timelines.
8. Difficulty in Simulating Real-World Conditions: Validation may not perfectly simulate real-world conditions or user
environments. This limitation can lead to situations where the product passes validation but fails when used in an actual
operational context due to unforeseen factors.

Q.4) Categorize V&V techniques.


Ans: Verification and Validation (V&V) techniques can be categorized based on their nature, methodology, and phase of
application. Here is a categorization of V&V techniques:
1. Static Techniques (Verification):
These techniques are applied without executing the code. They are mainly focused on reviewing and analyzing the product at
various stages of development.
Reviews:
- Walkthroughs: Informal meetings where developers present their work to peers for feedback.
- Technical Reviews: A systematic examination of documentation or code by a team to identify issues.
- Inspections: A formal review process where the product is examined for defects based on a defined checklist.
Analysis:
- Static Analysis: The use of tools to examine code or documents to detect issues such as coding errors or security vulnerabilities.
- Code Reviews: A detailed examination of the code by peers or automated tools to ensure adherence to coding standards.
Model Checking:
- Formal Verification: Mathematical methods used to prove that a system's specifications meet certain properties.
2. Dynamic Techniques (Validation):
These techniques involve the execution of the code to check the behavior of the product in various scenarios.
Testing:
Unit Testing: Testing individual components or modules of the code to ensure they work as expected.
Integration Testing: Testing the interaction between integrated modules to ensure they work together properly.
System Testing: Testing the complete system as a whole to verify that it meets specified requirements.
Acceptance Testing: Ensuring the product meets the user's needs and requirements, typically involving end-users.
Regression Testing: Re-running previous test cases to confirm that new changes haven’t introduced new defects.
Simulation and Prototyping:
Simulation: Creating a model of the system to observe how it behaves under different conditions.
Prototyping: Building an early version of the product to test concepts and gather user feedback.
3. Manual vs. Automated Techniques:
Manual Techniques: Techniques such as code reviews, inspections, and walkthroughs are performed by individuals or teams
without the aid of automated tools.
Automated Techniques: Techniques like static analysis, automated testing, and model checking utilize software tools to perform
checks and tests efficiently.
4. White-Box vs. Black-Box Techniques:
White-Box Techniques: The internal structure and workings of the code are known and considered when designing tests (e.g., unit
testing and code analysis).
Black-Box Techniques: The focus is on validating the outputs based on input without considering internal code structure (e.g.,
system testing, acceptance testing).
5. Formal vs. Informal Techniques:
Formal Techniques: Involve rigorous, structured processes such as formal verification and model checking that follow defined
methodologies.
Informal Techniques: More flexible and may not follow strict protocols, such as walkthroughs and informal peer reviews.
6. User-Oriented Techniques:
Usability Testing: Ensuring that the product meets user expectations in terms of ease of use and satisfaction.
Beta Testing: Releasing the product to a limited group of users outside the development team for feedback and validation.
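To make the white-box/black-box distinction above concrete, here is a short sketch (the leap-year function and test names are invented): the black-box test is derived purely from the specification, while the white-box test cases are chosen by inspecting the internal branches of the code.

```python
def is_leap_year(year: int) -> bool:
    """Function under test: Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box: cases come from the specification ("century years are leap only if
# divisible by 400"), with no knowledge of how the code is written.
def test_black_box_spec_examples():
    assert is_leap_year(2024) is True
    assert is_leap_year(2023) is False
    assert is_leap_year(1900) is False
    assert is_leap_year(2000) is True

# White-box: one case per internal branch of the boolean expression,
# chosen by inspecting the code structure (branch coverage).
def test_white_box_branch_coverage():
    assert is_leap_year(2019) is False  # not divisible by 4
    assert is_leap_year(2012) is True   # divisible by 4, not by 100
    assert is_leap_year(2100) is False  # divisible by 100, not by 400
    assert is_leap_year(2400) is True   # divisible by 400
```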
Q.5) What are the different roles of V&V in the STLC?
Ans: 1. Requirement Analysis:
Verification: Check for clarity and completeness of requirements.
Validation: Ensure requirements match user needs.
2. Test Planning:
Verification: Review the test plan for alignment with objectives.
Validation: Confirm the strategy covers user expectations.
3. Test Design:
Verification: Review test cases and scripts.
Validation: Ensure test cases reflect real-world use.
4. Test Environment Setup:
Verification: Confirm the setup matches production.
Validation: Validate the environment can support tests.
5. Test Execution:
Verification: Monitor execution and results.
Validation: Ensure the software behaves as expected.
6. Defect Reporting:
Verification: Check defect documentation.
Validation: Validate defect impact and fixes.
7. Test Closure:
Verification: Review completion and criteria.
Validation: Confirm readiness for deployment.
8. Post-Release:
Verification: Ensure process adherence.
Validation: Validate ongoing performance and user satisfaction.
Verification ensures the product is built correctly; Validation ensures the right product is built for users.

Q.6) What is tabular form? (IEEE standard STB 2012)


Ans: In the context of the IEEE Standard for System and Software Test Documentation (IEEE 829, often referred to as STB 2012),
tabular form refers to a structured format used to present information in rows and columns. This form is utilized to clearly
organize data and make complex information easy to read and analyse.
In test documentation, a tabular format can be used for various purposes, such as:
1. Test Case Specifications: Listing test cases with details like test ID, description, expected results, and pass/fail criteria in a table.
2. Traceability Matrix: Mapping requirements to test cases to ensure coverage.
3. Defect Tracking: Documenting defects with attributes like defect ID, severity, priority, status, and description in tabular form.
4. Test Summary Reports: Summarizing test results, including metrics such as the number of test cases passed, failed, or blocked.
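As a simple illustration of the tabular idea (the IDs, fields, and values below are invented and not taken from the standard), test case records can be held as structured rows and rendered in columns:

```python
# Hypothetical test case specification rows (values are illustrative only).
test_cases = [
    {"id": "TC-001", "description": "Valid login",    "expected": "Dashboard shown", "status": "Pass"},
    {"id": "TC-002", "description": "Wrong password", "expected": "Error message",   "status": "Fail"},
    {"id": "TC-003", "description": "Empty username", "expected": "Validation hint", "status": "Pass"},
]

# Render the records as a simple fixed-width table.
header = f"{'ID':<8}{'Description':<18}{'Expected Result':<18}{'Status':<8}"
print(header)
print("-" * len(header))
for tc in test_cases:
    print(f"{tc['id']:<8}{tc['description']:<18}{tc['expected']:<18}{tc['status']:<8}")
```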

Q.7) Explain in detail SBVP.


Ans: SBVP (Standards-Based Verification and Validation Process) is a structured methodology aligned with industry standards
(e.g., IEEE 1012) to ensure software quality, reliability, and compliance throughout the software development life cycle (SDLC).
It aims to verify that the software meets specified requirements and validate that it fulfils user needs.
Key Components of SBVP:
1. Planning and Strategy:
Create a V&V Plan detailing scope, approach, and criteria for success.
Assign roles and responsibilities for V&V activities.
2. Requirements Verification and Validation:
Verification: Ensure requirements are complete, consistent, and feasible.
Validation: Confirm requirements align with user expectations.
3. Design V&V:
Verification: Review design documents for compliance with standards.
Validation: Use prototyping or simulations to check design against user needs.
4. Implementation V&V:
Verification: Conduct code reviews and static analysis for standard adherence.
Validation: Perform unit and integration tests to ensure intended functionality.
5. Testing and Execution:
Verification: Check that all test cases pass and cover requirements.
Validation: Conduct user acceptance testing (UAT) and real-world testing to confirm software meets user needs.

Q.8) Write on note on STRS.


Ans: Software Test Requirements Specification (STRS) is a detailed document that outlines the specific testing requirements for a
software system, ensuring that the testing process is well-structured and aligned with the software’s functional and non-functional
requirements. It serves as a guide for the testing team to ensure comprehensive coverage of all system features.
Key Elements of STRS:
1. Overview of the Software System: A description of the software’s purpose and key functionalities.
2. Test Objectives: Clear goals for what the testing process aims to achieve, such as validating system functionality and detecting
defects.
3. Scope of Testing: Defines the boundaries of testing, specifying which components and features will be tested.
4. Test Criteria: Defines the pass/fail criteria for tests, outlining the conditions under which tests are considered successful.
5. Resources and Constraints: Identifies required testing tools, environments, and any limitations, such as time or resource
constraints.
Role of STRS:
STRS plays a critical role in guiding the testing efforts, ensuring full test coverage, and acting as a communication tool between
stakeholders. It helps ensure that all requirements are tested and that testing is conducted in an organized and efficient manner.
Benefits:
- Improves test planning and coverage.
- Enhances communication among project stakeholders.
- Reduces risks by ensuring thorough testing in line with project objectives.

CH4. TYPES OF TESTING AND LEVELS OF TESTING


Q.1) Difference between function testing and non-functional testing.
Ans:

Functional Testing vs. Non-functional Testing:
1. Functional testing verifies that the software behaves as expected and meets its specified functional requirements.
   Non-functional testing evaluates the software's non-functional aspects, such as performance, security, usability, and scalability.
2. Functional testing ensures that each feature of the software works according to the defined specifications and fulfils its intended purpose.
   Non-functional testing ensures that the software performs well under varying conditions and meets non-functional requirements that contribute to user satisfaction and system quality.
3. Functional testing exercises the core functionality of the application, such as user authentication, data processing, and system responses to user inputs.
   Non-functional testing assesses attributes like load handling, system security, response time, and overall user experience.
4. Examples (functional): Unit Testing, Integration Testing, System Testing, User Acceptance Testing (UAT).
   Examples (non-functional): Performance Testing, Load Testing, Stress Testing, Security Testing, Usability Testing.
5. For functional testing, success or failure is measured by whether the system functions as expected for each feature (e.g., does the login work correctly, is data saved properly).
   For non-functional testing, success or failure is based on whether the system meets criteria such as response time, security standards, or system usability.
6. Tools (functional): Selenium, JUnit, TestNG, QTP.
   Tools (non-functional): LoadRunner, JMeter, AppDynamics, Postman.
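A brief, hypothetical sketch of the difference (the search function and the 200 ms budget are assumptions): the functional test asserts what the result should be, while the non-functional test asserts how quickly the operation must complete.

```python
import time

def search_products(query: str):
    """Hypothetical feature under test."""
    catalogue = ["red shirt", "blue shirt", "red shoes"]
    return [item for item in catalogue if query in item]

# Functional test: verifies correct behaviour for a given input.
def test_search_returns_matching_items():
    assert search_products("red") == ["red shirt", "red shoes"]

# Non-functional (performance) test: verifies an assumed response-time requirement.
def test_search_responds_within_200_ms():
    start = time.perf_counter()
    search_products("red")
    elapsed_ms = (time.perf_counter() - start) * 1000
    assert elapsed_ms < 200  # assumed performance budget
```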

Q.2) Explain unit testing and system testing.


Ans: Unit Testing is a type of software testing that focuses on verifying individual components or units of a software application
to ensure that each part functions as expected. It typically involves testing small pieces of code, such as functions, methods, or
classes, in isolation from the rest of the application.
Key Characteristics:
Scope: Tests individual functions, methods, or classes.
Objective: Ensure that each unit or component works correctly on its own.
Performed By: Usually done by developers during the development phase.
Test Level: Low-level testing (typically the first level of testing).
Automation: Unit tests are often automated using tools like JUnit, NUnit, or TestNG.
Example: Testing a method that calculates the sum of two numbers to verify that it returns the correct result.
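That example can be written as an actual unit test. A minimal sketch using Python's built-in unittest module (the add function is assumed for illustration):

```python
import unittest

def add(a, b):
    """Unit under test: returns the sum of two numbers."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```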
Benefits of Unit Testing:

- Helps identify bugs early in the development cycle.
- Facilitates code refactoring by providing a safety net.
- Ensures that individual units function correctly before they are integrated into the larger system.
System Testing:
System Testing is a type of software testing where the complete and integrated software system is tested as a whole to ensure it
meets the specified requirements. It tests the system in an environment that closely resembles production to validate the entire
system’s behaviour and functionality.
Key Characteristics:
Scope: Tests the complete, integrated system to ensure all components and features work together.
Objective: Verify that the entire application functions as expected in its entirety and meets the requirements.
Performed By: Typically done by a dedicated QA team.
Test Level: High-level testing (after integration testing and before acceptance testing).
Test Types: Includes functional and non-functional testing, such as performance, security, and usability testing.
Example: Testing a web application by simulating user interactions (login, data entry, report generation) to ensure all components
work together as expected.
Benefits of System Testing:

- Ensures the entire system works in harmony.
- Verifies that all features and functionalities are correctly implemented.
- Identifies integration issues between different components of the system.

Q.3) What is integration testing and give its classification.


Ans: Integration Testing is a software testing technique where individual modules or components are combined and tested as a
group to ensure they work together correctly. It focuses on identifying issues with the interaction between different parts of the
system.
Classification of Integration Testing:
1. Top-Down Integration Testing:
Starts with testing high-level modules and uses stubs to simulate lower-level modules.
2. Bottom-Up Integration Testing:
Begins with low-level modules and uses drivers to simulate higher-level modules.
3. Big Bang Integration Testing:
All components are integrated at once and tested as a whole.
4. Incremental Integration Testing:
Modules are integrated and tested step by step, either top-down or bottom-up.
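A small sketch of the stubs and drivers mentioned above (all class and function names are invented): in top-down integration a stub stands in for a lower-level module that is not ready yet, while in bottom-up integration a simple driver exercises a lower-level module before its callers exist.

```python
# Top-down: the high-level OrderService is tested against a stub of the payment module.
class PaymentGatewayStub:
    """Stands in for the real (not yet integrated) payment module."""
    def charge(self, amount: float) -> bool:
        return True  # canned response

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway
    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"

assert OrderService(PaymentGatewayStub()).place_order(99.0) == "confirmed"

# Bottom-up: a simple driver exercises the low-level TaxCalculator directly,
# before the higher-level modules that will eventually call it exist.
class TaxCalculator:
    def total_with_tax(self, amount: float, rate: float = 0.18) -> float:
        return round(amount * (1 + rate), 2)

def driver():
    calc = TaxCalculator()
    assert calc.total_with_tax(100.0) == 118.0
    print("TaxCalculator check passed")

driver()
```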

Q.4) What is decomposition-based integration?


Ans: Decomposition-Based Integration is a technique used in integration testing where a complex system is broken down into
smaller, manageable components or sub-systems, which are tested individually and then progressively integrated. This approach is
often used to simplify the testing process by isolating and addressing integration issues in smaller parts rather than dealing with
the entire system at once.
Key Aspects of Decomposition-Based Integration:
Step-by-Step Integration: The system is broken down into smaller units or sub-systems, which are integrated one after another.
Each integrated component is tested for its correct interaction with the other components.
Controlled Testing: Instead of testing the entire system at once (like in Big Bang integration), decomposition-based integration
ensures that each smaller unit works correctly before moving on to integrate the next one.
Isolation of Problems: By testing smaller parts, issues and defects can be more easily isolated, making it easier to locate and fix
integration problems.
Advantages:
• Easier to Manage: Smaller components are easier to test and troubleshoot, reducing complexity.
• Early Detection of Issues: Problems can be detected early in the process, before they escalate when larger components are integrated.
• Faster Feedback: Since each part is tested in isolation, feedback on the system’s behaviour is provided more quickly.
Disadvantages:
• Initial Setup: Requires careful planning and designing to decompose the system properly.
• Can Be Time-Consuming: Depending on the number of components, the process may take time as each part is integrated and tested.
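The step-by-step idea referred to above can be sketched as follows. This is a hedged illustration only: the three components (parse, validate, store), the sample data, and the checks are invented for the example, and a real project would use its own modules and test framework.

```python
# Decomposition-based / incremental integration sketch: components are
# integrated one at a time and the partially integrated pipeline is
# re-checked after each addition. All names and data are illustrative.

def parse(raw):
    return raw.split(",")

def validate(fields):
    return [f for f in fields if f]          # drop empty fields

def store(fields, db):
    db.extend(fields)
    return db

def check_partial_pipeline(stages, sample="a,b,,c"):
    """Run a small end-to-end check over whichever stages are integrated so far."""
    data, db = sample, []
    if "parse" in stages:
        data = parse(data)
    if "validate" in stages:
        data = validate(data)
    if "store" in stages:
        data = store(data, db)
    return data

integrated = []
for component in ["parse", "validate", "store"]:
    integrated.append(component)              # integrate the next component
    result = check_partial_pipeline(integrated)
    assert result, f"integration broke after adding {component!r}"
    print(f"OK after integrating {component}: {result}")
```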
Q.5) Difference between call graph-based integration and path-based integration.
Ans:
Call graph-based integration:
• In this approach, integration testing is based on a call graph, which is a representation of how functions or methods call each other within a software system.
• It focuses on function-level interactions and calls between modules.
• It tests the sequence of method/function calls between components or modules, ensuring that each module can correctly invoke other modules as expected.
• The call graph is a directed graph where each node represents a function or method, and an edge represents a function call between them.
• Example: Testing how a function in Module A calls a method in Module B and ensuring that the communication between them works as expected.
• Advantages: Helps identify potential issues in the interaction between individual functions or methods; focuses on ensuring correct function/method calls across modules.
• Disadvantages: May miss errors related to more complex paths that involve conditional statements or loops.
Path-based integration:
• Path-based integration involves creating a path graph, which represents all possible execution paths through the system based on control flow, such as decisions, loops, and branches.
• It focuses on control flow between modules or components, analysing all possible execution paths through the system.
• It tests the entire path a program might take during execution, covering conditions, loops, and branching logic between components.
• The path graph is a directed graph where each node represents a point in the program (e.g., a decision or a branch), and edges represent the transitions between these points.
• Example: Testing different execution paths based on conditional logic, like checking how a module handles both true and false branches of an if statement.
• Advantages: Provides more comprehensive test coverage by considering all possible execution paths; can detect complex issues involving control flow, such as incorrect branching or loop handling.
• Disadvantages: The number of paths to test can grow exponentially in complex systems, leading to higher testing effort and time.
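To make the contrast concrete, the small sketch below shows one call-graph edge (Module A calling Module B) and the two execution paths created by a conditional inside that call. The modules, the 50% membership discount, and the prices are hypothetical, invented purely for illustration.

```python
# Hedged illustration of call graph-based vs path-based checks.

def discount_policy(customer_type):             # Module B
    return 0.5 if customer_type == "member" else 0.0

def final_price(price, customer_type):          # Module A, which calls Module B
    return price * (1 - discount_policy(customer_type))

# Call graph-based view: exercise the A -> B call (one edge of the call graph)
# and check that the modules communicate correctly.
assert final_price(100, "member") == 50.0

# Path-based view: cover every execution path through the integrated logic,
# i.e. both the true and the false branch of the conditional inside Module B.
assert final_price(100, "member") == 50.0       # true branch
assert final_price(100, "guest") == 100.0       # false branch
print("call-graph and path checks passed")
```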
CH5. SIMPLIFIED TESTING TYPES
Q.1) What are the specialised testing types?
Ans: Specialized testing types are distinct approaches in software testing tailored to meet specific testing needs that go beyond
general functional testing. These types of testing help ensure that software meets particular standards or behaves as expected under
unique conditions. Here’s an overview of some key specialized testing types:
1. Performance Testing: This type evaluates the speed, responsiveness, and stability of a system under a particular workload. It
helps identify performance bottlenecks and ensures the software can handle expected user loads. Subtypes include load testing
(checking system behavior under expected load) and stress testing (assessing limits by pushing beyond normal loads). A small load-testing sketch follows at the end of this answer.
2. Security Testing: This focuses on identifying vulnerabilities, threats, and risks within a software application. It ensures that data
and resources are protected from potential intruders and that security measures like authentication, authorization, and data
encryption are properly implemented.
3. Usability Testing: This testing type assesses how user-friendly and intuitive the software is. It aims to provide a seamless user
experience by detecting any issues that may hinder the ease of use, including interface design, navigation, and overall user
satisfaction.
4. Compatibility Testing: This ensures that software runs smoothly across different devices, browsers, operating systems, and
network environments. It helps verify that the application maintains consistent behavior regardless of its operating conditions.
5. Localization and Internationalization Testing: These types check if the software can adapt to different languages, regional
settings, and cultural nuances. Localization testing verifies that the software appears and functions appropriately in a particular
locale, while internationalization testing ensures that the core application can support multiple locales without requiring redesign.
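As a rough illustration of the performance/load testing idea described in point 1 above, the sketch below fires many concurrent requests at a stand-in operation and reports throughput. handle_request, the user counts, and the 10 ms delay are hypothetical placeholders; a real load test would target the actual system (for example over HTTP) using a dedicated tool.

```python
# Minimal load-testing sketch (illustrative only): times many concurrent calls
# to a placeholder operation and prints the observed throughput.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)          # simulate ~10 ms of work per request
    return True

def run_load(users=50, requests_per_user=10):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(handle_request, range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    print(f"{len(results)} requests in {elapsed:.2f}s "
          f"({len(results) / elapsed:.0f} req/s)")
    return all(results)

if __name__ == "__main__":
    assert run_load()
```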
Q.2) Explain regression testing, smoke testing and sanity testing.
Ans: 1. Regression Testing
Definition: Regression testing is performed to confirm that recent code changes have not adversely affected existing
functionalities. The main objective is to ensure that the software continues to operate correctly after updates or enhancements.
Purpose: To detect bugs introduced by new code, updates, or fixes.
Scope: It typically involves re-executing existing test cases to verify the stability of unchanged parts of the application.
Example: After adding a new feature to a shopping cart system, regression tests check if previous functionalities, like adding or
removing items, still work as intended.
2. Smoke Testing
Definition: Smoke testing, also known as "build verification testing," is a preliminary test to check the basic functionality of a
software build. It ensures that the major functions work and the build is stable enough for further testing.
Purpose: To catch basic errors early in the testing process and determine whether the software build is stable enough for more
detailed testing.
Scope: Covers the most essential parts of the application but is not comprehensive. It acts as a checkpoint to decide if further
testing can proceed.
Example: After a new build, testers check if the application launches successfully, if the main menu is accessible, or if the core
features can be initiated.
3. Sanity Testing
Definition: Sanity testing is a narrow, focused testing type that verifies specific functionalities after a minor code change or bug
fix. It ensures that the particular function works as expected without a full regression test.
Purpose: To confirm that a specific function or bug fix is working correctly without testing the entire application.
Scope: Limited and focused on the areas impacted by the code changes. It checks the logic of a specific section to avoid wasting
time on detailed testing if major issues still exist.
Example: After fixing a bug related to the checkout button in an e-commerce app, sanity testing would check if the checkout
button now works properly without retesting unrelated parts of the application.
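One common way to organize these three kinds of checks, assuming a pytest-based project, is to tag tests with markers so each suite can be run on its own. Everything in the sketch below is illustrative: the cart functions stand in for real application code, and the marker names are arbitrary (they would normally be registered in pytest.ini to silence warnings).

```python
# Illustrative pytest sketch grouping smoke, sanity, and regression checks.
import pytest

def add_item(cart, item):
    cart.append(item)
    return cart

def checkout(cart):
    return len(cart) > 0          # stand-in for the "fixed" checkout behaviour

@pytest.mark.smoke        # build verification: can the core flow run at all?
def test_smoke_core_flow():
    assert checkout(add_item([], "pen"))

@pytest.mark.sanity       # focused re-check of the area just changed (checkout)
def test_sanity_checkout_fix():
    assert checkout(["pen"]) is True
    assert checkout([]) is False

@pytest.mark.regression   # broader re-run of existing behaviour after changes
def test_regression_add_and_remove_items():
    cart = add_item([], "pen")
    cart.remove("pen")
    assert cart == []

# Example invocations from a shell:
#   pytest -m smoke
#   pytest -m "sanity or regression"
```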
Q.3) Explain exploratory testing and Adobe in agile development.
Ans: 1. Exploratory Testing
Definition: Exploratory testing is an approach in software testing where testers actively explore and interact with the application
without predefined test cases. It combines learning about the application, designing tests, and executing them simultaneously.
Purpose: To discover hidden issues, unknown risks, or edge cases that structured testing might miss. It helps testers think
creatively and investigate the software's behavior in real-world scenarios.
Application in Agile: Exploratory testing is highly valuable in Agile development due to its flexibility and speed. Testers can
quickly adapt and test new features or changes without waiting for formal test case documentation, making it suitable for iterative
development.
2. Adobe in Agile Development
Adobe Software: Adobe offers a range of software solutions that can support Agile teams, such as Adobe XD for designing and
prototyping user interfaces, or Adobe Workfront for project management and collaboration.
Adobe Workfront: This is a work management tool that helps teams plan, execute, and manage projects, aligning with Agile
principles like task prioritization and iterative progress tracking. It helps facilitate communication, transparency, and agile
workflows among team members.
Q.4) What is agile development and its quadrants?
Ans: Agile development is a methodology used in software development that emphasizes iterative progress, collaboration,
flexibility, and customer feedback. It focuses on developing software incrementally, delivering functional components in short
cycles called sprints. This approach helps teams adapt quickly to changing requirements and fosters a collaborative environment
among developers, testers, and stakeholders.
Agile Testing Quadrants
The Agile Testing Quadrants, developed by Brian Marick and popularized by Lisa Crispin and Janet Gregory, provide a
framework for understanding and organizing various types of testing within Agile development. The quadrants help teams balance
different testing efforts throughout development.
Quadrant 1 (Q1) – Unit Tests & Component Tests
• Purpose: Supports the development process by verifying that individual units or components of the software function as expected.
• Examples: Unit tests, component tests.
• Focus: Technology-facing and supports the development team.
• Automation: Often automated for continuous feedback during development.
Quadrant 2 (Q2) – Functional Tests & Story Tests
• Purpose: Validates that the system works as intended and meets user requirements.
• Examples: Functional tests, story tests, user acceptance testing (UAT).
• Focus: Business-facing and helps ensure that the software aligns with user needs.
• Automation: Can be automated, but manual testing and exploratory testing also play roles.
Quadrant 3 (Q3) – Exploratory Testing & Usability Testing
• Purpose: Focuses on testing the system from the user's perspective and ensuring it is user-friendly and meets usability standards.
• Examples: Exploratory testing, usability testing, alpha/beta testing, user feedback sessions.
• Focus: Business-facing and evaluates the product’s usability and experience.
• Automation: Typically not automated, as it requires human insight and analysis.
Quadrant 4 (Q4) – Performance & Security Testing
• Purpose: Validates non-functional aspects of the software, such as performance, load handling, security, and scalability.
• Examples: Performance testing, load testing, security testing, stress testing.
• Focus: Technology-facing and ensures that the system meets technical requirements and is robust under various conditions.
• Automation: Often automated to test performance at scale and simulate real-world conditions.
CH6. SOFTWARE TESTING STANDARDS
Q.1) What are the key software testing standards?
Ans: 1. ISO/IEC/IEEE 29119 (Software Testing)
Overview: This is a globally recognized set of standards specifically dedicated to software testing. It provides a comprehensive
framework covering test processes, documentation, techniques, and guidelines for organizations.
Key Points:
• Standardizes the terminology and definitions used in software testing.
• Outlines a detailed test process that includes planning, monitoring, control, design, and execution.
• Provides templates and examples for test documentation like test plans, test cases, and test reports.
Use: Ensures consistency and adherence to best practices in testing projects across industries.
2. ISO/IEC 25010 (System and Software Quality Models)
Overview: This standard defines a quality model for software and systems, including characteristics that should be evaluated
during testing.
Key Points:
• Identifies quality attributes such as functionality, reliability, performance efficiency, usability, security, compatibility, and maintainability.
• Guides testers in understanding and focusing on specific quality aspects that are relevant to the software being tested.
Use: Helps prioritize testing based on the most critical quality attributes of a software product.
3. IEEE 829 (Test Documentation Standard)
Overview: This standard, also known as the Standard for Software and System Test Documentation, provides a set of templates
and guidelines for test documentation.
Key Points:
• Specifies documents such as test plans, test design specifications, test case specifications, and test incident reports.
• Ensures that all aspects of testing are well-documented, aiding in communication and traceability.
Use: Commonly used by testing teams to standardize documentation and reporting in test projects.
4. ISO/IEC/IEEE 12207 (Software Life Cycle Processes)
Overview: Although primarily a software life cycle process standard, it includes processes related to software testing within the
development and maintenance stages.
Key Points:
• Covers the entire software lifecycle from concept to retirement.
• Includes verification and validation processes to ensure that software meets requirements and works as intended.
Use: Helps integrate testing into the overall software development lifecycle.
5. ISO 9001 (Quality Management Systems)
Overview: This is a general quality management standard that applies to any industry. While not specific to software testing, it
includes principles relevant to quality assurance in software testing.
Key Points:
• Emphasizes customer satisfaction, continuous improvement, and a process approach.
• Ensures that organizations establish a quality management system (QMS) that supports consistent testing practices.
Use: Helps organizations implement a framework for maintaining and improving the quality of their software testing processes.
6. ISTQB (International Software Testing Qualifications Board) Standards
Overview: Although ISTQB is known for certifying testers, it has developed a body of knowledge that outlines standardized best
practices for software testing.
Key Points:
• Defines testing levels (unit, integration, system, acceptance) and testing techniques (black-box, white-box, etc.).
• Provides common terminology and concepts for software testing professionals.
Use: Widely used for training and standardizing knowledge among software testing teams.
7. CMMI (Capability Maturity Model Integration)
Overview: A process improvement framework that helps organizations develop mature and efficient software development and
testing processes.
Key Points:
• Defines maturity levels for processes and emphasizes continuous process improvement.
• Assists organizations in assessing and improving their software testing practices to achieve higher quality outputs.
Use: Provides a structured approach for developing and refining testing processes.
Q.2) What are the needs/importance of testing standards?
Ans: 1. Consistency in Testing Practices
Purpose: Testing standards provide a structured approach and clear guidelines, ensuring that testing practices remain consistent
across teams and projects.
Benefit: Standardized practices help reduce variability in testing approaches, leading to more predictable outcomes and easier
collaboration between teams.
2. Improved Quality Assurance
Purpose: Standards outline best practices and methodologies for conducting thorough and effective testing.
Benefit: By adhering to these standards, organizations can enhance the quality of their software, resulting in fewer defects, better
user experiences, and increased customer satisfaction.
3. Enhanced Communication
Purpose: Standards create a common language and terminology for software testing, which helps improve communication among
stakeholders, including developers, testers, and project managers.
Benefit: Improved communication reduces misunderstandings and ensures that all parties involved have a clear understanding of
testing processes and expectations.
4. Regulatory Compliance
Purpose: Some industries, such as healthcare, finance, and aerospace, have strict regulatory requirements for software. Testing
standards help organizations meet these compliance requirements.
Benefit: Adhering to established testing standards ensures that software products comply with industry regulations, avoiding
potential legal and financial repercussions.
5. Efficiency and Cost Reduction
Purpose: Testing standards promote the use of efficient processes and the reuse of best practices.
Benefit: By following proven testing procedures, teams can avoid common pitfalls and reduce the time and resources spent on
testing, leading to cost savings and faster time-to-market.
6. Risk Management
Purpose: Standards help identify and mitigate risks associated with software failures by defining rigorous testing practices.
Benefit: Effective risk management ensures that critical defects are identified early in the development cycle, reducing the impact
of potential software issues.
7. Facilitation of Automation
Purpose: Testing standards provide a foundation for the development of automated testing frameworks and processes.
Benefit: With clear guidelines, teams can develop and implement automated tests more effectively, leading to continuous
integration and continuous deployment (CI/CD) practices.
8. Improved Documentation
Purpose: Testing standards often include requirements for documentation, such as test plans, test cases, and test reports.
Benefit: Comprehensive documentation ensures traceability, supports audits, and facilitates future maintenance and updates to the
software.
9. Training and Skill Development
Purpose: Standards serve as a foundation for training and certifying testers, helping them develop a clear understanding of testing
methodologies and best practices.
Benefit: This results in a skilled workforce that can conduct testing efficiently and effectively, improving overall project outcomes.
10. Benchmarking and Continuous Improvement
Purpose: Adhering to standards allows organizations to benchmark their testing practices against industry norms.
Benefit: This benchmarking can be used to assess the maturity of testing processes and guide continuous improvement efforts.
Q.3) What are the industry-specific testing standards?
Ans: 1. Healthcare:
IEC 62304 for medical device software life cycle processes.
ISO 13485 ensures quality management in medical devices.
2. Automotive:
ISO 26262 addresses functional safety for vehicle electronics.
AUTOSAR standardizes software architecture for component integration.
3. Aerospace:
DO-178C mandates rigorous testing for airborne systems.
MIL-STD-498 focuses on software development and documentation.
4. Finance:
PCI DSS secures payment card data.
SOX Compliance ensures financial system testing for transparency.
5. Telecommunications:
ETSI and ITU-T provide guidelines for network and interoperability testing.
6. Energy:
IEC 60880 for nuclear power plant safety software.
NERC CIP secures critical power infrastructure.
7. Retail:
ISO/IEC 27001 ensures data security in e-commerce.
ISO 8583 for reliable financial transaction messaging.