
STQA INSEM

Unit 1:

1. Compare between tools and techniques.[5]


Comparison of tools and techniques by aspect:
 Definition:
o Tools: Software or hardware used to perform specific tasks.
o Techniques: Methods or processes used to achieve certain goals.
 Examples:
o Tools: JIRA, Selenium, Git, Postman.
o Techniques: Agile methodology, regression testing, code review.
 Purpose:
o Tools: Facilitate the execution of tasks, automate processes, and manage activities.
o Techniques: Improve processes, solve problems, and enhance efficiency.
 Functionality:
o Tools: Provide specific functionalities like issue tracking, test automation, or code management.
o Techniques: Offer strategies and approaches for tasks like testing, development, and problem-solving.
 Usage:
o Tools: Applied to execute and manage particular functions or operations.
o Techniques: Applied as methods or practices to improve or optimize processes and performance.
 Complexity:
o Tools: Can range from simple to complex, depending on the software or hardware and its configuration.
o Techniques: Generally simpler in concept, focusing on methods and approaches rather than specific tools.
 Scope:
o Tools: Specific to certain tasks or aspects of a project, like test automation or version control.
o Techniques: Broader in application, encompassing various methods for improving work processes and quality.
 Examples of Use:
o Tools: JIRA for tracking project issues, Selenium for automating tests, Git for version control.
o Techniques: Agile methodology for project management, regression testing to ensure software stability, code review to improve code quality.

2. Explain defect life cycle with the help of diagram. [5]


 New: When a tester identifies a new defect, it is placed in the ‘New’ state, the first state of the Bug Life Cycle. The tester provides a proper defect document to the development team so that the developers can refer to it and fix the bug accordingly.
 Assigned: Once a defect in the ‘New’ state is approved, it is assigned to the development team to work on and resolve. When the defect is assigned to a developer, its status changes to ‘Assigned’.
 Open: In the ‘Open’ state the development team is actively working on fixing the defect. If, for a specific reason, the developers feel the defect is not appropriate, it is moved to either the ‘Rejected’ or the ‘Deferred’ state.
 Fixed: After making the necessary code changes and fixing the identified bug, the development team marks the state as ‘Fixed’.
 Pending Retest: Once the fix is complete, the development team passes the new code to the testing team for retesting. While the code/application is waiting to be retested on the tester’s side, the status is set to ‘Pending Retest’.
 Retest: At this stage, the tester retests the defect to check whether it has actually been fixed by the developer, and the status is marked as ‘Retesting’.
 Reopen: If, after retesting, the test team finds that the bug still persists even though the development team has fixed it, the status is changed to ‘Reopened’. The bug returns to the ‘Open’ state and goes through the life cycle again for re-fixing by the development team.
 Verified: The tester retests the bug after it has been fixed by the development team, and if no defect is found, the bug is considered fixed and the status is set to ‘Verified’.
 Closed: This is the final state of the defect life cycle. After the development team has fixed the defect and testing confirms that the bug is resolved and does not persist, the defect is marked as ‘Closed’.

More States:

 Rejected: If the development team feels that a defect is not genuine, they mark its status as ‘Rejected’. The cause of rejection may be any of three reasons: duplicate defect, not a defect, or non-reproducible.
 Deferred: Every defect has an impact level based on how severely it affects the software. If the development team feels that an identified defect is not a top priority and can be fixed in a later update or release, they can mark the status as ‘Deferred’. This means the defect is removed from the current defect life cycle.
 Duplicate: If a defect is reported twice, or is the same as another defect, it is marked as ‘Duplicate’ and then ‘Rejected’.
 Not a Defect: If the reported issue has no impact or effect on the functionality of the software, it is marked as ‘Not a Defect’ and ‘Rejected’.
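The states and transitions described above can be pictured as a simple state machine. Below is a minimal sketch in Python; the state names follow this life cycle, but the transition map and the move helper are illustrative assumptions rather than the interface of any real defect-tracking tool.

```python
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending Retest"
    RETEST = "Retest"
    REOPENED = "Reopened"
    VERIFIED = "Verified"
    CLOSED = "Closed"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"
    DUPLICATE = "Duplicate"

# Allowed transitions, following the life cycle described above.
TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED, DefectState.REJECTED, DefectState.DUPLICATE},
    DefectState.ASSIGNED: {DefectState.OPEN},
    DefectState.OPEN: {DefectState.FIXED, DefectState.REJECTED, DefectState.DEFERRED},
    DefectState.FIXED: {DefectState.PENDING_RETEST},
    DefectState.PENDING_RETEST: {DefectState.RETEST},
    DefectState.RETEST: {DefectState.VERIFIED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.OPEN},
    DefectState.VERIFIED: {DefectState.CLOSED},
}

def move(current: DefectState, target: DefectState) -> DefectState:
    """Change a defect's state, rejecting moves the life cycle does not allow."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# A defect that is fixed, retested, verified, and finally closed.
state = DefectState.NEW
for nxt in (DefectState.ASSIGNED, DefectState.OPEN, DefectState.FIXED,
            DefectState.PENDING_RETEST, DefectState.RETEST,
            DefectState.VERIFIED, DefectState.CLOSED):
    state = move(state, nxt)
print(state.value)  # Closed
```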
3. Define “Quality” as viewed by different stakeholders of software development and usage. [5]
1. End Users:
 Definition: For end users, quality means usability, reliability, and performance.
 Perspective: They expect the software to be easy to navigate, responsive, and free from errors or
crashes. Quality software for users is one that meets their needs and provides a smooth, intuitive
experience.
 Expectation: The software should perform its intended functions accurately and efficiently, without
requiring extensive effort to learn or use.
2. Developers:
 Definition: Developers view quality in terms of code quality, maintainability, and adherence to best
practices.
 Perspective: Quality software for developers is well-structured, modular, and easy to debug, extend,
and maintain. They value clear, concise code that follows coding standards and is documented
properly.
 Expectation: Developers expect the software to be built using efficient algorithms, with minimal
technical debt, and to be flexible enough to accommodate future changes.
3. Project Managers:
 Definition: Project managers see quality as meeting project requirements, timelines, and budget
constraints.
 Perspective: Quality for project managers is about delivering software that fulfills the agreed-upon
scope, within the planned schedule and cost. They focus on balancing quality with resource
management and risk mitigation.
 Expectation: Project managers expect the software to satisfy the client's requirements while being
delivered on time and within budget, with minimal rework or delays.
4. Quality Assurance (QA) Team:
 Definition: The QA team defines quality as the absence of defects, compliance with specifications,
and overall stability.
 Perspective: For QA professionals, quality means ensuring the software meets all functional and non-
functional requirements and is free of bugs or vulnerabilities. They emphasize thorough testing and
validation processes.
 Expectation: The QA team expects the software to perform consistently across different
environments, to be secure, and to conform to all specified standards and requirements.
5. Business Stakeholders:
 Definition: Business stakeholders, including clients and investors, see quality in terms of value, return
on investment (ROI), and market fit.
 Perspective: Quality for business stakeholders is about the software delivering tangible business
value, meeting market demands, and contributing to the organization's goals. They focus on the
software's ability to attract and retain users, generate revenue, and support business operations.
 Expectation: Business stakeholders expect the software to provide a competitive advantage, enhance
customer satisfaction, and be scalable and adaptable to future needs.

4. Describe “Total Quality Management” Principles of continual improvement. [5]


Total Quality Management (TQM) is a management approach focused on improving the quality of
products and services through continuous improvement, involving all members of an organization. One
of the core principles of TQM is Continual Improvement, which emphasizes the ongoing enhancement of
processes, products, and services. Here’s a detailed description of the principles of continual
improvement within TQM:
Principles of Continual Improvement in TQM:
1. Customer Focus:
o Definition: Continuously improving to meet and exceed customer expectations.
o Explanation: Organizations must understand customer needs and expectations and strive to enhance
their satisfaction through constant feedback and improvement. This involves adapting processes and
products to better serve customer requirements and preferences.
2. Employee Involvement:
o Definition: Engaging all employees in the improvement process.
o Explanation: Continual improvement requires contributions from everyone in the organization.
Employees at all levels should be encouraged to identify areas for improvement and participate in
problem-solving and innovation. This fosters a culture where everyone is committed to quality.
3. Process Approach:
o Definition: Improving organizational processes to enhance efficiency and effectiveness.
o Explanation: Focus on improving the processes that produce products or services. This involves
analyzing and optimizing workflows, reducing waste, and eliminating inefficiencies. A process-oriented
approach ensures that improvements are systematic and sustainable.
4. Integrated System:
o Definition: Ensuring all aspects of the organization work together toward common goals.
o Explanation: Continual improvement involves aligning all parts of the organization, including various
departments and processes, towards a unified quality management system. Integration helps in creating
a cohesive approach to quality improvement, where all systems support and enhance each other.
5. Data-Driven Decision Making:
o Definition: Using data and evidence to guide improvement efforts.
o Explanation: Decisions should be based on accurate and timely data rather than intuition or
assumptions. Continual improvement involves collecting, analyzing, and using data to identify problems,
measure performance, and evaluate the effectiveness of improvement actions.
6. Leadership Commitment:
o Definition: Ensuring top management is actively involved in and supports improvement initiatives.
o Explanation: Successful continual improvement requires strong leadership that demonstrates a
commitment to quality, provides necessary resources, and sets a vision for improvement. Leaders must
actively support and drive the quality improvement efforts throughout the organization.
7. Benchmarking:
o Definition: Comparing performance with industry best practices to identify areas for improvement.
o Explanation: Organizations should regularly compare their processes and performance with those of
industry leaders or competitors. Benchmarking helps in setting realistic improvement goals and adopting
best practices to enhance performance.
8. Focus on Long-Term Results:
o Definition: Prioritizing sustainable improvement over short-term gains.
o Explanation: Continual improvement should aim for long-term benefits rather than quick fixes. It
involves implementing changes that lead to lasting improvements in quality, efficiency, and
customer satisfaction.
5. Explain PDCA cycle. [5]

The Plan-Do-Check-Act (PDCA) cycle, also known as the Deming Cycle or Shewhart Cycle, is a
continuous improvement model used in business and management processes to ensure that changes are
effectively planned, implemented, evaluated, and standardized. It is widely applied in quality
management practices to drive ongoing improvement and enhance organizational efficiency. Here's a
detailed explanation of each phase in the PDCA cycle:

1. Plan:
 Objective: Identify an area for improvement and devise a detailed plan to address it.
 Actions:
o Identify Problems or Opportunities: Start by understanding the current state, identifying
problems, inefficiencies, or areas for enhancement.
o Set Objectives and Goals: Define clear, measurable goals for the improvement process.
What specific outcomes are you aiming to achieve?
o Develop a Plan: Outline the steps needed to achieve the desired improvement, including
assigning responsibilities, determining resources, and establishing timelines.
 Outcome: A comprehensive plan that provides a roadmap for the intended changes or
improvements.
2. Do:
 Objective: Implement the planned changes on a small, controlled scale.
 Actions:
o Execute the Plan: Carry out the planned changes, preferably in a pilot phase or within a small,
manageable scope.
o Collect Data: Monitor the process and gather data during the implementation to track
performance and identify any immediate issues.
o Document Findings: Record any observations, challenges, or deviations from the plan to
inform the next steps.
 Outcome: Initial insights and data from the implementation that can be analyzed to determine
the effectiveness of the change.
3. Check:
 Objective: Evaluate the results of the implemented changes against the original objectives.
 Actions:
o Analyze Data: Examine the data collected during the "Do" phase to assess whether the
change achieved the desired outcome.
o Compare Results: Compare actual results with the goals and benchmarks set during the
"Plan" phase.
o Identify Successes and Failures: Determine what aspects of the plan worked well and what
didn't, understanding the reasons behind these outcomes.
 Outcome: A clear evaluation of the change's impact, identifying whether the objectives were met
and what adjustments might be needed.
4. Act:
 Objective: Decide on the next steps based on the evaluation of the results.
 Actions:
o Standardize Successful Changes: If the change was successful, implement it on a broader
scale and integrate it into the standard operating procedures.
o Adjust or Re-plan: If the change did not yield the desired results, make necessary adjustments
or rethink the approach. The cycle may begin again with a new plan based on the insights
gained.
o Continuous Monitoring: Even after implementation, continue to monitor the process to
ensure sustained improvement and to catch any issues early.
 Outcome: Either the successful standardization of the change or a refined plan for further
improvement efforts.

6. Examine the relationship between Quality and Productivity. [5]


The relationship between Quality and Productivity is important in any industry, and understanding it can
help improve how well a company performs. Here’s a simpler look at how quality and productivity are
connected:
1. Quality Boosts Productivity:
 Fewer Mistakes: When processes are high-quality, there are fewer mistakes and less need to redo
work. This means less time and effort are spent fixing problems, which helps increase productivity.
 More Efficient Processes: Efforts to improve quality often make processes smoother and more
efficient. This leads to faster production and higher output, directly increasing productivity.
2. Productivity Affects Quality:
 Maintaining a Good Pace: Balancing productivity with quality ensures that employees aren’t
overwhelmed, which helps prevent mistakes and maintain high quality. A reasonable work pace
allows for better attention to detail.
 Using Resources Wisely: Effective productivity practices make sure that resources like time,
materials, and labor are used efficiently. This supports consistent quality by avoiding problems like
shortages that could lower standards.
3. Balancing Quality and Productivity:
 Speed vs. Quality: There’s often a trade-off between working fast (productivity) and maintaining
high quality. Rushing can lead to lower quality, while focusing too much on quality can slow things
down. Finding the right balance is key to improving both.
 Cost Savings: High-quality outputs reduce costs related to fixing defects or handling customer
returns. These savings can be used to boost productivity, creating a positive cycle where better
quality leads to higher productivity.
4. Long-Term Benefits:
 Steady Growth: Over time, focusing on quality can lead to happier customers and more loyalty,
which increases demand and productivity. Consistently delivering high-quality products or
services builds a good reputation and supports long-term growth.
 Continuous Improvement: Using continuous improvement methods (like the PDCA cycle) helps
gradually improve both quality and productivity. As processes get better, organizations can
produce higher quality products more efficiently.
5. Working Together:
 Quality Programs: Quality improvement programs like Total Quality Management (TQM) or Six
Sigma can also increase productivity. These programs aim to prevent defects and make processes
more efficient, improving both quality and productivity.
 Encouraging Innovation: Companies that focus on quality tend to encourage innovation and
learning. This culture not only improves quality but also finds new, more efficient ways to work,
boosting productivity.

7. Plan Software Quality control with respect to space research. [5]


The software used in space missions must be reliable, accurate, and secure. Here’s a simple plan for ensuring software quality in space research:
1. Requirement Analysis:
 Understand Mission Needs: Clearly define the software requirements based on the specific needs
of the space mission.
 Set Quality Standards: Establish strict quality standards that the software must meet, considering
the unique challenges of space environments.
 Risk Assessment: Identify potential risks that could affect the software’s performance in space,
such as extreme temperatures or radiation.
2. Design and Development:
 Follow Best Practices: Use proven software engineering practices to design and develop the
software, ensuring it is robust and reliable.
 Modular Design: Design the software in modular components so that individual parts can be
tested and updated without affecting the whole system.
 Security Measures: Implement strong security features to protect the software from potential
cyber threats, especially when dealing with satellite communications.
3. Testing and Validation:
 Simulate Space Conditions: Test the software in simulated space environments to ensure it can
handle the conditions it will face in orbit or beyond.
 Automated Testing: Use automated testing tools to conduct thorough and repeatable tests,
covering all aspects of the software.
 Failure Scenarios: Test the software against potential failure scenarios to ensure it can recover or
safely handle unexpected issues.
4. Quality Assurance:
 Code Reviews: Regularly review the software code to catch errors early and ensure adherence to
quality standards.
 Documentation: Maintain detailed documentation of the software development process,
including test results, to ensure traceability and accountability.
 Continuous Monitoring: After deployment, continuously monitor the software’s performance to
detect any issues in real-time and apply updates as needed.
5. Deployment and Maintenance:
 Controlled Deployment: Deploy the software in stages, starting with non-critical systems, to
minimize risks.
 Backup Plans: Prepare contingency plans, including backup software versions, in case of
unexpected failures.
 Regular Updates: Schedule regular updates and maintenance checks to keep the software
functioning at its best throughout the mission.

8. Define software quality. List & explain core component of quality. [5]
Software Quality refers to how well a software product meets the requirements, expectations, and
needs of its users. It is determined by how effectively the software performs its intended functions, how
easy it is to use, and how reliable and maintainable it is over time.
Core Components of Software Quality:
1. Functionality:
o Definition: Functionality is about how well the software performs its intended tasks and
meets the specified requirements.
o Explanation: It includes features like correctness, completeness, and suitability. The software
should produce the correct results and perform all required tasks accurately.
2. Usability:
o Definition: Usability refers to how easy and intuitive it is for users to interact with the
software.
o Explanation: It includes factors like user interface design, ease of learning, and user
satisfaction. High usability ensures that users can efficiently and comfortably use the software
to accomplish their tasks.
3. Reliability:
o Definition: Reliability is the ability of the software to perform consistently without failures
under specified conditions.
o Explanation: It includes aspects like fault tolerance, recovery from errors, and stability.
Reliable software operates correctly over time, even under unexpected conditions.
4. Efficiency:
o Definition: Efficiency refers to how well the software uses system resources like memory,
processing power, and time.
o Explanation: It includes performance and resource utilization. Efficient software runs quickly
and doesn't unnecessarily consume system resources, ensuring smooth operation.
5. Maintainability:
o Definition: Maintainability is about how easy it is to modify the software to fix defects,
improve performance, or adapt to new requirements.
o Explanation: It includes factors like modularity, reusability, and ease of understanding.
Software that is easy to maintain can be updated and improved over time with minimal effort.
6. Portability:
o Definition: Portability refers to the ease with which software can be transferred from one
environment or platform to another.
o Explanation: It includes adaptability and compatibility across different operating systems,
hardware, or devices. High portability ensures that the software can be easily installed and run
in different environments without extensive modification.
7. Security:
o Definition: Security is the ability of the software to protect data and resources from
unauthorized access, breaches, and other threats.
o Explanation: It includes features like confidentiality, integrity, and authentication. Secure
software ensures that sensitive information is protected and that the system is resistant to
attacks and vulnerabilities.
9. Give classification for different types of products. [5]
1. Consumer Products:
Products designed for personal use by individuals. These are bought and used by consumers in their
everyday lives.
 Durable Goods:
o Definition: Products that have a long lifespan and are not consumed quickly.
o Examples: Refrigerators, washing machines, and cars.
o Characteristics: Higher cost, used over a long period, and often require significant investment.
 Non-Durable Goods:
o Definition: Products that are consumed or used up quickly.
o Examples: Food, beverages, and toiletries.
o Characteristics: Lower cost, purchased frequently, and used up relatively quickly.
 Convenience Goods:
o Definition: Products that are bought frequently with minimal effort.
o Examples: Snacks, milk, and newspapers.
o Characteristics: Easily accessible, low cost, and purchased regularly.
 Shopping Goods:
o Definition: Products that require more effort and comparison before purchase.
o Examples: Clothing, electronics, and furniture.
o Characteristics: Higher cost, less frequent purchase, and often involve comparison shopping.
 Specialty Goods:
o Definition: Products with unique characteristics or brand identity that make them special.
o Examples: Luxury cars, designer clothes, and high-end watches.
o Characteristics: High cost, unique features, and often purchased infrequently.
2. Industrial Products:
Products used in the production of other goods or services and typically bought by businesses rather
than individual consumers.
 Capital Goods:
o Definition: Long-term assets used in the production of other goods and services.
o Examples: Machinery, factory equipment, and construction tools.
o Characteristics: High investment cost, long-term use, and critical to manufacturing processes.
 Raw Materials:
o Definition: Basic materials used to produce finished goods.
o Examples: Steel, wood, and chemicals.
o Characteristics: Essential for production, sourced in bulk, and transformed into final products.
 Component Parts:
o Definition: Parts that are used as components in the assembly of final products.
o Examples: Microchips, engine parts, and screws.
o Characteristics: Often purchased in large quantities, integrated into other products.
 Supplies:
o Definition: Items used in the daily operations of a business but not part of the final product.
o Examples: Office supplies, cleaning materials, and lubricants.
o Characteristics: Regularly purchased, support operational activities, and generally low-cost.
 Business Services:
o Definition: Intangible products that support business operations.
o Examples: Consulting services, IT support, and legal services.
o Characteristics: Not physical products, often customized, and critical for business functions.
10. What are the constraints of software product quality assessment? [5]
1. Subjective Evaluation:
o Different Opinions: People like developers, users, and managers might have different ideas
about what "quality" means, making it hard to agree on the assessment.
o Hard to Measure: Some quality aspects, like how easy the software is to use, are difficult to
measure clearly.
2. Changing or Unclear Requirements:
o Changing Needs: If the software requirements keep changing, it’s hard to assess the quality
because the target keeps moving.
o Unclear Goals: If the goals of the software aren’t clear, it’s tough to judge if the software
meets its purpose.
3. Limited Resources:
o Not Enough Time: There may not be enough time to fully test the software, so some quality
issues might be missed.
o Budget Limits: Limited money can mean less thorough testing, which can affect how well the
software’s quality is assessed.
4. Complex Software:
o Big or Complicated Systems: Assessing large or complex software can be challenging because
there are many parts to consider.
o Integration Issues: Making sure different parts of the software work together smoothly adds
to the difficulty.
5. Technology Limits:
o Tool Restrictions: The tools used for testing might not be perfect or might miss certain
problems.
o Different Platforms: Software may behave differently on various platforms, making it harder
to assess its quality consistently.
6. Human Factors:
o Tester Skill Levels: The quality of the assessment can depend on the skill and experience of
the testers. Inexperienced testers might miss important issues.
o Bias and Errors: Testers might have biases or make mistakes, which can affect the accuracy of
the quality assessment.

11. Plan software quality control with respect to college attendance software. [5]
Here’s a simple plan for software quality control specific to a college attendance software system:
1. Define Requirements:
 Gather Requirements: Clearly document what the software needs to do, such as tracking student
attendance, generating reports, and integrating with other college systems.
 Set Quality Standards: Establish standards for accuracy, reliability, and usability based on the
requirements to ensure the software meets expectations.
2. Design and Development:
 Follow Best Practices: Use proven software development methods to design and build the system,
ensuring it is reliable and meets the defined requirements.
 Modular Design: Develop the software in separate modules (e.g., student management, report
generation) so each can be tested independently.
3. Testing:
 Functional Testing: Test each feature to ensure it works as intended, such as marking attendance,
generating reports, and notifying students and staff.
 User Acceptance Testing (UAT): Involve actual users (e.g., teachers, administrative staff) in testing to
ensure the software meets their needs and is user-friendly.
4. Quality Assurance:
 Code Reviews: Regularly review the code for errors, adherence to standards, and overall quality.
 Documentation: Maintain detailed records of the development process, including test results and any
issues found, to ensure traceability and accountability.
5. Deployment and Maintenance:
 Controlled Deployment: Roll out the software in stages, starting with a pilot phase to catch any issues
before full deployment.
 Ongoing Support: Provide regular updates and maintenance to address any issues, improve features,
and ensure the software remains effective and secure.
Unit 2
1. Write test cases for login validation. [5]
When writing test cases for login validation, the goal is to ensure that the login functionality is secure,
reliable, and behaves as expected in different scenarios. Below are five test cases for login validation:
Test Case 1: Valid Login
 Objective: Verify that the user can log in successfully with a valid username and password.
 Steps:
1. Navigate to the login page.
2. Enter a valid username.
3. Enter the correct password corresponding to the username.
4. Click the "Login" button.
 Expected Result: The user is successfully logged in and redirected to the homepage or dashboard.
Test Case 2: Invalid Username
 Objective: Verify that the login fails when an invalid username is entered.
 Steps:
1. Navigate to the login page.
2. Enter an invalid username.
3. Enter a valid password.
4. Click the "Login" button.
 Expected Result: The system displays an error message indicating that the username is incorrect, and
the user is not logged in.
Test Case 3: Invalid Password
 Objective: Verify that the login fails when an incorrect password is entered with a valid username.
 Steps:
1. Navigate to the login page.
2. Enter a valid username.
3. Enter an incorrect password.
4. Click the "Login" button.
 Expected Result: The system displays an error message indicating that the password is incorrect, and
the user is not logged in.
Test Case 4: Empty Fields
 Objective: Verify that the login fails when the username or password field is left empty.
 Steps:
1. Navigate to the login page.
2. Leave the username and/or password field empty.
3. Click the "Login" button.
 Expected Result: The system displays an error message prompting the user to fill in the required
fields, and the login attempt is not successful.
Test Case 5: Password Masking
 Objective: Verify that the password is masked when entered into the password field.
 Steps:
1. Navigate to the login page.
2. Enter any text in the password field.
 Expected Result: The entered password is masked (e.g., displayed as dots or asterisks) to prevent
visibility to others.
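The manual test cases above can also be automated. Below is a minimal sketch using pytest and Selenium WebDriver covering Test Case 1 and Test Case 3; the URL, the element locators (username, password, login, error-message), and the credentials are hypothetical and would need to match the real application.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

LOGIN_URL = "https://example.com/login"  # hypothetical URL of the application under test

@pytest.fixture
def browser():
    driver = webdriver.Chrome()
    yield driver
    driver.quit()

def do_login(driver, username, password):
    """Fill the (assumed) login form and submit it."""
    driver.get(LOGIN_URL)
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "login").click()

def test_valid_login(browser):
    # Test Case 1: valid credentials should land on the dashboard.
    do_login(browser, "valid_user", "correct_password")
    assert "/dashboard" in browser.current_url

def test_invalid_password(browser):
    # Test Case 3: a wrong password should show an error and keep the user logged out.
    do_login(browser, "valid_user", "wrong_password")
    error = browser.find_element(By.CLASS_NAME, "error-message")
    assert "incorrect" in error.text.lower()
    assert "/dashboard" not in browser.current_url
```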
2. What are the entry & exit criteria of testing? [5]
1. Entry Criteria:
Entry criteria specify the prerequisites that must be fulfilled before testing activities can start. These
conditions ensure that the test environment, documentation, and necessary resources are in place for
effective testing. Common entry criteria include:
 Requirements are Finalized: The software requirements specification (SRS) or user stories are
complete, reviewed, and approved. This ensures that testers understand what needs to be tested.
 Test Environment is Ready: The testing environment (including hardware, software, network, and
tools) is set up, configured, and validated, ensuring it mirrors the production environment as closely
as possible.
 Test Data is Prepared: The necessary test data is identified, created, and available for use in testing.
This data should be accurate, complete, and relevant to the test cases.
 Test Plan and Test Cases are Approved: The test plan, along with test cases and test scripts, is
documented, reviewed, and approved. This ensures that the scope, objectives, and approach of
testing are clearly defined.
 Build is Delivered: The software build or the module to be tested is delivered and has passed smoke
testing to ensure its stability for further testing.
2. Exit Criteria:
Exit criteria define the conditions that must be met for testing activities to be considered complete. These
criteria ensure that the software has been tested sufficiently and meets the quality standards before
being released. Common exit criteria include:
 Test Case Execution is Complete: All planned test cases have been executed, and the pass/fail status
of each test case is documented. A high percentage of test cases should have passed.
 Defects are Resolved: All critical and major defects identified during testing have been fixed, retested,
and closed. Any remaining defects are minor or low-priority and have been accepted by stakeholders.
 Test Coverage is Satisfactory: The planned test coverage has been achieved, ensuring that all critical
functionalities and requirements have been tested. Code coverage tools may also confirm that
sufficient parts of the code have been tested.
 Test Summary Report is Prepared: A comprehensive test summary report has been prepared and
reviewed, documenting the testing activities, results, and any remaining risks.
 Stakeholder Approval: All relevant stakeholders, including QA leads, project managers, and product
owners, have reviewed and approved the testing outcomes, and agree that the software is ready for
release.

3. Explain use case testing with one example. [5]


Definition: Use Case Testing is a testing technique that evaluates the functionality of a software
application by testing specific use cases. A use case represents a scenario where a user interacts with
the system to achieve a particular goal. This method ensures that the software behaves as expected
in real-world situations that users might encounter.
i. Identify the Use Case:
 Use Case: "A teacher marks student attendance."
 Goal: Ensure the teacher can correctly mark and save attendance for a class.

ii. Create Test Cases:


 Test Case 1: The teacher logs into the system and selects the class for which they want to mark
attendance.
o Expected Result: The system should display a list of students for the selected class.
 Test Case 2: The teacher marks a student as present and another as absent.
o Expected Result: The system should update the attendance status correctly and save these
changes.
 Test Case 3: The teacher submits the attendance record.
o Expected Result: The system should confirm that the attendance has been saved and update
the records.
iii. Execute Test Cases:
 Test each of the cases above to see if the software performs as expected.
iv. Review Results:
 Check if the software correctly handles each part of the use case and make sure it meets the user's
needs.
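As an illustration, the second test case above could be automated at the unit/API level. The sketch below assumes a hypothetical AttendanceSystem class with mark and get_status methods; real attendance software would expose its own interface.

```python
class AttendanceSystem:
    """Minimal stand-in for the attendance module under test."""
    def __init__(self):
        self.records = {}

    def mark(self, class_id, student_id, status):
        self.records[(class_id, student_id)] = status

    def get_status(self, class_id, student_id):
        return self.records.get((class_id, student_id))

def test_teacher_marks_attendance():
    # Use case step: the teacher marks one student present and another absent.
    system = AttendanceSystem()
    system.mark("CS101", "student_1", "present")
    system.mark("CS101", "student_2", "absent")

    # Expected result: both statuses are saved correctly.
    assert system.get_status("CS101", "student_1") == "present"
    assert system.get_status("CS101", "student_2") == "absent"
```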

4. How can you classify the role of software? [5]


1. System Software:
o Definition: Software designed to manage and control the hardware components of a
computer, providing a platform for running application software.
o Examples: Operating systems like Windows, Linux, macOS, device drivers, and utility
programs.
o Role: It serves as the foundational layer that allows hardware to function and facilitates the
execution of application software. It manages hardware resources and provides an interface
for users to interact with the system.
2. Application Software:
o Definition: Software created to perform specific tasks or applications for end-users.
o Examples: Word processors like Microsoft Word, web browsers like Chrome and Firefox, and
database management systems like Oracle and MySQL.
o Role: It assists users in performing particular tasks or solving specific problems, such as writing
documents, browsing the internet, or managing data. Application software relies on system
software to interact with hardware and deliver the necessary functionality.
3. Middleware:
o Definition: Software that acts as an intermediary between system software and application
software or between different applications.
o Examples: Database middleware, message brokers like RabbitMQ, and API gateways.
o Role: Middleware facilitates communication and data exchange between different software
components or systems. It ensures that applications can interact and share information
seamlessly, often in distributed or networked environments.
4. Development Software:
o Definition: Tools used to create, develop, and maintain other software.
o Examples: Integrated Development Environments (IDEs) like Visual Studio and Eclipse,
compilers, and version control systems like Git.
o Role: Development software provides the necessary tools and environments for software
developers to write, test, debug, and manage code. It supports the entire software
development lifecycle, helping developers produce high-quality software.
5. Embedded Software:
o Definition: Software specifically designed to operate hardware devices with dedicated
functionality.
o Examples: Firmware in embedded systems found in smartphones, household appliances, and
automotive systems.
o Role: Embedded software runs on specialized hardware to control specific device functions. It
is tailored to meet the particular needs of the hardware and typically operates in real-time
environments to ensure optimal performance.

5. Analyse test policy & test strategy, which are included in test documentation. [5]
Test Policy
Definition:
 A test policy is a high-level document that outlines the overall approach and principles for testing
within an organization or project. It offers a broad, organizational view of testing practices and goals.
Key Aspects:
1. Purpose and Scope:
o Purpose: Defines the goals of testing, such as ensuring software quality, maintaining
compliance with standards, and managing risks.
o Scope: Specifies the areas the policy covers, such as all software projects within the
organization or specific types of testing.
2. Testing Principles:
o Principles: Outlines the core values and guiding principles for testing, like the importance of
early testing, thorough documentation, and ongoing improvement.
o Example: A policy might stress that all software must undergo regression testing before being
released.
3. Roles and Responsibilities:
o Roles: Specifies who is responsible for different testing activities, including test managers,
testers, and developers.
o Responsibilities: Details each role's responsibilities in the testing process, such as planning,
execution, and reporting.
4. Testing Standards and Guidelines:
o Standards: Describes the standards to follow, such as industry standards (e.g., ISO/IEC 29119)
or organizational norms.
o Guidelines: Provides general guidelines for creating test plans, executing tests, and reporting
defects.
5. Policy Review and Updates:
o Review: Specifies how often the test policy should be reviewed and updated to ensure it
remains relevant and effective.
o Updates: Describes the process for updating the policy based on new developments or
feedback.

Test Strategy
Definition:
 A test strategy is a detailed plan outlining the approach and methods for testing a specific project or
system. It provides a roadmap for how testing will be conducted to meet the project’s objectives.
Key Aspects:
1. Test Objectives:
o Objectives: Defines what the testing aims to achieve, such as verifying functionality, ensuring
performance, or validating security.
o Example: The strategy might aim to ensure that the software meets all functional
requirements and performs well under expected load conditions.
2. Testing Methods and Techniques:
o Methods: Outlines the testing methods to be used, like manual testing, automated testing, or
performance testing.
o Techniques: Specifies techniques such as black-box testing, white-box testing, and exploratory
testing.
o Example: The strategy might detail using automated tests for regression testing and manual
tests for exploratory testing.
3. Risk Management:
o Risks: Identifies potential risks and challenges in the testing process and outlines mitigation
strategies.
o Example: The strategy might address risks like incomplete requirements or tight deadlines and
propose solutions such as prioritized testing.
4. Test Schedule and Milestones:
o Schedule: Provides a timeline for testing activities, including key milestones and deadlines.
o Milestones: Highlights significant events like the completion of test planning, the start of test
execution, and the final test report delivery.
5. Test Scope:
o Scope: Details what will and won’t be tested, including specific features, functionalities, and
components.
o Example: The strategy might specify that unit testing will cover all code modules, while
integration testing will focus on the interactions between modules.

Comparison of test strategy, test plan, and test policy by aspect:
 Definition:
o Test Strategy: A high-level document that outlines the overall testing approach and objectives.
o Test Plan: A detailed document describing specific testing tasks and activities for a project.
o Test Policy: A high-level document outlining the testing principles and approach for the organization.
 Purpose:
o Test Strategy: Provides a roadmap for the testing process to align with project goals.
o Test Plan: Serves as a guide for the testing team to ensure testing is executed as planned.
o Test Policy: Defines the testing goals and principles for the entire organization or project.
 Scope:
o Test Strategy: Broad, covering the testing approach for the entire project or system.
o Test Plan: Focused on specific project tasks and phases, including the features and items to be tested.
o Test Policy: Organization-wide or project-wide, covering all testing activities and standards.
 Detail Level:
o Test Strategy: High-level, focusing on the overall approach and testing methods.
o Test Plan: Detailed, specifying individual test cases, test items, schedule, and resources.
o Test Policy: High-level, focused on testing principles and long-term goals.
 Focus:
o Test Strategy: The testing approach, techniques, and types of testing.
o Test Plan: Executing specific tests and managing the testing process.
o Test Policy: General testing goals and long-term quality strategies.
 Content:
o Test Strategy: Includes objectives, scope, risk management, test types, and tools.
o Test Plan: Includes features to be tested, responsibilities, test environment, and timeline.
o Test Policy: Includes testing goals, principles, standards, and roles/responsibilities.
 Responsibility:
o Test Strategy: Usually created by test managers or leads at the project level.
o Test Plan: Created by the test manager for a specific project or phase of development.
o Test Policy: Defined by senior management or organizational heads for all projects.

6. Explain types of test Artifacts. [5]


 Test Strategy:
 Definition: A high-level plan that explains the overall approach and goals for testing in a project or
organization.
 Content: It includes the testing goals, what will be tested, the schedule, resources, testing methods,
tools, risk management, and criteria for starting and ending testing.
 Purpose: The test strategy gives a clear plan for testing, ensuring everything is aligned with the
project’s objectives and stakeholder expectations.
 Test Plan:
 Definition: A detailed document outlining the specific testing activities, tasks, and deliverables for a
project.
 Content: It includes the testing scope, goals, environment, schedule, resources, responsibilities, items
to be tested, and any constraints.
 Purpose: The test plan guides the testing team, ensuring all testing is well-organized and done
effectively to meet the project’s goals.
 Test Scenario:
 Definition: A high-level description of what will be tested, focusing on a particular use case or
software feature.
 Content: Each scenario covers a specific business need or user action, describing how users might
interact with the software.
 Purpose: Test scenarios help visualize how the software will be used, making sure the testing is
thorough and reflects real-world use.
 Test Case:
 Definition: A detailed, step-by-step guide on how to test a specific software feature.
 Content: It includes the test case ID, description, preconditions, steps, expected and actual results,
and any conditions after testing. It may also list the test data and environment.
 Purpose: Test cases provide clear instructions for testers to ensure each function is tested correctly
and results are properly recorded.
 Traceability Matrix:
 Definition: A document that links software requirements with their corresponding test cases.
 Content: It usually includes columns for requirement IDs, test case IDs, and the status of the tests.
 Purpose: The traceability matrix ensures all requirements are tested, helping identify any missing tests and confirming that the software meets its requirements (a small example follows at the end of this list).
 Software Test Report:
 Definition: A summary of the testing results, including successes, failures, and overall test coverage.
 Content: It shows how many test cases were executed, passed, failed, and skipped, along with defect
statistics, key findings, and recommendations.
 Purpose: The test report informs stakeholders about the quality of the software, providing insights on
whether it’s ready for release.
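A minimal illustration of the traceability matrix mentioned above, kept here as a small Python structure; the requirement and test-case IDs are invented purely for the example.

```python
# Each requirement is mapped to the test cases that cover it, with a status.
traceability_matrix = {
    "REQ-001 (user login)":      [("TC-01", "Pass"), ("TC-02", "Pass")],
    "REQ-002 (mark attendance)": [("TC-05", "Fail")],
    "REQ-003 (generate report)": [],  # no test case yet -> a coverage gap
}

# A quick check for requirements that have no covering test case.
uncovered = [req for req, cases in traceability_matrix.items() if not cases]
print("Requirements without test coverage:", uncovered)
```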

7. Differentiate between test plan & test strategy.[5]

Comparison of test plan and test strategy by aspect:
 Definition:
o Test Plan: A detailed document outlining specific testing activities, tasks, and deliverables for a particular project or phase.
o Test Strategy: A high-level document that defines the overall approach, objectives, and methods for testing across the entire project or organization.
 Scope:
o Test Plan: Project-specific, covering a single project or testing phase.
o Test Strategy: Organization-wide or across multiple projects, defining the overall approach to testing.
 Content:
o Test Plan: Includes scope, objectives, test items, features to be tested, environment, schedule, resources, responsibilities, and risks.
o Test Strategy: Includes testing objectives, types of testing, test levels, test environment, tools, and risk management at a strategic level.
 Level of Detail:
o Test Plan: Highly detailed, with specific instructions and plans for execution.
o Test Strategy: High-level, providing general guidelines without detailed instructions.
 Responsibility:
o Test Plan: Usually prepared by the Test Lead or Test Manager for a specific project.
o Test Strategy: Typically prepared by higher-level management, such as QA Managers or stakeholders, to guide overall testing efforts.
 Flexibility:
o Test Plan: May need to be updated frequently as project requirements change.
o Test Strategy: Less frequently updated, as it provides a broad strategy that remains consistent across projects.
 Purpose:
o Test Plan: Guides the testing team in executing specific tasks within a project, ensuring all aspects are covered.
o Test Strategy: Sets the overall direction for testing activities, ensuring consistency and alignment with organizational goals.

8. Justify: [5]
i) Green money: cost of prevention.
Definition: Green money represents the investment in preventive measures to avoid defects and
issues in software development or other processes.
Justification:
 Prevents Issues: Investing in preventive measures such as thorough planning, quality assurance,
and early testing helps in identifying and addressing potential issues before they become major
problems.
 Reduces Long-Term Costs: By addressing issues early, you reduce the likelihood of costly rework,
fixes, and customer complaints later in the process.
 Improves Quality: Preventive actions lead to higher quality products or services, which can
enhance customer satisfaction and reduce the need for corrections and revisions.
 Enhances Efficiency: Spending on prevention often leads to more efficient processes and
smoother project execution, saving time and resources in the long run.
ii) Red money: cost of failure.
Definition: Red money represents the costs associated with defects or failures that occur after the
product or service has been delivered.
Justification:
 Increased Costs: Failures or defects often result in higher costs due to the need for rework,
patching, and fixing problems after the fact. This can be more expensive than addressing issues
during the early stages.
 Customer Dissatisfaction: Defects and failures can lead to poor customer experiences, which
might result in loss of trust, refunds, or damage to the company’s reputation.
 Operational Disruptions: Issues that arise after delivery can disrupt operations, causing delays
and additional costs to fix the problems.
 Legal and Compliance Issues: In some cases, defects can lead to legal consequences, compliance
issues, or regulatory penalties, adding to the overall cost of failure.

9. Discuss Integration testing and Acceptance testing. [5]


Integration Testing
Definition: Integration Testing is a phase in software testing where individual units or components of a
software application are combined and tested together. The goal is to ensure that these components
work as expected when integrated with each other.
Key Points:
1. Objective:
o Definition: To verify that different parts of the application work together correctly. Integration
testing checks the interfaces and interactions between components to ensure they function
together as intended.
2. Scope:
o Definition: Tests interactions between integrated units or modules. Unlike unit testing, which
focuses on individual components in isolation, integration testing combines several
components to test their collaboration.
3. Types:
o Definition: Includes various approaches such as incremental integration, big-bang integration,
and top-down or bottom-up integration. For example, incremental integration tests
components one by one as they are added, while big-bang integration tests all components
together at once.
4. Focus Areas:
o Definition: Focuses on data flow, control flow, and interactions between modules. Ensures
that data passed between modules is handled correctly and that control flows between
components function without errors.
5. Challenges:
o Definition: Can be complex due to interactions between multiple components. Integration
testing may reveal issues related to how components work together, such as data mismatches,
interface problems, or unexpected behavior.

6. Examples:
o Definition: Testing the integration of a payment gateway with an e-commerce system. Ensures
that when a user makes a payment, the payment system communicates correctly with the
order processing and inventory management systems.
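As a sketch of the payment-gateway example, the test below exercises two components through the service that integrates them, rather than testing either one in isolation. The Inventory, PaymentGateway, and OrderService classes are invented stand-ins used only to show the idea.

```python
class Inventory:
    def __init__(self, stock):
        self.stock = stock

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise ValueError("insufficient stock")
        self.stock[item] -= qty

class PaymentGateway:
    def charge(self, amount):
        # A real gateway would call an external API; this stub always approves.
        return {"status": "approved", "amount": amount}

class OrderService:
    """Coordinates inventory and payment; this is the integration point under test."""
    def __init__(self, inventory, gateway):
        self.inventory = inventory
        self.gateway = gateway

    def place_order(self, item, qty, price):
        self.inventory.reserve(item, qty)        # component 1
        return self.gateway.charge(qty * price)  # component 2

def test_order_payment_and_inventory_work_together():
    inventory = Inventory({"book": 5})
    service = OrderService(inventory, PaymentGateway())
    receipt = service.place_order("book", 2, 10.0)
    # Both components interacted correctly: payment approved and stock reduced.
    assert receipt["status"] == "approved"
    assert inventory.stock["book"] == 3
```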
Acceptance Testing
Definition: Acceptance Testing is a phase in software testing where the software is evaluated to ensure it
meets business requirements and is ready for delivery to the customer. The main goal is to validate that
the software is acceptable to end-users or stakeholders.
Key Points:
1. Objective:
o Definition: To ensure the software meets user requirements and is ready for deployment.
Acceptance testing verifies that the software satisfies the business needs and user
expectations.
2. Scope:
o Definition: Tests the software against predefined acceptance criteria. It checks whether the
software performs its intended functions correctly and meets the agreed-upon requirements
from a user’s perspective.
3. Types:
o Definition: Includes user acceptance testing (UAT), alpha testing, and beta testing. UAT is
performed by actual users in a real-world environment, alpha testing is done by internal
teams, and beta testing involves a limited release to external users for feedback.
4. Focus Areas:
o Definition: Focuses on usability, functionality, and compliance with business requirements.
o Explanation: Ensures that the software is user-friendly, performs required functions correctly,
and meets all specified business and regulatory requirements.
5. Challenges:
o Definition: Can be impacted by unclear requirements or changing user needs.
o Explanation: Acceptance testing may face difficulties if requirements are not well-defined or if
there are discrepancies between what is delivered and what users expected.
6. Examples:
o Definition: Testing a new CRM system to ensure it meets the needs of the sales team.
o Explanation: Involves checking whether the system supports the sales processes, integrates
with other tools used by the team, and provides the necessary reporting and data
management features.

10. Define and Explain configuration management. [5]


Definition: Configuration Management (CM) is a systematic process for managing and controlling the
changes and updates to software, hardware, and other related components in a project or system. Its
goal is to ensure that the system operates as intended and that all components are consistent, reliable,
and up-to-date.
Explanation:
1. Purpose:
o Maintain Consistency: Ensures that the system components (software, hardware,
documentation) are consistent and operate correctly together.
o Control Changes: Manages changes to prevent unintended disruptions and maintain system
integrity.

2. Key Components:
o Configuration Identification: Defines and documents the configuration of system components
and their relationships. This includes identifying what needs to be controlled and monitored.
o Configuration Control: Manages changes to the configuration items. It involves processes for
requesting, reviewing, and approving changes.
o Configuration Status Accounting: Keeps track of the status of configuration items, including
their versions and changes. This provides a historical record of all changes and configurations.
o Configuration Audits: Regularly checks and verifies that the configuration items conform to
their specifications and are correctly documented. This ensures the system’s configuration
meets the required standards and quality.
3. Processes:
o Planning: Develop a configuration management plan outlining how configuration items will be
identified, controlled, and audited.
o Implementation: Apply the configuration management processes to manage and control
changes throughout the lifecycle of the system.
o Review: Regularly review the configuration management processes and make adjustments as
needed to ensure effectiveness and efficiency.
4. Benefits:
o Improved Quality: Ensures that all changes are properly reviewed and tested, reducing the
risk of defects and ensuring the system’s reliability.
o Better Documentation: Provides accurate and up-to-date documentation of all configuration
items and changes, which is crucial for maintaining the system.
o Enhanced Control: Helps in managing and controlling changes to prevent unauthorized or
unintended modifications, reducing the risk of disruptions.
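A small sketch of configuration status accounting and configuration control expressed in code; the item names, version numbers, and the approval flag are illustrative assumptions, not the interface of any real CM tool.

```python
# Configuration status accounting: each item records its current version and history.
config_items = {
    "attendance-service": {"version": "1.2.0", "history": ["1.0.0", "1.1.0", "1.2.0"]},
    "user-manual":        {"version": "1.1.0", "history": ["1.0.0", "1.1.0"]},
}

def apply_change(item, new_version, approved):
    """Configuration control: only approved changes may update an item."""
    if not approved:
        raise PermissionError(f"Change to {item} was not approved by the change board")
    entry = config_items[item]
    entry["history"].append(new_version)
    entry["version"] = new_version

apply_change("attendance-service", "1.3.0", approved=True)
print(config_items["attendance-service"]["version"])  # 1.3.0
```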

11. Differentiate between verification and validation. [5]

Comparison of verification and validation by aspect:
 Definition:
o Verification: The set of activities that ensure the software correctly implements a specific function.
o Validation: The set of activities that ensure the software that has been built is traceable to customer requirements.
 Focus:
o Verification: Checking documents, designs, code, and programs.
o Validation: Testing and validating the actual product.
 Type of Testing:
o Verification: Static testing.
o Validation: Dynamic testing.
 Execution:
o Verification: Does not include execution of the code.
o Validation: Includes execution of the code.
 Methods Used:
o Verification: Reviews, walkthroughs, inspections, and desk-checking.
o Validation: Black-box testing, white-box testing, and non-functional testing.
 Purpose:
o Verification: Checks whether the software conforms to its specifications.
o Validation: Checks whether the software meets the requirements and expectations of the customer.
 Bug Detection:
o Verification: Can find bugs in the early stages of development.
o Validation: Can only find the bugs that could not be found by the verification process.
 Goal:
o Verification: The goal is the application and software architecture and specification.
o Validation: The goal is the actual product.
 Responsibility:
o Verification: Performed by the quality assurance team.
o Validation: Executed on the software code with the help of the testing team.
 Human or Computer:
o Verification: Consists of checking documents and files and is performed by humans.
o Validation: Consists of executing the program and is performed by the computer.
 Error Focus:
o Verification: Aims at prevention of errors.
o Validation: Aims at detection of errors.
 Performance:
o Verification: Finds about 50 to 60% of the defects.
o Validation: Finds about 20 to 30% of the defects.
 Stability:
o Verification: Based on the opinion of the reviewer and may change from person to person.
o Validation: Based on fact and is generally more stable.
