Stqa Insem
Unit 1:
More States:
Rejected: If the development team feels that a reported defect is not a genuine defect, they reject it and mark its status as ‘Rejected’. The cause of rejection is usually one of three: the defect is a duplicate, it is not a defect, or it is not reproducible.
Deferred: Every defect has some impact on the software, and each defect is given a priority level based on that impact. If the development team feels that an identified defect is not a high priority and can be fixed in a later update or release, they mark its status as ‘Deferred’. This means the defect is removed from the current defect life cycle.
Duplicate: If a defect is reported twice, or is the same as another existing defect, it is marked as ‘Duplicate’ and then moved to ‘Rejected’.
Not a Defect: If the reported issue has no impact or effect on the other functions of the software, it is marked as ‘Not a Defect’ and then ‘Rejected’ (a small state-machine sketch of these transitions follows below).
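A minimal sketch of how these defect states and their allowed transitions could be modelled, assuming the usual earlier states (New, Assigned, Fixed, Closed) that precede the states described above; the class and transition table are purely illustrative and not taken from any particular defect-tracking tool:

```python
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    CLOSED = "Closed"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"
    DUPLICATE = "Duplicate"
    NOT_A_DEFECT = "Not a Defect"

# Allowed transitions; Duplicate and Not a Defect both end in Rejected,
# as described in the notes above.
TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED, DefectState.REJECTED,
                      DefectState.DEFERRED, DefectState.DUPLICATE,
                      DefectState.NOT_A_DEFECT},
    DefectState.ASSIGNED: {DefectState.FIXED, DefectState.DEFERRED},
    DefectState.FIXED: {DefectState.CLOSED},
    DefectState.DUPLICATE: {DefectState.REJECTED},
    DefectState.NOT_A_DEFECT: {DefectState.REJECTED},
}

def move(current: DefectState, target: DefectState) -> DefectState:
    """Return the target state if the transition is allowed, else raise."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# Example: a defect reported twice is marked Duplicate and then Rejected.
state = move(DefectState.NEW, DefectState.DUPLICATE)
state = move(state, DefectState.REJECTED)
```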
3. Define “Quality” as viewed by different stakeholders of software development and usage. [5]
1. End Users:
Definition: For end users, quality means usability, reliability, and performance.
Perspective: They expect the software to be easy to navigate, responsive, and free from errors or
crashes. Quality software for users is one that meets their needs and provides a smooth, intuitive
experience.
Expectation: The software should perform its intended functions accurately and efficiently, without
requiring extensive effort to learn or use.
2. Developers:
Definition: Developers view quality in terms of code quality, maintainability, and adherence to best
practices.
Perspective: Quality software for developers is well-structured, modular, and easy to debug, extend,
and maintain. They value clear, concise code that follows coding standards and is documented
properly.
Expectation: Developers expect the software to be built using efficient algorithms, with minimal
technical debt, and to be flexible enough to accommodate future changes.
3. Project Managers:
Definition: Project managers see quality as meeting project requirements, timelines, and budget
constraints.
Perspective: Quality for project managers is about delivering software that fulfills the agreed-upon
scope, within the planned schedule and cost. They focus on balancing quality with resource
management and risk mitigation.
Expectation: Project managers expect the software to satisfy the client's requirements while being
delivered on time and within budget, with minimal rework or delays.
4. Quality Assurance (QA) Team:
Definition: The QA team defines quality as the absence of defects, compliance with specifications,
and overall stability.
Perspective: For QA professionals, quality means ensuring the software meets all functional and non-
functional requirements and is free of bugs or vulnerabilities. They emphasize thorough testing and
validation processes.
Expectation: The QA team expects the software to perform consistently across different
environments, to be secure, and to conform to all specified standards and requirements.
5. Business Stakeholders:
Definition: Business stakeholders, including clients and investors, see quality in terms of value, return
on investment (ROI), and market fit.
Perspective: Quality for business stakeholders is about the software delivering tangible business
value, meeting market demands, and contributing to the organization's goals. They focus on the
software's ability to attract and retain users, generate revenue, and support business operations.
Expectation: Business stakeholders expect the software to provide a competitive advantage, enhance
customer satisfaction, and be scalable and adaptable to future needs.
The Plan-Do-Check-Act (PDCA) cycle, also known as the Deming Cycle or Shewhart Cycle, is a
continuous improvement model used in business and management processes to ensure that changes are
effectively planned, implemented, evaluated, and standardized. It is widely applied in quality
management practices to drive ongoing improvement and enhance organizational efficiency. Here's a
detailed explanation of each phase in the PDCA cycle:
1. Plan:
Objective: Identify an area for improvement and devise a detailed plan to address it.
Actions:
o Identify Problems or Opportunities: Start by understanding the current state, identifying
problems, inefficiencies, or areas for enhancement.
o Set Objectives and Goals: Define clear, measurable goals for the improvement process.
What specific outcomes are you aiming to achieve?
o Develop a Plan: Outline the steps needed to achieve the desired improvement, including
assigning responsibilities, determining resources, and establishing timelines.
Outcome: A comprehensive plan that provides a roadmap for the intended changes or
improvements.
2. Do:
Objective: Implement the planned changes on a small, controlled scale.
Actions:
o Execute the Plan: Carry out the planned changes, preferably in a pilot phase or within a small,
manageable scope.
o Collect Data: Monitor the process and gather data during the implementation to track
performance and identify any immediate issues.
o Document Findings: Record any observations, challenges, or deviations from the plan to
inform the next steps.
Outcome: Initial insights and data from the implementation that can be analyzed to determine
the effectiveness of the change.
3. Check:
Objective: Evaluate the results of the implemented changes against the original objectives.
Actions:
o Analyze Data: Examine the data collected during the "Do" phase to assess whether the
change achieved the desired outcome.
o Compare Results: Compare actual results with the goals and benchmarks set during the
"Plan" phase.
o Identify Successes and Failures: Determine what aspects of the plan worked well and what
didn't, understanding the reasons behind these outcomes.
Outcome: A clear evaluation of the change's impact, identifying whether the objectives were met
and what adjustments might be needed.
4. Act:
Objective: Decide on the next steps based on the evaluation of the results.
Actions:
o Standardize Successful Changes: If the change was successful, implement it on a broader
scale and integrate it into the standard operating procedures.
o Adjust or Re-plan: If the change did not yield the desired results, make necessary adjustments
or rethink the approach. The cycle may begin again with a new plan based on the insights
gained.
o Continuous Monitoring: Even after implementation, continue to monitor the process to
ensure sustained improvement and to catch any issues early.
Outcome: Either the successful standardization of the change or a refined plan for further
improvement efforts.
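As a purely illustrative sketch (the metric, target, and callables are all assumed), one pass of the PDCA cycle can be pictured as a small loop step:

```python
def pdca_iteration(plan_target: float, run_pilot, collect_metric) -> str:
    """One pass through Plan-Do-Check-Act.

    plan_target    -- measurable goal set in the Plan phase (assumed)
    run_pilot      -- callable that applies the change on a small scale (Do)
    collect_metric -- callable that measures the outcome of the pilot (Check)
    """
    run_pilot()                        # Do: implement on a controlled scale
    result = collect_metric()          # Check: gather and analyze the data
    if result >= plan_target:          # Act: standardize or re-plan
        return "standardize the change"
    return "adjust the plan and repeat the cycle"
```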
8. Define software quality. List & explain core components of quality. [5]
Software Quality refers to how well a software product meets the requirements, expectations, and
needs of its users. It is determined by how effectively the software performs its intended functions, how
easy it is to use, and how reliable and maintainable it is over time.
Core Components of Software Quality:
1. Functionality:
o Definition: Functionality is about how well the software performs its intended tasks and
meets the specified requirements.
o Explanation: It includes features like correctness, completeness, and suitability. The software
should produce the correct results and perform all required tasks accurately.
2. Usability:
o Definition: Usability refers to how easy and intuitive it is for users to interact with the
software.
o Explanation: It includes factors like user interface design, ease of learning, and user
satisfaction. High usability ensures that users can efficiently and comfortably use the software
to accomplish their tasks.
3. Reliability:
o Definition: Reliability is the ability of the software to perform consistently without failures
under specified conditions.
o Explanation: It includes aspects like fault tolerance, recovery from errors, and stability.
Reliable software operates correctly over time, even under unexpected conditions.
4. Efficiency:
o Definition: Efficiency refers to how well the software uses system resources like memory,
processing power, and time.
o Explanation: It includes performance and resource utilization. Efficient software runs quickly
and doesn't unnecessarily consume system resources, ensuring smooth operation.
5. Maintainability:
o Definition: Maintainability is about how easy it is to modify the software to fix defects,
improve performance, or adapt to new requirements.
o Explanation: It includes factors like modularity, reusability, and ease of understanding.
Software that is easy to maintain can be updated and improved over time with minimal effort.
6. Portability:
o Definition: Portability refers to the ease with which software can be transferred from one
environment or platform to another.
o Explanation: It includes adaptability and compatibility across different operating systems,
hardware, or devices. High portability ensures that the software can be easily installed and run
in different environments without extensive modification.
7. Security:
o Definition: Security is the ability of the software to protect data and resources from
unauthorized access, breaches, and other threats.
o Explanation: It includes features like confidentiality, integrity, and authentication. Secure
software ensures that sensitive information is protected and that the system is resistant to
attacks and vulnerabilities.
9. Give classification for different types of products. [5]
1. Consumer Products:
Products designed for personal use by individuals. These are bought and used by consumers in their
everyday lives.
Durable Goods:
o Definition: Products that have a long lifespan and are not consumed quickly.
o Examples: Refrigerators, washing machines, and cars.
o Characteristics: Higher cost, used over a long period, and often require significant investment.
Non-Durable Goods:
o Definition: Products that are consumed or used up quickly.
o Examples: Food, beverages, and toiletries.
o Characteristics: Lower cost, purchased frequently, and used up relatively quickly.
Convenience Goods:
o Definition: Products that are bought frequently with minimal effort.
o Examples: Snacks, milk, and newspapers.
o Characteristics: Easily accessible, low cost, and purchased regularly.
Shopping Goods:
o Definition: Products that require more effort and comparison before purchase.
o Examples: Clothing, electronics, and furniture.
o Characteristics: Higher cost, less frequent purchase, and often involve comparison shopping.
Specialty Goods:
o Definition: Products with unique characteristics or brand identity that make them special.
o Examples: Luxury cars, designer clothes, and high-end watches.
o Characteristics: High cost, unique features, and often purchased infrequently.
2. Industrial Products:
Products used in the production of other goods or services and typically bought by businesses rather
than individual consumers.
Capital Goods:
o Definition: Long-term assets used in the production of other goods and services.
o Examples: Machinery, factory equipment, and construction tools.
o Characteristics: High investment cost, long-term use, and critical to manufacturing processes.
Raw Materials:
o Definition: Basic materials used to produce finished goods.
o Examples: Steel, wood, and chemicals.
o Characteristics: Essential for production, sourced in bulk, and transformed into final products.
Component Parts:
o Definition: Parts that are used as components in the assembly of final products.
o Examples: Microchips, engine parts, and screws.
o Characteristics: Often purchased in large quantities, integrated into other products.
Supplies:
o Definition: Items used in the daily operations of a business but not part of the final product.
o Examples: Office supplies, cleaning materials, and lubricants.
o Characteristics: Regularly purchased, support operational activities, and generally low-cost.
Business Services:
o Definition: Intangible products that support business operations.
o Examples: Consulting services, IT support, and legal services.
o Characteristics: Not physical products, often customized, and critical for business functions.
10. What are the constraints of software product quality assessment? [5]
1. Subjective Evaluation:
o Different Opinions: People like developers, users, and managers might have different ideas
about what "quality" means, making it hard to agree on the assessment.
o Hard to Measure: Some quality aspects, like how easy the software is to use, are difficult to
measure clearly.
2. Changing or Unclear Requirements:
o Changing Needs: If the software requirements keep changing, it’s hard to assess the quality
because the target keeps moving.
o Unclear Goals: If the goals of the software aren’t clear, it’s tough to judge if the software
meets its purpose.
3. Limited Resources:
o Not Enough Time: There may not be enough time to fully test the software, so some quality
issues might be missed.
o Budget Limits: Limited money can mean less thorough testing, which can affect how well the
software’s quality is assessed.
4. Complex Software:
o Big or Complicated Systems: Assessing large or complex software can be challenging because
there are many parts to consider.
o Integration Issues: Making sure different parts of the software work together smoothly adds
to the difficulty.
5. Technology Limits:
o Tool Restrictions: The tools used for testing might not be perfect or might miss certain
problems.
o Different Platforms: Software may behave differently on various platforms, making it harder
to assess its quality consistently.
6. Human Factors:
o Tester Skill Levels: The quality of the assessment can depend on the skill and experience of the testers. Inexperienced testers might miss important issues.
o Bias and Errors: Testers might have biases or make mistakes, which can affect the accuracy of
the quality assessment.
11. Plan software quality control with respect to college attendance software. [5]
Here’s a simple plan for software quality control specific to a college attendance software system:
1. Define Requirements:
Gather Requirements: Clearly document what the software needs to do, such as tracking student
attendance, generating reports, and integrating with other college systems.
Set Quality Standards: Establish standards for accuracy, reliability, and usability based on the
requirements to ensure the software meets expectations.
2. Design and Development:
Follow Best Practices: Use proven software development methods to design and build the system,
ensuring it is reliable and meets the defined requirements.
Modular Design: Develop the software in separate modules (e.g., student management, report
generation) so each can be tested independently.
3. Testing:
Functional Testing: Test each feature to ensure it works as intended, such as marking attendance,
generating reports, and notifying students and staff.
User Acceptance Testing (UAT): Involve actual users (e.g., teachers, administrative staff) in testing to
ensure the software meets their needs and is user-friendly.
4. Quality Assurance:
Code Reviews: Regularly review the code for errors, adherence to standards, and overall quality.
Documentation: Maintain detailed records of the development process, including test results and any
issues found, to ensure traceability and accountability.
5. Deployment and Maintenance:
Controlled Deployment: Roll out the software in stages, starting with a pilot phase to catch any issues
before full deployment.
Ongoing Support: Provide regular updates and maintenance to address any issues, improve features,
and ensure the software remains effective and secure.
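As an illustration of the functional-testing step, here is a minimal sketch assuming a hypothetical mark_attendance(student_id, date) function and an in-memory attendance log; both names are invented for this example:

```python
import datetime

# Hypothetical, simplified attendance module used only for illustration.
attendance_log = {}

def mark_attendance(student_id: str, date: datetime.date) -> bool:
    """Record a student as present on a date; reject duplicate markings."""
    key = (student_id, date)
    if key in attendance_log:
        return False                     # already marked for that day
    attendance_log[key] = "Present"
    return True

def test_attendance_is_marked_once():
    today = datetime.date(2024, 1, 15)
    assert mark_attendance("S101", today) is True    # first marking succeeds
    assert mark_attendance("S101", today) is False   # duplicate is rejected
    assert attendance_log[("S101", today)] == "Present"
```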
Unit 2
1. Write test cases for login validation. [5]
When writing test cases for login validation, the goal is to ensure that the login functionality is secure,
reliable, and behaves as expected in different scenarios. Below are five test cases for login validation:
Test Case 1: Valid Login
Objective: Verify that the user can log in successfully with a valid username and password.
Steps:
1. Navigate to the login page.
2. Enter a valid username.
3. Enter the correct password corresponding to the username.
4. Click the "Login" button.
Expected Result: The user is successfully logged in and redirected to the homepage or dashboard.
Test Case 2: Invalid Username
Objective: Verify that the login fails when an invalid username is entered.
Steps:
1. Navigate to the login page.
2. Enter an invalid username.
3. Enter a valid password.
4. Click the "Login" button.
Expected Result: The system displays an error message indicating that the username is incorrect, and
the user is not logged in.
Test Case 3: Invalid Password
Objective: Verify that the login fails when an incorrect password is entered with a valid username.
Steps:
1. Navigate to the login page.
2. Enter a valid username.
3. Enter an incorrect password.
4. Click the "Login" button.
Expected Result: The system displays an error message indicating that the password is incorrect, and
the user is not logged in.
Test Case 4: Empty Fields
Objective: Verify that the login fails when the username or password field is left empty.
Steps:
1. Navigate to the login page.
2. Leave the username and/or password field empty.
3. Click the "Login" button.
Expected Result: The system displays an error message prompting the user to fill in the required
fields, and the login attempt is not successful.
Test Case 5: Password Masking
Objective: Verify that the password is masked when entered into the password field.
Steps:
1. Navigate to the login page.
2. Enter any text in the password field.
Expected Result: The entered password is masked (e.g., displayed as dots or asterisks) to prevent
visibility to others.
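Test cases 1-4 above could be automated roughly as follows. This is a sketch only: the login() helper, the credential store, and the exact error messages are assumptions made for illustration, and password masking (test case 5) is a UI behaviour that this kind of unit-level test does not cover:

```python
import pytest

VALID_USERS = {"alice": "S3cret!"}      # hypothetical credential store

def login(username: str, password: str) -> str:
    """Return 'OK' on success, otherwise an error message (assumed behaviour)."""
    if not username or not password:
        return "Please fill in the required fields"
    if username not in VALID_USERS:
        return "Incorrect username"
    if VALID_USERS[username] != password:
        return "Incorrect password"
    return "OK"

def test_valid_login():                                    # Test Case 1
    assert login("alice", "S3cret!") == "OK"

def test_invalid_username():                               # Test Case 2
    assert login("bob", "S3cret!") == "Incorrect username"

def test_invalid_password():                               # Test Case 3
    assert login("alice", "wrong") == "Incorrect password"

@pytest.mark.parametrize("user,pwd", [("", "S3cret!"), ("alice", ""), ("", "")])
def test_empty_fields(user, pwd):                          # Test Case 4
    assert login(user, pwd) == "Please fill in the required fields"
```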
2. What are the entry & exit criteria of testing? [5]
1. Entry Criteria:
Entry criteria specify the prerequisites that must be fulfilled before testing activities can start. These
conditions ensure that the test environment, documentation, and necessary resources are in place for
effective testing. Common entry criteria include:
Requirements are Finalized: The software requirements specification (SRS) or user stories are
complete, reviewed, and approved. This ensures that testers understand what needs to be tested.
Test Environment is Ready: The testing environment (including hardware, software, network, and
tools) is set up, configured, and validated, ensuring it mirrors the production environment as closely
as possible.
Test Data is Prepared: The necessary test data is identified, created, and available for use in testing.
This data should be accurate, complete, and relevant to the test cases.
Test Plan and Test Cases are Approved: The test plan, along with test cases and test scripts, is
documented, reviewed, and approved. This ensures that the scope, objectives, and approach of
testing are clearly defined.
Build is Delivered: The software build or the module to be tested is delivered and has passed smoke
testing to ensure its stability for further testing.
2. Exit Criteria:
Exit criteria define the conditions that must be met for testing activities to be considered complete. These
criteria ensure that the software has been tested sufficiently and meets the quality standards before
being released. Common exit criteria include:
Test Case Execution is Complete: All planned test cases have been executed, and the pass/fail status
of each test case is documented. A high percentage of test cases should have passed.
Defects are Resolved: All critical and major defects identified during testing have been fixed, retested,
and closed. Any remaining defects are minor or low-priority and have been accepted by stakeholders.
Test Coverage is Satisfactory: The planned test coverage has been achieved, ensuring that all critical
functionalities and requirements have been tested. Code coverage tools may also confirm that
sufficient parts of the code have been tested.
Test Summary Report is Prepared: A comprehensive test summary report has been prepared and
reviewed, documenting the testing activities, results, and any remaining risks.
Stakeholder Approval: All relevant stakeholders, including QA leads, project managers, and product
owners, have reviewed and approved the testing outcomes, and agree that the software is ready for
release.
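As a small illustration, an exit-criteria check of the kind described above might be expressed as a function; the field names and the 95% pass-rate threshold are assumptions for this sketch:

```python
def exit_criteria_met(results: dict, min_pass_rate: float = 0.95) -> bool:
    """results holds 'planned', 'executed', 'passed' test-case counts
    and the number of still-open critical defects ('open_critical')."""
    all_executed = results["executed"] == results["planned"]
    pass_rate_ok = results["passed"] / results["executed"] >= min_pass_rate
    no_critical_open = results["open_critical"] == 0
    return all_executed and pass_rate_ok and no_critical_open

# 200 planned, 200 executed, 192 passed (96%), no open critical defects -> True
print(exit_criteria_met({"planned": 200, "executed": 200,
                         "passed": 192, "open_critical": 0}))
```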
5. Analyse test policy & test strategy which is included in test documentation. [5]
Test Policy
Definition:
A test policy is a high-level document that outlines the overall approach and principles for testing
within an organization or project. It offers a broad, organizational view of testing practices and goals.
Key Aspects:
1. Purpose and Scope:
o Purpose: Defines the goals of testing, such as ensuring software quality, maintaining
compliance with standards, and managing risks.
o Scope: Specifies the areas the policy covers, such as all software projects within the
organization or specific types of testing.
2. Testing Principles:
o Principles: Outlines the core values and guiding principles for testing, like the importance of
early testing, thorough documentation, and ongoing improvement.
o Example: A policy might stress that all software must undergo regression testing before being
released.
3. Roles and Responsibilities:
o Roles: Specifies who is responsible for different testing activities, including test managers,
testers, and developers.
o Responsibilities: Details each role's responsibilities in the testing process, such as planning,
execution, and reporting.
4. Testing Standards and Guidelines:
o Standards: Describes the standards to follow, such as industry standards (e.g., ISO/IEC 29119)
or organizational norms.
o Guidelines: Provides general guidelines for creating test plans, executing tests, and reporting
defects.
5. Policy Review and Updates:
o Review: Specifies how often the test policy should be reviewed and updated to ensure it
remains relevant and effective.
o Updates: Describes the process for updating the policy based on new developments or
feedback.
Test Strategy
Definition:
A test strategy is a detailed plan outlining the approach and methods for testing a specific project or
system. It provides a roadmap for how testing will be conducted to meet the project’s objectives.
Key Aspects:
1. Test Objectives:
o Objectives: Defines what the testing aims to achieve, such as verifying functionality, ensuring
performance, or validating security.
o Example: The strategy might aim to ensure that the software meets all functional
requirements and performs well under expected load conditions.
2. Testing Methods and Techniques:
o Methods: Outlines the testing methods to be used, like manual testing, automated testing, or performance testing.
o Techniques: Specifies techniques such as black-box testing, white-box testing, and exploratory
testing.
o Example: The strategy might detail using automated tests for regression testing and manual
tests for exploratory testing.
3. Risk Management:
o Risks: Identifies potential risks and challenges in the testing process and outlines mitigation
strategies.
o Example: The strategy might address risks like incomplete requirements or tight deadlines and
propose solutions such as prioritized testing.
4. Test Schedule and Milestones:
o Schedule: Provides a timeline for testing activities, including key milestones and deadlines.
o Milestones: Highlights significant events like the completion of test planning, the start of test
execution, and the final test report delivery.
5. Test Scope:
o Scope: Details what will and won’t be tested, including specific features, functionalities, and
components.
o Example: The strategy might specify that unit testing will cover all code modules, while
integration testing will focus on the interactions between modules.
Comparison of Test Plan, Test Strategy, and Test Policy:
Purpose:
o Test Strategy: Provides a roadmap for the testing process to align with project goals.
o Test Plan: Serves as a guide for the testing team to ensure testing is executed as planned.
o Test Policy: Defines the testing goals and principles for the entire organization or project.
Content:
o Test Plan: Includes scope, objectives, test items, features to be tested, environment, schedule, resources, responsibilities, and risks.
o Test Strategy: Includes testing objectives, types of testing, test levels, test environment, tools, and risk management at a strategic level.
Role in practice:
o Test Plan: Guides the testing team in executing specific tasks within a project, ensuring all aspects are covered.
o Test Strategy: Sets the overall direction for testing activities, ensuring consistency and alignment with organizational goals.
8. Justify: [5]
i) Green money = cost of prevention.
Definition: Green money represents the investment in preventive measures to avoid defects and
issues in software development or other processes.
Justification:
Prevents Issues: Investing in preventive measures such as thorough planning, quality assurance,
and early testing helps in identifying and addressing potential issues before they become major
problems.
Reduces Long-Term Costs: By addressing issues early, you reduce the likelihood of costly rework,
fixes, and customer complaints later in the process.
Improves Quality: Preventive actions lead to higher quality products or services, which can
enhance customer satisfaction and reduce the need for corrections and revisions.
Enhances Efficiency: Spending on prevention often leads to more efficient processes and
smoother project execution, saving time and resources in the long run.
ii) Red money = cost of failure.
Definition: Red money represents the costs associated with defects or failures that occur after the
product or service has been delivered.
Justification:
Increased Costs: Failures or defects often result in higher costs due to the need for rework,
patching, and fixing problems after the fact. This can be more expensive than addressing issues
during the early stages.
Customer Dissatisfaction: Defects and failures can lead to poor customer experiences, which
might result in loss of trust, refunds, or damage to the company’s reputation.
Operational Disruptions: Issues that arise after delivery can disrupt operations, causing delays
and additional costs to fix the problems.
Legal and Compliance Issues: In some cases, defects can lead to legal consequences, compliance
issues, or regulatory penalties, adding to the overall cost of failure.
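A purely illustrative calculation (all figures are assumed, not taken from any source) of why green money spent up front usually outweighs the red money it avoids:

```python
# Assumed figures for one release, for illustration only.
prevention_cost = 10_000               # green money: reviews, early testing, training
defects_prevented = 40
cost_per_post_release_defect = 1_500   # red money: rework, support, reputation damage

red_money_avoided = defects_prevented * cost_per_post_release_defect   # 60,000
net_saving = red_money_avoided - prevention_cost                       # 50,000
print(net_saving)
```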
6. Examples:
o Definition: Testing the integration of a payment gateway with an e-commerce system.
o Explanation: Ensures that when a user makes a payment, the payment system communicates correctly with the order processing and inventory management systems.
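A rough sketch of how such an integration check might look, using a stubbed (mock) payment gateway and inventory so the interaction can be exercised without real services; the place_order function and its behaviour are assumptions for this example:

```python
from unittest.mock import Mock

def place_order(gateway, inventory, item: str, amount: float) -> str:
    """Charge the gateway and, on success, reserve stock and confirm the order."""
    if not gateway.charge(amount):
        return "payment failed"
    inventory.reserve(item)
    return "order confirmed"

def test_payment_gateway_integration():
    gateway = Mock()
    gateway.charge.return_value = True       # simulate a successful payment
    inventory = Mock()

    assert place_order(gateway, inventory, "book", 499.0) == "order confirmed"
    gateway.charge.assert_called_once_with(499.0)       # gateway was invoked correctly
    inventory.reserve.assert_called_once_with("book")   # stock was updated
```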
Acceptance Testing
Definition: Acceptance Testing is a phase in software testing where the software is evaluated to ensure it
meets business requirements and is ready for delivery to the customer. The main goal is to validate that
the software is acceptable to end-users or stakeholders.
Key Points:
1. Objective:
o Definition: To ensure the software meets user requirements and is ready for deployment.
Acceptance testing verifies that the software satisfies the business needs and user
expectations.
2. Scope:
o Definition: Tests the software against predefined acceptance criteria. It checks whether the
software performs its intended functions correctly and meets the agreed-upon requirements
from a user’s perspective.
3. Types:
o Definition: Includes user acceptance testing (UAT), alpha testing, and beta testing. UAT is
performed by actual users in a real-world environment, alpha testing is done by internal
teams, and beta testing involves a limited release to external users for feedback.
4. Focus Areas:
o Definition: Focuses on usability, functionality, and compliance with business requirements.
o Explanation: Ensures that the software is user-friendly, performs required functions correctly,
and meets all specified business and regulatory requirements.
5. Challenges:
o Definition: Can be impacted by unclear requirements or changing user needs.
o Explanation: Acceptance testing may face difficulties if requirements are not well-defined or if
there are discrepancies between what is delivered and what users expected.
6. Examples:
o Definition: Testing a new CRM system to ensure it meets the needs of the sales team.
o Explanation: Involves checking whether the system supports the sales processes, integrates
with other tools used by the team, and provides the necessary reporting and data
management features.
2. Key Components:
o Configuration Identification: Defines and documents the configuration of system components
and their relationships. This includes identifying what needs to be controlled and monitored.
o Configuration Control: Manages changes to the configuration items. It involves processes for
requesting, reviewing, and approving changes.
o Configuration Status Accounting: Keeps track of the status of configuration items, including
their versions and changes. This provides a historical record of all changes and configurations.
o Configuration Audits: Regularly checks and verifies that the configuration items conform to
their specifications and are correctly documented. This ensures the system’s configuration
meets the required standards and quality.
3. Processes:
o Planning: Develop a configuration management plan outlining how configuration items will be
identified, controlled, and audited.
o Implementation: Apply the configuration management processes to manage and control
changes throughout the lifecycle of the system.
o Review: Regularly review the configuration management processes and make adjustments as
needed to ensure effectiveness and efficiency.
4. Benefits:
o Improved Quality: Ensures that all changes are properly reviewed and tested, reducing the
risk of defects and ensuring the system’s reliability.
o Better Documentation: Provides accurate and up-to-date documentation of all configuration
items and changes, which is crucial for maintaining the system.
o Enhanced Control: Helps in managing and controlling changes to prevent unauthorized or
unintended modifications, reducing the risk of disruptions.
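A tiny sketch of configuration control and status accounting, keeping a per-item version history; the class and field names are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConfigurationItem:
    name: str
    version: int = 1
    history: List[str] = field(default_factory=list)   # status accounting record

    def apply_change(self, description: str, approved: bool) -> None:
        """Configuration control: only approved change requests create a new version."""
        if not approved:
            raise PermissionError("Change request was not approved")
        self.version += 1
        self.history.append(f"v{self.version}: {description}")

item = ConfigurationItem("attendance-module")
item.apply_change("Add export-to-Excel report", approved=True)
print(item.version, item.history)   # 2 ['v2: Add export-to-Excel report']
```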
Verification vs Validation:
Definition: Verification is the set of activities that ensure the software correctly implements a specific function. Validation is the set of activities that ensure the software that has been built is traceable to customer requirements.
Focus: Verification includes checking documents, designs, code, and programs. Validation includes testing and validating the actual product.
Execution: Verification does not include the execution of the code. Validation includes the execution of the code.
Methods Used: Methods used in verification are reviews, walkthroughs, inspections, and desk-checking. Methods used in validation are black-box testing, white-box testing, and non-functional testing.
Purpose: Verification checks whether the software conforms to its specifications. Validation checks whether the software meets the requirements and expectations of the customer.
Bug Detection: Verification can find bugs in the early stages of development. Validation can only find the bugs that the verification process could not.
Goal: The goal of verification is the application and software architecture and specification. The goal of validation is the actual product.
Responsibility: The quality assurance team performs verification. Validation is executed on the software code with the help of the testing team.
Error Focus: Verification is for the prevention of errors. Validation is for the detection of errors.
Defect Detection Rate: Verification typically finds about 50 to 60% of the defects. Validation finds about 20 to 30% of the defects.
Stability: Verification is based on the opinion of the reviewer and may change from person to person. Validation is based on fact and is often stable.