
Unit 4: Software Testing

Automation Testing

 Automation testing is a type of testing in which testing is performed with the help of automation tools.
 It is faster than manual testing because the tests are run by tools rather than by hand.
 It relies on pre-scripted tests that run automatically and compare actual results with expected results (a minimal sketch follows this list).
 Automation testing helps the tester determine whether the application performs as expected.
 It allows the execution of repetitive tasks and regression tests.
 Automation still requires manual effort to create the initial test scripts.
 Non-functional testing (such as load and stress testing), which is generally not practical to perform manually, can be carried out through automation.
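As a minimal sketch of the "pre-scripted test" idea mentioned above, the following pytest-style check runs automatically and compares an actual result with an expected one. The apply_discount function is a hypothetical example created purely for illustration, not something taken from this unit.

```python
# A minimal sketch of a pre-scripted automated check (pytest style).
# apply_discount is a hypothetical function under test.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_matches_expected():
    expected = 90.0                      # expected result
    actual = apply_discount(100.0, 10)   # actual result produced by the code
    assert actual == expected            # the test tool compares them automatically
```

Running `pytest` on this file executes the check with no human intervention, which is what makes it repeatable for regression runs.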

Which Test Cases to Automate?

Test cases to be automated can be selected using the following criteria to increase the automation ROI (return on investment):

 High Risk – Business Critical test cases


 Test cases that are repeatedly executed
 Test Cases that are very tedious or difficult to perform manually
 Test Cases which are time-consuming

Test cases that are not suitable for automation:

 Test Cases that are newly designed and not executed manually at least once
 Test Cases for which the requirements are frequently changing
 Test cases which are executed on an ad-hoc basis.

Automated Testing Process:

The following steps are followed in an automation process:


Step 1) Test Tool Selection
Step 2) Define scope of Automation
Step 3) Planning, Design and Development
Step 4) Test Execution
Step 5) Maintenance

1. Test Tool Selection:


 In this step, the team identifies and selects the appropriate automation testing tools
based on the project requirements, technology stack, budget, and skillset of the team.
Examples of popular automation testing tools include Selenium, Appium, JUnit,
TestNG, etc.
2. Define Scope of Automation:
 This step involves defining which tests should be automated and which should remain
manual. It's essential to identify the areas of the application or system that will benefit
the most from automation, such as regression testing, smoke testing, or repetitive
tasks.
3. Planning, Design, and Development:
 Once the scope is defined, the team plans the automation strategy, including creating
a test automation framework, designing test cases, and developing automation scripts.
This step involves scripting, coding, and structuring the automated tests according to
the selected automation tool and the application's architecture.
4. Test Execution:
 In this step, the automated tests are executed using the selected automation tool or
framework. Test data is provided, and the automated tests are run against the
application or system under test. Test results are recorded, and any failures or issues
are reported for further investigation.
5. Maintenance:
 Automation tests require regular maintenance to ensure they remain up-to-date and
relevant. This involves updating test scripts to accommodate changes in the
application or system, refactoring code for better maintainability, and adding new test
cases as the application evolves. Maintenance also includes reviewing and analyzing
test results to improve test coverage and effectiveness.

By following these steps, teams can implement automated testing effectively within their
software development lifecycle, leading to improved efficiency, faster time-to-market, and
higher software quality. A brief sketch of steps 3 and 4 follows.
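The sketch below illustrates steps 3 and 4 using Selenium WebDriver with Python, one of the tools named in step 1. The URL, element locators, and expected page title are hypothetical placeholders, not details of any real application.

```python
# A minimal sketch of script development (step 3) and execution (step 4)
# using Selenium WebDriver. All locators and URLs are hypothetical.

from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_redirects_to_dashboard():
    driver = webdriver.Chrome()                      # tool selected in step 1
    try:
        driver.get("https://example.test/login")     # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("demo_user")
        driver.find_element(By.ID, "password").send_keys("demo_pass")
        driver.find_element(By.ID, "login-button").click()
        # Step 4: compare the actual result with the expected result.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()                                # clean up the browser session
```

In practice such scripts are grouped into a framework, run from a test runner or CI job, and their results are recorded for the investigation described in step 4.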

Difference Between Manual Testing and Automated Testing

Definition: In manual testing, the test cases are executed by a human tester. In automated testing, the test cases are executed by software tools.

Processing time: Manual testing is time-consuming. Automation testing is faster than manual testing.

Resource requirement: Manual testing takes up human resources. Automation testing takes up automation tools and trained employees.

Exploratory testing: Exploratory testing is possible in manual testing. It is not possible in automation testing.

Framework requirement: Manual testing doesn't use frameworks. Automation testing uses frameworks such as data-driven, keyword-driven, etc.

Reliability: Manual testing is less reliable due to the possibility of human errors. Automated testing is more reliable due to the use of automated tools and scripts.

Investment: In manual testing, investment is required for human resources. In automated testing, investment is required for tools and automation engineers.

Test results availability: In manual testing, the test results are recorded in an Excel sheet, so they are not readily available. In automated testing, the test results are readily available to all stakeholders in the dashboard of the automation tool.

Human intervention: Manual testing allows human observation, so it is useful in developing user-friendly systems. Automated testing is conducted by tools and scripts, so it does not give the same assurance of user-friendliness.

Performance testing: Performance testing is not possible with manual testing. Automation enables performance testing such as load testing, stress testing, spike testing, etc.

Batch testing: In manual testing, batch testing is not possible. In automation, you can batch multiple tests for fast execution.

Programming knowledge: There is no need for programming knowledge in manual testing. Programming knowledge is a must in automation testing, as using the tools requires trained staff.

Documentation: In manual testing, there is no documentation. In automation testing, the documentation acts as a training resource for a new developer, who can look at the unit test cases and understand the code base quickly.

When to use: Manual testing is suitable for exploratory testing, usability testing, and ad-hoc testing. Automated testing is suitable for regression testing, load testing, and performance testing.

Benefits of automated testing tools (in short):

1. Efficiency: Faster testing with reduced time-to-market.


2. Accuracy: Consistent and reliable test results.
3. Reusability: Test scripts can be reused across cycles.
4. Increased Coverage: Comprehensive validation of functionality.
5. Regression Testing: Quick identification of defects.
6. Parallel Execution: Speedier testing across multiple environments.
7. CI/CD Integration: Seamless automation in software delivery.
8. Cost Savings: Reduced need for manual testing resources.

These benefits highlight the value of automated testing tools in improving software quality,
accelerating development, and minimizing costs.

Advantages of Automated Testing:

1. Efficiency: Faster testing means quicker software validation and releases.


2. Accuracy and Consistency: Tests are reliable and free from human errors.
3. Reusability: Test scripts can be reused, saving time and effort.
4. Increased Test Coverage: Comprehensive testing catches more defects.
5. Regression Testing: Quickly identifies issues introduced by new changes.
6. Parallel Testing: Speeds up testing across different environments.
7. Cost Savings: Reduces the need for manual testing resources over time.

Disadvantages of Automated Testing:

1. Initial Setup and Learning Curve: Requires time and effort to learn and set up.
2. Maintenance Overhead: Regular updates and maintenance are necessary.
3. Limited Human Judgment: Lacks human intuition, affecting certain aspects of testing.
4. Complex Test Scenarios: Some scenarios are challenging to automate.
5. False Positives and Negatives: Automated tests may produce incorrect results.
6. Cost of Tools and Infrastructure: Initial investment may be high.
7. Not Suitable for All Tests: Certain types of testing are better suited for manual testing.
8. Over-reliance on Automation: Neglecting manual testing may miss critical issues.

How to choose a test automation tool

To choose the right test automation tool:

1. Easy to Use: Pick tools that are simple and let you create tests without much hassle.
Imagine you're working on a web application and want to automate testing. You find a tool
like TestProject, which offers a user-friendly interface and allows you to create tests by
simply recording your interactions with the application.

2. Works Everywhere: Make sure the tool can test your app on different browsers and devices
easily.
Suppose your web application needs to be tested on different browsers like Chrome, Firefox,
and Safari, as well as on mobile devices. You choose a tool like Selenium WebDriver, which
supports testing on multiple browsers and platforms, ensuring compatibility across different
environments.
3. Good Analysis Features: Choose a tool that shows test results clearly and helps you
understand what went wrong.
After running your tests, you want to analyze the results to identify any issues. You use a tool
which provides detailed reports and dashboards with clear visualizations, making it easy to
understand test outcomes and pinpoint areas for improvement.

4. Flexible Testing: Look for tools that can handle different types of tests, like smoke tests or
performance tests.
Your application requires various types of tests, including smoke tests to check basic
functionality and performance tests to measure system responsiveness. You opt for a tool
which offers flexibility to perform different types of tests and customize them according to
your requirements.

5. Advanced Features: Look for capabilities such as data-driven testing or custom metrics that
make test creation and analysis easier (a sketch of data-driven testing follows this section).

Other things to think about:

6. Cost: Consider both upfront and long-term costs. Free tools might save money but can be
tricky to use efficiently.
7. Test Stability: Choose a tool that makes tests that don't break easily when your app changes.
8. Support: Check if the tool has good support resources like tutorials or forums, especially if
you need help with complex tests.

By keeping these in mind, you'll find a test automation tool that fits your needs and helps you
test your software better.
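As a brief illustration of the data-driven testing mentioned under "Advanced Features", the sketch below runs the same test logic against several input/expected pairs using pytest's parametrize feature. The validate_email function is a hypothetical stand-in for application code.

```python
# A minimal sketch of data-driven testing: one test, many data rows.
# validate_email is a hypothetical function under test.

import re
import pytest

def validate_email(address: str) -> bool:
    """Return True if the address looks like a simple email address."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),
        ("no-at-sign.example.com", False),
        ("user@domain", False),
    ],
)
def test_validate_email(address, expected):
    # The same check is executed once per data row above.
    assert validate_email(address) == expected
```

Adding a new case is just adding a data row, which is the main appeal of data-driven frameworks.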

Difference Between Test Case and Test Scenario

Definition: A test case is a specific set of conditions or inputs used during testing to validate whether a particular aspect of the software behaves as expected. A test scenario is a sequence of steps or actions that outlines a particular usage or behavior of the software under test.

Purpose: A test case verifies a single functionality or feature of the software. A test scenario validates end-to-end workflows or user interactions with the software.

Granularity: A test case is more detailed and focused on specific inputs, actions, and expected outcomes. A test scenario is broad and covers a series of interactions or use cases involving multiple functionalities.

Scope: A test case typically covers a single test scenario or user story. A test scenario can cover multiple test cases or related functionalities.

Reusability: Test cases are often reusable across different test scenarios or test suites. Test scenarios may or may not be reusable, depending on the uniqueness of the scenario.

Dependency: Test cases may be combined to form test scenarios. Test scenarios may encompass multiple test cases.

Example: Test case - verify that the login button redirects to the user dashboard upon successful authentication. Test scenario - the user registration and login flow, from registration form submission to accessing the dashboard.

Documentation: Each test case is documented individually with its inputs, steps, expected outcomes, and preconditions. Test scenarios are documented as a sequence of steps or actions, often with variations and alternative paths.

Management: Test cases are managed and executed individually, often organized into test suites or test plans. Test scenarios may be grouped together based on functional areas or user workflows.
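The sketch below illustrates the contrast in granularity described above: a test case exercises one focused check, while a test scenario walks through an end-to-end flow. The tiny in-memory FakeApp class is purely illustrative; a real project would drive the actual application under test.

```python
# A minimal sketch contrasting a test case (one focused check) with a
# test scenario (an end-to-end flow). FakeApp is a hypothetical stand-in.

class FakeApp:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.users = {}
        self.logged_in = None

    def register(self, username, password):
        self.users[username] = password
        return username in self.users

    def login(self, username, password):
        self.logged_in = username if self.users.get(username) == password else None
        return self.logged_in is not None

    def dashboard(self):
        return f"Dashboard for {self.logged_in}" if self.logged_in else "Access denied"

# Test case: verifies a single functionality (login with valid credentials).
def test_case_login_succeeds_with_valid_credentials():
    app = FakeApp()
    app.register("alice", "secret")
    assert app.login("alice", "secret") is True

# Test scenario: validates the end-to-end flow from registration to dashboard.
def test_scenario_registration_to_dashboard():
    app = FakeApp()
    assert app.register("bob", "pa55word")         # step 1: register
    assert app.login("bob", "pa55word")            # step 2: log in
    assert "Dashboard for bob" in app.dashboard()  # step 3: reach the dashboard
```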

STLC
The Software Testing Life Cycle (STLC) is a series of sequential steps or phases followed
during the testing process to ensure the quality and reliability of software products.
STLC can be roughly divided into 3 parts:

1. Test Planning
2. Test Design
3. Test Execution

Test planning

A test plan is a document that outlines the approach, scope, resources, schedule,
and deliverables of the testing process for a specific project.

It serves as a roadmap for the testing team, providing guidance on how testing will
be conducted to ensure the quality of the product or system under test.

The test plan typically includes the following components:

1. Introduction: Provides an overview of the purpose, objectives, scope, and goals of


the test plan.
2. Test Items: Lists the software or system components, features, or modules that will
be tested.
3. Features to be Tested: Identifies the specific functionalities or features of the
software that will be tested.
4. Features not to be Tested: Specifies functionalities or features that will not be
tested and explains the reasons for exclusion.
5. Testing Approach: Describes the overall testing strategy, including the types of
testing (e.g., functional, non-functional), testing techniques, and methodologies to be
used.
6. Test Deliverables: Lists the documents, reports, and artifacts that will be produced
during the testing process, such as test cases, test scripts, test data, and defect
reports.
7. Testing Tasks: Outlines the specific activities or tasks to be performed during each
phase of testing, including planning, preparation, execution, and closure.
8. Test Environment: Describes the hardware, software, tools, and infrastructure
required to conduct testing effectively.
9. Test Schedule: Provides a timeline or schedule for the testing activities, including
milestones, deadlines, and resource allocation.
10. Entry and Exit Criteria: Defines the conditions that must be met before testing can
begin (entry criteria) and the conditions that indicate when testing is complete (exit
criteria).
11. Suspension and Resumption Criteria: Specifies conditions under which testing
activities may be temporarily suspended and resumed.
12. Test Dependencies: Identifies any dependencies or constraints that may impact the
testing process, such as availability of resources or integration with external systems.
13. Risks and Mitigation Strategies: Identifies potential risks to the testing process and
outlines strategies for mitigating or managing those risks.
14. Approvals: Specifies the stakeholders responsible for reviewing and approving the
test plan before testing activities commence.
15. References: Includes references to relevant documents, standards, guidelines, and
resources used in developing the test plan.

By including these components, a test plan ensures that testing activities are well-
defined, organized, and executed systematically to achieve the desired quality
objectives for the software or system under test.

Test design : Test design is a crucial phase in the software testing process where test cases
and test scenarios are developed based on the software requirements and specifications. It
involves creating a detailed plan to verify and validate the functionality, performance, and
reliability of the software under test.
Test execution: Test execution is a critical phase in the software testing process where test
cases are executed, and the actual results are compared against expected results to identify
defects and validate the software's functionality.

Difference Between Test Planning & Test Execution

Person responsible: In test planning, the test manager prepares the test plan and shares it with all the stakeholders for their review. Test execution is normally done by testers, keeping in mind that the test cases prepared have been approved and signed off.

Main focus: The test plan focuses on how the testing should be carried out, what should and should not be considered, the environment that can be used, test schedules, etc. Test execution focuses mainly on executing the test cases provided against the software.

Recurring or iterative mode: Test planning is a one-time activity, although the plan may require modifications for future releases of the software. Test execution is iterative and typically involves three parts: functional testing, regression testing, and re-testing.

Inputs: The inputs for creating a test plan are provided by business analysts, architects, clients, etc. For test execution, the test case document is the major input.

Period when it can be started: Test planning should start along with the development cycle for an effective outcome and to save time (although in models such as the waterfall model, the testing phase starts only after the development phase is complete). Test execution has to start strictly after the development of the software has been done.

Closure period: The test plan has no specific closure period; generally a sign-off for the software is obtained from all interested parties. Test execution for a specific release or cycle is considered closed when all of the test cases have been executed against the software.

Deliverable positioning: The test plan is a major deliverable of the testing activity and is produced as the first step in the testing process. Test execution comes last in the testing phase; after execution, the defect/bug status along with the test case execution status is shared as one of the testing deliverables.

Tools usage: In test planning, not many tools are used, as the activity is mostly discussion and documentation; to keep track of changes to the plan, test managers normally use a version control tool such as VSS. In test execution, tool usage depends on the mode of execution: in manual execution no tool is used for the execution itself, but tools are used for logging and managing defects; in automation testing, the execution is done with the help of tools like QTP or Selenium.

Impact on the deliverables: Test planning impacts all of the testing phases in a larger manner. Test execution impacts the subsequent cycle or release to be tested.

Defect Bash / Bug Bash

 Defect bash or bug bash is an ad hoc testing activity in which people performing
different roles in an organization test the product together at the same time.

 Ad hoc testing refers to a spontaneous and unstructured approach to software testing,
where test scenarios are not predefined and testers explore the application freely without
following any formal test plan or script. In ad hoc testing, testers rely on their intuition,
experience, and domain knowledge to identify defects and assess the quality of the
software. This testing is typically performed before deployment.
 The testing by all the participants during defect bashing is not based on written test
cases. What is to be tested is left to an individual’s decision and creativity.
 A usual defect bash / bug bash lasts half a day and is usually done when the software
is close to being ready to release.
 Defect bash is a unique testing method which can bring out both functional and non-
functional defects.
 There are two types of defects that will emerge during a defect bash: functional
defects and non-functional defects.
 The defects that are in the product, as reported by the users, can be classified as
functional defects.
 Defects that are unearthed while monitoring the system resources, such as memory
leak, long turnaround time, missed requests, high impact and utilization of system
resources and so on are called non-functional defects.

The bash is typically a 60-minute session, organized as follows:

 5 minutes for the introduction


 40 minutes of focused group testing
 15 minutes of debriefing

Benefits of conducting a Defect Bash include:

 Increased Bug Discovery: The collaborative and exploratory nature of the event often leads
to the discovery of defects that may have otherwise gone unnoticed.
 Team Building: Defect Bashes promote teamwork and camaraderie among team members,
fostering a sense of shared responsibility for quality.
 Rapid Feedback: The immediate feedback obtained from testing during the event allows
teams to address issues quickly and iteratively improve the software.
 Enhanced Product Quality: By identifying and addressing defects early in the development
process, the overall quality of the software is improved, leading to higher customer
satisfaction.

Overall, a Defect Bash is an effective and engaging way for development teams to identify
and address defects in their software, ultimately leading to improved quality and customer
satisfaction.

Advantages of having a Bug Bash (in short):

1. Increased Bug Discovery: More defects are found in a short time frame.
2. Diverse Perspectives: Different team members find different types of issues.
3. Real-world Testing: Simulates actual user scenarios, uncovering relevant bugs.
4. Immediate Feedback: Defects are reported promptly, enabling quick fixes.
5. Team Collaboration: Promotes teamwork and camaraderie among team members.
6. Feature Validation: Validates new features or changes in the software.
7. Enhanced Communication: Encourages open communication and knowledge sharing.
8. Quality Culture: Reinforces the importance of quality and continuous improvement.

While Bug Bashes offer numerous advantages, there are also some potential disadvantages:

1. Time-consuming: Bug Bashes require dedicated time and resources from team members,
which may disrupt regular development activities and project timelines.
2. Resource Intensive: Coordinating Bug Bashes, including planning, organizing, and
facilitating the event, can be resource-intensive for project managers and team leaders.
3. Quality of Bugs Reported: Not all bugs identified during Bug Bashes may be of equal
importance or severity. Participants may prioritize certain defects over others, leading to
discrepancies in bug reporting and resolution.
4. Distraction from Core Tasks: Bug Bashes may divert team members' attention away from
their primary responsibilities, impacting productivity and progress on other project tasks.
5. Lack of Follow-up: Without proper follow-up and action plans, bugs identified during Bug
Bashes may remain unresolved or forgotten, diminishing the event's overall effectiveness.
6. Limited Participation: Participation in Bug Bashes may be limited to only a subset of team
members, potentially excluding valuable perspectives or expertise from the testing process.
7. Fatigue or Burnout: Hosting frequent Bug Bashes or conducting them for extended periods
may lead to tester fatigue or burnout, diminishing enthusiasm and participation in future
events.

What is defect?
A defect, in the context of software development, refers to any deviation or flaw in a
software application that causes it to behave unexpectedly, incorrectly, or
inadequately. It is also commonly referred to as a bug or an issue. Defects can arise at
any stage of the software development life cycle and can affect various aspects of the
software, including its functionality, performance, security, and usability.
For example, clicking a button does not perform the expected action, or calculations
produce incorrect results.

Defect/Bug Life Cycle in Software Testing


The defect life cycle, also known as the bug life cycle, refers to the stages through which a
defect progresses from identification to resolution. While the specific stages may vary
depending on the organization and project, the typical defect life cycle consists of the
following stages:

1. New: A problem, or defect, is found in the software. The defect is identified by a tester or
other stakeholders and reported in the defect tracking system. At this stage, the defect is
assigned a unique identifier and categorized based on its severity, priority, and other
attributes.
2. Open: The issue is reported to the development team. After being reported, the defect is
reviewed by the development team. If validated, it remains in the "open" status, indicating
that it is acknowledged and awaiting further action.
3. Assigned: A developer works on resolving the problem. The defect is assigned to a developer
or team responsible for fixing it. This stage marks the beginning of the resolution process.
4. In Progress: The developer begins working on fixing the defect. They analyze the issue,
identify the root cause, implement the necessary code changes, and perform unit testing to
verify the fix.
5. Fixed: The fix is tested to make sure it works. Once the developer believes the defect has
been resolved, they mark it as "fixed" and provide details of the fix in the defect tracking
system.
6. Pending Retest: After the defect is fixed, it is returned to the testing team for retesting. At
this stage, the tester verifies whether the fix has successfully addressed the issue and if any
new defects have been introduced.
7. Reopen: If the tester finds that the defect persists or if new issues arise as a result of the fix,
they reopen the defect, and it returns to the "open" status for further investigation and
resolution.
8. Verified/Closed: Once the tester confirms that the defect has been successfully resolved and
validated, they mark it as "verified" or "closed" in the defect tracking system. The defect is
considered resolved, and no further action is required.

A Few More:
 Rejected: If the defect is not considered a genuine defect by the developer then it is
marked as “Rejected” by the developer.
 Duplicate: If the developer finds the defect as same as any other defect or if the
concept of the defect matches any other defect then the status of the defect is changed
to ‘Duplicate’ by the developer.
 Deferred: If the developer feels that the defect is not of high priority and can be
fixed in a later release, he or she can change the status of the defect to "Deferred".
 Not a Bug: If the defect does not have an impact on the functionality of the
application, then the status of the defect gets changed to “Not a Bug”.

Throughout the defect life cycle, effective communication and collaboration among
stakeholders, including testers, developers, and project managers, are essential to ensure
timely resolution and maintain software quality. Additionally, the defect tracking system
serves as a central repository for monitoring the progress of defects and facilitating
efficient defect management. A small sketch of these statuses as a state machine follows.
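The sketch below models the statuses described above as a simple state machine. The transition map is illustrative only; real organizations and tracking tools define their own workflows.

```python
# A minimal sketch of the defect life cycle as a state machine.
# The allowed transitions are an illustrative assumption, not a standard.

ALLOWED_TRANSITIONS = {
    "New":             {"Open", "Rejected", "Duplicate", "Not a Bug"},
    "Open":            {"Assigned", "Deferred", "Rejected", "Duplicate"},
    "Assigned":        {"In Progress"},
    "In Progress":     {"Fixed"},
    "Fixed":           {"Pending Retest"},
    "Pending Retest":  {"Verified/Closed", "Reopen"},
    "Reopen":          {"Open"},
    "Deferred":        {"Open"},
    "Verified/Closed": set(),
    "Rejected":        set(),
    "Duplicate":       set(),
    "Not a Bug":       set(),
}

def move_defect(current: str, new: str) -> str:
    """Return the new status if the transition is allowed, else raise an error."""
    if new not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move defect from {current!r} to {new!r}")
    return new

# Example: a defect that is fixed and verified on the first attempt.
status = "New"
for nxt in ["Open", "Assigned", "In Progress", "Fixed", "Pending Retest", "Verified/Closed"]:
    status = move_defect(status, nxt)
print(status)  # Verified/Closed
```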

Bug tracking

Bug tracking, also known as defect tracking or issue tracking, is the process of recording,
monitoring, and managing defects or issues identified during the software development life
cycle. Bug tracking is essential for maintaining the quality and integrity of software
applications by systematically identifying, prioritizing, and resolving issues that impact
functionality, performance, security, or usability.

How bug tracking works:

1. Recording: When a defect is identified, it is recorded in a bug tracking system or tool. This
typically involves providing detailed information about the defect, such as its description,
severity, steps to reproduce, environment details, and any supporting documentation or
screenshots (a sketch of such a record appears after this list).
2. Tracking: Once recorded, defects are tracked throughout their lifecycle, from initial
discovery to resolution. This includes assigning the defect to the appropriate team member,
monitoring its status and progress, and documenting any updates or changes made during the
resolution process.
3. Prioritization: Defects are prioritized based on their severity and impact on the software
application. Critical defects that severely affect functionality or security are given higher
priority and addressed urgently, while less critical defects may be deferred or addressed in
subsequent releases.
4. Assignment and Ownership: Defects are assigned to the relevant individuals or teams
responsible for investigating, fixing, and verifying the issue. Assignees are accountable for
resolving the defect within the specified timeframe and updating its status accordingly.
5. Resolution and Verification: Once a defect has been addressed, the assigned team member
works on fixing the issue. After the fix is implemented, the defect undergoes verification
testing to ensure that the issue has been resolved satisfactorily and does not recur.
6. Communication and Collaboration: Effective communication and collaboration among
team members are essential for successful bug tracking. Team members should regularly
communicate updates, discuss issues, and collaborate on solutions to ensure timely resolution
of defects.
7. Reporting and Analysis: Bug tracking systems generate reports and metrics to provide
insights into the defect management process. These reports may include information such as
defect trends, resolution times, open defects by severity, and defect density, which can help
identify areas for improvement and optimize the software development process.
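The sketch below shows the kind of information recorded for each defect in step 1. The field names and example values are assumptions made for illustration, not the schema of any particular bug tracking tool such as Jira or Bugzilla.

```python
# A minimal sketch of a defect record in a bug tracking system.
# Field names and values are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    defect_id: str                      # unique identifier assigned at creation
    summary: str
    description: str
    severity: str                       # e.g. "Critical", "Major", "Minor"
    priority: str                       # e.g. "High", "Medium", "Low"
    steps_to_reproduce: List[str]
    environment: str                    # browser, OS, build number, etc.
    status: str = "New"                 # current stage in the defect life cycle
    assignee: str = ""                  # who is responsible for fixing it
    attachments: List[str] = field(default_factory=list)  # screenshots, logs

report = DefectReport(
    defect_id="BUG-101",
    summary="Login button does not redirect to dashboard",
    description="After entering valid credentials the page reloads instead "
                "of redirecting to the dashboard.",
    severity="Major",
    priority="High",
    steps_to_reproduce=["Open the login page", "Enter valid credentials",
                        "Click the login button"],
    environment="Chrome 124 / Windows 11 / build 2.3.1",
)
print(report.status)  # New
```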

Why bug tracking is important


1. Finding Problems: Bug tracking helps us find and document issues in software. If something
doesn't work right, like a button not clicking or a page not loading, we can report it as a bug.
2. Fixing Priority: It helps decide which bugs are most important to fix first. Some bugs might
be small and not cause much trouble, while others could be really serious and need fixing
right away.
3. Keeping Track: Bug tracking lets us see what's happening with each bug. We can see when
it was found, who's working on fixing it, and when it's been fixed.
4. Stopping Repeats: By tracking bugs, we can make sure they don't happen again. If we've
fixed a bug before, we can check if it comes back in new versions of the software.
5. Team Communication: It helps us talk to each other about bugs and how to fix them.
Everyone involved in making the software can see what's going on and work together to
solve problems.
6. Learning from Mistakes: Bug tracking helps us learn from our mistakes. We can look at
what caused the bug and try to avoid making the same mistakes in the future.
7. Happy Users: Fixing bugs quickly makes users happy because they get better software that
works the way it's supposed to. It's important to listen to users when they find bugs and make
sure we fix them to make them happy.

Overall, bug tracking plays a critical role in software quality assurance by facilitating the
systematic identification, resolution, and prevention of defects, ultimately contributing to the
delivery of high-quality, reliable software products.

Software Quality Assurance (SQA)

 Software Quality Assurance (SQA) is a systematic process that ensures the quality
and reliability of software products or applications. It's like checking whether a cake is
baked perfectly before serving it.
 Software Quality Assurance (SQA) is like having a supervisor overseeing all the steps
of making software to make sure everything follows the rules. These rules could be
standards like ISO 9000 or specific models like CMMI.
 SQA involves activities that aim to prevent problems in software development rather
than just fixing them afterward. It's like making sure ingredients are fresh and
measurements are correct to avoid a cake disaster.
 SQA follows specific standards, guidelines, and best practices to ensure that
software meets quality requirements. It's like following a recipe to make a cake,
where each step is important for the final result.
 SQA involves testing and reviewing software throughout its development lifecycle.
It's like tasting the cake batter to make sure it's sweet enough and checking the cake
while it bakes to ensure it rises properly.
 SQA promotes continuous improvement by learning from past mistakes and finding
ways to make software development processes more efficient and effective. It's like
adjusting the recipe and baking technique to make an even better cake next time.
Software Quality Assurance Plan (SQAP)

A software quality assurance plan comprises the procedures, techniques, and tools that are
employed to make sure that a product or service aligns with the requirements defined in the
SRS (software requirement specification).
The plan identifies the SQA responsibilities of a team and lists the areas that need to be reviewed
and audited. It also identifies the SQA work products.

The SQA plan document consists of the below sections:


1. Purpose section
2. Reference section
3. Software configuration management section
4. Problem reporting and corrective action section
5. Tools, technologies and methodologies section
6. Code control section
7. Records: Collection, maintenance and retention section
8. Testing methodology

Difference between quality of design and quality of conformance

Definition: Quality of design is how well a product meets specified requirements and fulfills customer needs and expectations during the design phase. Quality of conformance is how well a product or service adheres to established standards, specifications, and requirements during production or delivery.

Focus: Quality of design focuses on the planning and design stages of product development. Quality of conformance focuses on the execution phase of production or service delivery.

Stage of assessment: Quality of design is evaluated before production or service delivery begins. Quality of conformance is assessed during and after production or service delivery.

Responsibility: Quality of design primarily lies with designers, engineers, and product developers. Quality of conformance is shared among the various stakeholders involved in the production process.

Measures: Quality of design is measured through usability, functionality, reliability, performance, and customer satisfaction with the design concept. Quality of conformance is measured through defect rates, compliance with specifications, adherence to production schedules, and regulatory compliance.

Improvement approach: Quality of design focuses on refining the product design, enhancing features, addressing design flaws, and incorporating customer feedback. Quality of conformance revolves around process optimization, error prevention, defect reduction, training and skill development for production personnel, and enhancing quality control mechanisms.

In summary, while Quality of Design deals with designing products to meet customer
needs and expectations, Quality of Conformance ensures that the actual products or
services produced adhere to established standards and specifications during the
production process.

SQA activities

1. Creating SQA Management Plan: In a software development project, the SQA manager
creates a detailed plan outlining how SQA activities will be conducted. This plan includes
defining the approach to testing (e.g., manual vs. automated), determining the composition of
the QA team (e.g., testers, analysts), and outlining specific engineering activities (e.g., code
reviews, testing methodologies).
2. Setting Checkpoints: At various stages of the project (e.g., after requirements gathering,
after coding phase), the QA team sets checkpoints to evaluate project quality and progress.
For example, after the completion of each development sprint, a checkpoint is established to
review the implemented features and identify any deviations from the project plan.
3. Applying Software Engineering Techniques: During the project planning phase, software
engineering techniques such as interviews with stakeholders and estimation methods like
Function Point Analysis are used to gather requirements and estimate project effort. For
instance, conducting interviews with end-users helps in understanding their needs and
expectations from the software.
4. Executing Formal Technical Reviews: Before moving to the next phase of development,
formal technical reviews are conducted to assess the quality of the prototype or design. For
example, a code review meeting is organized where developers and QA engineers analyze the
code for bugs, performance issues, and adherence to coding standards.
5. Having a Multi-Testing Strategy: To ensure comprehensive testing coverage, a multi-
testing strategy is adopted. This may include functional testing, regression testing,
performance testing, and security testing. For instance, automated test scripts are developed
to perform regression testing after each code change, while manual exploratory testing is
carried out to identify usability issues.
6. Enforcing Process Adherence: Throughout the software development lifecycle, adherence
to defined procedures and standards is enforced. For example, during the code development
phase, developers are required to follow coding guidelines and document their changes using
version control systems.
7. Controlling Change: When a change request is raised, it undergoes a formal change control
process. This involves evaluating the impact of the change on project scope, schedule, and
quality. For example, a change control board reviews the change request and approves or
rejects it based on its impact assessment.
8. Measuring Change Impact: After implementing a defect fix or change request, the QA team
measures its impact on the project. This involves analyzing quality metrics such as defect
density, test coverage, and code churn to assess the effectiveness of the change.
9. Performing SQA Audits: Periodic SQA audits are conducted to ensure that the SDLC
process is being followed as per established standards. For example, an audit team reviews
project documentation, test artifacts, and development practices to identify any non-
compliance issues and recommend corrective actions.
10. Maintaining Records and Reports: All SQA activities, including test results, audit reports,
and change requests, are documented and maintained for future reference. For instance, a
centralized repository is used to store project documentation, test cases, and defect reports for
traceability and accountability.
11. Managing Good Relations: Building positive relationships between QA and development
teams is crucial for effective collaboration. For example, regular communication channels
such as daily stand-up meetings and periodic review meetings are established to foster
collaboration and resolve issues effectively.

Software Quality Assurance (SQA) standards


Software Quality Assurance (SQA) standards are sets of guidelines, frameworks, or
specifications established to ensure that software development processes, products,
and services meet predefined quality criteria. These standards provide a basis for
organizations to implement effective quality management practices throughout the
software development lifecycle. ISO 9000 is one such SQA standard.

ISO 9000
ISO 9000 is a series of international standards developed by the International
Organization for Standardization (ISO) that define requirements for establishing,
implementing, maintaining, and continually improving quality management systems
(QMS).
The ISO 9000 series focuses on ensuring organizations meet customer requirements
and enhance customer satisfaction through effective quality management practices.

Seven principles of ISO 9000:


1. Customer Focus: Organizations should understand and meet the current and future needs of
their customers. By focusing on customer requirements, organizations can enhance customer
satisfaction and loyalty, leading to improved business performance.
2. Leadership: Leadership plays a crucial role in establishing and maintaining a quality-
focused organizational culture. Leaders should provide direction, set objectives, and create an
environment where people can contribute effectively to achieving quality objectives.
3. Engagement of People: People at all levels of the organization are essential for achieving
quality objectives. Organizations should empower employees, promote teamwork, and foster
a culture of involvement, competence, and accountability.
4. Process Approach: Effective management of activities and resources as a series of
interconnected processes helps achieve consistent and predictable results. Organizations
should identify, understand, and manage interrelated processes to achieve desired outcomes
efficiently.
5. Improvement: Continual improvement is essential for enhancing organizational performance
and achieving quality objectives. Organizations should strive for ongoing improvement in
products, processes, and systems by adopting a systematic approach to innovation and
learning.
6. Evidence-Based Decision Making: Decisions should be based on analysis of data and
information to ensure effectiveness and efficiency. Organizations should collect, analyze, and
interpret relevant data to support decision-making processes and drive improvement
initiatives.
7. Relationship Management: Establishing and maintaining mutually beneficial relationships
with relevant stakeholders, including customers, suppliers, and partners, contributes to
organizational success. Organizations should recognize the importance of these relationships
and seek to build trust, collaboration, and value creation.

By adhering to these seven principles, organizations can develop robust quality management
systems that drive continual improvement, enhance customer satisfaction, and achieve
sustainable business success.

Elements of Software Quality Assurance

1. Quality Planning: Define quality objectives, metrics, and testing strategies for the software
development lifecycle.
2. Quality Control: Monitor and evaluate processes and outputs to ensure they meet specified
quality requirements, including conducting reviews and testing.
3. Quality Assurance Reviews: Systematically evaluate processes, documentation, and
deliverables to ensure compliance with quality standards and identify areas for improvement.
4. Process Improvement: Continuously analyze and enhance software development processes
to increase efficiency, effectiveness, and quality.
5. Documentation Management: Establish and maintain documentation standards and version
control systems to ensure transparency and traceability.
6. Training and Competency Development: Provide training and development opportunities
to ensure team members have the necessary knowledge and skills for their roles.
7. Risk Management: Identify, assess, and mitigate risks that could impact the quality and
success of software projects.
8. Change Management: Evaluate, approve, and implement changes to software requirements,
designs, or code in a controlled manner to prevent unintended consequences.
9. Audits and Assessments: Conduct regular audits and assessments to evaluate adherence to
quality standards and identify areas for improvement.
10. Customer Satisfaction Management: Assess and enhance customer satisfaction with
software products and services through feedback collection and continuous improvement
efforts.

Integrating these elements into software development processes helps organizations deliver
high-quality products and services that meet customer expectations and industry standards.

Software Quality Assurance (SQA) techniques

1. Auditing: A software development company conducts regular audits to ensure that all
development processes adhere to the industry's best practices and standards, such as ISO
9000.
2. Reviewing: Before releasing a new version of their mobile app, a company organizes a
review meeting where stakeholders, including product managers, developers, and QA testers,
examine the app's features, user interface, and functionality to provide feedback and
approval.
3. Code Inspection: A software engineering team conducts a formal code inspection session
where a designated reviewer meticulously examines a section of code, looking for syntax
errors, logic flaws, and adherence to coding standards.
4. Design Inspection: A software architect evaluates a system's design against established
criteria, ensuring that it meets requirements, interfaces seamlessly with other components,
and is logically structured for scalability and maintainability.
5. Simulation: An automotive company uses simulation software to model crash scenarios and
analyze the behavior of vehicle components under various impact conditions, helping
engineers design safer cars.
6. Functional Testing: QA engineers perform functional testing on an e-commerce website by
systematically testing each feature, such as user registration, product search, and checkout
process, to verify that they work as expected.
7. Standardization: A software development team adopts the Agile methodology, following
standardized practices such as daily stand-up meetings, sprint planning sessions, and regular
retrospectives to ensure consistency and efficiency in project execution.
8. Static Analysis: A cybersecurity company utilizes static analysis tools to scan source code
for potential security vulnerabilities, such as SQL injection or cross-site scripting (XSS)
flaws, without executing the code.
9. Walkthroughs: A software development team conducts a walkthrough session where the
lead developer guides team members through the codebase, explaining design decisions,
identifying areas for improvement, and addressing any questions or concerns raised by the
team.
10. Path Testing: A software tester performs path testing on a complex algorithm by executing
different input combinations to ensure that all possible execution paths are covered and that
the algorithm behaves correctly under various conditions.
11. Stress Testing: A web hosting provider conducts stress testing on its servers by simulating a
large number of concurrent user requests to determine the server's capacity and identify
performance bottlenecks under high load conditions.
12. Six Sigma: A manufacturing company implements Six Sigma methodologies to improve the
quality of its production processes, aiming to reduce defects in manufactured products to fewer
than 3.4 per million opportunities (a sample calculation follows this list).
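As a small worked example of the figure quoted in the Six Sigma item, the sketch below computes defects per million opportunities (DPMO). The input numbers are invented purely for illustration.

```python
# A small worked example of the DPMO figure used in Six Sigma.
# The batch numbers below are hypothetical.

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical batch: 12 defects found in 500 units, each unit having
# 40 opportunities for a defect.
print(dpmo(12, 500, 40))  # 600.0 -> well above the Six Sigma target of 3.4
```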

Difference between alpha and beta testing

Basic understanding: Alpha testing is the first phase of testing in customer validation. Beta testing is the second phase of testing in customer validation.

Testing environment: Alpha testing is performed at the developer's site in a testing environment, so the activities can be controlled. Beta testing is performed in a real environment, so the activities cannot be controlled.

Testing focus: In alpha testing, only functionality and usability are tested; reliability and security testing are not usually performed in depth. In beta testing, functionality, usability, reliability, and security testing are all given equal importance.

Testing techniques: Alpha testing involves white box and/or black box testing techniques. Beta testing involves only black box testing techniques.

Build release: The build released for alpha testing is called the alpha release. The build released for beta testing is called the beta release.

Testing sequence: System testing is performed before alpha testing. Alpha testing is performed before beta testing.

Issues/bugs handling: In alpha testing, issues/bugs are logged directly into the identified tool and are fixed by developers at high priority. In beta testing, issues/bugs are collected from real users in the form of suggestions/feedback and are considered as improvements for future releases.

Test goals: Alpha testing evaluates the quality of the product. Beta testing evaluates customer satisfaction.

When conducted: Alpha testing is usually conducted after the system testing phase, when the product is 70%-90% complete. Beta testing is usually conducted after alpha testing, when the product is 90%-95% complete.

Scope for enhancements: In alpha testing, features are almost frozen and there is no scope for major enhancements. In beta testing, features are frozen and no enhancements are accepted.

Stakeholders involved: Alpha testing involves engineers (in-house developers), the quality assurance team, and the product management team. Beta testing involves product management, quality management, and user experience teams.

Participants: Alpha testing participants are technical experts, specialized testers with good domain knowledge (new or already part of the system testing phase), and subject matter experts. Beta testing participants are the end users for whom the product is designed; customers and/or end users can also participate in alpha testing in some cases.

Expectations: Alpha testing expects an acceptable number of bugs that were missed in earlier testing activities. Beta testing expects a mostly complete product with very few bugs and crashes.

Entry criteria: For alpha testing - alpha tests designed and reviewed against business requirements; a traceability matrix achieved between alpha tests and requirements; a testing team with knowledge of the domain and product; environment setup and build ready for execution; tool setup ready for bug logging and test management; system testing signed off (ideally). For beta testing - beta tests (what to test and the procedures for product usage) documented; no traceability matrix needed; end users and the customer team identified; end-user environment set up; tool setup ready to capture feedback/suggestions; alpha testing signed off.

Exit criteria: For alpha testing - all alpha tests executed and all cycles completed; critical/major issues fixed and retested; effective review of the feedback provided by participants completed; alpha test summary report prepared; alpha testing signed off. For beta testing - all cycles completed; critical/major issues fixed and retested; effective review of the feedback provided by participants completed; beta test summary report prepared; beta testing signed off.

Rewards: There are no specific rewards or prizes for alpha testing participants. Beta testing participants are rewarded.

Pros: Alpha testing helps uncover bugs that were not found during previous testing activities; gives a better view of product usage and reliability; helps analyze possible risks during and after the launch of the product; helps prepare for future customer support; helps build customer faith in the product; reduces maintenance cost, as bugs are identified and fixed before the beta/production launch; and allows easy test management. Beta testing is not controllable, so users may exercise any available feature in any way and corner areas are well tested; it helps uncover bugs not found during previous testing activities (including alpha); gives a better view of product usage, reliability, and security; analyzes the real users' perspective and opinion of the product; yields feedback/suggestions from real users that help improve the product in future releases; and helps increase customer satisfaction with the product.

Cons: In alpha testing, not all the functionality of the product is expected to be tested; only business requirements are in scope; documentation is extensive and time-consuming (using a bug-logging tool if required, using a tool to collect feedback/suggestions, and test procedures such as installation/uninstallation and user guides); not all participants are assured to give quality testing; not all feedback is effective, and the time taken to review feedback is high; and test management is difficult. In beta testing, the defined scope may or may not be followed by participants; documentation is extensive and time-consuming (bug-logging tool, feedback-collection tool, test procedures such as installation/uninstallation and user guides); not all participants are assured to give quality testing; not all feedback is effective, and the time taken to review feedback is high; and test management is difficult.

Assignment (unit 4)

Subject Name & Code: Software Testing (PCA20D06J)

Q1: Differentiate between automation testing and manual testing.


Q2: Explain the bug life cycle with a diagram.
Q3: What is a test plan? List out the components of a test plan.
Q4: Differentiate between test case and test scenario.

Q5: Differentiate between quality of design and quality of conformance.

Q6: What do you understand by SQA? List out the activities performed under SQA.

Q7: What do you understand by ISO 9000? Explain its principles.

Q8: List out the benefits of automation testing.
