
How to write Test Cases – Software Testing

Last Updated : 26 Sep, 2024



Software testing is the process of validating and verifying that a
software/application works as intended. It makes sure that the
software runs without errors, bugs, or other issues and gives the
expected output to the user. The software testing process is not
limited to finding faults in the existing software; it also identifies
ways to improve the software in areas such as efficiency, usability,
and accuracy. To structure this work, software testing uses a
particular format called a Test Case.
This article focuses on discussing the following topics related to test
cases:
Table of Content
 What is a Test Case?
 Test Case vs Test Scenario
 When do we Write Test Cases?
 Why Write Test Cases?
 Test Case Template
 Best Practice for Writing Test Case
 Test Case Management Tools
 Types of Test Cases
 Example test cases for a login page
What is a Test Case?
A test case is a defined format for software testing required to check
if a particular application/software is working or not. A test case
consists of a certain set of conditions that need to be checked to
test an application or software i.e. in more simple terms when
conditions are checked it checks if the resultant output meets with
the expected output or not. A test case consists of various
parameters such as ID, condition, steps, input, expected result,
result, status, and remarks.
Parameters of a Test Case:
 Module Name: Subject or title that defines the functionality of
the test.
 Test Case Id: A unique identifier assigned to every single
condition in a test case.
 Tester Name: The name of the person who would be carrying
out the test.
 Test scenario: The test scenario provides a brief description to
the tester, as in providing a small overview to know about what
needs to be performed and the small features, and components
of the test.
 Test Case Description: The condition required to be checked
for the given software, e.g., check whether only numeric input is
accepted in an age input box.
 Test Steps: Steps to be performed for the checking of the
condition.
 Prerequisite: The conditions required to be fulfilled before the
start of the test process.
 Test Priority: Indicates which test cases are more important and
must be executed first, and which can be executed later.
 Test Data: The inputs to be taken while checking for the
conditions.
 Test Expected Result: The output which should be expected at
the end of the test.
 Test parameters: Parameters assigned to a particular test case.
 Actual Result: The output that is displayed at the end.
 Environment Information: The environment in which the test is
being performed, such as the operating system, security
information, the software name, software version, etc.
 Status: The status of tests such as pass, fail, NA, etc.
 Comments: Remarks on the test that help improve the software.
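These parameters map naturally onto a simple data structure. Below is a minimal sketch in Python, assuming a hypothetical TestCase dataclass whose fields mirror the parameters listed above; real test management tools define their own schemas and field names.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestCase:
    """Hypothetical container mirroring the test case parameters listed above."""
    module_name: str
    test_case_id: str
    tester_name: str
    test_scenario: str
    description: str
    prerequisites: List[str]
    priority: str                     # e.g. "High", "Medium", "Low"
    test_steps: List[str]
    test_data: dict
    expected_result: str
    environment: str                  # OS, browser, software version, etc.
    actual_result: Optional[str] = None
    status: Optional[str] = None      # "Pass", "Fail", or "NA" once executed
    comments: str = ""

# Example: the age-field check described above, recorded before execution
tc = TestCase(
    module_name="User Registration",
    test_case_id="TC_01",
    tester_name="A. Tester",
    test_scenario="Validate the age input box",
    description="Check that only numbers are accepted in the age input box",
    prerequisites=["Registration page is reachable"],
    priority="High",
    test_steps=[
        "Open the registration page",
        "Type 'abc' into the age input box",
        "Observe the validation message",
    ],
    test_data={"age": "abc"},
    expected_result="A validation error is shown and the value is rejected",
    environment="Windows 11, Chrome",
)
print(tc.test_case_id, tc.status)  # status stays None until the test is run
```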
Test Case vs Test Scenario
Below are some of the points of difference between a test case and
a test scenario:
Parameter: Definition
 Test Case: A test case is a defined format for software testing required to check whether a particular application/software/module is working or not. Different conditions are checked for the same functionality.
 Test Scenario: A test scenario provides a short description of what needs to be performed, based on the use case.

Parameter: Level of detailing
 Test Case: Test cases are more detailed, with several parameters.
 Test Scenario: Test scenarios provide only a short description, mostly one-line statements.

Parameter: Action level
 Test Case: Test cases are low-level actions.
 Test Scenario: Test scenarios are high-level actions.

Parameter: Derived from
 Test Case: Test cases are mostly derived from test scenarios.
 Test Scenario: Test scenarios are derived from documents like the BRS, SRS, etc.

Parameter: Objective
 Test Case: It focuses on "What to test" and "How to test".
 Test Scenario: It focuses more on "What to test".

Parameter: Resources required
 Test Case: Test cases require more resources for documentation and execution.
 Test Scenario: Fewer resources are required to write test scenarios.

Parameter: Inputs
 Test Case: It includes all positive and negative inputs, expected results, navigation steps, etc.
 Test Scenario: They are one-liner statements.

Parameter: Time requirement
 Test Case: It requires more time compared to test scenarios.
 Test Scenario: Test scenarios require less time.

Parameter: Maintenance
 Test Case: They are hard to maintain.
 Test Scenario: They require less time to maintain.

When do we Write Test Cases?


Test cases are written in different situations:
 Before development: Test cases could be written before the
actual coding as that would help to identify the requirement of
the product/software and carry out the test later when the
product/software gets developed.
 After development: Test cases are also written right after a
product/software or a feature has been developed, but before the
launch, in order to test the working of that particular feature.
 During development: Test cases are sometimes written in parallel
with development, so whenever a part of the module/software gets
developed, it gets tested as well.
So, test cases are written in such cases, as test cases help in further
development and make sure that we are meeting all the needed
requirements.
Why Write Test Cases?
Test cases are one of the most important aspects of software
engineering, as they define how the testing would be carried out.
Test cases are carried out for a very simple reason, to check if the
software works or not. There are many advantages of writing test
cases:
 To check whether the software meets customer
expectations: Test cases help to check if a particular
module/software is meeting the specified requirement or not.
 To check software consistency with conditions: Test cases
determine if a particular module/software works with a given set
of conditions.
 Narrow down software updates: Test cases help to narrow
down the software needs and required updates.
 Better test coverage: Test cases help to make sure that all
possible scenarios are covered and documented.
 For consistency in test execution: Test cases help to maintain
consistency in test execution. A well-documented test case helps
the tester to just have a look at the test case and start testing the
application.
 Helpful during maintenance: Test cases are detailed which
makes them helpful during the maintenance phase.
Test Case Template
Let’s look at a basic test case template for the login functionality.
 The Test case template contains the header section which has a
set of parameters that provide information about the test case
such as the tester’s name, test case description, Prerequisite, etc.
 The body section contains the actual test case content, such as
test ID, test steps, test input, expected result, etc.
Below is the table that shows the basic template of a test case:
Test Case ID: Each test case should have a unique ID.
Test Case Description: Each test case should have a proper description to let testers know what the test case is about.
Pre-Conditions: Conditions that are required to be satisfied before executing the test case.
Test Steps: Mention all test steps in detail, to be executed from the end-user's perspective.
Test Data: Test data that could be used as input for the test cases.
Expected Result: The result expected after executing the test case.
Post Condition: Conditions that need to be fulfilled when the test case has been successfully executed.
Actual Result: The result that the system shows once the test case is executed.
Status: Set the status as Pass or Fail by comparing the expected result against the actual result.
Project Name: Name of the project to which the test case belongs.
Module Name: Name of the module to which the test case belongs.
Reference Document: Mention the path of the reference document.
Created By: Name of the tester who created the test case.
Date of Creation: Date of creation of the test case.
Reviewed By: Name of the tester who reviewed the test case.
Date of Review: Date when the test case was reviewed.
Executed By: Name of the tester who executed the test case.
Date of Execution: Date when the test case was executed.
Comments: Comments that help the team understand the test case.

In the template, the section from module name to test scenario forms
the header, while the table below the test scenario (from test case ID
to comments) forms the body of the test case.
Below, a test case template for login functionality has been created
with its parameters and values.
Test Case Template
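To show how login test cases from such a template can be turned into executable checks, here is a minimal pytest-style sketch. The login function and its behaviour are hypothetical stand-ins for the application under test (a real project would call its own login routine or drive the UI with a browser automation tool); the assertions mirror the expected results a tester would record in the template.

```python
# Hypothetical application code under test; replace with the real login routine.
def login(username: str, password: str) -> bool:
    return username == "valid_user" and password == "correct_pass"

# TC_LOGIN_01: valid credentials should log the user in (expected result: login succeeds)
def test_login_with_valid_credentials():
    assert login("valid_user", "correct_pass") is True

# TC_LOGIN_02: a wrong password must be rejected (expected result: login fails)
def test_login_with_invalid_password():
    assert login("valid_user", "wrong_pass") is False

# TC_LOGIN_03: blank credentials must be rejected (expected result: login fails)
def test_login_with_blank_fields():
    assert login("", "") is False
```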

What is Software Testing?


Last Updated : 26 Sep, 2024



Software testing is an important process in the software


development lifecycle . It involves verifying and validating that
a software application is free of bugs, meets the technical
requirements set by its design and development , and satisfies
user requirements efficiently and effectively.
This process ensures that the application can handle all exceptional
and boundary cases, providing a robust and reliable user
experience. By systematically identifying and fixing issues, software
testing helps deliver high-quality software that performs as
expected in various scenarios.
Table of Content
 What is Software Testing?
 Different Types Of Software Testing
 Different Types of Software Testing Techniques
 Different Levels of Software Testing
 Best Practices for Software Testing
 Benefits of Software Testing
 Conclusion
 Frequently Asked Questions on Software Testing
The process of software testing aims not only at finding faults in the
existing software but also at finding measures to improve the
software in terms of efficiency, accuracy, and usability. The article
focuses on discussing Software Testing in detail.
What is Software Testing?
Software Testing is a method to assess the functionality of the
software program. The process checks whether the actual software
matches the expected requirements and ensures the software is
bug-free. The purpose of software testing is to identify the errors,
faults, or missing requirements in contrast to actual requirements. It
mainly aims at measuring the specification, functionality, and
performance of a software program or application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the
software correctly implements a specific function. It means “Are
we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that
the software that has been built is traceable to customer
requirements. It means “Are we building the right product?”.
Importance of Software Testing
 Defects can be identified early: Software testing is important
because if there are any bugs they can be identified early and
can be fixed before the delivery of the software.
 Improves quality of software: Software Testing uncovers the
defects in the software, and fixing them improves the quality of
the software.
 Increased customer satisfaction: Software testing ensures
reliability, security, and high performance, which saves time and
cost and increases customer satisfaction.
 Helps with scalability: Non-functional testing helps identify
scalability issues and the point at which an application might stop
working.
 Saves time and money: After the application is launched it will
be very difficult to trace and resolve the issues, as performing
this activity will incur more costs and time. Thus, it is better to
conduct software testing at regular intervals during software
development.
Need for Software Testing
Software bugs can cause potential monetary and human loss. There
are many examples in history that clearly show how much damage
was incurred when the testing phase was neglected during software
development.
Below are some examples:
 1985: Canada’s Therac-25 radiation therapy machine
malfunctioned due to a software bug and delivered lethal radiation
doses to patients, leaving 3 people injured and 3 dead.
 1994: China Airlines Airbus A300 crashed due to a software bug
killing 264 people.
 1996: A software bug caused U.S. bank accounts of 823
customers to be credited with 920 million US dollars.
 1999: A software bug caused the failure of a $1.2 billion military
satellite launch.
 2015: A software bug in the F-35 fighter plane made it unable to
detect targets correctly.
 2015: The Bloomberg terminal in London crashed due to a software
bug, affecting 300,000 traders on the financial markets and forcing
the government to postpone a £3 billion debt sale.
 Starbucks was forced to close more than 60% of its outlets in the
U.S. and Canada due to a software failure in its POS system.
 Nissan was forced to recall 1 million cars from the market due to
a software failure in the cars' airbag sensor detectors.
Different Types Of Software Testing
Software testing methods include manual and automated testing,
both aimed at improved quality assurance. Functional and
non-functional testing enhance software reliability and performance
and help ensure user satisfaction. The sections below cover the main
testing approaches used in robust software development.

Types Of Software Testing

Software Testing can be broadly classified into 3 types:


1. Functional testing : It is a type of software testing that
validates the software systems against the functional
requirements. It is performed to check whether the application is
working as per the software’s functional requirements or not.
Various types of functional testing are Unit testing, Integration
testing, System testing, Smoke testing, and so on.
2. Non-functional testing : It is a type of software testing that
checks the application for non-functional requirements like
performance, scalability, portability, stress, etc. Various types of
non-functional testing are Performance testing, Stress testing,
Usability Testing, and so on.
3. Maintenance testing : It is the process of changing, modifying,
and updating the software to keep up with the customer’s needs.
It involves regression testing that verifies that recent changes
to the code have not adversely affected other previously working
parts of the software.
Apart from the above classification software testing can be further
divided into 2 more ways of testing:
1. Manual testing : It includes testing software manually, i.e.,
without using any automation tool or script. In this type, the
tester takes over the role of an end-user and tests the software to
identify any unexpected behavior or bug. There are different
stages for manual testing such as unit testing, integration testing,
system testing, and user acceptance testing. Testers use test
plans, test cases, or test scenarios to test software to ensure the
completeness of testing. Manual testing also includes exploratory
testing, as testers explore the software to identify errors in it.
2. Automation testing : Also known as Test Automation, this is
when the tester writes scripts and uses other software to test the
product, automating what would otherwise be a manual process.
Automation testing is used to quickly and repeatedly re-run the
test scenarios that were performed manually in manual testing.
Apart from Regression testing , Automation testing is also used
to test the application from a load, performance, and stress point of
view. It increases the test coverage, improves accuracy, and saves
time and money when compared to manual testing.
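To make the idea of re-running manual scenarios automatically more concrete, below is a small sketch using pytest's parametrize feature. The is_valid_age function is a made-up stand-in for application logic; the point is that one scripted test repeats the same scenario across many inputs that would be tedious to check by hand.

```python
import pytest

# Hypothetical validation rule standing in for the application under test
def is_valid_age(value: str) -> bool:
    return value.isdigit() and 0 < int(value) <= 120

# One parametrized test re-runs the same scenario for every input/expectation pair
@pytest.mark.parametrize("value, expected", [
    ("25", True),      # typical valid age
    ("0", False),      # boundary: zero is rejected
    ("121", False),    # boundary: above the allowed maximum
    ("abc", False),    # non-numeric input
    ("", False),       # empty input
])
def test_age_validation(value, expected):
    assert is_valid_age(value) is expected
```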
Different Types of Software Testing
Techniques
Software testing techniques can be majorly classified into two
categories:
1. Black box Testing : Testing in which the tester doesn’t have
access to the source code of the software and which is conducted
at the software interface, without any concern for the internal
logical structure of the software, is known as black box testing.
2. White box Testing : Testing in which the tester is aware of the
internal workings of the product, has access to its source code,
and is conducted by making sure that all internal operations are
performed according to the specifications is known as white box
testing.
3. Grey Box Testing : Testing in which the testers have some
knowledge of the implementation, but need not be experts in it.
S No. 1
 Black Box Testing: Internal workings of the application are not required to be known.
 White Box Testing: Knowledge of the internal workings is a must.

S No. 2
 Black Box Testing: Also known as closed box or data-driven testing.
 White Box Testing: Also known as clear box or structural testing.

S No. 3
 Black Box Testing: Performed by end users, testers, and developers.
 White Box Testing: Normally done by testers and developers.

S No. 4
 Black Box Testing: Can only be done by a trial and error method.
 White Box Testing: Data domains and internal boundaries can be better tested.
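The difference can be illustrated with a small sketch. A black-box style test checks inputs against expected outputs through the public interface only, while a white-box style test is chosen with the code's internal branches in mind so that every path is exercised. The apply_discount function below is a hypothetical example, not taken from the article.

```python
# Hypothetical function under test with two internal branches
def apply_discount(amount: float, is_member: bool) -> float:
    if is_member:
        return round(amount * 0.9, 2)   # members get 10% off
    return amount

# Black-box style: derived from the specification alone (members pay 90%)
def test_member_gets_ten_percent_off():
    assert apply_discount(100.0, True) == 90.0

# White-box style: chosen to cover the remaining branch of the if statement
def test_non_member_pays_full_price():
    assert apply_discount(100.0, False) == 100.0
```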

Different Levels of Software Testing


Software level testing can be majorly classified into 4 levels:
1. Unit testing : It is a level of the software testing process where
individual units/components of a software/system are tested. The
purpose is to validate that each unit of the software performs as
designed.
2. Integration testing : It is a level of the software testing process
where individual units are combined and tested as a group. The
purpose of this level of testing is to expose faults in the
interaction between integrated units.
3. System testing : It is a level of the software testing process
where a complete, integrated system/software is tested. The
purpose of this test is to evaluate the system’s compliance with
the specified requirements.
4. Acceptance testing : It is a level of the software testing process
where a system is tested for acceptability. The purpose of this
test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
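As a rough sketch of the first two levels, the example below unit-tests two hypothetical helper functions in isolation and then integration-tests them working together; the function names are illustrative only.

```python
import unittest

# Hypothetical units under test
def normalize_email(email: str) -> str:
    return email.strip().lower()

def build_username(email: str) -> str:
    return email.split("@")[0]

class UnitLevelTests(unittest.TestCase):
    # Unit testing: each function is checked on its own
    def test_normalize_email_strips_and_lowercases(self):
        self.assertEqual(normalize_email("  User@Example.COM "), "user@example.com")

    def test_build_username_takes_local_part(self):
        self.assertEqual(build_username("user@example.com"), "user")

class IntegrationLevelTests(unittest.TestCase):
    # Integration testing: the units are combined and tested as a group
    def test_username_from_raw_email(self):
        self.assertEqual(build_username(normalize_email("  User@Example.COM ")), "user")

if __name__ == "__main__":
    unittest.main()
```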

Software Testing Metrics, its Types and


Example
Last Updated : 25 Sep, 2024



Software testing metrics are quantifiable indicators of the software


testing process progress, quality, productivity, and overall health. The
purpose of software testing metrics is to increase the efficiency and
effectiveness of the software testing process while also assisting in making
better decisions for future testing by providing accurate data about the
testing process. A metric expresses the degree to which a system, system
component, or process possesses a certain attribute in numerical terms. A
weekly mileage of an automobile compared to its ideal mileage specified by
the manufacturer is an excellent illustration of metrics. Here, we discuss the
following points:
Table of Content
 Importance of Metrics in Software Testing
 Types of Software Testing Metrics
 Manual Test Metrics: What Are They and How Do They Work?
 Other Important Metrics
 Test Metrics Life Cycle
 Formula for Test Metrics
 Example of Software Test Metrics Calculation
 Conclusion
Importance of Metrics in Software Testing:
Test metrics are essential in determining the software’s quality and
performance. Developers may use the right software testing metrics to
improve their productivity.
 Early Problem Identification: By measuring metrics such as defect
density and defect arrival rate, testing teams can spot trends and patterns
early in the development process.
 Allocation of Resources: Metrics identify regions where testing efforts
are most needed, which helps with resource allocation optimization. By
ensuring that testing resources are concentrated on important areas, this
enhances the strategy for testing as a whole.
 Monitoring Progress: Metrics are useful instruments for monitoring the
advancement of testing. They offer insight into the quantity of test cases
that have been run, their completion rate, and if the testing effort is
proceeding according to plan.
 Continuous Improvement: Metrics offer input on the testing procedure,
which helps to foster a culture of continuous development.
Software testing metrics are essential for evaluating the quality and
efficiency of testing processes. They provide critical data for early problem
detection, resource allocation, and progress monitoring, helping to improve
overall testing practices.
Types of Software Testing Metrics:
Software testing metrics are divided into three categories:
1. Process Metrics: A project’s characteristics and execution are defined by
process metrics. These features are critical to the SDLC process’s
improvement and maintenance (Software Development Life Cycle).
2. Product Metrics: A product’s size, design, performance, quality, and
complexity are defined by product metrics. Developers can improve the
quality of their software development by utilizing these features.
3. Project Metrics: Project Metrics are used to assess a project’s overall
quality. It is used to estimate a project’s resources and deliverables, as
well as to determine costs, productivity, and flaws.
It is critical to determine the appropriate testing metrics for the process. A
few points to keep in mind:
 Before creating the metrics, carefully select your target audiences.
 Define the aim for which the metrics were created.
 Prepare measurements based on the project’s specific requirements.
 Assess the financial gain associated with each metric.
 Match the measurements to the project lifecycle phase for the best results.
The major benefit of automated testing is that it allows testers to complete
more tests in less time while also covering a large number of variations that
would be practically impossible to cover manually.
Manual Test Metrics: What Are They and How Do
They Work?
Manual testing is carried out in a step-by-step manner by quality assurance
experts. Test automation frameworks, tools, and software are used to
execute tests in automated testing. There are advantages and
disadvantages to both human and automated testing. Manual testing is a
time-consuming technique, but it allows testers to deal with more
complicated circumstances. There are two sorts of manual test metrics:
1. Base Metrics: Analysts collect data throughout the development and
execution of test cases to provide base metrics. These metrics are sent to
test leads and project managers in a project status report, and they feed
into the calculated metrics. Examples include:
 The total number of test cases
 The total number of test cases completed
2. Calculated Metrics: Data from base metrics are used to create calculated
metrics. The test lead collects this information and transforms it into more
useful information for tracking project progress at the module, tester, and
other levels. It’s an important aspect of the SDLC since it allows developers
to make critical software changes.
Other Important Metrics:
The following are some of the other important software metrics:
 Defect metrics: Defect metrics help engineers understand the many
aspects of software quality, such as functionality, performance,
installation stability, usability, compatibility, and so on.
 Schedule Adherence: Schedule Adherence’s major purpose is to
determine the time difference between a schedule’s expected and actual
execution times.
 Defect Severity: The severity of the problem allows the developer to see
how the defect will affect the software’s quality .
 Test case efficiency: Test case efficiency is a measure of how effective
test cases are at detecting problems.
 Defects finding rate: It is used to determine the pattern of flaws over a
period of time.
 Defect Fixing Time: The amount of time it takes to remedy a problem is
known as defect fixing time.
 Test Coverage: It specifies the number of test cases assigned to the
program. This metric helps ensure that testing is complete. It also aids in
the verification of code flow and the testing of functionality.
 Defect cause: It’s utilized to figure out what’s causing the problem.
Test Metrics Life Cycle:
The below diagram illustrates the different stages in the test metrics life
cycle.
Test Metrics Lifecycle

The various stages of the test metrics lifecycle are:


1. Analysis:
 The metrics must be recognized.
 Define the QA metrics that have been identified.
2. Communicate:
 Stakeholders and the testing team should be informed about the
requirement for metrics.
 Educate the testing team on the data points that must be collected in
order to process the metrics.
3. Evaluation:
 Data should be captured and verified.
 Using the data collected to calculate the value of the metrics
4. Report:
 Create the report with a strong conclusion.
 Distribute the report to the appropriate stakeholder and
representatives.
 Gather input from stakeholder representatives.
Formula for Test Metrics:
To get the percentage execution status of the test cases, the following
formula can be used:
Percentage test cases executed = (No of test cases executed / Total no
of test cases written) x 100
Similarly, it is possible to calculate for other parameters also such as test
cases that were not executed, test cases that were passed, test cases that
were failed, test cases that were blocked, and so on. Below are some of the
formulas:
1. Test Case Effectiveness:
Test Case Effectiveness = (Number of defects detected / Number of test
cases run) x 100
2. Passed Test Cases Percentage: This metric indicates the percentage of
test cases that passed.
Passed Test Cases Percentage = (Total number of passed test cases /
Total number of tests executed) x 100
3. Failed Test Cases Percentage: This metric measures the proportion of
all failed test cases.
Failed Test Cases Percentage = (Total number of failed test cases /
Total number of tests executed) x 100
4. Blocked Test Cases Percentage: During the software testing process,
this parameter determines the percentage of test cases that are blocked.
Blocked Test Cases Percentage = (Total number of blocked tests / Total
number of tests executed) x 100
5. Fixed Defects Percentage: Using this measure, the team may determine
the percentage of defects that have been fixed.
Fixed Defects Percentage = (Total number of flaws fixed / Number of
defects reported) x 100
6. Rework Effort Ratio: This measure helps to determine the rework effort
ratio.
Rework Effort Ratio = (Actual rework efforts spent in that phase/ Total
actual efforts spent in that phase) x 100
7. Accepted Defects Percentage: This measures the percentage of defects
that are accepted out of the total accepted defects.
Accepted Defects Percentage = (Defects Accepted as Valid by Dev
Team / Total Defects Reported) x 100
8. Defects Deferred Percentage: This measures the percentage of the
defects that are deferred for future release.
Defects Deferred Percentage = (Defects deferred for future releases /
Total Defects Reported) x 100
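All of these formulas share the same shape: a count divided by a total, multiplied by 100. The sketch below renders a few of them as small Python helpers; the function names are illustrative and not part of any standard library.

```python
def percentage(part: float, whole: float) -> float:
    """Generic (part / whole) x 100, rounded to two decimal places."""
    if whole == 0:
        return 0.0          # avoid division by zero when no tests exist yet
    return round((part / whole) * 100, 2)

def executed_pct(executed: int, written: int) -> float:
    """Percentage of test cases executed = (executed / written) x 100."""
    return percentage(executed, written)

def test_case_effectiveness(defects_detected: int, tests_run: int) -> float:
    """Test case effectiveness = (defects detected / test cases run) x 100."""
    return percentage(defects_detected, tests_run)

def passed_pct(passed: int, executed: int) -> float:
    """Passed test cases percentage = (passed / executed) x 100."""
    return percentage(passed, executed)

# The remaining formulas (failed, blocked, fixed, accepted, deferred)
# follow exactly the same pattern with different counts.
```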

Example of Software Test Metrics Calculation:


Let’s take an example to calculate test metrics:
The data retrieved during test case development is as follows:
1. No. of requirements: 5
2. Average number of test cases written per requirement: 40
3. Total no. of test cases written for all requirements: 200
4. Total no. of test cases executed: 164
5. No. of test cases passed: 100
6. No. of test cases failed: 60
7. No. of test cases blocked: 4
8. No. of test cases unexecuted: 36
9. Total no. of defects identified: 20
10. Defects accepted as valid by the dev team: 15
11. Defects deferred for future releases: 5
12. Defects fixed: 12

1. Percentage test cases executed = (No of test cases executed / Total no


of test cases written) x 100
= (164 / 200) x 100
= 82
2. Test Case Effectiveness = (Number of defects detected / Number of test
cases run) x 100
= (20 / 164) x 100
= 12.2
3. Failed Test Cases Percentage = (Total number of failed test cases /
Total number of tests executed) x 100
= (60 / 164) * 100
= 36.59
4. Blocked Test Cases Percentage = (Total number of blocked tests / Total
number of tests executed) x 100
= (4 / 164) * 100
= 2.44
5. Fixed Defects Percentage = (Total number of flaws fixed / Number of
defects reported) x 100
= (12 / 20) * 100
= 60
6. Accepted Defects Percentage = (Defects Accepted as Valid by Dev
Team / Total Defects Reported) x 100
= (15 / 20) * 100
= 75
7. Defects Deferred Percentage = (Defects deferred for future releases /
Total Defects Reported) x 100
= (5 / 20) * 100
= 25
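The worked example above can be reproduced in a few lines of Python using the same kind of percentage helper sketched earlier; the input numbers are taken directly from the table, and the printed values match the hand calculations.

```python
def percentage(part: float, whole: float) -> float:
    # (part / whole) x 100, rounded to two decimals; repeated so this snippet runs on its own
    return round((part / whole) * 100, 2)

executed, written = 164, 200
failed, blocked = 60, 4
defects_reported, defects_accepted = 20, 15
defects_deferred, defects_fixed = 5, 12

print(percentage(executed, written))                  # 82.0   (% test cases executed)
print(percentage(defects_reported, executed))         # 12.2   (test case effectiveness)
print(percentage(failed, executed))                   # 36.59  (% failed)
print(percentage(blocked, executed))                  # 2.44   (% blocked)
print(percentage(defects_fixed, defects_reported))    # 60.0   (% fixed defects)
print(percentage(defects_accepted, defects_reported)) # 75.0   (% accepted defects)
print(percentage(defects_deferred, defects_reported)) # 25.0   (% deferred defects)
```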

Software Quality Assurance – Software


Engineering
Last Updated : 26 Sep, 2024



Software Quality Assurance (SQA) is simply a way to assure


quality in the software. It is the set of activities that ensure
processes, procedures as well as standards are suitable for the
project and implemented correctly.
Software Quality Assurance is a process that works parallel to
Software Development. It focuses on improving the process of
development of software so that problems can be prevented before
they become major issues. Software Quality Assurance is a kind of
Umbrella activity that is applied throughout the software process.
Table of Content
 What is quality?
 Software Quality Assurance (SQA) encompasses
 Elements of Software Quality Assurance (SQA)
 Software Quality Assurance (SQA) focuses
 Software Quality Assurance (SQA) Include
 Major Software Quality Assurance (SQA) Activities
 Benefits of Software Quality Assurance (SQA)
 Disadvantage of Software Quality Assurance (SQA)
 Conclusion
 Frequently Asked Questions on Software Quality Assurance (SQA)
Generally, the quality of the software is verified by third-party
organizations such as international standards organizations.
What is quality?
Quality in a product or service can be defined by several measurable
characteristics. Each of these characteristics plays a crucial role in
determining the overall quality.

What is quality?
Software Quality Assurance (SQA)
encompasses:
 The SQA process
 Specific quality assurance and quality control tasks (including
technical reviews and a multitiered testing strategy)
 Effective software engineering practice (methods and tools)
 Control of all software work products and the changes made to them
 A procedure to ensure compliance with software development
standards (when applicable)
 Measurement and reporting mechanisms
Elements of Software Quality Assurance
(SQA)
1. Standards: The IEEE, ISO, and other standards organizations
have produced a broad array of software engineering standards
and related documents. The job of SQA is to ensure that
standards that have been adopted are followed and that all work
products conform to them.
2. Reviews and audits: Technical reviews are a quality control
activity performed by software engineers for software engineers.
Their intent is to uncover errors. Audits are a type of review
performed by SQA personnel (people employed in an
organization) with the intent of ensuring that quality guidelines
are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has
one primary goal: to find errors. The job of SQA is to ensure that
testing is properly planned and efficiently conducted so that it
achieves this primary goal.
4. Error/defect collection and analysis : SQA collects and
analyzes error and defect data to better understand how errors
are introduced and what software engineering activities are best
suited to eliminating them.
5. Change management: SQA ensures that adequate change
management practices have been instituted.
6. Education: Every software organization wants to improve its
software engineering practices. A key contributor to improvement
is the education of software engineers, their managers, and other
stakeholders. The SQA organization takes the lead in software
process improvement and is a key proponent and sponsor of
educational programs.
7. Security management: SQA ensures that appropriate process
and technology are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of
software failure and for initiating those steps required to reduce
risk.
9. Risk management : The SQA organization ensures that risk
management activities are properly conducted and that risk-
related contingency plans have been established.
Software Quality Assurance (SQA) focuses
The Software Quality Assurance (SQA) focuses on the
following
Software Quality Assurance (SQA)

 Software’s portability: Software’s portability refers to its


ability to be easily transferred or adapted to different
environments or platforms without needing significant
modifications. This ensures that the software can run efficiently
across various systems, enhancing its accessibility and flexibility.
 Software’s usability: Usability of software refers to how easy
and intuitive it is for users to interact with and navigate through
the application. A high level of usability ensures that users can
effectively accomplish their tasks with minimal confusion or
frustration, leading to a positive user experience.
 Software’s reusability: Reusability in software development
involves designing components or modules that can be reused in
multiple parts of the software or in different projects. This
promotes efficiency and reduces development time by
eliminating the need to reinvent the wheel for similar
functionalities, enhancing productivity and maintainability.
 Software’s correctness: Correctness of software refers to its
ability to produce the desired results under specific conditions or
inputs. Correct software behaves as expected without errors or
unexpected behaviors, meeting the requirements and
specifications defined for its functionality.
 Software’s maintainability: Maintainability of software refers
to how easily it can be modified, updated, or extended over time.
Well-maintained software is structured and documented in a way
that allows developers to make changes efficiently without
introducing errors or compromising its stability.
 Software’s error control: Error control in software involves
implementing mechanisms to detect, handle, and recover from
errors or unexpected situations gracefully. Effective error control
ensures that the software remains robust and reliable, minimizing
disruptions to users and providing a smoother experience overall.
Software Quality Assurance (SQA) Includes
1. A quality management approach.
2. Formal technical reviews.
3. Multi testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanism.
Major Software Quality Assurance (SQA)
Activities
1. SQA Management Plan: Make a plan for how you will carry out
SQA throughout the project. Think about which set of software
engineering activities is best for the project, and check the skill
level of the SQA team.
2. Set The Check Points: The SQA team should set checkpoints and
evaluate the performance of the project on the basis of data
collected at those checkpoints.
3. Measure Change Impact: Changes made to correct an error
sometimes reintroduce other errors, so keep measuring the impact
of each change on the project. Retest the changed code to check
that the fix is compatible with the whole project.
4. Multi-testing Strategy: Do not depend on a single testing
approach; when several testing approaches are available, use them.
5. Manage Good Relations: Managing good relations with the other
teams involved in project development is mandatory. A bad
relationship between the SQA team and the programming team will
directly and badly impact the project. Don't play politics.
6. Maintaining records and reports: Comprehensively document
and share all QA records, including test cases, defects, changes,
and cycles, for stakeholder awareness and future reference.
7. Reviews software engineering activities: The SQA group
identifies and documents the processes. The group also verifies
the correctness of the software product.
8. Formalize deviation handling: Track and document software
deviations meticulously. Follow established procedures for
handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high quality software.
2. High quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA-produced software can run for a long time without needing
maintenance.
5. High-quality commercial software increases the market share of
the company.
6. Improving the process of creating software.
7. Improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and
your company can forget about it and move on to the next big
thing. Release a product with chronic issues, and your business
bogs down in a costly, time-consuming, never-ending cycle of
repairs.
Disadvantage of Software Quality Assurance
(SQA)
There are a number of disadvantages of quality assurance.
 Cost: Quality assurance requires additional resources for the
betterment of the product, which increases the budget.
 Time Consuming: Testing and deployment of the project take more
time, which can cause delays in the project.
 Overhead : SQA processes can introduce administrative
overhead, requiring documentation, reporting, and tracking of
quality metrics. This additional administrative burden can
sometimes outweigh the benefits, especially for smaller projects.
 Resource Intensive : SQA requires skilled personnel with
expertise in testing methodologies, tools, and quality assurance
practices. Acquiring and retaining such talent can be challenging
and expensive.
 Resistance to Change : Some team members may resist the
implementation of SQA processes, viewing them as bureaucratic
or unnecessary. This resistance can hinder the adoption and
effectiveness of quality assurance practices within an
organization.
 Not Foolproof : Despite thorough testing and quality assurance
efforts, software can still contain defects or vulnerabilities. SQA
cannot guarantee the elimination of all bugs or issues in software
products.
 Complexity : SQA processes can be complex, especially in large-
scale projects with multiple stakeholders, dependencies, and
integration points. Managing the complexity of quality assurance
activities requires careful planning and coordination.
