
UNIT – III [12 hours]

• Software Quality Assurance:


• SQA Tasks,

• Goals and Metrics,

• Software Review Techniques:


• Informal reviews,

• Formal Technical Reviews,

• Software Reliability.

• Software risk management:


• Definition,

• Types of risk,

• Risk identification, risk monitoring and management.


Software Quality Assurance

• Software Quality Assurance, as the name says, is a process, or a role of a software engineer, to make sure there is no concession or slippage occurring in the software application with respect to the requirements provided by the customer.

• SQA incorporates all software development processes, starting from defining requirements to coding until release. Its prime goal is to ensure quality.

The four phases of Software Quality Assurance are:

• Plan: plan the measures necessary to keep the application standards at high quality.

• Do: carry out the development process, which involves the build and testing processes.

• Check: observe and examine the implementation routes.

• Act: act upon the activities required to maintain the application quality.
Software Quality Assurance Activities:

1. Prepares an SQA plan for a project:
• The plan is developed during project planning and is reviewed by all stakeholders.
• The plan governs quality assurance activities performed by the software engineering team and the SQA group.
• The plan identifies evaluations to be performed, audits and reviews to be performed, standards that apply to the project, techniques for error reporting and tracking, documents to be produced by the SQA team, and the amount of feedback provided to the software project team.

2. Setting the Checkpoint
• The SQA team sets checkpoints after specific time intervals in order to check the progress, quality, and performance of the software, and whether the software quality work is done on time as per the schedule and documents.

3. Having Multiple Testing Strategies
• One should not rely on a single testing approach and strategy for testing software.
• Multiple testing strategies should be applied so as to test the software from different angles and cover all the areas.
• For example, for an e-commerce website, security testing, performance testing, load testing, and database testing should all be done to ensure better quality of the software.

4. Maintaining Records and Reports
• It is important to keep all the records and documents of QA and share them with stakeholders from time to time.
• Test cases executed, test cycles, defects logged, defects fixed, test cases created, and changes in requirements from a client for a specific test case should all be properly documented for future reference.
5. Conduct Formal Technical Reviews
• An FTR is traditionally used to evaluate the quality and design of the prototype.
• In this process, a meeting is conducted with the technical staff to discuss the quality requirements of the software and the design quality of the prototype.
• This activity helps in detecting errors in the early phases of the SDLC and reduces rework effort later.

6. Enforcing Process Adherence
• This activity involves coming up with processes and getting cross-functional teams to buy in on adhering to the set-up systems. It is a blend of two sub-activities:
• Process Evaluation: ensures that the set standards for the project are followed correctly. Periodically, the process is evaluated to make sure it is working as intended and to see whether any adjustments need to be made.
• Process Monitoring: process-related metrics are collected at a designated time interval and interpreted to understand whether the process is maturing as we expect it to.

7. Performing SQA Audits
• The SQA audit inspects the actual SDLC process followed versus the established guidelines that were proposed.
• This validates the correctness of the planning and strategic process versus the actual results.
• This activity can also expose any non-compliance issues.
Software Quality Assurance Standards:

• The software development life cycle, and SQA in particular, may require conformance to quality standards such as:

ISO 9000: Based on seven quality management principles that help organizations ensure that their products or services are aligned with customer needs.

CMMI (Capability Maturity Model Integration): This model originated in software engineering. It can be employed to direct process improvement throughout a project, a department, or an entire organization. It defines five maturity levels; as an organization moves to a higher maturity level, it achieves a higher capability for producing high-quality products with fewer defects and more closely meets the business requirements.

Test Maturity Model integration (TMMi): Based on CMMI, this model focuses on maturity levels in software quality management and testing, again organized into five levels.

Elements of Software Quality Assurance:

1. Software Engineering Standards: SQA teams are critical to ensure that software engineering teams adhere to the standards above.

2. Technical Reviews and Audits: Active and passive verification/validation techniques at every SDLC stage.

3. Software Testing for Quality Control: Testing the software to identify bugs.

4. Error Collection and Analysis: Defect reporting, managing, and analysis to identify problem areas and failure trends.

5. Metrics and Measurement: SQA employs a variety of checks and measures to gather information about the effectiveness and quality of the product and processes.
6. Change Management: Actively advocate controlled change and provide strong processes that limit unexpected negative outcomes.

7. Vendor Management: Work with contractors and tool vendors to ensure collective success.

8. Safety/Security Management: SQA is often tasked with exposing weaknesses and bringing attention to them proactively.

9. Risk Management: Risk identification, analysis, and mitigation are spearheaded by the SQA teams to aid informed decision making.

10. Education: Continuous education to stay current with tools, standards, and industry trends.

SQA Techniques include:

• Auditing: the inspection of the work products and their related information to determine whether a set of standard processes was followed or not.

• Reviewing: a meeting in which the software product is examined by both internal and external stakeholders to seek their comments and approval.

• Code Inspection: the most formal kind of review, which uses static testing to find bugs and avoid defect seepage into later stages. It is done by a trained mediator/peer and is based on rules, checklists, and entry and exit criteria.


Design Inspection: Design inspection is done using a checklist that inspects the following areas of software design:
• General requirements and design
• Functional and interface specifications
• Conventions
• Requirement traceability
• Structures and interfaces
• Logic
• Performance
• Error handling and recovery
• Testability, extensibility
• Coupling and cohesion

Simulation: A simulation is a tool that models a real-life situation in order to virtually examine the behavior of the system under study. In cases where the real system cannot be tested directly, simulators are great sandbox alternatives.

Functional Testing: A QA technique that validates what the system does without considering how it does it. Black-box testing mainly focuses on testing the system specifications or features.

Standardization: Standardization plays a crucial role in quality assurance. It decreases ambiguity and guesswork, thus ensuring quality.

Static Analysis: A software analysis that is done by an automated tool without executing the program. Software metrics and reverse engineering are some popular forms of static analysis. In newer teams, static code analysis tools such as SonarQube, Veracode, etc. are used (a small illustration of what such tools report follows after the Walkthroughs entry below).

Walkthroughs: A software walkthrough or code walkthrough is a peer review in which the developer guides the members of the development team through the product, and they raise queries, suggest alternatives, and make comments regarding possible errors, standard violations, or any other issues.
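To make the static-analysis idea concrete, here is a tiny hypothetical Python snippet (the names are made up for illustration) containing the kind of findings a static analyzer reports from the source alone, without ever executing the program; the exact warnings depend on the tool and its configuration.

import os                          # unused import: typically reported as dead code

def append_item(item, bucket=[]):  # mutable default argument: a classic static-analysis warning
    unused_total = 0               # local variable assigned but never used: another common finding
    bucket.append(item)
    return bucket

Such findings are logged and fed back to the author, much as review comments are for documents.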

Unit Testing: This is a white-box testing technique where complete code coverage is ensured by executing each independent path, branch, and condition at least once (see the sketch below).

Stress Testing: This type of testing is done to check how robust a system is by testing it under heavy load, i.e. beyond normal conditions.
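As a concrete illustration of white-box unit testing, the following sketch uses a hypothetical shipping_fee function (not part of this unit) and Python's built-in unittest module; each test case is chosen so that every branch of the function executes at least once.

import unittest

def shipping_fee(weight_kg, is_member):
    # Hypothetical function used only to illustrate branch coverage.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if is_member:
        return 0.0
    if weight_kg < 5:
        return 50.0
    return 100.0

class ShippingFeeBranchCoverage(unittest.TestCase):
    # One test per branch outcome, so every decision is exercised at least once.
    def test_invalid_weight_branch(self):
        with self.assertRaises(ValueError):
            shipping_fee(0, False)

    def test_member_branch(self):
        self.assertEqual(shipping_fee(3, True), 0.0)

    def test_light_parcel_branch(self):
        self.assertEqual(shipping_fee(3, False), 50.0)

    def test_heavy_parcel_branch(self):
        self.assertEqual(shipping_fee(8, False), 100.0)

if __name__ == "__main__":
    unittest.main()

The function has three decision points, so its cyclomatic complexity (discussed later under product metrics) is 3 + 1 = 4, which matches the four branch-covering tests used here.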
Software quality goals, attributes, and metrics:

Requirement quality
• Ambiguity: number of ambiguous modifiers (e.g., many, large, human-friendly)
• Completeness: number of TBA, TBD
• Understandability: number of sections/subsections
• Volatility: number of changes per requirement; time (by activity) when each change is requested
• Traceability: number of requirements not traceable to design/code
• Model clarity: number of UML models; number of descriptive pages per model; number of UML errors

Design quality
• Architectural integrity: existence of an architectural model
• Component completeness: number of components that trace to the architectural model; complexity of procedural design
• Interface complexity: average number of picks to get to a typical function or content; layout appropriateness
• Patterns: number of patterns used

Code quality
• Complexity: cyclomatic complexity
• Maintainability: design factors
• Understandability: percent internal comments; variable naming conventions
• Reusability: percent reused components
• Documentation: readability index

QC effectiveness
• Resource allocation: staff-hour percentage per activity
• Completion rate: actual vs. budgeted completion time
• Review effectiveness: see review metrics
• Testing effectiveness: number of errors found and their criticality; effort required to correct an error; origin of error
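One of the code-quality metrics above, percent internal comments, can be computed directly from source files. The sketch below is a rough illustration (the file name and the simple line-classification rule are assumptions; real measurement tools handle block comments and docstrings more carefully).

def percent_internal_comments(path):
    """Return the percentage of non-blank lines that are comment lines."""
    comment_lines = 0
    non_blank_lines = 0
    with open(path, encoding="utf-8") as source:
        for line in source:
            stripped = line.strip()
            if not stripped:
                continue                      # ignore blank lines
            non_blank_lines += 1
            if stripped.startswith("#"):      # naive rule: '#' at line start marks a comment
                comment_lines += 1
    return 100.0 * comment_lines / non_blank_lines if non_blank_lines else 0.0

# Example with a hypothetical file name:
# print(percent_internal_comments("payment_module.py"))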
Advantages of SQA:

1. Increases Client's Confidence
• Proper quality checks at different levels of the software, like review, inspection, auditing, etc., with the involvement of both internal and external stakeholders, increase the confidence of clients.
• Submission of weekly reports of the defect and requirement metrics also helps a lot in assuring the client that the work is being done on time.
2. SQA Saves Money

• It is crucial to identify and rectify defects at an early stage in the


software development life cycle to save time and resources.
Proper SQA measures taken at various stages of the development
process can help in reducing risks and detecting defects before
they become too costly to fix.

• This not only saves money for the company but also helps in
maintaining a good reputation and client satisfaction.
3. Boost Customer Satisfaction
• Involving the client in the software development and testing process can have a significant impact on customer satisfaction.
• It helps to ensure that the software being developed meets the client's requirements and expectations.
• Taking their suggestions and feedback into consideration throughout the development process can also help in building trust and confidence in the software being developed.
• This ultimately leads to higher customer satisfaction levels and a better overall experience for the client.

4. Promotes Productivity and Efficiency
When development and testing are done in parallel, defects are found just after the development of a single module is done and are fixed by developers in a timely manner. This allows everyone to work in peace and in a more productive manner, rather than being burdened with multiple bugs at once after the completion of the whole software.

5. Prevents Unforeseen Emergencies
When developing corporate software, the stakes are very high. As the software deals with a lot of customers' sensitive data, it needs to work as expected without any blackouts, corruption, or communication breakdowns. The software should therefore be tested very rigorously so that it works as expected.
6. Reduces End-Time Client Conflicts
• There are many cases of disagreement between the client and the organization arising later regarding changes in the requirements, time, and budget fixed at the start, resulting in cancellation of the project, loss of money, and a bad impression of the company in the market (and loss of the client, as it creates a bad reputation).
• In SQA, everything is fixed at the start of the project and documented properly without any ambiguity, so that no such conflicts arise.

Disadvantages of SQA

1. Sometimes Difficult to Implement
• As SQA defines all the activities and actions that should be taken at each step of software development in a very detailed manner, it sometimes becomes difficult to implement every single activity and process during development. People know it would be beneficial, but focusing on each step in detail becomes difficult when working in large teams.

2. Time Consuming
• Implementing each action in SQA is very time-consuming, and sometimes more time is spent on documentation and meetings than on the actual development and testing of the software.
3. High Cost
• While implementing Software Quality Assurance (SQA) can help reduce the cost of fixing bugs in later stages of a project, it can be challenging for small projects with limited budgets.
• As the size of a project increases, so does the number of resources required to implement SQA, which in turn leads to an increase in the project budget.
• For smaller projects, hiring a whole team of QA and implementing SQA can result in a significant increase in project costs, making it a difficult decision to make.

Software Technical Reviews

A software technical review is an essential step in the software development process. It involves a team of experienced software engineers who evaluate the product's suitability and identify any errors or defects. By conducting this review, developers can catch potential issues early on, which ultimately saves time and resources in the long run.
Types of STRs:
• Formal reviews
• Informal reviews

What is a Formal Review?
A formal review in software testing is a review characterized by documented procedures and requirements. Inspection is the most documented and formal review technique. The formality of the process is related to factors such as the maturity of the software development process.

Process of Formal Review
The formal review follows a formal process consisting of six main phases: the planning phase, kick-off phase, preparation phase, review meeting phase, rework phase, and follow-up phase. These are described step by step below.
1. Planning
• In the review process for a particular product or piece of software, it all starts with a request for review from the author to a moderator, who will oversee the review.
• The moderator is responsible for scheduling the review, which includes setting dates, times, and the work to be done.
• Additionally, the moderator performs entry checks to ensure that the document is ready for review and defines the exit criteria that must be met before the review is complete.
• Once the entry check is complete and the document is deemed ready for review, the moderator and author decide which part of the document needs to be reviewed.
• The formal review team is made up of 4-5 members, and the moderator assigns different roles and tasks to each member.
• This enables each reviewer to focus on a particular type of defect during the review, which not only saves time but also reduces the chances of different reviewers finding the same defect.

2. Kick-off
• The main goal of this phase is to get everybody on the same wavelength regarding the document under review and to commit to the time that will be spent on checking.
• In this meeting, reviewers receive a short introduction to the objectives of the review and its documents.
• Role assignment, pages to be checked, check rate, and other things that need to be carried out for the review are discussed in this meeting.
• Distribution of the review documents, source documents, and other related documents is also done during the kick-off meeting.
• A kick-off meeting is highly recommended as it motivates the reviewers.

3. Preparation
• In this phase, each team member works individually on the documents using the related documents, rules, checklists, and procedures.
• Team members individually identify bugs, comments, and questions according to their understanding of the document and their role.
• All these issues are recorded in a logging form.
• Spelling mistakes are also recorded but not discussed during the meeting.
• At the end of the meeting, all these annotated documents are given to the author of the project.

4. Review Meeting
• In the review meeting phase, all issues are discussed: each team member forwards their comments and issues.
• The moderator of the project takes care of these issues and ensures that all discussed items either have an outcome by the end of the meeting or are defined as an action point if a discussion cannot be resolved during the meeting.
• At the end of the meeting, the team members take a decision on the document based on the exit criteria.
• If the number of defects found per page exceeds a certain level, then the document must be reviewed again.
• If a project is under pressure, the moderator will sometimes be forced to skip re-reviews and exit with a defect-laden document.

5. Rework
• In the rework phase, the defects that were identified in the preparation and review meeting phases are addressed.
• The author tries to improve the document based on these defects and reworks it.
• Note that not every defect leads to rework.
• It is the author's responsibility to examine each defect and decide whether it needs rework or not.
• In some cases, this decision is taken in a review meeting.
• If nothing is done about an issue for a certain reason, this should at least be recorded, which indicates that the author has considered the issue.
• Changes that are made in the document must be easy to find during the follow-up phase, so the author has to indicate where the changes were made.
6. Follow-up
• In the follow-up phase, the moderator of the project is responsible for ensuring that satisfactory action has been taken on all logged defects, process improvement suggestions, and change requests.
• The moderator also checks to make sure that the author of the project has taken appropriate action on all defects.
• It is not compulsory for the moderator to check all the rework or corrections in detail.
• If it is decided that all team members will check the document and update it, then the moderator just takes care of distributing the roles among the team and collecting feedback from them.
• To control the review process, the moderator collects measurements at each phase of the process, for example the number of defects found, the number of defects found per page, the time spent on the documents, and the time spent correcting the defects per page.
• It is the moderator's responsibility to make sure that all details are correct and kept for future analysis.

Q: Write a note on Informal Review.
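As a small illustration of the measurements the moderator collects above, the sketch below tallies defects per page and check rate from a hypothetical set of logged review data (the field names and numbers are assumptions, not values from this unit).

# Hypothetical logging-form entries: (page, severity) for each defect found.
logged_defects = [(1, "major"), (1, "minor"), (2, "minor"), (4, "major"), (4, "minor"), (4, "minor")]

pages_checked = 10          # pages covered in this review
checking_time_hours = 2.5   # total individual preparation time

defects_per_page = len(logged_defects) / pages_checked
check_rate = pages_checked / checking_time_hours           # pages checked per hour
major_defects = sum(1 for _, severity in logged_defects if severity == "major")

print(f"Defects per page : {defects_per_page:.1f}")        # 0.6
print(f"Check rate       : {check_rate:.1f} pages/hour")   # 4.0
print(f"Major defects    : {major_defects}")                # 2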
Introduction to Software Reliability

For any given system, it takes a lot of work to achieve a convincing level of reliability, and system engineers go beyond the expected technical edges in order to achieve an up-to-date software application.

Software Reliability is an essential validation performed to determine the characteristics of a software system in terms of quality assurance, functional compatibility, applicability, overall efficiency, system performance, maintainability, system competence, installation coverage, and process documentation continuance.

Advantages of Software Reliability

The advantages of implementing Software Reliability as part of the software development process are:
• Software Reliability is used for data preservation.
• It helps to avoid software failure.
• It makes the system upgrade process straightforward.
• System efficiency and higher performance give greater productivity.
Software reliability measurement can be divided into four categories:

1. Product Metrics

Product metrics are important in building the requirement specification documents and system design documents. These metrics are used to assess whether the product is sufficient for the intended purpose. They are derived from attributes like usability, reliability, maintainability, and portability, and are usually measured from the actual source code. These metrics help ensure that the product meets the necessary standards and requirements.

Product metrics features:

i. Lines of Code (LOC) is a commonly used measure of software size. It is a simple and intuitive approach to measuring the size of a program, which can be used to predict various program characteristics such as the effort required for software development and maintenance.
• The size of the program is an essential factor that reflects its complexity, reliability, and ease of maintenance.
• LOC is language-independent and can be used to measure the functional complexity of any program.
ii. Function point metric is a technique to measure the functionality of proposed software development based on the count of inputs, outputs, master files, inquiries, and interfaces.

iii. Test coverage metrics estimate fault content and reliability by performing tests on software products, assuming that software reliability is a function of the portion of the software that is successfully verified or tested.

iv. Complexity is a crucial factor in determining the reliability of software, and it is essential to represent it accurately.
• Complexity-oriented metrics determine the complexity of a program's control structure by breaking the code down into a graphical representation.
• One of the most widely used metrics to represent complexity is McCabe's Complexity Metric (cyclomatic complexity).
• It provides an excellent way to measure the complexity of software and helps developers identify areas of the code that may be problematic.

v. Quality metrics measure the quality at various steps of software product development. A vital quality metric is Defect Removal Efficiency (DRE). DRE provides a measure of quality resulting from the different quality assurance and control activities applied throughout the development process.
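To make two of the product metrics above concrete, here is a minimal Python sketch. The parameter counts and defect figures are hypothetical, and the function-point weights are the commonly cited "average complexity" weights; a full function-point count would also apply value-adjustment factors, which are omitted here.

# Function point metric: weight the five counted parameters.
AVERAGE_WEIGHTS = {
    "inputs": 4,
    "outputs": 5,
    "inquiries": 4,
    "files": 10,        # master files / internal logical files
    "interfaces": 7,    # external interfaces
}

def unadjusted_function_points(counts):
    """Sum of (count x average weight) over the five FP parameters."""
    return sum(counts[name] * weight for name, weight in AVERAGE_WEIGHTS.items())

def defect_removal_efficiency(defects_before_release, defects_after_release):
    """DRE = E / (E + D): the share of all defects removed before delivery."""
    return defects_before_release / (defects_before_release + defects_after_release)

if __name__ == "__main__":
    counts = {"inputs": 24, "outputs": 16, "inquiries": 22, "files": 4, "interfaces": 2}
    print("Unadjusted FP:", unadjusted_function_points(counts))   # 24*4 + 16*5 + 22*4 + 4*10 + 2*7 = 318
    print("DRE:", defect_removal_efficiency(90, 10))              # 0.9, i.e. 90% of defects removed before release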
2. Project Management Metrics
• Project metrics define project characteristics and execution.
• If there is proper management of the project by the programmer, then this helps us to achieve better products.
• A relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives.
• Costs increase when developers use inadequate methods.
• Higher reliability can be achieved by using a better development process, risk management process, and configuration management process.

These metrics are:
• Number of software developers
• Staffing pattern over the life cycle of the software
• Cost and schedule
• Productivity

3. Process Metrics
• These metrics play a dynamic role in ensuring that the software development process is functioning optimally.
• By quantifying important attributes such as cycle time and rework time, process metrics provide valuable insights into the effectiveness and quality of the processes that produce the software product.
• Ultimately, the goal of process metrics is to ensure that the right job is done the first time through the process, which helps improve the reliability and quality of the software.
• So, keep tracking the process metrics to estimate, monitor, and improve the effectiveness and quality of your software development process.

Examples are:
• The effort required in the process
• Time to produce the product
• Effectiveness of defect removal during development
• Number of defects found during testing
• Maturity of the process

4. Fault and Failure Metrics
• A fault is a defect in a program which appears when the programmer makes an error and causes a failure when executed under particular conditions. These metrics are used to determine the failure-free execution of software.
• It is important to collect and analyze faults found during testing and problems reported by users after delivery to achieve the objective of improving software reliability.
• The failure metrics are based on customer feedback regarding faults found after the software release.
• By collecting and analyzing this failure data, metrics such as failure density, Mean Time Between Failures (MTBF), and other parameters can be calculated to measure and predict software reliability.
Reliability Metrics
• Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric is to be used depends upon the type of system to which it applies and the requirements of the application domain.
• Some reliability metrics which can be used to quantify the reliability of the software product are as follows:

1. Mean Time to Failure (MTTF)
• MTTF is described as the time interval between two consecutive failures.
• An MTTF of 200 means that one failure can be expected every 200 time units.
• The time units are entirely dependent on the system, and they can even be stated in the number of transactions.
• MTTF is relevant for systems with long transactions.
• For example, it is suitable for computer-aided design systems, where a designer will work on a design for several hours, as well as for word-processor systems.
• To measure MTTF, we can record the failure data for n failures. Let the failures appear at the time instants t1, t2, ..., tn.
• MTTF can then be calculated as the average gap between successive failures:
MTTF = [sum of (t(i+1) - t(i)) for i = 1 to n-1] / (n - 1)

2. Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.

3. Mean Time Between Failures (MTBF)
We can combine the MTTF and MTTR metrics to get the MTBF metric:
MTBF = MTTF + MTTR
Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear only after 300 hours. In this method, the time measurements are real time and not execution time as in MTTF.

4. Rate of Occurrence of Failure (ROCOF)
• It is the number of failures appearing in a unit time interval, i.e. the number of unexpected events over a specific time of operation.
• ROCOF is the frequency with which unexpected behaviour is likely to appear.
• A ROCOF of 0.02 means that two failures are likely to occur in each 100 operational time unit steps.
• It is also called the failure intensity metric.
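The sketch below applies these reliability metrics to a hypothetical failure log (the timestamps and repair time are made-up illustration values): MTTF as the average gap between successive failure instants, MTBF = MTTF + MTTR, and ROCOF as failures per unit of operating time.

from statistics import mean

def mttf(failure_times):
    """Mean Time To Failure: average gap between successive failure instants t1..tn."""
    gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
    return mean(gaps)

def mtbf(mttf_value, mttr_value):
    """MTBF = MTTF + MTTR, as defined above."""
    return mttf_value + mttr_value

def rocof(failure_count, operating_time):
    """Rate of occurrence of failure: failures per unit of operating time."""
    return failure_count / operating_time

if __name__ == "__main__":
    failures = [120, 310, 500, 720, 900]          # hypothetical failure instants (hours)
    m = mttf(failures)                            # (190 + 190 + 220 + 180) / 4 = 195 hours
    print("MTTF :", m)
    print("MTBF :", mtbf(m, 5))                   # assuming an average repair time (MTTR) of 5 hours
    print("ROCOF:", rocof(len(failures), 900))    # about 0.0056 failures per operating hour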
2 marks
1. List the basic terminology of defects.
2. Write the template of a test execution report.
3. What is RFE?
4. Define SQA.
5. Write any two tasks of SQA.
6. What is a formal review?
7. What is a bug? Explain the bug report template in detail.
8. Explain SQA standards.
9. What are the reasons for the "can't fix" status of a bug?
