
Unit 1

What is Quality:
Quality . . . you know what it is, yet you don’t know what it is. But that’s self-contradictory.
But some things are better than others; that is, they have more quality. But when you try to
say what the quality is, apart from the things that have it, it all goes poof! There’s nothing to
talk about. But if you can’t say what Quality is, how do you know what it is, or how do you
know that it even exists? If no one knows what it is, then for all practical purposes it doesn’t
exist at all. But for all practical purposes it really does exist. What else are the grades based
on? Why else would people pay fortunes for some things and throw others in the trash pile?
Obviously some things are better than others . . . but what’s the betterness? . . . So round and
round you go, spinning mental wheels and nowhere finding anyplace to get traction. What the
hell is Quality? What is it?
(Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance)

Software Quality:
Even the most jaded software developers will agree that high-quality software is an important
goal. But how do we define software quality? In the most general sense, software quality can
be defined as: An effective software process applied in a manner that creates a useful product
that provides measurable value for those who produce it and those who use it.
There is little question that the preceding definition could be modified or extended and
debated endlessly. The definition serves to emphasize three important points:

1. An effective software process establishes the infrastructure that supports any effort at
building a high-quality software product. The management aspects of process create the
checks and balances that help avoid project chaos—a key contributor to poor quality.
Software engineering practices allow the developer to analyze the problem and design a solid
solution—both critical to building high-quality software. Finally, umbrella activities such as
change management and technical reviews have as much to do with quality as any other part
of software engineering practice.
2. A useful product delivers the content, functions, and features that the end user desires, but
as important, it delivers these assets in a reliable, error-free way. A useful product always
satisfies those requirements that have been explicitly stated by stakeholders. In addition, it
satisfies a set of implicit requirements (e.g., ease of use) that are expected of all high quality
software.
3. By adding value for both the producer and user of a software product, high-quality
software provides benefits for the software organization and the end-user community. The
software organization gains added value because high-quality software requires less
maintenance effort, fewer bug fixes, and reduced customer support. This enables software
engineers to spend more time creating new applications and less on rework. The user
community gains added value because the application provides a useful capability in a way
that expedites some business process. The end result is (1) greater software product revenue,
(2) better profitability when an application supports a business process, and/or (3) improved
availability of information that is crucial for the business.

Garvin’s Quality Dimensions

David Garvin suggests that quality should be considered by taking a multidimensional
viewpoint that begins with an assessment of conformance and terminates with a
transcendental (aesthetic) view. Although Garvin’s eight dimensions of quality were not
developed specifically for software, they can be applied when software quality is considered:
Performance quality. Does the software deliver all content, functions, and features that are
specified as part of the requirements model in a way that provides value to the end user?
Feature quality. Does the software provide features that surprise and delight first-time end
users?
Reliability. Does the software deliver all features and capability without failure? Is it
available when it is needed? Does it deliver functionality that is error free?
Conformance. Does the software conform to local and external software standards that are
relevant to the application? Does it conform to de facto design and coding conventions? For
example, does the user interface conform to accepted design rules for menu selection or data
input?
Durability. Can the software be maintained (changed) or corrected (debugged) without the
inadvertent generation of unintended side effects? Will changes cause the error rate or
reliability to degrade with time?
Serviceability. Can the software be maintained (changed) or corrected (debugged) in an
acceptably short time period? Can support staff acquire all information they need to make
changes or correct defects? Douglas Adams makes a wry comment that seems appropriate
here: “The difference between something that can go wrong and something that can’t
possibly go wrong is that when something that can’t possibly go wrong goes wrong it usually
turns out to be impossible to get at or repair.”
Aesthetics. There’s no question that each of us has a different and very subjective vision of
what is aesthetic. And yet, most of us would agree that an aesthetic entity has a certain
elegance, a unique flow, and an obvious “presence” that are hard to quantify but are evident
nonetheless. Aesthetic software has these characteristics.
Perception. In some situations, you have a set of prejudices that will influence your
perception of quality. For example, if you are introduced to a software product that was built
by a vendor who has produced poor quality in the past, your guard will be raised and your
perception of the current software product quality might be influenced negatively. Similarly,
if a vendor has an excellent reputation, you may perceive quality, even when it does not
really exist.
Garvin’s quality dimensions provide you with a “soft” look at software quality. Many (but
not all) of these dimensions can only be considered subjectively. For this reason, you also
need a set of “hard” quality factors that can be categorized in two broad groups: (1) factors
that can be directly measured (e.g., defects uncovered during testing) and (2) factors that can
be measured only indirectly (e.g., usability or maintainability). In each case measurement
must occur. You should compare the software to some datum and arrive at an indication of
quality.

ISO 9126 Quality Factors

The ISO 9126 standard was developed in an attempt to identify the key quality attributes for
computer software. The standard identifies six key quality attributes:
Functionality. The degree to which the software satisfies stated needs as indicated by the
following subattributes: suitability, accuracy, interoperability, compliance, and security.
Reliability. The amount of time that the software is available for use as indicated by the
following subattributes: maturity, fault tolerance, recoverability.
Usability. The degree to which the software is easy to use as indicated by the following
subattributes: understandability, learnability, operability.
Efficiency. The degree to which the software makes optimal use of system resources as
indicated by the following subattributes: time behavior, resource behavior.
Maintainability. The ease with which repair may be made to the software as indicated by the
following subattributes: analyzability, changeability, stability, testability.
Portability. The ease with which the software can be transposed from one environment to
another as indicated by the following subattributes: adaptability, installability, conformance,
replaceability.

Like other software quality factors discussed in the preceding subsections, the ISO 9126
factors do not necessarily lend themselves to direct measurement.
However, they do provide a worthwhile basis for indirect measures and an excellent checklist
for assessing the quality of a system.
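These factors and subattributes also lend themselves to a simple, machine-readable checklist. As a rough illustration (this sketch is not part of ISO 9126 itself; the encoding and the wording of the questions are assumptions), the following Python fragment expands the factor/subattribute table above into review questions:

# A minimal sketch: the ISO 9126 factors and subattributes encoded as a
# table, expanded into checklist questions for an indirect assessment.
ISO_9126 = {
    "functionality": ["suitability", "accuracy", "interoperability", "compliance", "security"],
    "reliability": ["maturity", "fault tolerance", "recoverability"],
    "usability": ["understandability", "learnability", "operability"],
    "efficiency": ["time behavior", "resource behavior"],
    "maintainability": ["analyzability", "changeability", "stability", "testability"],
    "portability": ["adaptability", "installability", "conformance", "replaceability"],
}

def checklist(factors):
    """Expand the factor/subattribute table into review questions."""
    return [
        f"Does the system exhibit '{sub}' (an aspect of {factor})?"
        for factor, subs in factors.items()
        for sub in subs
    ]

for question in checklist(ISO_9126):
    print(question)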

Targeted Quality Factors

The quality dimensions and factors presented in the preceding subsections focus on the
software as a whole and can be used as a generic indication of the quality of an application. A
software team can develop a set of quality characteristics and associated questions that probe
the degree to which each factor has been satisfied. For example, McCall identifies usability
as an important quality factor. If you were asked to review a user interface and assess its
usability, how would you proceed? You might start with the subattributes suggested by
McCall (understandability, learnability, and operability), but what do these mean in a
pragmatic sense?
To conduct your assessment, you’ll need to address specific, measurable (or at least,
recognizable) attributes of the interface. For example:
Intuitiveness. The degree to which the interface follows expected usage patterns so that even
a novice can use it without significant training.
• Is the interface layout conducive to easy understanding?
• Are interface operations easy to locate and initiate?
• Does the interface use a recognizable metaphor?
• Is input specified to economize keystrokes or mouse clicks?
• Do aesthetics aid in understanding and usage?
Efficiency. The degree to which operations and information can be located or initiated.
• Does the interface layout and style allow a user to locate operations and information
efficiently?
• Can a sequence of operations (or data input) be performed with an economy of motion?
• Are output data or content presented so that it is understood immediately?
• Have hierarchical operations been organized in a way that minimizes the depth to which a
user must navigate to get something done?
Robustness. The degree to which the software handles bad input data or inappropriate user
interaction.
• Will the software recognize the error if data values are at or just outside prescribed input
boundaries? More importantly, will the software continue to operate without failure or
degradation?
• Will the interface recognize common cognitive or manipulative mistakes and explicitly
guide the user back on the right track?
• Does the interface provide useful diagnosis and guidance when an error condition
(associated with software functionality) is uncovered?
Richness. The degree to which the interface provides a rich feature set.
• Can the interface be customized to the specific needs of a user?
• Does the interface provide a macro capability that enables a user to identify a sequence of
common operations with a single action or command?
As the interface design is developed, the software team would review the design prototype
and ask the questions noted. If the answer to most of these questions is yes, it is likely that the
user interface exhibits high quality. A collection of questions similar to these would be
developed for each quality factor to be assessed.
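To make that assessment mechanical, a team could tally the answers and apply a threshold. The sketch below is illustrative only; the 0.75 cutoff for "most of these questions" is an assumption, not something the factor definitions prescribe:

# A sketch of the yes/no checklist assessment described above.
from collections import Counter

def assess(answers, threshold=0.75):
    """Return a rough quality indication from yes/no review answers."""
    tally = Counter(answers.values())
    yes_ratio = tally[True] / len(answers)
    return "likely high quality" if yes_ratio >= threshold else "needs rework"

intuitiveness = {
    "Is the interface layout conducive to easy understanding?": True,
    "Are interface operations easy to locate and initiate?": True,
    "Does the interface use a recognizable metaphor?": False,
    "Is input specified to economize keystrokes or mouse clicks?": True,
    "Do aesthetics aid in understanding and usage?": True,
}
print("Intuitiveness:", assess(intuitiveness))  # -> likely high quality

A similar dictionary of questions would be built for efficiency, robustness, and richness, giving one rough indicator per factor.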

Software Quality Attributes and Specification:


“Good Enough” Software
Exactly what is “good enough”? Good enough software delivers high-quality functions and
features that end users desire, but at the same time it delivers other more obscure or
specialized functions and features that contain known bugs. The software vendor hopes that
the vast majority of end users will overlook the bugs because they are so happy with other
application functionality. This idea may resonate with many readers. If you’re one of them,
we can only ask you to consider some of the arguments against “good enough.” It is true that
“good enough” may work in some application domains and for a few major software
companies. After all, if a company has a large marketing budget and can convince enough
people to buy version 1.0, it has succeeded in locking them in. As we noted earlier, it can
argue that it will improve quality in subsequent versions. By delivering a good enough
version 1.0, it has cornered the market.
The Cost of Quality
The argument goes something like this— we know that quality is important, but it costs us
time and money—too much time and money to get the level of software quality we really
want. On its face, this argument seems reasonable. There is no question that quality has a
cost, but lack of quality also has a cost—not only to end users who must live with buggy
software, but also to the software organization that has built and must maintain it. The real
question is this: which cost should we be worried about? To answer this question, you must
understand both the cost of achieving quality and the cost of low-quality software.
The cost of quality includes all costs incurred in the pursuit of quality or in performing
quality-related activities and the downstream costs of lack of quality. To understand these
costs, an organization should collect metrics to provide a baseline for the current cost of
quality, identify opportunities for reducing these costs, and provide a normalized basis of
comparison. The cost of quality can be divided into costs associated with prevention,
appraisal, and failure.
Prevention costs include (1) the cost of management activities required to plan and
coordinate all quality control and quality assurance activities, (2) the cost of added technical
activities to develop complete requirements and design models, (3) test planning costs, and
(4) the cost of all training associated with these activities.
Appraisal costs include activities to gain insight into product condition the “first time
through” each process. Examples of appraisal costs include: (1) the cost of conducting
technical reviews for software engineering work products, (2) the cost of data collection and
metrics evaluation, and (3) the cost of testing and debugging.
Failure costs are those that would disappear if no errors appeared before shipping a product
to customers. Failure costs may be subdivided into internal failure costs and external failure
costs. Internal failure costs are incurred when you detect an error in a product prior to
shipment. Internal failure costs include: (1) the cost required to perform rework (repair) to
correct an error, (2) the cost that occurs when rework inadvertently generates side effects that
must be mitigated, and (3) the costs associated with the collection of quality metrics that
allow an organization to assess the modes of failure. External failure costs are associated
with defects found after the product has been shipped to the customer. Examples of external
failure costs are complaint resolution, product return and replacement, help line support, and
labor costs associated with warranty work. A poor reputation and the resulting loss of
business is another external failure cost that is difficult to quantify but nonetheless very real.
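To see how the four categories combine, consider the following minimal sketch. Every figure below is hypothetical; in practice the numbers would come from the metrics the organization collects:

# Hypothetical cost-of-quality breakdown (all figures invented for illustration).
prevention = {"quality planning": 40_000, "requirements and design rigor": 55_000,
              "test planning": 20_000, "training": 15_000}
appraisal = {"technical reviews": 30_000, "metrics collection": 10_000,
             "testing and debugging": 80_000}
internal_failure = {"rework": 60_000, "side-effect mitigation": 12_000,
                    "failure-mode metrics": 8_000}
external_failure = {"complaint resolution": 25_000, "returns and replacement": 18_000,
                    "help line support": 30_000, "warranty labor": 22_000}

categories = [prevention, appraisal, internal_failure, external_failure]
cost_of_quality = sum(sum(c.values()) for c in categories)
failure_cost = sum(internal_failure.values()) + sum(external_failure.values())

print(f"Total cost of quality: ${cost_of_quality:,}")                        # $425,000
print(f"Failure share of that cost: {failure_cost / cost_of_quality:.0%}")   # 41%

A breakdown like this makes the trade-off visible: money spent on prevention and appraisal is intended to shrink the (usually much larger) failure share.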
Bad things happen when low-quality software is produced.
Risks
As one commentator observes, “people bet their jobs, their comforts, their safety, their
entertainment, their decisions, and their very lives on computer software. It better be right.”
The implication is that low-quality
software increases risks for both the developer and the end user. In the preceding subsection,
we discussed one of these risks (cost). But the downside of poorly designed and implemented
applications does not always stop with dollars and time.
Poor quality leads to risks, some of them very serious.
Negligence and Liability
The story is all too common. A governmental or corporate entity hires a major software
developer or consulting company to analyze requirements and then design and construct a
software-based “system” to support some major activity. The system might support a major
corporate function (e.g., pension management) or some governmental function (e.g., health
care administration or homeland security).
Work begins with the best of intentions on both sides, but by the time the system is delivered,
things have gone bad. The system is late, fails to deliver desired features and functions, is
error-prone, and does not meet with customer approval. Litigation ensues.
In most cases, the customer claims that the developer has been negligent (in the manner in
which it has applied software practices) and is therefore not entitled to payment. The
developer often claims that the customer has repeatedly changed its requirements and has
subverted the development partnership in other ways. In every case, the quality of the
delivered system comes into question.
Quality and Security
As the criticality of Web-based and mobile systems grows, application security has become
increasingly important. Stated simply, software that does not exhibit high quality is easier to
hack, and as a consequence, low-quality software can indirectly increase the security risk
with all of its attendant costs and problems.
Software security relates entirely and completely to quality. You must think about security,
reliability, availability, dependability—at the beginning, in the design, architecture, test, and
coding phases, all through the software life cycle [process]. Even people aware of the
software security problem have focused on late life-cycle stuff. The earlier you find the
software problem, the better. And there are two kinds of software problems. One is bugs,
which are implementation problems. The other is software flaws—architectural problems in
the design. People pay too much attention to bugs and not enough on flaws.
To build a secure system, you must focus on quality, and that focus must begin during design.

ACHIEVING SOFTWARE QUALITY

Software quality doesn’t just appear. It is the result of good project management and solid
software engineering practice. Management and practice are applied within the context of
four broad activities that help a software team achieve high software quality: software
engineering methods, project management techniques, quality control actions, and software
quality assurance.
Software Engineering Methods
If you expect to build high-quality software, you must understand the problem to be solved.
You must also be capable of creating a design that conforms to the problem while at the same
time exhibiting characteristics that lead to software that exhibits the quality dimensions and
factors.
A wide array of concepts and methods can lead to a reasonably complete understanding
of the problem and a comprehensive design that establishes a solid foundation for the
construction activity. If you apply those concepts and adopt appropriate analysis and design
methods, the likelihood of creating high-quality software will increase substantially.
Project Management Techniques
Poor management decisions have a direct impact on software quality. The implications are
clear: if (1) a project manager uses estimation to verify that delivery dates are achievable,
(2) schedule dependencies are understood and the team resists the temptation to use
shortcuts, and (3) risk planning is conducted so problems do not breed chaos, software
quality will be affected in a positive way.
Quality Control
Quality control encompasses a set of software engineering actions that help to ensure that
each work product meets its quality goals. Models are reviewed to ensure that they are
complete and consistent. Code may be inspected in order to uncover and correct errors before
testing commences. A series of testing steps is applied to uncover errors in processing logic,
data manipulation, and interface communication. A combination of measurement and
feedback allows a software team to tune the process when any of these work products fail to
meet quality goals.
Quality Assurance
Quality assurance establishes the infrastructure that supports solid software engineering
methods, rational project management, and quality control actions—all pivotal if you intend
to build high-quality software. In addition, quality assurance consists of a set of auditing and
reporting functions that assess the effectiveness and completeness of quality control actions.
The goal of quality assurance is to provide management and technical staff with the data
necessary to be informed about product quality, thereby gaining insight and confidence that
actions to achieve product quality are working. Of course, if the data provided through
quality assurance identifies problems, it is management’s responsibility to address the
problems and apply the necessary resources to resolve quality issues.

Bugs, Errors, and Defects


The goal of software quality control, and in a broader sense, quality management in general,
is to remove quality problems in the software. These problems are referred to by various
names: bugs, faults, errors, or defects, to name a few. Are these terms synonymous, or are
there subtle differences among them?
Here we distinguish between an error (a quality problem found before the software is
released to end users) and a defect (a quality problem found only after the software has been
released to end users). We make this distinction because errors and defects have very
different economic, business, psychological, and human impacts. As software engineers, we
want to find and correct as many errors as possible before the customer and/or end user
encounter them. We want to avoid defects, because defects (justifiably) make software
people look bad.
It is important to note, however, that the temporal distinction made between errors and
defects is not mainstream thinking. The general consensus within the software engineering
community is that defects and errors, faults, and bugs are synonymous. That is, the point in
time that the problem was encountered has no bearing on the term used to describe the
problem. Part of the argument in favor of this view is that it is sometimes difficult to make a
clear distinction between pre-release and post-release (e.g., consider an incremental process
used in agile development). Regardless of how you choose to interpret these terms, recognize
that the point in time at which a problem is discovered does matter and that software
engineers should try hard, very hard, to find problems before their customers and end users
encounter them.
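One widely used measure that rests on exactly this temporal distinction is defect removal efficiency (DRE). It is not defined in the text above, so treat the sketch below as a common industry metric rather than part of this discussion:

# Defect removal efficiency: E = errors found before release,
# D = defects found after release. DRE = E / (E + D);
# a value of 1.0 means every problem was caught before delivery.
def defect_removal_efficiency(errors_before, defects_after):
    return errors_before / (errors_before + defects_after)

print(defect_removal_efficiency(errors_before=95, defects_after=5))  # 0.95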

Defect Rate and Reliability:

You can count the number of defects you find every week. That is the defect rate. You
might like to count quality, but you can’t; there are too many aspects of quality that are not
countable. But reliability is one of those aspects, and you can count defects to measure it.

Quality in software is the outcome of meeting the goals, requirements, and actual needs of the
users. It is a positive concept, referring to such qualities as integrity, interoperability,
flexibility, maintainability, portability, expandability, reusability, resilience, and usability.

A way to look at failure behavior over time is to examine the failure rate. The failure rate is
the time rate of change of the probability of failure. Since the latter is generally a function
of time, the failure rate is also, generally speaking, a function of time. By examining the
failure rate, however, one can often obtain some indication as to which of the influencing
factors is controlling, and at what time it is controlling.

The term “reliability” in engineering refers to the probability that a product, or system, will
perform its designed functions under a given set of operating conditions for a specific period
of time. It is also known as the “probability of survival”.
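In standard reliability-theory notation (an assumption here, since the two paragraphs above are informal), these statements can be written compactly. Let F(t) be the probability that the system has failed by time t, and let R(t) be the probability of survival:

\[
  R(t) = 1 - F(t), \qquad
  \lambda(t) = \frac{f(t)}{R(t)}, \quad \text{where } f(t) = \frac{dF(t)}{dt}.
\]

For the special case of a constant failure rate \(\lambda\), reliability decays exponentially with time:

\[
  R(t) = e^{-\lambda t}.
\]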

Defect Prevention, Reduction and Containment :


Defect prevention is one of the most important activities of a software development life
cycle, with a direct impact on controlling the cost of the project and the quality of
deliverables. The cost of rectifying a defect in the product is very high compared to the cost
of preventing it. Hence it is always advisable to take measures that prevent defects from
being introduced into the product as early as possible.
We can prevent defects by logging them in a defect-tracking tool and documenting them.
Precautions should be taken while logging a defect to specify a correct defect description, so
that developers can reproduce it; it is better to provide steps to reproduce along with
screenshots. This ensures that developers can analyze the defect clearly. Root cause analysis
of defects should be practiced with the release of each build, and care should be taken that
the same critical defects are not present in the next build or version. Defects can also be
prevented by analyzing lessons-learned reports or post-mortem reports from previous
projects.
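As an illustration of the kind of record this paragraph recommends, here is a minimal Python sketch of a defect report. The field names are hypothetical and are not taken from any particular defect-tracking tool:

# An illustrative defect record with the fields the text recommends logging.
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    defect_id: str
    description: str                 # precise, so developers can reproduce it
    steps_to_reproduce: list         # explicit, ordered steps
    screenshots: list = field(default_factory=list)  # attachment file paths
    severity: str = "medium"
    root_cause: str = ""             # filled in during root cause analysis
    fixed_in_build: str = ""         # check the defect does not recur later

bug = DefectReport(
    defect_id="DEF-1042",
    description="Save button stays disabled after editing an existing record",
    steps_to_reproduce=["Open a record", "Edit any field", "Observe the Save button"],
    screenshots=["def-1042-save-disabled.png"],
)
print(bug.defect_id, "-", bug.description)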
Defect reduction is achieved through fault detection and removal: for example, inspection
directly detects and removes faults in the software, while testing removes faults based on
related failure observations. Defect containment is achieved through failure prevention.

Software Review:
Software reviews are a “filter” for the software process. That is, reviews are applied at
various points during software engineering and serve to uncover errors and defects that can
then be removed. Software reviews “purify” software engineering work products, including
requirements and design models, code, and testing data.

You’ll make mistakes as you develop software engineering work products. There’s no shame
in that—as long as you try hard, very hard, to find and correct the mistakes before they are
delivered to end users. Technical reviews are the most effective mechanism for finding
mistakes early in the software process.
If you find an error early in the process, it is less expensive to correct. In addition, errors have
a way of amplifying as the process proceeds. So a relatively minor error left untreated early
in the process can be amplified into a major set of errors later in the project.
Finally, reviews save time by reducing the amount of rework that will be required late in the
project.

What is Software Review?


Software review is an important part of the Software Development Life Cycle (SDLC) that
assists software engineers in validating the quality, functionality, and other vital features and
components of the software. As mentioned above, it is a complete process that involves
examining the software product and ensuring that it meets the requirements stated by the
client. It is a systematic examination of a document by one or more individuals, who work
together to find and resolve errors and defects in the software during the early stages of the
SDLC. Usually performed manually, software review is used to verify various documents
such as requirements, system designs, code, test plans, and test cases.

Why is Software Review Important?


The reasons that make software review an important element of the software development
process are numerous. It is a methodology that offers the development team and the client an
opportunity to gain clarity on the project and its requirements. With the assistance of
software review, the team can verify whether the software is being developed as per the
requested requirements and make the necessary changes before its release to the market.
Other important reasons for software review are:

• It improves the productivity of the development team.

• It makes the process of testing time and cost effective, since more time is spent
examining the software during the initial development of the product.

• Fewer defects are found in the final software, which helps reduce the cost of the
whole process.

• Defects caught during reviews are cost effective to fix, since they are identified at an
early stage; the cost of rectifying a defect in later stages is much higher than fixing it
in the initial stages.

• The process of reviewing software also trains technical authors in defect detection as
well as defect prevention.

• It is at this stage that inadequacies in work products are eliminated.

• Elimination of defects or errors can benefit the software to a great extent; frequent
checks of work samples and identification of small errors lead to a low error rate.

• As a matter of fact, this process results in a dramatic reduction of the time taken to
produce a technically sound document.
Types of software reviews
There are mainly three types of software reviews, each conducted by different members of
the team who evaluate various aspects of the software. The types of software review are:

1. Software Peer Review:

Peer review is the process of evaluating the technical content and quality of the
product, and it is usually conducted by the author of the work product along with
some other developers. According to the Capability Maturity Model, the main purpose
of peer review is to provide “a disciplined engineering practice for detecting or
correcting defects in the software artifacts, preventing their leakage into the field
operations”. In short, peer review is performed in order to detect and resolve the
defects in the software, whose quality is also checked by other members of the team.

Types of Peer Review:

o Code Review: A systematic examination of the computer source code, conducted
to fix mistakes and remove vulnerabilities from the software product, which
further improves the quality and security of the product.

o Pair Programming: A type of code review in which two programmers work on a
single workstation and develop the code together.

o Informal: As its name suggests, this is an informal type of review, which is
extremely popular and widely used. An informal review does not require any
documentation, entry criteria, or a large group of people; it is a time-saving
process.

o Walkthrough: Here, a designer or developer leads a team of software developers
through a software product, where they ask questions and make necessary
comments about various defects and errors. This process differs from software
inspection and technical review in various respects.

o Technical Review: During a technical review, a team of qualified personnel
reviews the software, examines its suitability for its intended use, and identifies
discrepancies.

o Inspection: A formal type of peer review wherein experienced and qualified
individuals examine the software product for bugs and defects using a defined
process. Inspection helps the author improve the quality of the software.

2. Software Management Review

These reviews take place in the later stages of development and are conducted by
management representatives. The objective of this type of review is to evaluate the
work status; on the basis of such reviews, decisions regarding downstream activities
are taken.

3. Software Audit Reviews

A software audit review is a type of external review, wherein one or more auditors
who are not part of the development team conduct an independent examination of the
software product and its processes to assess their compliance with stated
specifications, standards, and other important criteria. Such audits are typically
conducted by managerial-level people.

Formal Review vs Informal Review:


Formal and informal review are two very important types of reviews that are used most
commonly by software engineers to identify defects as well as to discuss ways to tackle these
issues or discrepancies. Therefore, to understand these important types of software review,
following is a comparison of the two:

Formal Review:
A type of peer review, a formal review follows a formal process and has a specific formal
agenda. It is a well-structured and regulated process, usually implemented at the end of each
life cycle. During this process, a formal review panel or board considers the necessary steps
for the next life cycle.
Features of Formal Review:

• This evaluates conformance to specification and various standards.

• Conducted by a group of 3 or more individuals.

• The review team petitions the management or technical leadership to act on the
suggested recommendations.

• Here, the leader verifies that action items are documented and incorporated into
external processes.

• Formal review consists of six important steps, which are:

o Planning.

o Kick-off.

o Preparation.

o Review meeting.

o Rework.

o Follow up.
Informal Review:
Unlike Formal Reviews, Informal reviews are applied multiple times during the early stages
of software development process. The major difference between the formal and informal
reviews is that the former follows a formal agenda, whereas the latter is conducted as per the
need of the team and follows an informal agenda. Though time saving, this process is not
documented and does not require any entry criteria or large group of members.
Features of Informal Review:

• Conducted by a group of 2-7 members, which includes the designer and any other
interested parties.

• Here the team identifies errors and issues as well as examines alternatives.

• It is a forum for learning.

• All the changes are made by the software designer.

• These changes are verified by other project controls.

• The role of informal review is to keep the author informed and to improve the quality
of the product.

Process of Software review:


The process of software review is a simple one and is common to all its types. It is usually
implemented by following a set of activities laid down by IEEE Standard 1028. All these
steps are extremely important and need to be followed rigorously, as skipping even a single
step can lead to complications in the development process, which can further affect the
quality of the end product.

1. Entry Evaluation: A standard checklist of entry criteria is used in order to ensure
ideal conditions for a successful review.

2. Management Preparation: During this stage of the process, responsible
management ensures that the review has all the required resources, including staff,
time, materials, and tools.

3. Review Planning: An objective for the review is identified and, based on it, a team
of qualified reviewers is formed.

4. Preparation: The reviewers are responsible for preparing individually for the group
examination of the work product.

5. Examination and Exit Evaluation: Finally, the results produced by the individual
reviewers are combined, and, before the review is finalized, all activities considered
necessary for an efficacious software review are verified.
