Research Paper
Unit-II
Software Quality Assurance:
Software Quality Assurance (SQA) is simply a way to assure quality in the software. It is the
set of activities that ensures that processes, procedures, and standards are suitable for the project
and are implemented correctly.
SQA Activities :
1) Creating an SQA Management Plan:
This plan lays out the SQA approach you are going to follow and the engineering activities that
will be carried out, and it also includes ensuring that you have the right talent mix in your team.
2) Setting Checkpoints:
The SQA team sets up different checkpoints according to which it evaluates the quality of the
project activities at each checkpoint/project stage. This ensures regular quality inspection and
that work proceeds as per the schedule.
3) Applying Software Engineering Techniques:
Later, based on the information gathered, the software designer can prepare the project
estimate using techniques like WBS (work breakdown structure), SLOC (source lines of code),
and FP (function point) estimation.
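To illustrate how an FP estimate is computed (a minimal sketch only: the counts and influence ratings below are invented example values, and the weights used are the standard average complexity weights), a small Python fragment might look like this:

# Minimal sketch of a function point (FP) estimate.
# The weights are the standard "average" complexity weights; the counts and
# the 14 influence ratings are made-up example values.

AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def function_points(counts, influence_ratings):
    # Unadjusted function points: weighted sum of the five information domain counts.
    ufp = sum(AVERAGE_WEIGHTS[name] * counts[name] for name in AVERAGE_WEIGHTS)
    # Value adjustment factor: 0.65 + 0.01 * sum of the 14 ratings (each rated 0-5).
    vaf = 0.65 + 0.01 * sum(influence_ratings)
    return ufp * vaf

counts = {
    "external_inputs": 24,
    "external_outputs": 16,
    "external_inquiries": 22,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}
ratings = [3] * 14  # assume "average" influence for all 14 adjustment factors
print(round(function_points(counts, ratings)))  # about 340 FP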
4) Executing Formal Technical Reviews:
In this process, a meeting is conducted with the technical staff to discuss the actual quality
requirements of the software and the design quality of the prototype. This activity helps in
detecting errors in the early phases of the SDLC and reduces rework effort in the later phases.
5) Having a Multi-Testing Strategy:
By a multi-testing strategy, we mean that one should not rely on any single testing approach;
instead, multiple types of testing should be performed so that the software product can be tested
well from all angles to ensure better quality.
6) Enforcing Process Adherence:
This activity insists on the need for process adherence during software development. The
development process should also stick to the defined procedures. This activity is a blend of two
sub-activities, product evaluation and process monitoring:
Product evaluation confirms that the software product meets the requirements identified during
project planning.
Process monitoring verifies whether the correct steps were taken during software development.
This is done by matching the steps actually taken against the documented steps.
7) Controlling Change:
In this activity, we use a mix of manual procedures and automated tools to provide a mechanism
for change control. By validating change requests, evaluating the nature of each change, and
controlling the change's effect, it is ensured that software quality is maintained during the
development and maintenance phases.
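As a rough illustration (the record fields and approval rule below are hypothetical assumptions, not taken from any particular change-management tool), an automated part of such a change-control mechanism could gate a change request like this:

# Hypothetical sketch of an automated change-control check; the field names
# and approval rule are illustrative assumptions, not a real tool's API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChangeRequest:
    identifier: str
    description: str
    affected_items: List[str]    # configuration items touched by the change
    impact_assessed: bool        # has the nature/effect of the change been evaluated?
    approved_by: Optional[str]   # approver from the change control board, if any

def can_apply(change: ChangeRequest) -> bool:
    # A change is applied only after it has been validated, its effect evaluated,
    # and it has been formally approved.
    return bool(change.affected_items) and change.impact_assessed and change.approved_by is not None

cr = ChangeRequest("CR-101", "Fix login timeout", ["auth_module"], True, "QA lead")
print(can_apply(cr))  # True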
8) Measuring Change Impact:
If any defect is reported by the QA team, the concerned team fixes the defect. After this, the QA
team should determine the impact of the change brought about by this defect fix. They need to
test not only whether the change has fixed the defect, but also whether the change is compatible
with the whole project.
For this purpose, we use software quality metrics, which allow managers and developers to
observe the activities and proposed changes from the beginning to the end of the SDLC (Software
Development Life Cycle) and to initiate corrective action wherever required.
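For example (a minimal sketch with made-up numbers), two commonly used quality metrics, defect density and defect removal efficiency (DRE), can be computed as follows:

# Two common software quality metrics, computed on made-up example data.

def defect_density(defects_found, size_kloc):
    # Defects per thousand lines of code (KLOC).
    return defects_found / size_kloc

def defect_removal_efficiency(defects_before_release, defects_after_release):
    # DRE = E / (E + D), where E = errors found before release
    # and D = defects found after release.
    return defects_before_release / (defects_before_release + defects_after_release)

print(defect_density(45, 12.5))           # 3.6 defects per KLOC
print(defect_removal_efficiency(90, 10))  # 0.9, i.e. 90% of defects removed before release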
9) Performing SQA Audits:
The SQA audit inspects the entire actual SDLC process followed and compares it against the
established process. It also checks whether whatever was reported by the team in the status
reports was actually performed. This activity also exposes any non-compliance issues.
10) Maintaining Records and Reports:
It is crucial to keep the necessary documentation related to SQA and to share the required SQA
information with the stakeholders. The test results, audit results, review reports, change request
documentation, etc. should be kept for future reference.
11) Managing Good Relations:
We often hear that testers and developers feel superior to each other. This should be avoided, as
it can affect the overall project quality.
Software Reviews:
A review is a systematic examination of a document by one or more people with the main
aim of finding and removing errors early in the software development life cycle. Reviews
are used to verify documents such as requirements, system designs, code, test plans and
test cases.
Reviews are usually performed manually, while static analysis of the code is performed using
tools.
Types of Review:
1) Informal Review: An informal review is conducted without following any formal process; a
colleague simply examines the work product and gives feedback.
2) Walkthrough: In a walkthrough, the author leads the members of the team through the work
product, and the participants ask questions and comment on possible defects.
3) Formal Technical Review: During a technical review, a team of qualified personnel reviews
the software and examines its suitability for its intended use, as well as identifying any
discrepancies.
4) Inspection: This is a formal type of peer review, wherein experienced & qualified
individuals examine the software product for bugs and defects using a defined
process. Inspection helps the author improve the quality of the software.
The objectives of a formal technical review (FTR) are:
(1) To uncover errors in function, logic, or implementation for any representation of the
software;
(2) To verify that the software under review meets its requirements;
(3) To ensure that the software has been represented according to predefined standards;
(4) To achieve software that is developed in a uniform manner;
(5) To make projects more manageable. In addition, the FTR serves as a training ground,
enabling junior engineers to observe different approaches to software analysis, design,
and implementation
Advance preparation should occur but should require no more than two hours of work for
each person.
The duration of the review meeting should be less than two hours. Given these
constraints, it should be obvious that an FTR focuses on a specific (and small) part of the
overall software.
For example, rather than attempting to review an entire design, walkthroughs are
conducted for each component or small group of components.
Reviewer(s)—expected to spend between one and two hours reviewing the product,
making notes, and otherwise becoming familiar with the work.
Recorder— a reviewer who records (in writing) all important issues raised during the
review.
1. Review the product, not the producer. An FTR involves people and egos. Conducted
properly, the FTR should leave all participants with a warm feeling of accomplishment.
Conducted improperly, the FTR can take on the aura of an inquisition. Errors should be pointed
out gently; the tone of the meeting
should be loose and constructive; the intent should not be to embarrass or belittle. The review
leader should conduct the review meeting to ensure that the proper tone and attitude are
maintained and should immediately halt a review that has gotten out of control.
2. Set an agenda and maintain it. One of the key maladies of meetings of all types is drift. An
FTR must be kept on track and on schedule. The review leader is chartered with the
responsibility for maintaining the meeting schedule and should not be afraid to nudge people
when drift sets in.
3. Limit debate and rebuttal. When an issue is raised by a reviewer, there may not be universal
agreement on its impact. Rather than spending time debating the question, the issue should be
recorded for further discussion off-line.
4. Enunciate problem areas, but don't attempt to solve every problem noted. A review is not
a problem-solving session. The solution of a problem can often be accomplished by the producer
alone or with the help of only one other individual. Problem solving should be postponed until
after the review meeting.
5. Take written notes. It is sometimes a good idea for the recorder to make notes on a wall
board, so that wording and priorities can be assessed by other reviewers as information is
recorded.
6. Limit the number of participants and insist upon advance preparation. Two heads are
better than one, but 14 are not necessarily better than 4. Keep the number of people involved to
the necessary minimum. However, all review team members must prepare in advance. Written
comments should be solicited by the review leader (providing an indication that the reviewer has
reviewed the material).
7. Develop a checklist for each product that is likely to be reviewed. A checklist helps the
review leader to structure the FTR meeting and helps each reviewer to focus on important issues.
Checklists should be developed for analysis, design, code, and even test documents.
8. Allocate resources and schedule time for FTRs. For reviews to be effective, they should be
scheduled as a task during the software engineering process. In addition, time should be
scheduled for the inevitable modifications that will occur as the result of an FTR.
9. Conduct meaningful training for all reviewers. To be effective all review participants
should receive some formal training. The training should stress both process-related issues and
the human psychological side of reviews. Freedman and Weinberg estimate a one-month
learning curve for every 20 people who are to participate effectively in reviews.
10. Review your early reviews. Debriefing can be beneficial in uncovering problems with the
review process itself. The very first product to be reviewed should be the review guidelines
themselves.
Because many variables (e.g., number of participants, type of work products, timing and length,
specific review approach) have an impact on a successful review, a software organization should
experiment to determine what approach works best in a local context. Porter and his
colleagues provide excellent guidance for this type of experimentation.
Software Reliability:
Software Reliability is the probability of failure-free software operation for a specified
period of time in a specified environment. Software Reliability is also an important factor
affecting system reliability. It differs from hardware reliability in that it reflects the
design perfection rather than manufacturing perfection. The high complexity of software
is the major contributing factor to Software Reliability problems.
Reliability metrics are used to quantitatively express the reliability of the software
product. The choice of which metric to use depends upon the type of system to which it
applies and the requirements of the application domain.
Some reliability metrics which can be used to quantify the reliability of the software
product are as follows:
Mean Time to Failure (MTTF):
MTTF is described as the time interval between two successive failures. An MTTF of 200
means that one failure can be expected every 200 time units. The time units are entirely
dependent on the system, and they can even be stated in the number of transactions. MTTF is
relevant for systems with long transactions, i.e., where system processing takes a long time.
To measure MTTF, we can record the failure data for n failures. Let the failures occur at the
time instants t1, t2, ..., tn. MTTF is then the average of the intervals between successive failures:
MTTF = [(t2 - t1) + (t3 - t2) + ... + (tn - t(n-1))] / (n - 1)
Mean Time to Repair (MTTR):
Once a failure occurs, some time is required to fix the error. MTTR measures the average time
it takes to track down the errors causing the failure and to fix them.
Mean Time Between Failures (MTBF):
We can combine the MTTF and MTTR metrics to get the MTBF metric: MTBF = MTTF + MTTR.
Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected
to appear only after 300 hours. In this metric, the time measurements are real time, not
the execution time as in MTTF.
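As a small worked sketch (the failure instants and repair durations below are invented example data), the three metrics can be computed directly from a failure log:

# Compute MTTF, MTTR, and MTBF from an example failure log.
# Failure instants (hours of operation) and repair durations are made-up values.

failure_times = [100, 320, 540, 800, 1010]  # instants t1..tn at which failures occurred
repair_times = [4, 6, 5, 7, 3]              # hours spent tracking down and fixing each failure

# MTTF: average interval between successive failures.
intervals = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mttf = sum(intervals) / len(intervals)

# MTTR: average time taken to fix the error behind a failure.
mttr = sum(repair_times) / len(repair_times)

# MTBF: combines the two, MTBF = MTTF + MTTR.
mtbf = mttf + mttr

print(mttf, mttr, mtbf)  # 227.5 5.0 232.5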
SQA Plan
The SQA Plan provides a road map for instituting software quality assurance. Developed
by the SQA group, the plan serves as a template for SQA activities that are instituted for
each software project.
A standard for SQA plans has been recommended by the IEEE. Initial sections describe
the purpose and scope of the document and indicate those software process activities that
are covered by quality assurance. All documents noted in the SQA Plan are listed and all
applicable standards are noted. The management section of the plan describes SQA’s
place in the organizational structure, SQA tasks and activities and their placement
throughout the software process, and the organizational roles and responsibilities relative
to product quality.