This set of review questions and answers covers McCall's software quality factors (product operation, product revision, and product transition), the components of software quality assurance (SQA), statistical quality assurance and Six Sigma, software metrics, ISO 9000 certification, software reliability and dependability, fault-tolerant architectures, and the difference between black-box and white-box testing.


What are McCall's software quality factors? How many levels does McCall define for quality attributes?

McCall defines quality attributes at three levels, with the quality factors focusing on three aspects of a software product:
(1) its operation characteristics (Product Operation),
(2) its ability to undergo change (Product Revision), and
(3) its adaptability to new environments (Product Transition).
Pg 5 Figure 1.3

In general, software quality assurance (SQA) involves six processes. Define them.

In general, software quality assurance (SQA) involves:
(1) An SQA process,
(2) Specific quality assurance and quality control tasks (including technical
reviews and a multi-tiered testing strategy),
(3) Effective software engineering practice (methods and tools),
(4) Control of all software work products and the changes made to them,
(5) A procedure to ensure compliance with software development standards (when
applicable), and
(6) Measurement and reporting mechanisms.
Pg 9 Figure 1.4

SEI CMM classifies software development industries into five maturity levels.
Define them.

The five maturity levels are:
(1) Initial - the development process is ad hoc and largely chaotic.
(2) Repeatable - basic project management practices are in place so that earlier successes can be repeated.
(3) Defined - the development process is documented and standardized across the organization.
(4) Managed - process and product quality are measured quantitatively.
(5) Optimizing - continuous process improvement is in place, driven by quantitative feedback.
Pg 32 Figure 1.10
What elements are involved in SQA for a broad range of concerns and
activities?
Software quality assurance encompasses a broad range of concerns and
activities:
1. Standards
2. Reviews and Audits
3. Testing
4. Error/defect collection and analysis
5. Change Management
6. Education
7. Vendor Management
8. Security Management
9. Safety
10. Risk Management

Describe the steps performed in the statistical software quality assurance technique.
1. Information about software errors and defects is collected and categorized.
2. An attempt is made to trace each error and defect to its underlying cause.
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent
of all possible causes), isolate the 20 percent (the vital few).
4. Once the vital few causes have been identified, move to correct the problems that
have caused the errors and defects.
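
As an illustration of step 3, here is a minimal Python sketch of the Pareto analysis, assuming the defect data have already been collected and categorized; the cause names and counts are made up for illustration.

from collections import Counter

# Hypothetical categorized defect data: one entry per defect, labeled with its cause.
defect_causes = (
    ["incomplete specification"] * 40
    + ["misinterpreted customer communication"] * 25
    + ["deviation from standards"] * 15
    + ["error in data representation"] * 10
    + ["inconsistent component interface"] * 5
    + ["error in design logic"] * 5
)

counts = Counter(defect_causes)
total = sum(counts.values())

# Walk the causes from most to least frequent and keep the "vital few"
# that together account for roughly 80 percent of all defects.
vital_few = []
cumulative = 0
for cause, count in counts.most_common():
    vital_few.append((cause, count))
    cumulative += count
    if cumulative / total >= 0.80:
        break

for cause, count in vital_few:
    print(f"{cause}: {count} defects ({count / total:.0%})")

Correction effort (step 4) would then be concentrated on the causes this loop reports.
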
What is the Six Sigma strategy used by Motorola in the 1980s for quality
assurance?
The Six Sigma methodology defines three core steps:
(1) Define customer requirements and deliverables and project goals via well-
defined methods of customer communication.
(2) Measure the existing process and its output to determine current quality
performance (collect defect metrics).
(3) Analyze defect metrics and determine the vital few causes.
If an existing software process is in place, but improvement is required, Six
Sigma suggests two additional steps:
(4) Improve the process by eliminating the root causes of defects.
(5) Control the process to ensure that future work does not reintroduce the causes
of defects.

Discuss the DMAIC method in the Six Sigma statistical software quality
assurance technique.
DMAIC stands for define, measure, analyze, improve, and control.
(1) Define customer requirements and deliverables and project goals via well-
defined methods of customer communication.
(2) Measure the existing process and its output to determine current quality
performance (collect defect metrics).
(3) Analyze defect metrics and determine the vital few causes.
(4) Improve the process by eliminating the root causes of defects.
(5) Control the process to ensure that future work does not reintroduce the causes
of defects.
Discuss the DMADV method in the Six Sigma statistical software quality
assurance technique.
DMADV stands for define, measure, analyze, design, and verify.
(1) Define customer requirements and deliverables and project goals via well-
defined methods of customer communication.
(2) Measure the existing process and its output to determine current quality
performance (collect defect metrics).
(3) Analyze defect metrics and determine the vital few causes.
(4) Design the process to avoid the root causes of defects and to meet customer
requirements.
(5) Verify that the process model will, in fact, avoid defects and meet customer
requirements.

What is the difference between process metrics and product metrics? Give
examples of each.
Product Metrics
Product metrics are predictor metrics used to quantify internal attributes of a
software system. Product metrics help measure the characteristics of the product or
software being developed, such as the complexity of a module, the average length of
identifiers in a program, and the number of attributes and operations associated with
object classes in a design.
Examples of product metrics are:
• LOC and function points to measure size, and
• PM (person-month) to measure the effort required to develop it.
Process Metrics
Process quality management and improvement can result in fewer defects in
the software being developed. Process metrics help measure how a process is
performing. Process (or control) metrics include, for example, the average effort and
the time required to repair reported defects.
Examples of process metrics are:
• review effectiveness,
• average defect correction time, and
• productivity.
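
A minimal Python sketch of how two of these process metrics could be computed from project records; the defect log, size, and effort figures are hypothetical.

from datetime import datetime

# Hypothetical defect log: when each defect was reported and when it was fixed.
defects = [
    {"reported": datetime(2024, 3, 1), "fixed": datetime(2024, 3, 4)},
    {"reported": datetime(2024, 3, 2), "fixed": datetime(2024, 3, 3)},
    {"reported": datetime(2024, 3, 5), "fixed": datetime(2024, 3, 10)},
]

# Average defect correction time (a process/control metric), in days.
correction_days = [(d["fixed"] - d["reported"]).days for d in defects]
avg_correction_time = sum(correction_days) / len(correction_days)

# Productivity (another process metric): size produced per unit of effort,
# here expressed as LOC per person-month using made-up totals.
delivered_loc = 12_000          # product size measure (LOC)
effort_person_months = 15       # effort measure (PM)
productivity = delivered_loc / effort_person_months

print(f"Average defect correction time: {avg_correction_time:.1f} days")
print(f"Productivity: {productivity:.0f} LOC per person-month")
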

What are the advantages and disadvantages of ISO 9000 Certification?


The benefits that accrue to organizations obtaining ISO 9000 certification are:
• Customers' confidence in an organization increases when the organization
qualifies for ISO 9001 certification.
• ISO 9000 requires a well-documented software production process to be in
place.
• ISO 9000 makes the development process focused, efficient, and cost
effective.
• An ISO 9000 certification points out the weak points of an organization and
recommends remedial action.
• ISO 9000 sets the basic framework for the development of an optimal process
and total quality management (TQM).

Disadvantages of ISO 9000 certification are:

• ISO 9000 requires a software production process to be adhered to, but does
not guarantee the process to be of high quality.
• The ISO 9000 certification process is not foolproof, and no international
accreditation agency exists.
• Organizations getting ISO 9000 certification often tend to downplay domain
expertise and the ingenuity of the developers.
• ISO 9000 does not automatically lead to continuous process improvement. In
other words, it does not automatically lead to TQM.
Define software reliability. How can we achieve software reliability?
The reliability of a software product basically represents its trustworthiness or
dependability. Software reliability can be defined as the probability of the
product working correctly over a given period of time.
Software reliability can be achieved by avoiding the introduction of faults, by
detecting and removing faults before system deployment, and by including fault-
tolerance facilities that allow the system to remain operational after a fault has
caused a system failure.

Define the dependability of a computer system. What are the most important
dimensions of dependability?
The dependability of a computer system is the degree of confidence a user has
that the system will operate as they expect and that the system will not fail in normal
use. It is not meaningful to express dependability numerically. The dependability of
a computer system is a system property that reflects the user's degree of trust in the
system. The most important dimensions of dependability are availability, reliability,
safety, security, and resilience.
1. Availability
Informally, the availability of a system is the probability that it will be up and
running and able to deliver useful services to users at any given time.
2. Reliability
Informally, the reliability of a system is the probability, over a given period of time,
that the system will correctly deliver services as expected by the user.
3. Safety
Informally, the safety of a system is a judgment of how likely it is that the system
will cause damage to people or its environment.
4. Security
Informally, the security of a system is a judgment of how likely it is that the system
can resist accidental or deliberate intrusions.
5. Resilience
Informally, the resilience of a system is a judgment of how well that system can
maintain the continuity of its critical services in the presence of disruptive events
such as equipment failure and cyber-attacks.

What do you understand by a reliability growth model? How is reliability growth modeling useful?
A reliability growth model is a mathematical model of how software reliability
improves as errors are detected and repaired. A reliability growth model can be used
to predict when (or if at all) a particular level of reliability is likely to be attained.
Thus, reliability growth modeling can be used to determine when to stop testing to
attain a given reliability level.
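
A minimal Python sketch of one simple reliability growth model, an exponential (Goel-Okumoto type) model; the parameter values are assumed for illustration rather than estimated from real failure data, and the sketch only shows how such a model predicts when a target failure intensity will be reached.

import math

# Assumed (illustrative) parameters of an exponential reliability growth model:
#   a: expected total number of defects that will eventually be exposed
#   b: rate at which testing exposes the remaining defects (per testing day)
a = 120.0
b = 0.05

def failure_intensity(t: float) -> float:
    """Expected failures per day after t days of testing: lambda(t) = a*b*exp(-b*t)."""
    return a * b * math.exp(-b * t)

def time_to_reach(target_intensity: float) -> float:
    """Testing time (days) needed before the failure intensity drops to the target."""
    return math.log(a * b / target_intensity) / b

for day in (0, 20, 40, 60):
    print(f"day {day:3d}: expected failure intensity = {failure_intensity(day):.2f} per day")

target = 0.5  # stop testing once we expect no more than 0.5 failures per day
print(f"Target intensity {target} reached after about {time_to_reach(target):.0f} days of testing")

In practice the parameters would be fitted to observed inter-failure data, and other growth models (for example step-function or logarithmic models) may fit a given project better.
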

Explain the different steps of statistical testing.
• The first step is to determine the operation profile of the software.
• The next step is to generate a set of test data corresponding to the determined
operation profile.
• The third step is to apply the test cases to the software and record the time
between each failure.
• After a statistically significant number of failures have been observed, the
reliability can be computed.

Pg 54 Figure 2.7
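
A minimal Python sketch of these steps, using an assumed operation profile and a stand-in system under test that fails occasionally only so that the sketch produces failures to measure.

import random

random.seed(42)

# Step 1: an assumed operation profile -- the relative frequency with which
# users are expected to invoke each class of operation.
operation_profile = {"search": 0.6, "update": 0.3, "report": 0.1}

# A stand-in for the system under test: here "report" fails 5% of the time,
# purely so that the sketch produces some failures to observe.
def run_operation(op: str) -> bool:
    return not (op == "report" and random.random() < 0.05)

# Step 2: generate test cases by sampling operations according to the profile.
ops, weights = zip(*operation_profile.items())
test_cases = random.choices(ops, weights=weights, k=10_000)

# Step 3: apply the test cases and record the number of runs between failures.
inter_failure_runs, runs_since_failure = [], 0
for op in test_cases:
    runs_since_failure += 1
    if not run_operation(op):
        inter_failure_runs.append(runs_since_failure)
        runs_since_failure = 0

# Step 4: after enough failures have been observed, estimate reliability
# (here as the mean number of runs between failures).
if not inter_failure_runs:
    raise SystemExit("no failures observed; more test runs are needed")
mean_runs_between_failures = sum(inter_failure_runs) / len(inter_failure_runs)
print(f"{len(inter_failure_runs)} failures observed")
print(f"Estimated mean number of runs between failures: {mean_runs_between_failures:.0f}")
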
Explain using one simple sentence each what you understand by the following
reliability measures:

• A POFOD of 0.001
• A ROCOF of 0.002
• MTBF of 200 units
• Availability of 0.998

A POFOD of 0.001

POFOD is the probability of failure on demand. A POFOD of 0.001 means 1 failure
in 1000 demands.

A ROCOF of 0.002

ROCOF is the rate of occurrence of failures. A ROCOF of 0.002 means 2 failures
in each 1000 operational time units.

MTBF of 200 units

MTBF is the mean time between failures. An MTBF of 200 units indicates that once a
failure occurs, the next failure is expected after 200 time units.

Availability of 0.998

Availability of 0.998 means the software is available for 998 out of every 1000 time units.
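
A minimal Python sketch showing how these measures could be computed from observation data; the counts are hypothetical and chosen so that the results match the values discussed above.

# Hypothetical observation data for one system.
demands = 10_000            # number of service demands observed
failed_demands = 10         # demands on which the system failed
operational_time = 5_000    # total operational time units observed
failures_in_time = 10       # failures observed during that time
downtime = 10               # time units during which the system was unavailable

pofod = failed_demands / demands                     # probability of failure on demand
rocof = failures_in_time / operational_time          # failures per operational time unit
mtbf = operational_time / failures_in_time           # mean time between failures
availability = (operational_time - downtime) / operational_time

print(f"POFOD        = {pofod:.3f}")        # 0.001
print(f"ROCOF        = {rocof:.3f}")        # 0.002
print(f"MTBF         = {mtbf:.0f} units")   # 500 units
print(f"Availability = {availability:.3f}") # 0.998
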
To handle software design failures, a system has to use diverse software and
hardware. What are the three architectural patterns that have been used in
fault-tolerant systems? Discuss one of them in detail.

The three architectural patterns that have been used in fault-tolerant systems are -

1. Protection Systems Architecture

2. Self-monitoring Architecture

3. N-version Programming Architecture

Self-monitoring Architecture

A self-monitoring architecture is a system architecture in which the system is
designed to monitor its own operation and to take some action if a problem is
detected. Computations are carried out on separate channels, and the outputs of these
computations are compared. If the outputs are identical and are available at the same
time, then the system is judged to be operating correctly. If the outputs are different,
then a failure is assumed. When this occurs, the system raises a failure exception on
the status output line. This signals that control should be transferred to some other
system.

Pg 62 Figure 2.10
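
A minimal Python sketch of the comparison logic in a self-monitoring architecture; the two channel functions are hypothetical stand-ins for independently implemented computations on separate channels.

class FailureException(Exception):
    """Raised on the status output when the channels disagree."""

# Hypothetical diverse channels: two independent implementations of the same computation.
def channel_a(x: float) -> float:
    return x * x

def channel_b(x: float) -> float:
    return x ** 2

def self_monitoring_compute(x: float, tolerance: float = 1e-9) -> float:
    """Run the computation on both channels and compare the outputs.

    If the outputs agree, the result is delivered; if they differ, a failure
    exception is raised so that control can be transferred to another system.
    """
    result_a = channel_a(x)
    result_b = channel_b(x)
    if abs(result_a - result_b) > tolerance:
        raise FailureException(f"channel disagreement: {result_a} vs {result_b}")
    return result_a

print(self_monitoring_compute(3.0))  # channels agree, so the result 9.0 is delivered
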

To handle software design failures, a system has to use diverse software and
hardware. Illustrate the three architectural patterns that have been used in
fault-tolerant systems.

Protection Systems Architecture

A protection system monitors the controlled system and its environment and, if a
problem is detected, issues commands to move the system to a safe state.
Pg 61 Figure 2.9

Self-monitoring Architecture

Separate channels carry out the same computation on the same input and their
outputs are compared; a discrepancy is treated as a failure.
Pg 62 Figure 2.10

N-version Programming Architecture

Several independently developed versions of the software execute in parallel on the
same input, their outputs are compared, and the agreed (majority) result is selected.
Pg 65 Figure 2.12
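
A minimal Python sketch of the output-comparison (voting) step of N-version programming; the three version functions are hypothetical stand-ins for versions developed by independent teams, and one of them is deliberately faulty to show a failure being masked.

from collections import Counter

# Hypothetical independently developed versions of the same specification.
def version_1(x: int) -> int:
    return abs(x)

def version_2(x: int) -> int:
    return x if x >= 0 else -x

def version_3(x: int) -> int:
    return -x          # a faulty version, included so the voter has something to mask

def n_version_execute(x: int) -> int:
    """Run all versions on the same input and return the majority result."""
    outputs = [version_1(x), version_2(x), version_3(x)]
    result, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority agreement among versions")
    return result

print(n_version_execute(5))   # version_3 is outvoted; the result is 5
print(n_version_execute(-4))  # all versions agree; the result is 4

The approach assumes the versions fail independently; a common specification error would defeat the voter.
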

What are the accepted good programming practices that help reduce
faults in systems?

Dependable programming guidelines

1. Limit the visibility of information in a program.

2. Check all inputs for validity.

3. Provide a handler for all exceptions.

4. Minimize the use of error-prone constructs.

5. Provide restart capabilities.

6. Check array bounds.

7. Include timeouts when calling external components.

8. Name all constants that represent real-world values.
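
A minimal Python sketch illustrating a few of these guidelines (input validation, named constants, array-bound checking, an exception handler, and a timeout on an external call); the function names and limits are hypothetical.

import urllib.request

# Guideline 8: name all constants that represent real-world values.
MAX_READINGS = 100
MIN_TEMPERATURE_C = -50.0
MAX_TEMPERATURE_C = 60.0
EXTERNAL_CALL_TIMEOUT_S = 2.0

def store_reading(readings, index, value):
    # Guideline 2: check all inputs for validity.
    if not (MIN_TEMPERATURE_C <= value <= MAX_TEMPERATURE_C):
        raise ValueError(f"temperature {value} out of range")
    # Guideline 6: check array bounds before writing.
    if not (0 <= index < len(readings)):
        raise IndexError(f"index {index} outside 0..{len(readings) - 1}")
    readings[index] = value

def fetch_remote_reading(url):
    # Guideline 7: include a timeout when calling an external component
    # (defined here for illustration; not called in this sketch).
    with urllib.request.urlopen(url, timeout=EXTERNAL_CALL_TIMEOUT_S) as response:
        return response.read().decode()

readings = [0.0] * MAX_READINGS
try:
    store_reading(readings, 5, 21.5)       # accepted
    store_reading(readings, 500, 21.5)     # rejected by the bounds check
except (ValueError, IndexError) as exc:
    # Guideline 3: provide a handler for the exceptions that can occur.
    print(f"rejected reading: {exc}")
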


What is the difference between black-box testing and white-box testing?

Black-box approach

In the black-box approach, test cases are designed using only the functional
specification of the software. That is, test cases are designed solely based on an
analysis of the input/output behavior (that is, functional behavior) and do not require
any knowledge of the internal structure of a program. For this reason, black-box
testing is also known as functional testing.

White-box (or glass-box) approach

Designing white-box test cases requires a thorough knowledge of the internal
structure of a program, and therefore white-box testing is also called structural
testing.

What do you understand by system testing? What are the different kinds of
system testing that are usually performed on large software products?

System tests are designed to validate a fully developed system to assure that
it meets its requirements. The test cases are therefore designed solely based on the
SRS document.

There are essentially three main kinds of system testing depending on who
carries out testing:

1. Alpha Testing: Alpha testing refers to the system testing carried out by the test
team within the developing organization.

2. Beta Testing: Beta testing is the system testing performed by a select group of
friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the
customer to determine whether to accept the delivery of the system.

Distinguish among a test case, a test suite, a test scenario, and a test script.

A test scenario is an abstract test case in the sense that it only identifies the aspects
of the program that are to be tested without identifying the input, state, or output. A
test case can be said to be an implementation of a test scenario.

A test script is an encoding of a test case as a short program. Test scripts are
developed for automated execution of the test cases.

Test cases are well-maintained and well-documented procedure designs that can be
used to test the functionality and features of any software product.

A test suite is the set of all tests that have been designed by a tester to test a given
program.
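
For example, here is a minimal test script written with Python's built-in unittest module, encoding two test cases for a hypothetical function compute_discount; both the function and the expected values are assumptions for illustration.

import unittest

# Hypothetical function under test.
def compute_discount(order_total: float) -> float:
    """Return the discount: 10% on orders of 100 or more, otherwise none."""
    return order_total * 0.10 if order_total >= 100 else 0.0

class TestComputeDiscount(unittest.TestCase):
    """Test script: an executable encoding of test cases for compute_discount."""

    def test_discount_applied_at_threshold(self):
        self.assertAlmostEqual(compute_discount(100.0), 10.0)

    def test_no_discount_below_threshold(self):
        self.assertAlmostEqual(compute_discount(99.99), 0.0)

if __name__ == "__main__":
    unittest.main()
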
