

UNIT-IV

Testing Strategies: A Strategic Approach to Software Testing, Test Strategies for Conventional Software, Black-Box Testing and White-Box Testing, Validation Testing, System Testing, The Art of Debugging.

Product Metrics: Software Quality, Frame work for Product Metrics, Metrics for
Analysis Model, Metrics for Design Model, Metrics for Source code, Metrics for testing,
Metrics for maintenance.

Metrics for Process and Projects: Software Measurement, Metrics for software quality.

Testing Strategies

Testing is the process of exercising a program with the specific intent of finding errors
prior to delivery to the end user.

A Strategic Approach to Software Testing

 Testing is a set of activities that can be planned in advance and conducted systematically.

 A strategy for software testing integrates software test case design methods into a
well-planned series of steps that results in the successful construction of software.

General Characteristics of Strategic Testing:

 To perform effective testing, a software team should conduct effective formal technical reviews

 Testing begins at the component level and works outward toward the integration of
the entire computer-based system

 Different testing techniques are appropriate at different points in time

 Testing is conducted by the developer of the software and (for large projects) by an
independent test group

 Testing and debugging are different activities, but debugging must be accommodated in any testing strategy

A strategy for software testing must accommodate low-level tests that are necessary
to verify that a small source code segment has been correctly implemented as well as
high-level tests that validate major system functions against customer requirements.


A strategy must provide guidance for the practitioner and a set of milestones for the
manager. Because the steps of the test strategy occur at a time when deadline pressure
begins to rise, progress must be measurable and problems must surface as early as
possible.

Verification and Validation:

 Verification – are we building the product correctly?

It refers to the set of activities that ensure that software correctly implements a
specific function.

 Validation – are we building the correct product?

It refers to the set of activities that ensure that the software that has been built is
traceable to customer requirements.

Verification and validation encompass a wide array of SQA activities that
include formal technical reviews, quality and configuration audits, performance
monitoring, simulation, feasibility study, documentation review, database review,
algorithm analysis, development testing, usability testing, qualification testing, and
installation testing. Although testing plays an extremely important role in V & V, many
other activities are also necessary.

Organizing for Software Testing:

 Testing should aim at "breaking" the software

 There are a number of common misconceptions that can be erroneously inferred:

 The developer of software should do no testing at all

 The software should be given to a secret team of testers who will test it
unmercifully

 The testers get involved with the project only when the testing steps are
about to begin

 The software developer is always responsible for testing the individual units of the
program, ensuring that each performs the function or exhibits the behavior for
which it was designed.


 In many cases the developer also conducts integration testing, a testing step that leads
to the construction of the complete software architecture. Only after the software
architecture is complete does an independent test group become involved.

 The role of an Independent Test Group (ITG):

 To remove the inherent problems associated with letting the builder test the
software that has been built

 To remove the conflict of interest that may otherwise be present

 Works closely with the software developer during analysis and design to
ensure that thorough testing occurs

Testing Strategy for Conventional Software:

The software process may be viewed as the spiral illustrated in the following
figure:

A strategy for software testing may also be viewed in the context of the spiral.

 Unit testing begins at the vortex of the spiral and concentrates on each unit of
the software as implemented in source code.
 Testing progresses by moving outward along the spiral to integration testing
where the focus is on design and the construction of the software architecture.
 Taking another turn outward on the spiral, we encounter validation testing,
where requirements established as part of software requirements analysis are
validated against the software that has been constructed.


 Finally, we arrive at system testing, where the software and other system
elements are tested as a whole.

Testing is actually a series of four steps that are implemented sequentially. The
steps are shown in the following figure:

 Unit testing

 Exercises specific paths in a component's control structure to ensure complete coverage and maximum error detection

 Components are then assembled and integrated

 Integration testing

 Focuses on inputs and outputs, and how well the components fit together
and work together

 Validation testing

 Provides final assurance that the software meets all functional, behavioral,
and performance requirements

 System testing

 Verifies that all system elements (software, hardware, people, databases) mesh properly and that overall system function and performance is achieved

Testing Strategy for Object-Oriented Software:

 Must broaden testing to include detections of errors in analysis and design models


 Unit testing loses some of its meaning and integration testing changes
significantly

 Use the same philosophy but a different approach than in conventional software testing

 Test "in the small" and then work out to testing "in the large"

 Finally, the system as a whole is tested to detect errors in fulfilling requirements.

Criteria for Completion of Testing:

When is Testing Complete?

 There is no definitive answer to this question

 Every time a user executes the software, the program is being tested

 Sadly, testing usually stops when a project is running out of time, money, or
both

 One approach is to divide the test results into various severity levels

Then consider testing to be complete when certain levels of errors no longer occur
or have been repaired or eliminated

Strategic Issues

Ensuring a Successful Software Test Strategy:

 Specify product requirements in a quantifiable manner long before testing commences

 State testing objectives explicitly in measurable terms

 Understand the user of the software (through use cases) and develop a profile for
each user category

 Develop a testing plan that emphasizes rapid cycle testing to get quick feedback to
control quality levels and adjust the test strategy

 Build robust software that is designed to test itself and can diagnose certain kinds
of errors


 Use effective formal technical reviews as a filter prior to testing to reduce the
amount of testing required

 Conduct formal technical reviews to assess the test strategy and test cases
themselves

 Develop a continuous improvement approach for the testing process through the
gathering of metrics

Test Strategies for Conventional Software

There are many strategies that can be used to test software.

 At one extreme a software team could wait until the system is fully constructed
and then conduct tests on the overall system to find errors. This approach will
result in buggy software that disappoints the customer and end-user.
 At the other extreme, a software engineer could conduct tests on a daily basis,
whenever any part of the system is constructed. This approach results in
effective software. Unfortunately, most software developers hesitate to use it.

A testing strategy that is chosen by most software teams falls between the two
extremes.

It takes an incremental view of testing, beginning with the testing of individual program
units, moving to tests designed to facilitate the integration of the units, and culminating
with tests that exercise the constructed system.

Each of these classes of tests is described as follows:

Unit Testing:

 Unit testing focuses verification effort on the smallest unit of software design – the software component or module

 Concentrates on the internal processing logic and data structures within the
boundaries of a component.

 Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.

 This type of testing can be conducted in parallel for multiple components.

Unit Test Considerations:


The tests that occur as part of unit test are illustrated schematically in following
figure:

 The module interface is tested to ensure that information properly flows into and
out of the program unit under test.
 Local data structures are examined to ensure that data stored temporarily
maintains its integrity during all steps in an algorithm's execution.
 All independent paths through the control structure are exercised to ensure that
all statements in a module have been executed at least once.
 Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
 Finally, all error-handling paths are tested.

 Common Computational Errors:

 Misunderstood or incorrect arithmetic precedence.

 Mixed mode operations (e.g., int, float, char).

 Incorrect initialization of values.

 Precision inaccuracy and round-off errors.

 Incorrect symbolic representation of an expression (int vs. float).

 Test cases should uncover errors such as:

 Comparison of different data types

 Incorrect logical operators or precedence

 Expectation of equality when precision error makes equality unlikely


 Incorrect comparison of variables

 Improper or non-existent loop termination

 Failure to exit when divergent iteration is encountered

 Improperly modified loop variables.

 Among the potential errors that should be tested when error handling is
evaluated are:

 Error description is unintelligible.

 Error noted does not correspond to error encountered

 Error condition causes operating system intervention prior to error handling.

 Exception-condition processing is incorrect.

 Error description does not provide enough information to assist in the location of the cause of the error.

 Unit Testing Procedures: Unit testing is normally considered as an adjunct to the
coding step. The design of unit tests can be performed before coding begins or
after source code has been generated. The unit test environment is illustrated in the
following figure:

Because a component is not a stand-alone program, driver and/or stub software must
be developed for each unit test.


 Stubs are dummy modules that are always distinguished as "called programs"; they are
used when subordinate programs are under construction.

Stubs are considered as dummy modules that simulate the low-level modules.

 Drivers are also a form of dummy module, always distinguished as "calling programs";
they are used only when the main programs are under construction.

Drivers are considered as dummy modules that simulate the high-level modules.

Unit testing is simplified when a component with high cohesion is designed.

When only one function is addressed by a component, the number of test cases is
reduced and errors can be more easily predicted and uncovered.
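
As a concrete illustration, the following minimal Python sketch (all names hypothetical) shows a driver written with the standard unittest module exercising a single component, with a stub standing in for a subordinate module that is still under construction; it also touches a boundary condition and an error-handling path.

import unittest

def compute_total(amount, tax_service):
    """Component under test; boundary: amount must be non-negative."""
    if amount < 0:
        raise ValueError("amount must be >= 0")
    return amount + tax_service.tax_for(amount)

class StubTaxService:
    """Stub: simulates a low-level module that is still under construction."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)   # fixed, predictable canned behavior

class ComputeTotalDriver(unittest.TestCase):
    """Driver: exercises the unit's nominal, boundary, and error-handling paths."""
    def test_nominal_path(self):
        self.assertEqual(compute_total(100.0, StubTaxService()), 110.0)

    def test_boundary_zero(self):
        self.assertEqual(compute_total(0.0, StubTaxService()), 0.0)

    def test_error_handling_path(self):
        with self.assertRaises(ValueError):
            compute_total(-1.0, StubTaxService())

if __name__ == "__main__":
    unittest.main()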

Integration Testing:

Integration testing is a systematic technique for constructing the software


architecture while at the same time conducting tests to uncover errors associated with
interfacing. The objective is to take unit tested components and build a program
structure that has been dictated by design.

 Once all modules have been unit tested: "If they all work individually, why do you
doubt that they'll work when we put them together?"

The problem, of course, is "putting them together" - interfacing.

 Data can be lost across an interface;

 One component can have an inadvertent, adverse effect on another;

 Sub functions, when combined, may not produce the desired major
function;

 Global data structures can present problems.

Sadly, the list goes on and on.


 Integration testing focuses on combining all unit-tested components into one single
architecture and then implementing test cases to uncover errors associated
with interfacing.

 Integration is possible in two ways:

 Big bang approach

 Incremental approach

Big bang testing:

All components are combined and the entire program is tested as a whole.
Correcting the errors is complicated. Once these errors are corrected, new ones appear
and the process continues, seemingly endlessly. Hence, this approach is generally avoided.

Incremental Integration testing:

Incremental integration testing is the antithesis of the big bang approach.

 The program is constructed and tested in small increments, where errors are
easier to isolate and correct;

 interfaces are more likely to be tested completely; and

 a systematic test approach may be applied.

(Figure: incremental integration. Tests T1 through T5 are re-run across test sequences 1, 2, and 3 as modules A, B, C, and D are integrated one at a time.)

Top-down integration:


Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module, in either a depth-first or breadth-first
manner.

Bottom-up integration:

Modules are integrated by moving upward through the control hierarchy,
beginning with the lowest-level (atomic) modules.

 Top-down integration :

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted
for all components directly subordinate to the main control module.

2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the real
component.

5. Regression testing may be conducted to ensure that new errors have not
been introduced.

The process continues from step 2 until the entire program structure is built. The
top-down integration strategy verifies major control or decision points early in the test
process.
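
A minimal sketch of these steps, with hypothetical module names: the main control module acts as the test driver, its subordinates start out as stubs, and a stub is then replaced by the real component and the tests are re-run.

# Top-down integration sketch (hypothetical call-and-return structure):
# main_control calls read_input and process_data.

def read_input_stub():
    return [1, 2, 3]                 # canned data in place of the real component

def process_data_stub(values):
    return sum(values)               # trivial behavior in place of the real component

def main_control(read_input=read_input_stub, process_data=process_data_stub):
    """Main control module used as the test driver."""
    return process_data(read_input())

# Steps 1-3: test with all subordinates stubbed.
assert main_control() == 6

# Steps 4-5: replace one stub with the actual component and re-test.
def process_data(values):            # real component, now integrated
    return sum(v * v for v in values)

assert main_control(process_data=process_data) == 14   # regression-style re-test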

The top-down integration follows the pattern illustrated in the following figure:

 Bottom-up integration:


Bottom-up integration testing begins construction and testing with atomic
modules (components at the lowest levels in the program structure). Because
components are integrated from the bottom up, processing required for components
subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level components are combined into clusters that perform a specific
software sub function.

2. A driver (a control program for testing) is written to coordinate test case
input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the
program structure.
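
A minimal bottom-up sketch with hypothetical components: two low-level functions are combined into a cluster, and a throwaway driver coordinates test-case input and output; no stubs are needed because the subordinates already exist.

def parse_record(line):              # low-level component 1
    name, value = line.split(",")
    return name.strip(), float(value)

def total_by_name(records):          # low-level component 2
    totals = {}
    for name, value in records:
        totals[name] = totals.get(name, 0.0) + value
    return totals

def cluster_driver(lines):
    """Driver: feeds test cases through the cluster and returns the result."""
    return total_by_name(parse_record(line) for line in lines)

assert cluster_driver(["a, 1.0", "b, 2.0", "a, 0.5"]) == {"a": 1.5, "b": 2.0}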

The Bottom-up integration follows the pattern illustrated in the following figure:

Regression Testing:

 Each time a new module is added as part of integration testing, the software
changes.
 New data flow paths are established, new I/O may occur, and new control logic is
invoked. These changes may cause problems with functions that previously
worked flawlessly.


 In the context of an integration test strategy, regression testing is the re-execution
of some subset of tests that have already been conducted to ensure that changes
have not propagated unintended side effects.
 Regression testing is the activity that helps to ensure that changes (due to testing
or for other reasons) do not introduce unintended behavior or additional errors.

 Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.

 Capture/playback tools enable the software engineer to capture test cases and
results for subsequent playback and comparison.

 The regression test suite (the subset of tests to be executed) contains three
different classes of test cases:

 A representative sample of tests that will exercise all software functions.

 Additional tests that focus on software functions that are likely to be affected by the change.

 Tests that focus on the software components that have been changed.
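
As an illustration of these three classes, the following sketch assumes pytest-style markers (test names are hypothetical, bodies are placeholders) so that a regression subset can be selected and re-executed after each change.

# Selecting and re-running a subset after each change (markers are assumed to be
# registered in pytest.ini):
#   pytest -m "representative or affected_by_change or changed_component"
import pytest

@pytest.mark.representative          # class 1: exercises a broad software function
def test_checkout_happy_path():
    assert True                      # placeholder for the real end-to-end check

@pytest.mark.affected_by_change      # class 2: function likely affected by the change
def test_discount_rules():
    assert True                      # placeholder

@pytest.mark.changed_component       # class 3: component that was actually modified
def test_tax_calculator():
    assert True                      # placeholder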

Smoke Testing

 Smoke testing is usually applied to software that is still under development.

 It is applied to projects that need to be completed in a short period of time; hence,
the project needs to be assessed frequently.
 The following steps are performed during smoke testing:
 Initially, individually functioning units are joined together to form a cluster. In
this way several clusters are formed.
 Depending on the content of each cluster, a series of test cases is derived,
capable of exposing errors in each cluster. These errors usually cause the
development of the project to run behind the scheduled time.
 Each of these clusters is then joined together (using a top-down or bottom-up
approach) to form a single architecture, and this architecture is tested again.
 The advantages of smoke testing are as follows:
 Project development can be easily witnessed.
 The end product developed possesses high quality.


 Error detection and correction becomes easy.
 The scope of risk during the integration process becomes low.

Black-Box & white-box testing:

 Software testing is implemented on given software with the intention of finding errors and making the software error free.

 Basically there are many forms of testing, but two types of testing are of primary
focus. They are

 Black-box testing

 White box-testing

Black-Box Testing:

 Black-box testing can be non-technical (conducted by an individual, non-technical testing team)

 This type of testing is conducted so as to ensure that the software satisfies its
purpose of development.

 The software is exercised in all its functional aspects and is closely analyzed to
conclude that its modules (together) function as expected.

 The errors detected with the normal functioning of the software are uncovered so
that they can be rectified in the future.

White-Box Testing:

 White-box testing should be technical (tested by developer or technical team)

 This type of testing lays stress on testing the internal framework (structure) of the
software.

 Each individual unit of the software is tested along with the way each unit
collaborates with others to bring up the required functionality.

 The errors detected during testing are uncovered.
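
The contrast can be made concrete with a small sketch on one hypothetical function: the black-box cases are derived only from the stated specification, while the white-box cases are derived from the code itself and exercise both branches.

def classify(age):
    """Spec: return 'minor' for ages below 18, otherwise 'adult'."""
    if age < 18:
        return "minor"
    return "adult"

# Black-box: derived from the specification only, no knowledge of the code.
assert classify(5) == "minor"
assert classify(40) == "adult"

# White-box: derived from the code's structure; both branches of the `if`
# are exercised, including the boundary where the branch decision flips.
assert classify(17) == "minor"   # true branch at the boundary
assert classify(18) == "adult"   # false branch at the boundary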

Validation Testing:

 Validation testing may begin after successfully completing the integration testing


 Validation can be defined in many ways, but a simple definition is that validation
succeeds when software functions in a manner that can be reasonably expected
by the customer.

 At this point a battle-hardened software developer might protest: "Who or what is
the arbiter of reasonable expectations?"

 Reasonable expectations are defined in the Software Requirements
Specification, a document that describes all user-visible attributes of the
software.

 The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a validation testing approach.

 Validation Test Criteria: Software validation is achieved through a series of tests
that demonstrate conformity with requirements. A test plan outlines the classes of
tests to be conducted, and a test procedure defines specific test cases. Both the
plan and procedure are designed to ensure that all functional requirements are
satisfied, all behavioral characteristics are achieved, all performance requirements
are attained, documentation is correct, and usability and other requirements are
met.

 After each validation test case has been conducted one of two possible conditions
exist:

 The function or performance characteristic conforms to specification and is
accepted

 A deviation from specification is uncovered and the deficiency must be corrected
prior to scheduled delivery.

Configuration Review:

An important element of the validation process is a configuration review. The
intent of the review is to ensure that all elements of the software configuration have
been properly developed, are cataloged, and have the necessary detail to bolster the
support phase of the software life cycle.

Alpha and Beta Testing:

When software is developed as a product to be used by many customers, most
software product builders use a process called alpha and beta testing to uncover errors
that only the end user seems able to find.


The alpha test is conducted at the developer's site by end users. Alpha tests are
conducted in a controlled environment, with the developer looking over the shoulder of
typical users and recording errors and usage problems.

The beta test is conducted at end-user sites. The beta test is a "live" application of
the software in an environment that cannot be controlled by the developer. The end user
records all problems that are encountered during beta testing and reports these to the
developer at regular intervals.

System Testing:

 System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.

 Although each test has a different purpose, all work to verify that system elements
have been properly integrated and perform allocated functions.

 Following are the types of system tests that are worthwhile for software-based
systems.

 Recovery Testing

 Stress Testing

 Performance Testing

Recovery Testing:

 Recovery testing is a system test that forces the software to fail in a variety of ways
and verifies that recovery is properly performed.

 If recovery is automatic (performed by the system itself), re-initialization,
checkpointing mechanisms, data recovery, and restart are evaluated for correctness.

 If recovery requires human intervention, the mean-time-to-repair (MTTR) is
evaluated to determine whether it is within acceptable limits.

Stress Testing:

 Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.

 A variation of stress testing is a technique called sensitivity testing.


 In some situations (the most common occur in mathematical algorithms), a very
small range of data contained within the bounds of valid data for a program may
cause extreme and even erroneous processing or profound performance
degradation.

Sensitivity testing attempts to uncover data combinations within valid input classes
that may cause instability or improper processing.

Performance Testing:

 Performance testing is designed to test the run-time performance of software within the context of an integrated system.

 Performance testing occurs throughout all steps in the testing process. Even at
the unit level, the performance of an individual module may be assessed as white-
box tests are conducted

 However, it is not until all system elements are fully integrated that the true
performance of a system can be ascertained.

The art of Debugging:

 Debugging occurs as a consequence of successful testing, i.e., when a test case
uncovers an error, debugging is the process that results in the removal of the
error.

 A software engineer, evaluating the results of a test, is often confronted with a
"symptomatic" indication of a software problem, i.e., the external manifestation of
the error and the internal cause of the error may have no obvious relationship to
one another. The poorly understood mental process that connects a symptom to a
cause is debugging.

Debugging Process:

 Debugging is not testing but always occurs as a consequence of testing.

 The debugging process begins with the execution of a test case.

 Results are assessed and a lack of correspondence between expected and actual
performance is encountered.

 In many cases, the non-corresponding data are a symptom of an underlying cause as yet hidden.


 Debugging will always have one of two outcomes:

 The cause will be found and corrected

 The cause will not be found

In the latter case, the debugging process attempts to match symptom with cause,
thereby leading to error correction.
A few characteristics of bugs provide some clues:

 The symptom and the cause may be geographically remote. i.e., the
symptom may appear in one part of a program, while the cause may
actually be located at a site that is far removed. Highly coupled program
structures exacerbate this situation.

 The symptom may disappear (temporarily) when another error is corrected.

 The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).

 The symptom may be caused by human error that is not easily traced.

 The symptom may be a result of timing problems, rather than processing
problems.

 It may be difficult to accurately reproduce input conditions (e.g., a real-time
application in which input ordering is indeterminate).

 The symptom may be intermittent. This is particularly common in
embedded systems that couple hardware and software inextricably.

 The symptom may be due to causes that are distributed across a number of
tasks running on different processors.

Debugging Strategies:

Three debugging strategies have been proposed

 Brute force
 Backtracking
 Cause elimination


Each of these strategies can be conducted manually, but modern debugging tools can
make the process much more effective.

Debugging Tactics: The brute force category of debugging is probably the most common
and least efficient method for isolating the cause of a software error. We apply brute
force debugging methods when all else fails.

Backtracking is a fairly common debugging approach that can be used successfully in
small programs. Beginning at the site where a symptom has been uncovered, the source
code is traced backward until the site of the cause is found. Unfortunately, as the
number of source lines increases, the number of potential backward paths may become
unmanageably large.

The third approach to debugging, cause elimination, is manifested by induction or
deduction and introduces the concept of binary partitioning. Data related to the error
occurrence are organized to isolate potential causes.
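
A small sketch of binary partitioning applied to input data (hypothetical example, assuming a single offending record): the failing data set is repeatedly halved to isolate the portion that still reproduces the symptom.

def triggers_bug(records):
    """Stand-in for running the program on `records` and observing the symptom."""
    return any(r == "bad" for r in records)

def isolate(records):
    # Repeatedly keep whichever half still reproduces the failure.
    while len(records) > 1:
        mid = len(records) // 2
        left, right = records[:mid], records[mid:]
        records = left if triggers_bug(left) else right
    return records[0]

data = ["ok1", "ok2", "bad", "ok3", "ok4", "ok5"]
assert isolate(data) == "bad"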

Automated Debugging: Each of these debugging approaches can be supplemented with
debugging tools that provide semi-automated support for the software engineer as
debugging strategies are attempted. Integrated development environments provide a way
to capture some of the language-specific predetermined errors (e.g., missing end-of-
statement characters, undefined variables, and so on) without requiring compilation.

A wide variety of debugging compilers, dynamic debugging aids, automatic test
case generators, and cross-reference mapping tools are available. However, tools are not
a substitute for careful evaluation based on a complete design model and clear source
code.


Product Metrics: Software Quality, Frame work for Product Metrics, Metrics for
Analysis Model, Metrics for Design Model, Metrics for Source code, Metrics for testing,
Metrics for maintenance.

Introduction:

 A key element of any engineering discipline is measurement.

 Software process and product metrics are quantitative measures that enable
software people to gain insight into the efficacy of the software process and the
projects that are conducted using the process as a framework

 Software metrics are analyzed and assessed by software managers. Measures are
often collected by software engineers.

 If you don't measure, judgment can be based only on subjective evaluation. With
measurement, trends (either good or bad) can be spotted, better estimates can be
made, and true improvement can be accomplished over time.

Software quality:

Software quality is "conformance to explicitly stated functional and
performance requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software".

 Three important points in this definition

 Explicit software requirements are the foundation from which quality is measured. Lack of conformance to requirements is lack of quality

 Specific standards define a set of development criteria that guide the
manner in which software is engineered. If the criteria are not followed,
lack of quality will most surely result

 There is a set of implicit requirements that often goes unmentioned (e.g., ease
of use). If software conforms to its explicit requirements but fails to meet
implicit requirements, software quality is suspect

McCall’s Quality Factors:

 The factors that affect software quality can be categorized in two broad groups:

 Directly measured factors

 Indirectly measured factors


 McCall, Richards, and Walters propose a useful categorization of factors that affect
software quality. These software quality factors are shown in the following figure.

Correctness: The extent to which a program satisfies its specification and fulfills the
customer's mission objectives.

Reliability: The extent to which a program can be expected to perform its intended
function with required precision.

Efficiency: The amount of computing resources and code required by a program to perform its function.

Integrity: The extent to which access to software or data by unauthorized persons can
be controlled.

Usability: The effort required to learn, operate, prepare input for, and interpret output
of a program.

Maintainability: The effort required to locate and fix an error in a program.

Flexibility: The effort required to modify an operational program.

Testability: The effort required to test a program to ensure that it performs its intended
function

Portability: The effort required to transfer the program from one hardware and/or
software system environment to another.

Reusability: The extent to which a program can be reused in other applications; related to the packaging and scope of the functions that the program performs.


Interoperability: The effort required to couple one system to another.

ISO 9126 Quality Factors:

 The ISO 9126 standard was developed in an attempt to identify quality attributes
for computer software

 The standard identifies six key quality attributes:

 Functionality: The degree to which the software satisfies stated needs as
indicated by the following sub-attributes: suitability, accuracy,
interoperability, compliance, and security.

 Reliability: The amount of time that the software is available for use as
indicated by the following sub-attributes: maturity, fault tolerance,
recoverability.

 Usability: The degree to which the software is easy to use as indicated by
the following sub-attributes: understandability, learnability, and operability.

 Efficiency: The degree to which the software makes optimal use of system
resources as indicated by the following sub-attributes: time behavior,
resource behavior.

 Maintainability: The ease with which repairs may be made to the software as
indicated by the following sub-attributes: analyzability, changeability,
stability, testability.

 Portability: The ease with which the software can be transposed from one
environment to another as indicated by the following sub-attributes:
adaptability, installability, conformance, replaceability.

A Framework For Product Metrics:

It is worthwhile to establish a fundamental framework and a set of principles for the
measurement of product metrics for software.

Measures, Metrics, and Indicators:

 Although the terms measure, measurement, and metrics are often used
interchangeably, it is important to note the subtle differences between them.


 Within the software engineering context, a measure provides a quantitative
indication of the extent, amount, dimension, capacity, or size of some
attribute of a product or process.

 Measurement is the act of determining a measure.

 The IEEE Standard Glossary of Software Engineering Terminology [IEE93b]
defines metric as "a quantitative measure of the degree to which a system,
component, or process possesses a given attribute."

 When a single data point has been collected (e.g., the number of errors uncovered
within a single software component), a measure has been established.

 Measurement occurs as the result of the collection of one or more data points
(e.g., a number of component reviews and unit tests are investigated to collect
measures of the number of errors for each).

 A software metric relates the individual measures in some way (e.g., the average
number of errors found per review or the average number of errors found per unit
test).

 A software engineer collects measures and develops metrics so that indicators will
be obtained.

 An indicator is a metric or combination of metrics that provides insight into the
software process, a software project, or the product itself.

 An indicator provides insight that enables the project manager or software
engineers to adjust the process, the project, or the product to make things better.

Measurement Principles:

 Roche suggests a measurement process that can be characterized by five activities:

 Formulation: The derivation of software measures and metrics appropriate
for the representation of the software that is being considered.

 Collection: The mechanism used to accumulate data required to derive the
formulated metrics.

 Analysis: The computation of metrics and the application of mathematical
tools.


 Interpretation: The evaluation of metrics resulting in insight into the
quality of the representation.

 Feedback: Recommendations derived from the interpretation of product
metrics transmitted to the software team.

 Software metrics will be useful only if they are characterized effectively and
validated so that their worth is proven. The following principles are representative
of many that can be proposed for metrics characterization and validation:

 A metric should have desirable mathematical properties.

 When a metric represents a software characteristic that increases when
positive traits occur or decreases when undesirable traits are encountered,
the value of the metric should increase or decrease in the same manner.

 Each metric should be validated empirically in a wide variety of contexts
before being published or used to make decisions.

 Although formulation, characterization, and validation are critical, collection and analysis are the activities that drive the measurement process.

 Roche suggests the following guidelines for these activities:

 whenever possible, data collection and analysis should be automated;

 valid statistical techniques should be applied to establish relationships
between internal product attributes and external quality characteristics
(e.g., whether the level of architectural complexity correlates with the
number of defects reported in production use); and

 interpretative guidelines and recommendations should be established for
each metric.

Goal-Oriented Software Measurement:

 The Goal/Question/Metric (GQM) paradigm has been developed by Basili and
Weiss as a technique for identifying meaningful metrics for any part of the
software process.

 GQM emphasizes the need to

 establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed,


 define a set of questions that must be answered in order to achieve the goal,
and

 identify well-formulated metrics that help to answer these questions.

 A goal definition template can be used to define each measurement goal. The
template takes the form:

 Analyze {the name of the activity or attribute to be measured}

 for the purpose of {the overall objective of the analysis}

 with respect to {the aspect of the activity or attribute that is considered}

 from the viewpoint of {the people who have an interest in the
measurement}

 in the context of {the environment in which the measurement takes place}.

Attributes of Effective Software Metrics:

Ejiogu defines a set of attributes that should be encompassed by effective software metrics. The derived metric and the measures that lead to it should be:

 Simple and computable: It should be relatively easy to learn how to derive the
metric, and its computation should not demand inordinate effort or time.

 Empirically and intuitively persuasive: The metric should satisfy the engineer's
intuitive notions about the product attribute under consideration.

 Consistent and objective: the metric should always yield results that are
unambiguous.

 Consistent in its use of units and dimensions: The mathematical computation
of the metric should use measures that do not lead to bizarre combinations of
units.

 Programming language independent: Metrics should be based on the analysis model, the design model, or the structure of the program itself.

 An effective mechanism for high-quality feedback. That is, the metric should
lead to a higher quality end product.

A Product Metrics Taxonomy:


A wide variety of metrics taxonomies have been proposed; the following outline
addresses the most important metrics areas:

Metrics for the Analysis Model:

These metrics address various aspects of the analysis model and include:

 Functionality delivered--Provides an indirect measure of the functionality that is
packaged within the software

 System size--Measures the overall size of the system defined in terms of
information available as part of the analysis model

 Specification quality--Provides an indication of the specificity and completeness of
a requirements specification

Metrics for the Design Model:

These metrics quantify design attribute in a manner that allows a software engineer to
assess design quality. Metrics include:

 Architectural metrics--Provide an indication of the quality of the architectural
design

 Component-level metrics--Measure the complexity of software components and
other characteristics that have a bearing on quality

 Interface design metrics--Focus primarily on usability

 Specialized object-oriented design metrics--Measure characteristics of classes and
their communication and collaboration characteristics

Metrics for Source Code:

These metrics measures the source code and can be used to assess its complexity,
maintainability and testability, among other characteristics:

 Halstead metrics: these metrics provide unique measures of a computer program.

 Complexity metrics--Measure the logical complexity of source code (can also be
applied to component-level design)

 Length metrics--Provide an indication of the size of the software

Metrics for Testing:

Dept of Computer Science & Engineering


Software Engineering 28

These metrics assist in the design of test cases and evaluate the efficacy of testing:

 Statement and branch coverage metrics--Lead to the design of test cases that
provide program coverage

 Defect-related metrics--Focus on defects (i.e., bugs) found, rather than on the
tests themselves

 Testing effectiveness metrics--Provide a real-time indication of the effectiveness of
tests that have been conducted

 In-process metrics--Process-related metrics that can be determined as testing is
conducted.

Metrics For Analysis Model

These metrics examine the analysis model with the intent of predicting the size of the
resultant system. Size is sometimes an indicator of design complexity and is almost
always an indicator of increased coding, integration and testing effort.

Function-Based Metrics:

 The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system.

 Using historical data, the FP metric can then be used to

 estimate the cost or effort required to design, code, and test the software;

 predict the number of errors that will be encountered during testing; and

 forecast the number of components and/or the number of projected source lines in the implemented system.

 Function points are derived using an empirical relationship based on countable
(direct) measures of the software's information domain and qualitative assessments of
software complexity.

 Information domain values are defined in the following manner:

 Number of external inputs (EIs). Each external input originates from a
user or is transmitted from another application. Inputs are often used to
update internal logical files (ILFs). Inputs should be distinguished from
inquiries, which are counted separately.


 Number of external outputs (EOs). Each external output is derived within
the application and provides information to the user. External output refers
to reports, screens, error messages, and so on. Individual data items within
a report are not counted separately.

 Number of external inquiries (EQs). An external inquiry is defined as an
online input that results in the generation of some immediate software
response in the form of an online output.

 Number of internal logical files (ILFs). Each internal logical file is a logical
grouping of data that resides within the application's boundary and is
maintained via external inputs.

 Number of external interface files (EIFs). Each external interface file is a
logical grouping of data that resides external to the application but provides
data that may be of use to the application.

 The Fi (i = 1 to 14) are value adjustment factors (VAF) based on responses to the
following questions:

 Does the system require reliable backup and recovery?

 Are specialized data communications required to transfer information to or from the application?


 Are there distributed processing functions?

 Is performance critical?

 Will the system run in an existing, heavily utilized operational environment?

 Does the system require online data entry?

 Does the online data entry require the input transaction to be built over
multiple screens or operations?

 Are the ILFs updated online?

 Are the inputs, outputs, files, or inquiries complex?

 Is the internal processing complex?

 Is the code designed to be reusable?

 Are conversion and installation included in the design?

 Is the system designed for multiple installations in different organizations?

 Is the application designed to facilitate change and ease of use by the user?

Each of these questions is answered using a scale that ranges from 0 (not
important or applicable) to 5 (absolutely essential).

Metrics for Specification Quality:

 Specificity (lack of ambiguity),

 Completeness,

 Correctness,

 Understandability,

 Verifiability,

 Internal and external consistency (stability),

 Achievability,

 Concision (pointedness),


 Traceability,

 Modifiability,

 Precision (definiteness), and

 Reusability.

 Although many of these characteristics appear to be qualitative in nature, Davis suggests that each can be represented using one or more metrics.

Ex: Assume that there are n_r requirements in a specification, such that n_r = n_f + n_nf,
where n_f and n_nf are the number of functional and non-functional requirements,
respectively.
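
A minimal sketch of one such metric, specificity (lack of ambiguity), in the form commonly attributed to Davis and assumed here: Q1 = n_ui / n_r, where n_ui is the number of requirements for which all reviewers had identical interpretations; the counts are hypothetical.

n_f  = 90          # functional requirements
n_nf = 30          # non-functional requirements
n_r  = n_f + n_nf  # total requirements in the specification

n_ui = 102         # requirements interpreted identically by all reviewers

specificity = n_ui / n_r
print(round(specificity, 2))   # 0.85; the closer to 1, the less ambiguous the specification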

Metrics For Design Model:

Design metrics for computer software, like all other software metrics are not
perfect. Debate continues over their efficacy and the manner in which they should be
applied.

Architectural Design Metrics:

 Architectural design metrics focus on characteristics of the program architecture
with an emphasis on the architectural structure and the effectiveness of modules
or components within the architecture.

 These metrics are "black box".

 Card and Glass define three software design complexity measures:

 Structural complexity,

 Data complexity, and

 System complexity.

For hierarchical architectures (e.g., call-and-return architectures):


 Structural complexity of a module i is defined in the following manner:

S(i) = f_out(i)^2

where f_out(i) is the fan-out of module i; fan-out is defined as the number of modules
that are directly invoked by module i.

 Data complexity provides an indication of the complexity in the internal interface
for a module i and is defined as

D(i) = v(i) / [f_out(i) + 1]

where v(i) is the number of input and output variables that are passed to
and from module i.

 Finally, system complexity is defined as the sum of structural and data
complexity, specified as

C(i) = S(i) + D(i)

 As each of these complexity values increases, the overall architectural complexity
of the system also increases. This leads to a greater likelihood that integration and
testing effort will also increase.
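
A short sketch computing S(i), D(i), and C(i) for a small hypothetical call-and-return structure (module names and variable counts are made up).

# fan_out[m] lists the modules directly invoked by m; io_vars[m] is v(m).
fan_out = {"main": ["a", "b", "c"], "a": ["d"], "b": [], "c": [], "d": []}
io_vars = {"main": 6, "a": 4, "b": 3, "c": 2, "d": 2}

def structural(m):                 # S(m) = f_out(m)^2
    return len(fan_out[m]) ** 2

def data(m):                       # D(m) = v(m) / (f_out(m) + 1)
    return io_vars[m] / (len(fan_out[m]) + 1)

def system(m):                     # C(m) = S(m) + D(m)
    return structural(m) + data(m)

for m in fan_out:
    print(m, structural(m), round(data(m), 2), round(system(m), 2))
# main: S=9, D=1.5, C=10.5; a: S=1, D=2.0, C=3.0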

 Fenton suggests a number of simple morphology (i.e., shape) metrics that enable
different program architectures to be compared using a set of straightforward
dimensions. Referring to the call-and-return architecture in Figure, the following
metrics can be defined:

Size = n + a = 35

Depth = 4

Width = 6

where n is the number of nodes (modules) and a is the number of arcs (lines of control).

The arc-to-node ratio, r = a / n, measures the connectivity density of the architecture
and may provide a simple indication of the coupling of the architecture. Here,
r = 18 / 17 = 1.06.
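
A short sketch computing these morphology metrics for a hypothetical architecture represented as an adjacency list (parent to children).

arch = {"main": ["a", "b"], "a": ["c", "d"], "b": ["e"], "c": [], "d": [], "e": []}

n = len(arch)                                    # nodes (modules)
a = sum(len(kids) for kids in arch.values())     # arcs (lines of control)
size = n + a

def depth(node):
    kids = arch[node]
    return 1 if not kids else 1 + max(depth(k) for k in kids)

def width():
    level, widest = ["main"], 1                  # widest level of the hierarchy
    while level:
        widest = max(widest, len(level))
        level = [k for node in level for k in arch[node]]
    return widest

r = a / n                                        # arc-to-node ratio
print(size, depth("main"), width(), round(r, 2))  # 11 3 3 0.83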


 The U.S. Air Force Systems Command has developed a number of software quality
indicators that are based on measurable design characteristics of a computer
program.

 The Air Force uses information obtained from data and architectural design to
derive a design structure quality index (DSQI) that ranges from 0 to 1.

 The following values must be ascertained to compute the DSQI :

S1 = total number of modules defined in the program architecture

S2 = number of modules whose correct function depends on the source of data
input or that produce data to be used elsewhere (in general, control modules,
among others, would not be counted as part of S2)

S3 = number of modules whose correct function depends on prior processing

S4 = number of database items (includes data objects and all attributes that
define objects)

S5 = total number of unique database items

S6 = number of database segments (different records or individual objects)

S7 = number of modules with a single entry and exit (exception processing is not considered to be a multiple exit)

 Once values S1 through S7 are determined for a computer program, the following
intermediate values can be computed:


 Program structure: D1. If the architectural design was developed using a distinct
method (e.g., data flow-oriented design or object-oriented design), then D1 = 1;
otherwise D1 = 0.

 Module independence: D2 = 1 - ( S2 / S1 ).

 Modules not dependent on prior processing: D3 = 1 - ( S3 / S1 ).

 Database size: D4 = 1 - ( S5 / S4 ).

 Database compartmentalization: D5 = 1 - ( S6 / S4 ).

 Module entrance/exit characteristic: D6 = 1 - ( S7 / S1 ).

With these intermediate values determined, the DSQI is computed in the
following manner:

DSQI = Σ wi Di

where i = 1 to 6, wi is the relative weighting of the importance of each of the
intermediate values, and Σ wi = 1 (if all Di are weighted equally, then wi = 0.167).
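
A small sketch of the DSQI computation with hypothetical S1..S7 counts and equal weights.

S1, S2, S3, S4, S5, S6, S7 = 40, 10, 8, 120, 100, 12, 34

D = [
    1.0,               # D1: a distinct architectural design method was used
    1 - S2 / S1,       # D2: module independence
    1 - S3 / S1,       # D3: modules not dependent on prior processing
    1 - S5 / S4,       # D4: database size
    1 - S6 / S4,       # D5: database compartmentalization
    1 - S7 / S1,       # D6: module entrance/exit characteristic
]

w = [1 / 6] * 6        # weights must sum to 1
dsqi = sum(wi * di for wi, di in zip(w, D))
print(round(dsqi, 3))  # roughly 0.63 for these counts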

Metrics for Object-Oriented Design:

 Whitmire describes nine distinct and measurable characteristics of an OO design:


1. Size
2. Complexity
3. Coupling
4. Sufficiency.
5. Completeness
6. Cohesion
7. Primitiveness
8. Similarity
9. Volatility.
 Size.

Size is defined in terms of four views:

 Population is measured by taking a static count of OO entities such as
classes or operations.

 Volume measures are identical to population measures but are collected
dynamically, at a given instant of time.


 Length is a measure of a chain of interconnected design elements (e.g., the
depth of an inheritance tree is a measure of length).

 Functionality metrics provide an indirect indication of the value delivered to
the customer by an OO application.

 Complexity.

Like size, there are many differing views of software complexity. Whitmire views
complexity in terms of structural characteristics by examining how classes of an
OO design are interrelated to one another.

 Coupling.

The physical connections between elements of the OO design (e.g., the number of
collaborations between classes or the number of messages passed between
objects) represent coupling within an OO system.

 Sufficiency.

A design component (e.g., a class) is sufficient if it fully reflects all properties of the
application domain object that it is modeling.

 Completeness.

Sufficiency compares the abstraction from the point of view of the current
application. Completeness considers multiple points of view.

 Cohesion.

The cohesiveness of a class is determined by examining the degree to which "the
set of properties it possesses is part of the problem or design domain."

 Primitiveness.

Primitiveness (applied to both operations and classes) is the degree to which an
operation is atomic. A class that exhibits a high degree of primitiveness
encapsulates only primitive operations.

 Similarity.

The degree to which two or more classes are similar in terms of their structure,
function, behavior or purpose is indicated by this measure.

 Volatility.


Volatility of an OO design component measures the likelihood that a change will occur.

Class-Oriented Metrics—The CK Metrics Suite:

The class is the fundamental unit of an OO system. Therefore, measures and metrics
for an individual class, the class hierarchy, and class collaborations will be invaluable
when you are required to assess OO design quality.

Chidamber and Kemerer have proposed six class-based design metrics for OO systems.

 Weighted methods per class (WMC).

Assume that n methods of complexity c1, c2, . . . , cn are defined for a class C. The
specific complexity metric that is chosen (e.g., cyclomatic complexity) should be
normalized so that nominal complexity for a method takes on a value of 1.0.

WMC = Σ ci, for i = 1 to n.

WMC should be kept as low as is reasonable.

 Depth of the inheritance tree (DIT).

This metric is "the maximum length from the node to the root of the tree".

As DIT grows, it is likely that lower-level classes will inherit many methods.
This leads to potential difficulties when attempting to predict the behavior of a
class. A deep class hierarchy (DIT is large) also leads to greater design complexity.
On the positive side, large DIT values imply that many methods may be reused.

 Number of children (NOC).

The subclasses that are immediately subordinate to a class in the class hierarchy are termed its children.

As the number of children grows, reuse increases, but also, the abstraction
represented by the parent class can be diluted if some of the children are not
appropriate members of the parent class. As NOC increases, the amount of testing
(required to exercise each child in its operational context) will also increase.

 Coupling between object classes (CBO).

The CRC model may be used to determine the value for CBO.


In essence, CBO is the number of collaborations listed for a class on its CRC
index card. As CBO increases, it is likely that the reusability of a class will
decrease. High values of CBO also complicate modifications and the testing that
ensues when modifications are made.
In general, the CBO values for each class should be kept as low as is
reasonable. This is consistent with the general guideline to reduce coupling in
conventional software.

 Response for a class (RFC).

The response set of a class is "a set of methods that can potentially be
executed in response to a message received by an object of that class".

RFC is the number of methods in the response set. As RFC increases, the
effort required for testing also increases because the test sequence grows. It also
follows that, as RFC increases, the overall design complexity of the class
increases.

 Lack of cohesion in methods (LCOM).

Each method within a class C accesses one or more attributes (also called
instance variables). LCOM is the number of methods that access one or more of
the same attributes. If no methods access the same attributes, then LCOM = 0.

If LCOM is high, methods may be coupled to one another via attributes. This
increases the complexity of the class design. Although there are cases in which a
high value for LCOM is justifiable, it is desirable to keep cohesion high; that is,
keep LCOM low.
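
A short sketch computing three of the CK metrics (WMC, DIT, and NOC) from a hypothetical class model; the method complexities are assumed to be already normalized values.

classes = {
    # name: (parent, list of method complexities)
    "Base":    (None,      [1, 2, 1]),
    "Account": ("Base",    [1, 1, 3, 2]),
    "Savings": ("Account", [1, 2]),
    "Current": ("Account", [2]),
}

def wmc(name):                       # weighted methods per class: sum of ci
    return sum(classes[name][1])

def dit(name):                       # depth of inheritance tree (root = 0)
    parent = classes[name][0]
    return 0 if parent is None else 1 + dit(parent)

def noc(name):                       # number of immediate children
    return sum(1 for _, (parent, _) in classes.items() if parent == name)

for c in classes:
    print(c, wmc(c), dit(c), noc(c))
# e.g. Account: WMC=7, DIT=1, NOC=2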

Class-Oriented Metrics—The MOOD Metrics Suite:

Harrison, Counsell, and Nithi propose a set of metrics for object-oriented design that
provide quantitative indicators for OO design characteristics. A sampling of MOOD
metrics follows.

 Method Inheritance Factor (MIF).

The degree to which the class architecture of an OO system makes use of inheritance
for both methods and attributes:

MIF = ∑ Mi (Ci ) / ∑ Ma (Ci )

i = 1 to Tc. Tc is the total no. of classes in the architecture, Ci is a class within the
architecture, and


Ma (Ci) = Md (Ci) + Mi (Ci)

Ma(Ci) = no. of methods that can be invoked in association with Ci

Md(Ci) = no. of methods declared in the class Ci

Mi(Ci) = no. of methods inherited (and not overridden) in Ci
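
A hedged sketch of how MIF could be computed from these definitions; the classes and
method counts below are hypothetical.

    # Sketch in Python: MIF = sum(Mi) / sum(Ma), with Ma = Md + Mi per class.
    def method_inheritance_factor(classes):
        inherited = sum(c["Mi"] for c in classes)
        available = sum(c["Md"] + c["Mi"] for c in classes)
        return inherited / available if available else 0.0

    classes = [
        {"name": "Shape",  "Md": 4, "Mi": 0},
        {"name": "Circle", "Md": 2, "Mi": 3},
        {"name": "Square", "Md": 1, "Mi": 4},
    ]
    print(method_inheritance_factor(classes))   # 7 / 14 = 0.5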

 Coupling Factor (CF).

The MOOD metrics suite defines coupling in the following way:

CF = ∑i ∑j is_client (Ci, Cj) / (Tc² - Tc)

i, j = 1 to Tc.

is_client = 1, iff a relationship exists between the client class Ci and the server class Cj, and Ci ≠ Cj
          = 0, otherwise

As the value for CF increases, the complexity of the OO software will also
increase and understandability, maintainability, and the potential for reuse may suffer
as a result.
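
A hedged sketch of the coupling factor computation; the client/server relationships
below are hypothetical.

    # Sketch in Python: CF = (number of actual client/server relationships)
    # divided by (Tc^2 - Tc), the number of possible non-self relationships.
    def coupling_factor(classes, uses):
        tc = len(classes)
        actual = sum(1 for ci in classes for cj in classes
                     if ci != cj and (ci, cj) in uses)
        return actual / (tc * tc - tc) if tc > 1 else 0.0

    classes = ["Order", "Customer", "Invoice"]
    uses = {("Order", "Customer"), ("Invoice", "Order")}
    print(round(coupling_factor(classes, uses), 2))   # 2 / 6 = 0.33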

Component-Level Design Metrics:

Component-level design metrics for conventional software components focus on
internal characteristics of a software component and include measures of the "three
Cs": module cohesion, coupling, and complexity. These measures can help you judge
the quality of a component-level design.

 Cohesion Metrics:

Bieman and Ott define a collection of metrics that provide an indication of the
cohesiveness of a module.

The metrics are defined in terms of five concepts and measures:

 Data slice. A data slice is a backward walk through a module that looks for
data values that affect the module location at which the walk began.

 Data tokens. The variables defined for a module can be defined as data
tokens for the module.


 Glue tokens. This set of data tokens lies on one or more data slices.

 Superglue tokens. These data tokens are common to every data slice in a
module.

 Stickiness. The relative stickiness of a glue token is directly proportional to the
number of data slices that it binds.

 Coupling Metrics:

Module coupling provides an indication of the "connectedness" of a module to
other modules, global data, and the outside environment. Dhama has proposed a
metric for module coupling that encompasses data and control flow coupling,
global coupling, and environmental coupling.

 For data and control flow coupling.

di = number of input data parameters

ci = number of input control parameters

do = number of output data parameters

co = number of output control parameters

 For global coupling.

gd = number of global variables used as data

gc = number of global variables used as control

 For environmental coupling.

w = number of modules called (fan-out)

r = number of modules calling the module under consideration (fan-in)
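
These notes list only the individual counts; one commonly cited form of Dhama's
metric (as summarized in Pressman's text) combines them into a single coupling
indicator mc = k/M, where M is a weighted sum of the counts and the control-related
items are weighted more heavily. The weights and module values in the sketch below
are assumptions for illustration only.

    # Hedged sketch in Python: mc = k / M, with
    # M = di + a*ci + do + b*co + gd + c*gc + w + r  (k = 1, a = b = c = 2 assumed).
    def module_coupling(di, ci, do, co, gd, gc, w, r, k=1, a=2, b=2, c=2):
        m = di + a * ci + do + b * co + gd + c * gc + w + r
        return k / m if m else 1.0

    # Hypothetical module: 3 data params in, 1 control param in, 2 data params out,
    # 0 control params out, 1 global data item, 0 global control items,
    # fan-out 2, fan-in 3.
    print(round(module_coupling(3, 1, 2, 0, 1, 0, 2, 3), 3))  # values nearer 1 imply lower coupling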

Interface Design Metrics:

Sears suggests that layout appropriateness (LA) is a worthwhile design metric for
human/computer interfaces.

 Layout appropriateness:


Layout appropriateness is a function of layout entities, their geographic positions,
and the "cost" of making transitions among entities.

Metrics For Source Code

 Halstead Metrics, HSS (Halstead's theory of software science)

A set of primitive measures that may be derived after the code is generated or
estimated once design is complete:

n1 = the number of distinct operators that appear in a program

n2 = the number of distinct operands that appear in a program

N1 = the total number of operator occurrences.

N2 = the total number of operand occurrences.

Length N = n1 log2 n1 + n2 log2 n2

Program volume V = N log2 (n1 + n2)

V will vary with programming language and represents the volume in "bits"
required to specify a program.

Volume ratio L = (2/n1) x (n2/N2)
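
A minimal sketch that evaluates these measures as written in these notes; the
operator/operand counts are hypothetical, and N1 (total operator occurrences) is not
needed by these particular formulas.

    import math

    # Sketch in Python: Halstead length, volume, and volume ratio.
    def halstead(n1, n2, N2):
        N = n1 * math.log2(n1) + n2 * math.log2(n2)   # (estimated) length
        V = N * math.log2(n1 + n2)                    # program volume in bits
        L = (2 / n1) * (n2 / N2)                      # volume ratio
        return N, V, L

    # Hypothetical counts: 10 distinct operators, 15 distinct operands,
    # 30 operand occurrences.
    N, V, L = halstead(10, 15, 30)
    print(round(N, 1), round(V, 1), round(L, 3))   # roughly 91.8, 426.4, 0.1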

Metrics for testing:

 Halstead Metrics Applied to Testing:

Program Level and Effort

PL = 1/[(n1/2) x (N2/n2)]

e = V/PL

The percentage of overall testing effort to be allocated to a module k can be
estimated using the following relationship:

Percentage of testing effort (k) = e(k) / ∑e(i)

where e(k) is computed for module k using the equations above, and the summation in
the denominator is the sum of Halstead effort across all modules of the system.
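
A hedged sketch of the allocation step, assuming the per-module Halstead effort
values e(k) have already been computed; the module names and values are hypothetical.

    # Sketch in Python: percentage of testing effort per module, proportional
    # to Halstead effort e(k) = V(k) / PL(k).
    def testing_effort_percentages(module_efforts):
        total = sum(module_efforts.values())
        return {m: 100.0 * e / total for m, e in module_efforts.items()}

    efforts = {"parser": 4200.0, "scheduler": 2800.0, "report_writer": 1000.0}
    print(testing_effort_percentages(efforts))
    # parser: 52.5%, scheduler: 35.0%, report_writer: 12.5%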


Metrics for Object-Oriented Testing:

Binder suggests a broad array of design metrics that have a direct influence on
the "testability" of an OO system.

The metrics consider aspects of encapsulation and inheritance.

 Lack of cohesion in methods (LCOM).

The higher the value of LCOM, the more states must be tested to ensure
that methods do not generate side effects.

 Percent public and protected (PAP).

This metric indicates the percentage of class attributes that are public or
protected. High values for PAP increase the likelihood of side effects among
classes because public and protected attributes lead to high potential for
coupling. Tests must be designed to ensure that such side effects are uncovered.

 Public access to data members (PAD).

This metric indicates the number of classes (or methods) that can access
another class's attributes, a violation of encapsulation. High values for PAD lead
to the potential for side effects among classes. Tests must be designed to ensure
that such side effects are uncovered.

 Number of root classes (NOR).

This metric is a count of the distinct class hierarchies that are described in
the design model. Test suites for each root class and the corresponding class
hierarchy must be developed. As NOR increases, testing effort also increases.

 Fan-in (FIN).

When used in the OO context, fan-in in the inheritance hierarchy is an indication of
multiple inheritance. FIN > 1 indicates that a class inherits its attributes and
operations from more than one root class. FIN > 1 should be avoided when possible.

 Number of children (NOC) and depth of the inheritance tree (DIT).

Metrics For Maintenance:

 The IEEE standard suggests a software maturity index that provides an indication of
the stability of a software product.


The following information is determined:

MT = the number of modules in the current release

Fc = the number of modules in the current release that have been changed

Fa = the number of modules in the current release that have been added.

Fd = the number of modules from the preceding release that were deleted in the
current release

 The Software Maturity Index, SMI, is defined as:

SMI = [MT - (Fc + Fa + Fd)] / MT

As SMI approaches 1.0 the product begins to stabilize.
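
A minimal sketch of the SMI computation with hypothetical release data.

    # Sketch in Python: SMI = [MT - (Fc + Fa + Fd)] / MT
    def software_maturity_index(mt, fc, fa, fd):
        return (mt - (fc + fa + fd)) / mt

    # Hypothetical release: 940 modules, of which 40 changed, 20 added, 10 deleted.
    print(round(software_maturity_index(940, 40, 20, 10), 3))   # 0.926, approaching 1.0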

Metrics For Process And Projects:

 Process metrics are collected across all projects and over long periods of time.
Their intent is to provide a set of process indicators that lead to long-term software
process improvement.

 Project metrics enable a software project manager to

 assess the status of an ongoing project,

 track potential risks,

 uncover problem areas before they go "critical,"

 adjust work flow or tasks, and

 evaluate the project team's ability to control quality of software work
products.

Software Measurement

 Software measurement can be categorized in two ways:

 Direct measures of the software process (e.g., cost and effort applied) and
product (e.g., lines of code (LOC) produced, execution speed, memory size,
and defects reported over some set period of time).


 Indirect measures of the product include functionality, quality, complexity,
efficiency, reliability, maintainability, and many other "-abilities."

Size-Oriented Metrics:

Size-oriented software metrics are derived by normalizing quality and/or productivity
measures by considering the size of the software that has been produced. If a software
organization maintains simple records, a table of size-oriented measures, such as the
one shown in Figure 25.2, can be created.
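
A hedged sketch of such normalization using hypothetical project records; the field
names below are assumptions, not a prescribed record layout.

    # Sketch in Python: size-oriented metrics normalized by KLOC / LOC.
    def size_oriented_metrics(loc, effort_pm, cost, errors, defects):
        kloc = loc / 1000.0
        return {
            "errors_per_kloc":      errors / kloc,
            "defects_per_kloc":     defects / kloc,
            "cost_per_loc":         cost / loc,
            "loc_per_person_month": loc / effort_pm,
        }

    # Hypothetical project: 12,100 LOC, 24 person-months, $168,000 cost,
    # 134 errors found before release, 29 defects reported after release.
    print(size_oriented_metrics(12100, 24, 168000, 134, 29))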

Size-oriented metrics are not universally accepted as the best way to measure the
software process. Most of the controversy swirls around the use of lines of code as a key
measure.

Function-Oriented Metrics

 Function-oriented software metrics use a measure of the functionality delivered by
the application as a normalization value.

 The most widely used function-oriented metric is the function point (FP).

Computation of the function point is based on characteristics of the software's
information domain and complexity.

The function point, like the LOC measure, is controversial.
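
These notes do not reproduce the computation itself; as a hedged sketch, the commonly
used form (following Pressman) weights five information-domain counts and adjusts
them by fourteen value adjustment factors Fi. The weights below are the standard
"average" complexity weights, and the counts are hypothetical.

    # Hedged sketch in Python: FP = count_total x [0.65 + 0.01 x sum(Fi)].
    def function_points(inputs, outputs, inquiries, internal_files,
                        external_interfaces, value_adjustment_factors):
        count_total = (inputs * 4 + outputs * 5 + inquiries * 4 +
                       internal_files * 10 + external_interfaces * 7)
        return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

    # Hypothetical project: 24 inputs, 16 outputs, 8 inquiries, 4 internal files,
    # 2 external interfaces, and fourteen Fi values of 3 each.
    print(function_points(24, 16, 8, 4, 2, [3] * 14))   # about 280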


Reconciling LOC and FP Metrics:

 The relationship between lines of code and function points depends upon the
programming language that is used to implement the software and the quality of
the design.

Object-Oriented Metrics:

Lorenz and Kidd suggest set of metrics for OO projects:

 Number of scenario scripts.

A scenario script is a detailed sequence of steps that describe the interaction
between the user and the application. Each script is organized into triplets of
the form

{initiator, action, participant}

where initiator is the object that requests some service (that initiates a
message), action is the result of the request, and participant is the server object
that satisfies the request.

 Number of key classes.

Key classes are the "highly independent components" and are central to the
problem domain.

 Number of support classes.


Support classes are required to implement the system but are not
immediately related to the problem domain.

 Average number of support classes per key class.

If the average number of support classes per key class were known for a
given problem domain, estimating (based on total number of classes) would be
greatly simplified.

 Number of subsystems.

A subsystem is an aggregation of classes that support a function that is visible
to the end user of a system.

Use-Case Oriented Metrics

 Use cases are used widely as a method for describing customer level or business
domain requirements that imply software features and functions.

 Researchers have suggested use-case points (UCPs) as a mechanism for
estimating project effort and other characteristics.

 The UCP is a function of the number of actors and transactions implied by the
use-case models.

Metrics for Software quality:

 The overriding goal of software engineering is to produce a high-quality system,
application, or product within a time frame that satisfies a market need.

 To achieve this goal, you must apply effective methods coupled with modern tools
within the context of a mature software process.

 In addition, a good software engineer (and good software engineering managers)
must measure if high quality is to be realized.

Measuring Quality:

Gilb suggested the following measures:

 Correctness

A program must operate correctly or it provides little value to its users. Correctness
is the degree to which the software performs its required function.


The most common measure for correctness is defects per KLOC, where a defect is
defined as a verified lack of conformance to requirements.

 Maintainability

Software maintenance and support accounts for more effort than any other
software engineering activity.
Maintainability is the ease with which a program can be corrected if an
error is encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements. There is no way to measure
maintainability directly; therefore, you must use indirect measures.
A simple time-oriented metric is mean-time-to-change (MTTC), the time it
takes to analyze the change request, design an appropriate modification,
implement the change, test it, and distribute the change to all users. On average,
programs that are maintainable will have a lower MTTC (for equivalent types of
changes) than programs that are not maintainable.

 Integrity

Software integrity has become increasingly important in the age of cyberterrorists
and hackers.
To measure integrity, two additional attributes must be defined: threat and
security.
 Threat is the probability (which can be estimated or derived from empirical
evidence) that an attack of a specific type will occur within a given time.
 Security is the probability (which can be estimated or derived from empirical
evidence) that the attack of a specific type will be repelled.

 Usability

If a program is not easy to use, it is often doomed to failure, even if the functions
that it performs are valuable.

Defect Removal Efficiency:

A quality metric that provides benefit at both the project and process level is defect
removal efficiency (DRE). DRE is defined in the following manner:


DRE = E / (E + D)

where E is the number of errors found before delivery of the software to the
end user and D is the number of defects found after delivery.

The ideal value for DRE is 1. That is, no defects are found in the software.

Realistically, D will be greater than 0, but the value of DRE can still approach 1 as
E increases for a given value of D.

In fact, as E increases, it is likely that the final value of D will decrease (errors are
filtered out before they become defects).
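
A minimal sketch of the DRE computation with hypothetical review and field data.

    # Sketch in Python: DRE = E / (E + D).
    def defect_removal_efficiency(errors_before_delivery, defects_after_delivery):
        return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

    # Hypothetical project: 120 errors found before delivery, 8 defects found after.
    print(round(defect_removal_efficiency(120, 8), 3))   # 0.938, approaching the ideal 1.0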
