Unit 4
A Strategic Approach for Software Testing
• Software Testing: one of the important phases of software development.
• A strategy for software testing must include low-level tests that verify small segments of source code, as well as high-level tests that validate major functions against customer requirements.
• 1. Verification and Validation:
• Verification refers to the set of activities that ensure that software correctly implements a specific function.
• Validation refers to the set of activities that ensure that the software that has been built is traceable to customer requirements.
• An independent test group (ITG) removes the conflict of interest inherent in letting builders test their own software; after all, the ITG is paid to find errors.
• Unit testing concentrates on each individual unit of the software.
Criteria for completion of software testing
• A classic question is: when is testing done? How do we know that we have tested enough?
• One response: testing is never truly completed; every time the customer executes the program, the program is being tested.
• Nobody is absolutely certain that software will not fail.
• Based on statistical modeling and software reliability models, we can say with 95 percent confidence that the probability of 1,000 CPU hours of failure-free operation is at least 0.995.
Test strategies for conventional software
• One test strategy is to wait until the software is fully constructed and then test the overall system to find errors. This approach is not effective and results in buggy software.
• Another test strategy is to conduct tests on a daily basis, as the system is built. This approach can be very effective.
• Software teams typically choose a strategy that begins with testing of individual units and then moves on to integration testing.
• Unit testing: focuses verification effort on the smallest unit of software design, i.e., each module or component. It concentrates on internal processing logic and data structures.
• Unit test considerations: the module interface is tested to ensure that information properly flows into and out of the program unit.
• Local data structures are tested to ensure that stored data maintains its integrity during execution.
• The next figure shows unit testing.
• Independent paths are tested to ensure that all statements in a module are executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at its boundaries.
• Finally, all error-handling paths are tested.
• Test cases are designed to find errors due to erroneous computations, incorrect comparisons, or improper control flow.
• Unit test procedures: the next figure shows the unit test environment.
• A driver is nothing more than a “main program” that accepts test data, passes that data to the component, and prints the results.
• Stubs serve to replace modules that are called by the component being tested.
• Drivers and stubs represent overhead: both are software that must be written but is not delivered with the final software product. A small sketch of a driver and a stub follows.
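As a minimal illustrative sketch (not from the slides), the following Python fragment shows a driver and a stub for unit-testing a hypothetical compute_discount component; all names and values are invented for illustration.

```python
# Hypothetical component under test: computes a discounted price.
def compute_discount(price, customer_tier):
    rate = lookup_discount_rate(customer_tier)  # call to a subordinate module
    return round(price * (1 - rate), 2)

# Stub: stands in for the real subordinate module the component calls.
def lookup_discount_rate(customer_tier):
    return {"gold": 0.20, "silver": 0.10}.get(customer_tier, 0.0)

# Driver: a "main program" that feeds test data to the component
# and prints the results.
if __name__ == "__main__":
    cases = [
        ((100.0, "gold"), 80.0),      # normal case
        ((100.0, "silver"), 90.0),    # normal case
        ((100.0, "unknown"), 100.0),  # boundary: tier not found
        ((0.0, "gold"), 0.0),         # boundary: zero price
    ]
    for args, expected in cases:
        actual = compute_discount(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: compute_discount{args} = {actual} (expected {expected})")
```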
• Integration testing: begins once all modules have been unit tested individually.
• If they all work individually, why should we doubt that they will work when we put them together?
• The problem is “putting them together”: interfacing.
• Data can be lost across an interface, one module can have an adverse effect on another when they are combined, data structures can present problems, and so on.
• Integration testing is a systematic technique for constructing the software architecture while conducting tests to uncover errors associated with interfacing.
• Top-down integration: modules are integrated by moving downward, beginning with the main module (main program).
• Subordinate modules are integrated in either depth-first or breadth-first order.
• The next figure shows depth-first integration: selecting the left-hand path, components M1, M2, M5 are integrated first; next, M8 or M6 is integrated; then the right-hand paths are tested.
• Breadth-first integration integrates modules moving horizontally across each level: from the figure, components M2, M3, M4 are integrated first, then M5, M6, and so on.
• Bottom-up integration: begins with construction and testing of the modules at the lowest levels; components are integrated moving upward.
• The figure shows components combined to form clusters 1, 2, and 3; each cluster is tested using a driver, shown as a dashed block.
Top down integration testing
(Figure: top-down integration of modules M1 through M8, depth-first and breadth-first.)
Bottom up integration
(Figure: bottom-up integration; clusters 1, 2, and 3 are each tested with a driver.)
Basis path testing
• Proposed by Tom McCabe, basis path testing enables the test-case designer to derive a logical complexity measure of a procedural design.
• This measure is used as a guide for defining a basis set of execution paths.
• Test cases derived from the basis set are guaranteed to execute every statement in the program at least once.
• A. Flow graph notation: before the basis path method, a representation of control flow called the flow graph is introduced.
• The flow graph depicts logical control flow, as shown in figure 1.
• To explain the use of the flow graph, we consider a program design language (PDL) representation, depicted as a flowchart in figure 2(a); the flowchart shows the program control structure.
• Figure 2(b) maps the flowchart into a corresponding flow graph.
• In figure 2(b), each circle, called a flow graph node, represents one or more procedural statements.
• The arrows on the flow graph, called edges or links, represent flow of control and are analogous to flowchart arrows.
• A predicate node is a node from which two or more edges emanate.
• Independent program path: any path through the program that introduces at least one new set of processing statements.
• Stated in terms of the flow graph, an independent path must move along at least one edge that has not been traversed before.
• For example, the independent paths for the flow graph of figure 2(b) are:
• Path 1: 1-11
• Path 2: 1-2-3-4-5-10-1-11
• Path 3: 1-2-3-6-8-9-10-1-11
• Path 4: 1-2-3-6-7-9-10-1-11
• Paths 1, 2, 3, and 4 constitute a basis set for the flow graph of figure 2(b).
• If tests are designed to force execution of these paths, every statement in the program is guaranteed to be executed at least once.
• How do we know how many paths to look for? Cyclomatic complexity provides the answer.
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• It defines the number of independent paths in the basis set of a program, and it provides an upper bound for the number of tests that must be conducted to ensure that all statements are executed at least once.
• Cyclomatic complexity is computed in one of three ways:
• 1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
• 2. Cyclomatic complexity, V(G), for a flow graph G is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
• 3. Also, V(G) = P + 1, where P is the number of predicate nodes.
• Computing the cyclomatic complexity for the flow graph of figure 2(b) all three ways:
• 1. The flow graph has four regions.
• 2. V(G) = 11 edges - 9 nodes + 2 = 4.
• 3. V(G) = 3 predicate nodes + 1 = 4.
A small computation sketch follows.
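As a hedged sketch (not part of the slides), the following Python snippet computes V(G) two of the three ways for an invented flow graph, a loop containing an if/else, represented as an edge list:

```python
from collections import defaultdict

def cyclomatic_complexity(edges):
    """V(G) = E - N + 2 for a connected flow graph given as an edge list."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

def predicate_complexity(edges):
    """V(G) = P + 1, where P is the number of predicate (branching) nodes."""
    out_degree = defaultdict(int)
    for src, _dst in edges:
        out_degree[src] += 1
    predicates = sum(1 for d in out_degree.values() if d >= 2)
    return predicates + 1

# Invented flow graph: a while loop (node 2) containing an if/else (node 3).
edges = [(1, 2), (2, 3), (2, 8),   # node 2: loop test (predicate)
         (3, 4), (3, 5),           # node 3: if/else (predicate)
         (4, 6), (5, 6), (6, 2)]   # branches rejoin, then loop back

print(cyclomatic_complexity(edges))   # 8 edges - 7 nodes + 2 = 3
print(predicate_complexity(edges))    # 2 predicate nodes + 1 = 3
```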
• Steps of basis path testing:
1. Draw the flow graph from the flowchart of the program.
2. Calculate the cyclomatic complexity of the resultant flow graph.
3. Determine the basis set of independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
Control Structure testing
• Basis path testing is simple and effective, but it is not sufficient in itself.
• Control structure testing broadens testing coverage and improves the quality of white-box testing.
• Techniques of control structure testing include condition testing, data flow testing, and loop testing.
• Condition testing: exercises the logical conditions contained in a program module, to ensure that the module does not contain condition errors.
• Simple condition: E1 <relational operator> E2, where E1 and E2 are arithmetic expressions and <relational operator> is one of: <, <=, =, ≠, >, >=.
• Compound condition: composed of two or more simple conditions, Boolean operators (AND, OR, NOT), and parentheses, e.g., simple condition <Boolean operator> simple condition.
• Errors in a condition include Boolean operator errors, Boolean variable errors, relational operator errors, and arithmetic expression errors. A short condition-testing sketch follows.
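As an illustrative sketch (the function and its values are invented), the following Python test exercises each simple condition of a compound condition in both outcomes, including relational-operator boundaries:

```python
# Hypothetical module logic containing a compound condition.
def ship_free(total, member):
    # Compound condition: (total >= 50) OR (member AND total >= 25)
    return total >= 50 or (member and total >= 25)

# Condition testing: choose inputs so each simple condition is evaluated
# both true and false, exposing relational/Boolean operator errors.
cases = [
    ((60.0, False), True),    # total >= 50 true, member false
    ((50.0, False), True),    # boundary of the relational operator
    ((49.99, False), False),  # both simple conditions false
    ((30.0, True), True),     # member path: total >= 25 true
    ((24.99, True), False),   # member path boundary: total >= 25 false
]
for args, expected in cases:
    assert ship_free(*args) == expected, f"ship_free{args} failed"
print("all condition tests passed")
```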
• Loop testing: focuses on the validity of loop constructs.
• Four categories of loops can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
• Testing of simple loops, where N is the maximum number of allowable passes through the loop (a sketch follows this list):
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < N.
5. N-1, N, and N+1 passes through the loop.
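A minimal sketch of the simple-loop test cases above, assuming an invented function that processes at most N buffered readings:

```python
# Hypothetical loop-bearing function: sums at most the first N readings.
N = 8

def sum_readings(readings):
    total = 0
    for r in readings[:N]:   # loop under test; at most N passes
        total += r
    return total

# Simple-loop test cases: 0, 1, 2, m < N, N-1, N, and N+1 passes.
for passes in [0, 1, 2, 5, N - 1, N, N + 1]:
    data = [1] * passes
    expected = min(passes, N)    # passes beyond N are ignored by the slice
    actual = sum_readings(data)
    assert actual == expected, f"{passes} passes: got {actual}"
print("simple-loop tests passed")
```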
• Nested loops:
• Start at the innermost loop; set all other loops to minimum values.
• Conduct simple-loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (loop counter) values.
• Work outward, conducting tests for the next loop while keeping all other outer loops at minimum values.
• Continue until all loops have been tested.
• Concatenated loops:
• Follow the approach defined for simple loops if each of the loops is independent of the others.
• If two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, the loops are not independent; when loops are not independent, follow the approach defined for nested loops.
• Unstructured loops: redesign the program to avoid unstructured loops.
Validation Testing
• Validation succeeds when the software functions in a manner that can reasonably be expected by the customer.
• These expectations are defined in a document that describes all user-visible attributes of the software.
• 1) Validation test criteria: software validation is achieved through a series of tests.
• A test plan provides the classes of tests to be conducted, and a test procedure defines specific test cases.
• Both the plan and the procedure are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are achieved, all performance requirements are attained, and documentation is correct.
• After each validation test case, one of two possible conditions exists:
• 1. The function or performance characteristic conforms to the specification and is accepted.
• 2. A deviation from the specification is uncovered and a deficiency list is created.
• Configuration review: the intent of the review is to ensure that all elements of the software configuration have been properly developed; it is sometimes called an audit.
• Alpha and beta testing: when custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements.
• These tests are conducted by the end user rather than by software engineers; they are a kind of informal “test drive”.
• If software is developed for use by many customers, it is not practical to perform formal acceptance tests with each one.
• Most software builders use a process called alpha and beta testing to uncover errors that only end users seem able to find.
• The alpha test is conducted at the developer’s site by end users; the developer is present, recording errors and usage problems, and alpha tests are conducted in a controlled environment.
• The beta test is conducted at end-user sites.
• Unlike alpha testing, the developer is generally not present, so the beta test is a “live” application of the software in an environment that cannot be controlled by the developer.
• The end user records all problems and reports them to the developer at regular intervals.
• Based on the problems reported during beta tests, software engineers make modifications and then release the software to the full customer base.
System testing
• Its primary purpose is to test the complete software; it is actually a series of different tests whose purpose is to fully exercise the computer-based system.
• 1) Recovery testing: a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
• If recovery is automatic, re-initialization, data recovery, and restart are evaluated for correctness.
• If recovery requires human intervention, the mean time to repair is evaluated.
• Many computer-based systems must recover from faults and resume processing quickly.
• In some cases, a system must be fault tolerant, i.e., faults must not cause overall system function to stop.
• In other cases, system faults must be corrected within a specified time or severe damage will occur.
• 2) Security testing: verifies that the protection mechanisms built into a system will protect it from improper penetration.
• Any system that manages sensitive information is a target for improper or illegal penetration.
• Penetration spans a broad range of activities: hackers who attempt to penetrate the system, disgruntled employees who attempt to penetrate for revenge, and dishonest individuals who penetrate for illicit personal gain.
• During security testing, the tester plays the role of the individual who tries to penetrate the system: trying to acquire passwords, attacking the system, causing system errors, browsing insecure data, and searching for keys of entry.
• Given enough time and resources, good security testing will ultimately penetrate any system.
• 3) Stress testing: performed to check how a program deals with abnormal situations.
• The tester who performs stress testing asks: how high can we crank this up before it fails?
• Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example:
• 1. Tests are performed that generate ten interrupts per second, when one or two per second is the average rate.
• 2. The input data rate is increased to check how input functions respond.
• 3. Tests that require maximum memory and resources are executed.
• 4. Test cases that cause memory-management problems are performed.
• A variation of stress testing is called sensitivity testing; it attempts to find the data that may cause unstable or improper processing.
• 4) Performance testing: designed to test the run-time performance of software.
• At the unit level, the performance of an individual module is tested, but it is often necessary to measure resource utilization across the integrated system.
• Performance tests are often coupled with hardware and software instrumentation; through instrumentation, the tester can uncover situations that lead to degradation and possible system failure.
The Art of Debugging
• Debugging occurs as a consequence of successful testing: when a test case uncovers an error, debugging is the action that removes the error.
• Debugging process: debugging is not testing, but it occurs as a consequence of testing.
• Debugging starts with the execution of a test case; the results are assessed, and a lack of correspondence between expected and actual performance is encountered.
• Debugging attempts to match symptom with cause, thereby leading to error correction.
• Debugging always has one of two outcomes:
• 1. The cause will be found and corrected.
• 2. The cause will not be found; in this case, the person performing debugging may suspect a cause and work iteratively toward error correction.
• Debugging strategies:
1) Brute force method
2) Backtracking
3) Cause elimination
4) Automated debugging
(Figure: the debugging process, in which execution of test cases produces results, mismatches suggest suspected causes, and suspected causes drive additional tests.)
Product Metrics
• A key element of any engineering process is measurement.
• Measures are used to assess the quality of the products we develop and to understand the properties of the models we create.
• Measurement is “the action of measuring something, or the size, length, or amount of something, as established by measuring”.
• A “measure” is a number that is derived from taking a measurement; measurement relates to the action of measuring, and measurements are the raw data.
• Measurement can be used by software engineers to help assess the quality of technical work products and to assist in decision making as a project proceeds.
• Software measurement is a quantified attribute of a software product or the software process; it is a discipline within software engineering.
• The content of software measurement is defined and governed by ISO standard 15939 (software measurement process).
• In contrast, a “metric” is a calculation involving two or more measures; metrics relate to the method of measuring.
• Metrics are “a method of measuring something, or the results obtained from this”; metrics are derived combinations of measurements.
• In software testing, a metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute.
• In other words, metrics help estimate the progress, quality, and health of a software testing effort.
• A software metric is a standard of measure of the degree to which a software system or process possesses some property.
• A metric is not a measurement (metrics are functions, while measurements are the numbers obtained by applying them), though the two terms are sometimes used as synonyms.
• A metric is a calculated or composite indicator based upon two or more measures.
• Metrics are defined as “standards of measurement” and have been used to indicate a method of measuring the effectiveness and efficiency of a particular activity within a project.
• An example of a metric would be: there were only two user-discovered errors in the first 18 months of operation.
• This provides more meaningful information than a statement that the delivered system is of top quality.
• This metric indicates the quality of the product under test; it can be used as a basis for estimating the defects to be addressed in the next phase or the next release. This is an organizational measurement.
• Test metrics are a mechanism for quantitatively measuring the effectiveness of testing; they are a feedback mechanism for improving the testing process currently followed.
• Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.
• Process metrics can be used to improve software development and maintenance.
• Metrics can be defined as “standards of measurement”; software metrics are used to measure the quality of the project.
• Simply put, a metric is a unit used for describing an attribute.
Software Quality
• Everyone will agree that high-quality software is important, but what is quality?
• Software quality is conformance to functional and performance requirements, documented development standards, and characteristics that are expected of professionally developed software.
• Direct measures of the software engineering process include cost and effort applied. Direct measures of the product include lines of code (LOC) produced, execution speed, memory size, and defects reported over some set period of time.
• Indirect measures of the product include functionality, quality, complexity, efficiency, reliability, maintainability, and many other “-abilities”.
• The cost and effort required to build software, the number of lines of code produced, and other direct measures are relatively easy to collect, as long as specific conventions for measurement are established in advance. However, the quality and functionality of software, or its efficiency or maintainability, are more difficult to assess and can be measured only indirectly.
Software Quality
• McCall’s quality factors: McCall, Richards, and Walters proposed categories of factors that affect software quality.
• The categorization focuses on three aspects of a software product: its operational characteristics, its ability to undergo change, and its adaptability to new environments.
1. Product operation:
a. Correctness: the extent to which a program satisfies its specification and fulfills the customer’s objectives and requirements.
b. Reliability: the extent to which a program can be expected to perform its functions with the required precision.
c. Efficiency: the amount of computing resources and code required by a program to perform its functions.
d. Integrity: the extent to which use of the software or data by unauthorized persons can be controlled.
e. Usability: the effort required to learn, operate, prepare input for, and interpret the output of a program.
2. Product revision:
a. Maintainability: the effort required to locate and fix an error in a program.
b. Flexibility: the effort required to modify an operational program.
c. Testability: the effort required to test a program to ensure that it performs its required functions.
3. Product transition:
a. Portability: the effort required to transfer the program from one hardware or software system environment to another.
b. Reusability: the extent to which a program can be reused in other applications.
c. Interoperability: the effort required to couple one system to another.
• ISO 9126 quality factors: this standard was developed to identify quality attributes for computer software. It identifies six quality attributes:
1. Functionality: the degree to which the software satisfies the needs of the customer, as indicated by sub-attributes such as accuracy, security, suitability, and interoperability.
2. Reliability: the amount of time the software is available for use, as indicated by sub-attributes such as maturity, fault tolerance, and recoverability.
3. Usability: the degree to which the software is easy to use, as indicated by sub-attributes such as understandability, learnability, and operability.
4. Efficiency: the degree to which the software makes optimal use of system resources, as indicated by sub-attributes such as time behavior and resource behavior.
5. Maintainability: the ease with which repair may be made to the software, as indicated by sub-attributes such as analyzability, changeability, stability, and testability.
6. Portability: the ease with which the software can be transferred from one environment to another, as indicated by sub-attributes such as adaptability, replaceability, and installability.
Product metrics
• Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, and quality level.
• Product metrics for computer software help us to assess quality.
• Measure: provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of a product or process.
• Metric (IEEE 93 definition): a quantitative measure of the degree to which a system, component, or process possesses a given attribute.
• Metrics can be defined as “standards of measurement”; software metrics are used to measure the quality of the project. Simply put, a metric is a unit used for describing an attribute.
• Indicator: a metric or a combination of metrics that provides insight into the software process, a software project, or the product itself.
Product metrics for the Analysis model
These metrics examine the analysis model with the intent of predicting the “size” of the resultant system; size is an indicator of complexity and almost always an indicator of increased coding effort.
Function-based metrics: the function point metric, first proposed by Albrecht, is used to measure the functionality delivered by the system.
Function points can be used:
1. To estimate the cost or effort required for coding and testing.
2. To predict the number of errors that will be encountered during testing.
3. To predict the number of components or source lines in the implemented system.
Function points are computed by establishing a relationship between features of the software that are easily measurable, listed below:
1) Number of external inputs (EIs): each external input originates from a user or is transmitted from another application.
2) Number of external outputs (EOs): each external output is derived within the application and provides information to the user.
3) Number of external inquiries (EQs): an external inquiry is an input that results in the generation of an immediate software response in the form of an output.
4) Number of internal logical files (ILFs): a logical grouping of data that resides within the application and is maintained via external inputs.
5) Number of external interface files (EIFs): a logical grouping of data that resides external to the application but provides data that may be used by the application.
• Each parameter is classified as simple, average, or complex, and weighting factors are assigned as follows (a computation sketch follows the table):

Information Domain Count   Simple   Average   Complex
EIs                           3        4         6
EOs                           4        5         7
EQs                           3        4         6
ILFs                          7       10        15
EIFs                          5        7        10
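The slides stop at the weighting table, so as a hedged sketch the snippet below applies the standard function-point formula from the literature, FP = count_total x (0.65 + 0.01 x sum(Fi)), where the fourteen Fi are value-adjustment factors rated 0 to 5; all counts and ratings here are invented.

```python
# Weights from the table above: (simple, average, complex)
WEIGHTS = {
    "EI":  (3, 4, 6),
    "EO":  (4, 5, 7),
    "EQ":  (3, 4, 6),
    "ILF": (7, 10, 15),
    "EIF": (5, 7, 10),
}
LEVEL = {"simple": 0, "average": 1, "complex": 2}

def function_points(counts, value_adjustment_factors):
    """counts: {(domain, level): n}; factors: fourteen ratings from 0 to 5."""
    count_total = sum(n * WEIGHTS[dom][LEVEL[lvl]]
                      for (dom, lvl), n in counts.items())
    # Standard FP formula from the function-point literature:
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Invented example: a small system rated "average" everywhere.
counts = {("EI", "average"): 12, ("EO", "average"): 8,
          ("EQ", "average"): 6, ("ILF", "average"): 4,
          ("EIF", "average"): 2}
fi = [3] * 14                       # all 14 adjustment factors rated 3
print(round(function_points(counts, fi), 1))   # 166 * 1.07 = 177.6
```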
METRICS FOR SOURCE CODE
• Halstead’s theory of software science proposes a set of primitive measures that may be derived after code is generated, or estimated once design is complete.
• The measures are:
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences
• Overall program length N and program volume V can then be computed (see the sketch below):
• N = n1 log2 n1 + n2 log2 n2
• V = N log2 (n1 + n2)
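A minimal sketch of the two Halstead formulas above; the operator and operand counts are invented.

```python
import math

def halstead_length(n1, n2):
    """Estimated program length: N = n1*log2(n1) + n2*log2(n2)."""
    return n1 * math.log2(n1) + n2 * math.log2(n2)

def halstead_volume(N, n1, n2):
    """Program volume: V = N * log2(n1 + n2)."""
    return N * math.log2(n1 + n2)

# Invented counts for a small module: 10 distinct operators, 16 distinct operands.
n1, n2 = 10, 16
N = halstead_length(n1, n2)      # 10*log2(10) + 16*log2(16) = about 97.2
V = halstead_volume(N, n1, n2)   # 97.2 * log2(26) = about 456.9
print(round(N, 1), round(V, 1))
```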
Metrics for Design Model
• The design structure quality index (DSQI) is computed from counts S1 through S7 (the first four, from the standard DSQI definition: S1 = total number of modules, S2 = number of modules dependent on the source of data input or producing data used elsewhere, S3 = number of modules dependent on prior processing, S4 = number of database items):
• S5 = number of unique database items
• S6 = number of database segments (different records or individual objects)
• S7 = number of modules with a single entry and a single exit
• Intermediate values D1 to D6 are calculated from S1 to S7 as follows:
• D1 (program structure) = 1 if a standard design method (e.g., OOD or data flow-oriented design) was followed, otherwise D1 = 0
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entry/exit characteristics) = 1 - (S7/S1)
• DSQI is then calculated as DSQI = ΣWiDi, for i = 1 to 6, where Wi is the relative weighting of each intermediate value and ΣWi = 1 (if all Di are weighted equally, each Wi = 0.167).
• The DSQI values of previous designs can be compared with a current design; if DSQI is lower than average, further design work and review are indicated. A computation sketch follows.
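A small sketch of the DSQI calculation above, with invented counts:

```python
def dsqi(s, standard_design=True, weights=None):
    """Design structure quality index from counts S1..S7 (dict keyed 1-7)."""
    d = [
        1.0 if standard_design else 0.0,   # D1: program structure
        1 - s[2] / s[1],                   # D2: module independence
        1 - s[3] / s[1],                   # D3: independence of prior processing
        1 - s[5] / s[4],                   # D4: database size
        1 - s[6] / s[4],                   # D5: database compartmentalization
        1 - s[7] / s[1],                   # D6: entry/exit characteristics
    ]
    weights = weights or [1 / 6] * 6       # equal weighting, sum of Wi = 1
    return sum(w * di for w, di in zip(weights, d))

# Invented counts: 50 modules, 10 data-dependent, 5 prior-processing-dependent,
# 200 database items, 30 unique items, 4 segments, 45 single-entry/exit modules.
s = {1: 50, 2: 10, 3: 5, 4: 200, 5: 30, 6: 4, 7: 45}
print(round(dsqi(s), 3))
```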
Metrics for Design Model
• Metrics for object-oriented design: nine distinct and measurable characteristics of an OO design can be described:
• Size: defined in terms of four views:
• 1. Population: measured by taking a static count of OO entities such as classes or operations.
• 2. Volume: like population, but collected dynamically at a given instant of time.
• 3. Length: a measure of a chain of interconnected design elements (the depth of the interconnected design elements).
• 4. Functionality: provides an indirect indication of the value delivered to the customer.
• Complexity: viewed by examining how the classes of an OO design are related to one another.
• Coupling: the physical connections between elements of the OO design (e.g., the number of messages passed between objects).
• Sufficiency: the degree to which an abstraction possesses the features required of it; that is, what properties does this abstraction (class) need to possess to be useful to me?
• Primitiveness: the degree to which an operation cannot be constructed from other operations.
• Similarity: the degree to which two or more classes are similar in terms of structure, function, behavior, or purpose.
• Volatility: measures the likelihood that a change will occur.
• Further families of OO design metrics include: class-oriented metrics (the CK suite and the MOOD metrics suite), the OO metrics proposed by Lorenz and Kidd, component-level design metrics, operation-oriented metrics, and user interface design metrics.
Metrics for Process and projects
• Process metrics can be used to improve software development and maintenance; they provide a set of process indicators that lead to long-term software process improvement.
• Project metrics enable a software project manager to:
• 1. Assess the status of an ongoing project
• 2. Track potential risks
• 3. Uncover problem areas before they become critical
• 4. Adjust work flow or tasks
• 5. Evaluate the team’s ability to control the quality of software work products
• Process metrics and software process improvement: the only way to improve any process is to measure specific attributes of the process, develop meaningful metrics based on these attributes, and then use the metrics to provide indicators that will lead to improvement.
• Process is only one of the factors that improve the quality and performance of software and of the organization.
• Process can be seen as one corner of a triangle connecting three factors that influence software quality and performance: people, product, and technology.
• The skill and motivation of people is the factor with the strongest influence on quality and performance.
• The complexity of the product also has an impact on quality and performance, as does the technology (i.e., the software engineering methods and tools).
• Surrounding the triangle are the development environment (e.g., CASE tools), business conditions (e.g., deadlines, rules), and customer characteristics (e.g., ease of communication and collaboration).
• We measure the efficacy of a software process indirectly; that is, we derive metrics based on the outcomes of the process.
• Outcomes include measures of errors found before release, defects reported by users, delivered work products (output), schedule conformance, and so on.
• We also derive metrics by measuring the characteristics of particular tasks; for example, we measure the time and effort spent performing software engineering activities.
• Process metrics can provide benefit as an organization works to improve its overall level of maturity.
• Grady suggests the following software process metrics etiquette for both managers and practitioners:
• Use common sense when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don’t use metrics to appraise individuals.
• Work with practitioners and teams to set goals and to define the metrics that will be used to achieve those goals.
• Never use metrics to threaten an individual or a team.
• Project metrics: the intent of project metrics is to minimize the development schedule by avoiding delays, problems, and risks.
• Project metrics are also used to assess product quality on an ongoing basis and to modify the technical approach to improve quality.
• As quality improves, defects are minimized; as the defect count goes down, the rework required during the project is reduced, giving a reduction in overall project cost.
Software Measurement
Software measurement can be categorized as:
1) Direct measures
2) Indirect measures
• Direct measurement: direct measures of the software process include cost and effort; direct measures of the product include lines of code, execution speed, memory size, and defects per reporting time period.
• Indirect measurement: indirect measures examine the quality of the software product itself (e.g., functionality, complexity, efficiency, reliability, and maintainability).
• Reasons for measurement:
1. To gain a baseline for comparison with future assessments
2. To determine status with respect to plans
3. To predict size, cost, and duration estimates
4. To improve product quality and drive process improvement
• The metric categories in software measurement are:
Size-oriented metrics
Function-oriented metrics
Object-oriented metrics
Web-based application metrics
• Size-oriented metrics: concerned entirely with direct measurement of the software.
• A software company maintains a simple record for calculating the size of the software; it includes LOC, effort, cost ($$), pages of documentation, errors, defects, and people.
• Function-oriented metrics: measure the functionality delivered by the application.
• The most widely used function-oriented metric is the function point.
• Function points are independent of programming language and measure functionality from the user’s point of view.
• Object-oriented metrics: relevant for object-oriented programming; based on the following:
Number of scenarios (similar to use cases)
Number of key classes
Number of support classes
Average number of support classes per key class
Number of subsystems
• Web-based application metrics: metrics related to web-based applications measure the following:
1. Number of static pages (NSP)
2. Number of dynamic pages (NDP)
Customization: C = NSP / (NSP + NDP); C should approach 1 (see the sketch below).
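A one-line sketch of the customization metric above, with invented page counts:

```python
def customization_index(static_pages, dynamic_pages):
    """C = NSP / (NSP + NDP); by this formula, C = 1 when all pages are static."""
    return static_pages / (static_pages + dynamic_pages)

print(customization_index(90, 10))  # 0.9 for 90 static and 10 dynamic pages
```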
Metrics for Software Quality
• Measuring software quality:
1. Correctness = defects / KLOC
2. Maintainability = MTTC (mean time to change)
3. Integrity = Σ[1 - (threat × (1 - security))], where
threat is the probability that an attack of a specific type will occur within a given time, and
security is the probability that an attack of a specific type will be repelled.
4. Usability: ease of use.
• Defect removal efficiency (DRE):
DRE = E / (E + D),
where E is the number of errors found before delivery and D is the number of defects reported after delivery.
The ideal value of DRE is 1. A short computation sketch follows.
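A minimal sketch of the integrity and DRE formulas above; all numbers are invented.

```python
def integrity_term(threat, security):
    """One term of the sum: 1 - threat * (1 - security) for a given attack type."""
    return 1 - threat * (1 - security)

def dre(errors_before, defects_after):
    """Defect removal efficiency: DRE = E / (E + D); the ideal value is 1."""
    return errors_before / (errors_before + defects_after)

# Invented numbers: one attack type with threat 0.25 and security 0.95;
# 94 errors found before delivery, 6 defects reported after delivery.
print(integrity_term(0.25, 0.95))  # 1 - 0.25 * 0.05 = 0.9875
print(dre(94, 6))                  # 94 / 100 = 0.94
```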