
UNIT 4

NOTES
• Testing Strategies: A strategic approach to software testing, test strategies
for conventional software, Black-Box and White-Box testing, Validation
testing, System testing, the art of Debugging.
• Product metrics: Software Quality, Metrics for Analysis Model, Metrics for
Design Model, Metrics for source code, Metrics for testing, Metrics for
maintenance.
• Metrics for Process and Projects: Software Measurement, Metrics for
software quality.

A STRATEGIC APPROACH FOR SOFTWARE TESTING:
SOFTWARE TESTING:
 One of the important phases of software development.
 Testing is the process of execution of a program with the intention of finding errors and
correcting them.
 Involves 40% of total project cost and effort.
 Testing is a set of activities that can be planned in advance and conducted systematically.
All testing strategies have the following generic characteristics:
 Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and (for large projects) by an independent test group.
• Testing and debugging are different activities, but debugging must follow testing.
• Low level tests verify small code segments.
• High level tests validate major system functions against customer requirements.
TESTING STRATEGIES:
 A road map that incorporates test planning, test case design, test execution, and resultant data collection and evaluation.
• Perform Formal Technical reviews (FTR) to uncover errors during software
development.
• Begin testing at component level and move outward to integration of entire component
based system.
 A strategy for software testing must include low-level tests that verify small source code segments, as well as high-level tests that validate major system functions against customer requirements.
1. Verification and Validation:
 Verification: Refers to the set of activities that ensure that software correctly implements
a specific function.
 Validation: Refers to a different set of activities that ensures that the software is
traceable to the customer requirements.
 V&V encompasses a wide array of Software Quality Assurance (SQA) activities.
 Verification: Are we building the product right?
 Validation: Are we building the right product?
2. Organizing for software testing:
 There are a number of common misconceptions about testing, all of which are incorrect:
 a. The developer should not do any testing at all.
 b. The software should be handed over to strangers (independent testers) who will test it mercilessly.
 c. Testers get involved in the project only when the testing steps are about to begin.
 The software developer is responsible for testing individual units.
 Developer also does integration testing.
 The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test what has been built.
 An ITG removes the conflict of interest; after all, they are paid to find errors.
 While testing is conducted, the developer must be available to correct the errors that are
found.
3. Software testing strategy for conventional software architectures:
 A strategy for software testing may be viewed in the context of a spiral:
Figure: A strategy for software testing viewed in the context of a spiral
 Unit testing concentrates on each unit of software.
 Testing moves outward along spiral to integration testing where focus is on construction
of software architecture.
 Moving more outward in spiral, validation testing done where requirements are validated
against software that is constructed.
 Finally, system testing, where software is tested as a whole.
Criteria for completion of software testing:
• A classic question is: When is testing done? How do we know that we have tested enough?
• One response is: testing is never really complete; the burden simply shifts from the developer to the customer.
• Every time the customer executes the program, the program is being tested.
• Nobody is absolutely certain that the software will not fail.
• Based on statistical modeling and software reliability models, a statement such as the following can be made: with 95 percent confidence, the probability of 1000 CPU hours of failure-free operation is at least 0.995.

TEST STRATEGIES FOR CONVENTIONAL SOFTWARE:
• One test strategy is to wait until the software is fully constructed and then test the overall system to find errors.
• This approach does not work well and results in buggy software.
• Another test strategy is to conduct tests on a daily basis as the system is built.
• This approach can be very effective.
• Most software teams choose a testing strategy that begins with testing of individual units and then moves toward integration testing.

UNIT TESTING:
• It focuses verification effort on the smallest unit of software design i.e. each module or
component.
• It focuses on internal processing logic and data structures.
Unit test Considerations:
• The module interface is tested to ensure that information properly flows in and out of
program unit.
• Local data structures are tested to ensure that stored data maintains its integrity during
execution.
• Figure: Unit testing

• Independent paths are tested to ensure that all statements in module are executed at least
once.
• Boundary conditions are tested to ensure that modules operate properly at boundary.
• Finally, all errors handling paths are tested.
• Test cases are designed to find errors due to erroneous computations, incorrect
comparisons, or improper flow.
Unit test procedures:
• Next figure shows unit test environment.
• A driver is nothing more than a “main program” that accepts test case data, passes that data to the component to be tested, and prints the relevant results.
• Stubs serve to replace modules that are subordinate to (called by) the component to be tested.
• Drivers and stubs represent testing overhead.
• That is, both are software that must be written but is not delivered with the final software product.
Figure: Unit testing environment
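As a minimal sketch of this arrangement (the component compute_discount and the pricing stub below are hypothetical, not taken from the notes), a driver can be an ordinary main program that feeds test data to the component and checks the results, while a stub stands in for a subordinate module that is not yet available:

def get_rate_stub(customer_type):
    """Stub: replaces the real (unavailable) pricing module with a canned response."""
    return 0.10 if customer_type == "regular" else 0.20

def compute_discount(price, customer_type, get_rate=get_rate_stub):
    """Hypothetical component under test: applies a discount rate to a price."""
    rate = get_rate(customer_type)
    return round(price * (1 - rate), 2)

def driver():
    """Driver: a simple 'main program' that passes test data to the
    component and prints and checks the results."""
    test_cases = [
        (100.0, "regular", 90.0),
        (100.0, "premium", 80.0),
    ]
    for price, ctype, expected in test_cases:
        actual = compute_discount(price, ctype)
        print(f"{ctype}: expected {expected}, got {actual}")
        assert actual == expected

if __name__ == "__main__":
    driver()

Both the driver and the stub here are throwaway code: they exist only to exercise the component in isolation and would not be delivered with the product.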

INTEGRATION TESTING:
• Once all modules have been unit tested individually, a question arises:
• If they all work individually, why do we doubt that they will work when we put them together?
• The problem is “putting them together” –interfacing.
• Data can be lost across an interface, one module can have an inadvertent adverse effect on another when combined, data structures can present problems, and so on.
• Integration testing is a systematic technique for constructing software architecture and
does combined testing to find errors.
Top down Integration testing:
• Modules are integrated by moving downward beginning with main module (main
program).
• Subordinate modules are incorporated in either a depth-first or breadth-first manner.
• Next figure.1 shows the top down integration testing.
• In depth-first integration, selecting the left-hand path, components M1, M2 and M5 would be integrated first.
• Next, M8 or M6 would be integrated; then the central and right-hand paths are built and tested.
• Breadth-first integration incorporates components moving horizontally, one level at a time.
• Components M2, M3 and M4 are integrated first; next come M5, M6, and so on.

Bottom-up Integration testing:


• Begins with constructing and testing with modules at lowest levels and components are
integrated from bottom up.
• Figure 2 shows bottom-up integration testing, where components are combined to form clusters 1, 2 and 3.
• Each cluster is tested using driver shown by dashed block.
• Clusters 1 and 2 are subordinate of Ma and drivers D1, D2 are removed and clusters are
directly interfaced to Ma.
• Driver D3 is removed and cluster 3 is integrated interfaced with module Mb.
• Then both Ma, Mb will ultimately integrate with module Mc.

Figure.1: Top down integration testing


Figure.2: Bottom up integration testing

Regression Testing:
• Each time a new module is added, the software changes: new data flow paths are established and new input/output may occur.
• These changes can cause problems with functions that previously worked properly.
• In the context of integration testing, regression testing is the re-execution of some subset of tests on the changed software to make sure that the changes have not produced unintended side effects.
• Regression testing makes sure that changes do not introduce new errors.
Smoke Testing:
• It is also an integration testing approach that is used when software is being developed.
• Smoke testing helps to test a project on a frequent (e.g. daily) basis.
• Whenever the software is rebuilt, it is smoke tested so that errors are found early, on a daily basis.
• These frequent tests allow easy detection and correction of errors.
• Also it makes the software progress assessment easy.

SOFTWARE TESTING:
• Two major categories of software testing
 Black box testing and White box testing

BLACK BOX TESTING:


• Treats the system as black box whose behavior can be determined by studying its input
and related output.
• It is also called behavioral testing.
• It focuses on the functional requirements of the software i.e. it enables the software
engineer to derive input conditions that fully exercise all the functional requirements for
that program.
• Black box tests examine fundamental aspects of a system with little regard for the internal structure of the software.
• Concerned with functionality, not with the internal structure or implementation of the program.
Black box testing finds the errors in following categories:
• 1.Incorrect or missing functions
• 2.Interface errors
• 3.Errors in data structures
• 4.Behaviour errors or performance errors
• 5.Initialization Errors and termination errors
• Black box testing is applied during later stages of testing.
TYPES OF BLACK BOX TESTING:
• 1)Graph based testing method
• 2)Equivalence partitioning
• 3)Boundary Value analysis
• 4)Orthogonal array Testing
1. Graph based testing:
• Testing begins by creating a graph of important objects and their relationships.
• Then devise test cases that cover the graph such that each object and its relationships are exercised and errors are uncovered.
• In graph, collection of nodes represents objects.
• Links represent the relationships between objects.
• Node weights that describe the properties of a node.
• Link weights that describe some characteristics of a link.
• Nodes are represented by circles, connected by links.
• Directed link (represented by an arrow) shows that relationship moves in one direction,
• Bi-directed link called symmetric link show relation in both directions.
• Parallel links show different relationships between nodes.
Figure: Graph Based testing

[The figure shows objects #1, #2 and #3 as nodes connected by a directed link, an undirected link and parallel links, with node and link weights annotated.]

2. Equivalence partitioning:
• Divides all possible inputs into classes such that there is a finite number of equivalence classes.
• Equivalence class
-- Set of objects that can be linked by relationship
• Reduces the cost of testing
• Example : Input consists of 1 to 10
• Then classes are n<1,1<=n<=10,n>10
• Choose one valid class with value within the allowed range and two invalid classes where
values are greater than maximum value and smaller than minimum value.
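A small sketch of this example follows; the function accept is hypothetical and stands in for whatever processing consumes the input, and one representative value is drawn from the valid class and from each invalid class:

def accept(n):
    """Hypothetical function under test: valid only for 1 <= n <= 10."""
    return 1 <= n <= 10

# One representative test value per equivalence class.
equivalence_classes = {
    "n < 1 (invalid)": 0,
    "1 <= n <= 10 (valid)": 5,
    "n > 10 (invalid)": 11,
}

for name, value in equivalence_classes.items():
    print(f"{name:22s} input={value:3d} -> accepted={accept(value)}")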
3. Boundary Value analysis:
• Select input from equivalence classes such that the input lies at the edge of the
equivalence classes.
• Set of data lies on the edge or boundary of a class of input data or generates the data that
lies at the boundary of a class of output data.
• Example : If 0.0<=x<=1.0
• Then test cases (0.0, 1.0) for valid input and (-0.1 and 1.1) for invalid input.
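A sketch of the same idea for the range 0.0 <= x <= 1.0, using the boundary values themselves plus values just outside them (the function in_range is again a hypothetical stand-in):

def in_range(x):
    """Hypothetical function under test: accepts 0.0 <= x <= 1.0."""
    return 0.0 <= x <= 1.0

# Test cases sit on, or just beyond, the boundaries of the input class.
boundary_cases = [
    (0.0, True),    # lower boundary (valid)
    (1.0, True),    # upper boundary (valid)
    (-0.1, False),  # just below the lower boundary (invalid)
    (1.1, False),   # just above the upper boundary (invalid)
]

for x, expected in boundary_cases:
    assert in_range(x) == expected, f"unexpected result for x={x}"
print("all boundary value cases behaved as expected")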
4. Orthogonal array Testing:
• Applied to problems in which the input domain is relatively small but too large for exhaustive testing.
• Example: Three inputs A,B,C each having three values will require 27 test cases
• L9 orthogonal testing will reduce the number of test case to 9 as shown below :
A B C
1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
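The sketch below shows how the nine rows could be mapped onto concrete test inputs; the factor names and value sets are invented for illustration, and only the index pattern comes from the table above:

# L9 orthogonal array for three factors, each with three levels (levels 1..3).
L9 = [
    (1, 1, 1), (1, 2, 2), (1, 3, 3),
    (2, 1, 2), (2, 2, 3), (2, 3, 1),
    (3, 1, 3), (3, 2, 1), (3, 3, 2),
]

# Hypothetical value sets for factors A, B and C.
A = ["small", "medium", "large"]
B = ["read", "write", "delete"]
C = ["admin", "user", "guest"]

# Nine test cases instead of 3 x 3 x 3 = 27 exhaustive combinations.
for a, b, c in L9:
    print(f"test case: A={A[a - 1]:6s} B={B[b - 1]:6s} C={C[c - 1]}")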

WHITE BOX TESTING:


• Also called glass box testing.
• Involves knowing the internal working of a program
• Guarantees that all independent paths will be exercised at least once.
• Exercises all logical decisions on their true and false sides
• Executes all loops and exercises all data structures for their validity

WHITE BOX TESTING TECHNIQUES:


• Basis path testing
• Control structure testing
1. Basis path Testing:
• Proposed by Tom McCabe, it enables the test case designer to derive a measure of the logical complexity of a procedural design.
• This measure is used as a guide for defining a basis set of execution paths.
• Guarantees to execute every statement in the program at least once
Flow Graph notation:
• Before basis path method, representation of control flow, called a flow graph notation is
introduced.
• The flow graph shows logical control flow.
• To explain the use of a flow graph, we consider a program design language (PDL) representation, depicted by the flowchart shown in Figure 2(a).
• This flowchart is used to depict the program control structure.
• Figure 2(b) maps flow chart into corresponding flow graph.
• In figure2 (b), each circle called flow graph node, represents one or more procedural
statements.
• The arrows on flow graph called edges or links represent flow of control and same as
flowchart arrows.

Figure: 2(a) and 2(b)


• A predicate node is a node that contains a condition and from which two or more edges emanate.
Independent program paths:
• An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
• In flow graph, an independent path moves along at least one edge that has not been
traversed before.
• For example, independent paths for flow graph in figure 2(b) are :
• Path 1 : 1-11
• Path 2 : 1-2-3-4-5-10-1-11
• Path 3 : 1-2-3-6-8-9-10-1-11
• Path 4 : 1-2-3-6-7-9-10-1-11
• Paths 1, 2, 3 and 4 constitute a basis set for the flow graph of Figure 2(b).
• If tests are designed to force execution of these paths, every statement in the program is guaranteed to be executed at least one time.
• How do we know that how many paths to look for?
• Cyclomatic complexity provides the answer.

Cyclomatic complexity:
• It is software metric that provide measure of logical complexity of a program.
• It defines the number of independent paths in the basis set of a program.
• It provides number of tests that are conducted to make sure that all statements are
executed at least once.
• Cyclomatic complexity is computed in one of three ways :
• 1. The number of regions corresponds to cyclomatic complexity
• 2. Cyclomatic complexity, V(G), for flow graph, G, is define as :
• V (G) = E – N + 2, where E is the number of flow graph edges and N is the number of
flow graph nodes.
• 3. Also, V (G) = P + 1, where p is number of predicate nodes.
• The cyclomatic complexity for the flow graph of Figure 2(b) can be computed in each of the three ways:
• 1. The flow graph has four regions.
• 2. V(G) = 11 edges – 9 nodes + 2 = 4
• 3. V(G) = 3 predicate nodes + 1 = 4
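A minimal sketch of the second and third computations, using the edge, node and predicate-node counts read off a flow graph (the numbers used are those quoted above for Figure 2(b)):

def cyclomatic_complexity(num_edges, num_nodes, num_predicate_nodes):
    """Compute V(G) two ways and check that they agree."""
    v_from_edges = num_edges - num_nodes + 2       # V(G) = E - N + 2
    v_from_predicates = num_predicate_nodes + 1    # V(G) = P + 1
    assert v_from_edges == v_from_predicates, "inconsistent flow graph data"
    return v_from_edges

# Counts quoted above for the flow graph of Figure 2(b); prints 4.
print(cyclomatic_complexity(num_edges=11, num_nodes=9, num_predicate_nodes=3))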
Steps of Basis Path Testing:
1. Draw the flow graph from flow chart of the program.
2. Calculate the cyclomatic complexity of the resultant flow graph.
3. Determine the basis set of independent paths.
4. Prepare test cases that will force execution of each path of basis set.

2. Control Structure Testing:


• Basis path testing is simple and effective, but it is not sufficient in itself.
• Control structure testing broadens test coverage and improves the quality of white-box testing.
Following techniques of control structure testing:
• Condition Testing , Data flow Testing , Loop Testing
Condition Testing:
• Exercise the logical conditions contained in a program module.
• The purpose of condition testing is to detect errors in the conditions contained in a program.
• Simple condition : E1<relation operator>E2 where, E1 and E2 are arithmetic expressions
and <relation operator> is one of following :
• <, <=, = , ≠ , > , >=
• Compound condition: Composed of two or more simple conditions, Boolean operators
(AND, OR, NOT) and parentheses.
• Simple condition<Boolean operator>simple condition.
• Errors in condition include Boolean operator errors, Boolean variable errors, relational
operator errors, arithmetic expression errors
Data flow Testing:
• Selects test paths according to the locations of definitions and use of variables in a
program
• Aims to ensure that the definitions of variables and their subsequent uses are tested.
• First construct a definition-use graph from the control flow of a program
• Def(definition):definition of a variable on the left-hand side of an assignment statement
• C- use: Computational use of a variable like read, write or variable on the right hand of
assignment statement
• P- use: Predicate use in the condition
• Every DU (definition-use) chain should be tested at least once.
Loop Testing:
• Focuses on the validity of loop constructs.
• Four categories can be defined :
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
Testing of simple loops:
-- N is the maximum number of allowable passes through the loop.
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m passes through the loop where m<N
5. N-1,N,N+1 passes through the loop
Nested Loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter (loop counter) values.
3. Work outward, conducting tests for the next loop but keeping all other outer loops at minimum values.
4. Continue until all loops have been tested.
Concatenated loops:
1. Follow the approach defined for simple loops if each of the loops is independent of the others.
2. If two loops are concatenated and the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.
3. If the loops are not independent, follow the approach for nested loops.
Unstructured Loops: Redesign the program to avoid unstructured loops.

VALIDATION TESTING:
• Validation succeeds when the software functions in a manner that can be reasonably expected by the customer.
• Reasonable expectations are defined in the software requirements specification – a document that describes all user-visible attributes of the software.
Validation Test Criteria:
Software validation is achieved through a series of tests.
• A test plan provides the class of tests to be conducted, and a test procedure defines
specific test cases.
Both plan and procedure are designed to make sure that:
• All functional requirements are satisfied, Behavioral properties are achieved, All
performance requirements are achieved, Documentation is correct.
• After each validation test case has been conducted, one of two possible conditions exists:
• The function or performance characteristic conforms to the specification and is accepted.
• A deviation from the specification is uncovered and a deficiency list is created.
Configuration Review:
• The intention of review is to make sure that all elements of software configuration have
been developed.
• It is sometimes called an audit.
ALPHA AND BETA TESTING:
• When custom software is built for one customer, a series of acceptance tests are
conducted to allow the customer to validate all requirements.
• These tests are conducted by the end user rather than by software engineers and can range from an informal “test drive” to a planned series of tests.
• If software is developed to be used by many customers, it is not possible to perform
acceptance tests with each one.
• Most software builders use a process called alpha and beta testing to uncover errors that only the end user seems able to find.
Alpha Testing:
• It is conducted at the developer’s site by end users.
• The developer is present with the user, recording errors and usage problems; alpha tests are conducted in a controlled environment.
Beta Testing:
• It is conducted at end user sites.
• Unlike alpha testing, developer is generally not present.
• So, beta test is a “live” application of the software in an environment that cannot be
controlled by developer.
• End user records all problems and reports these problems to developer at regular
intervals.
• According to the problems reported during beta tests, software engineers make
modifications, and then release software to customers.

SYSTEM TESTING:
• Its primary purpose is to test the complete software.
• It is actually a series of different tests whose purpose is to fully test the computer based
system.
1) Recovery Testing:
• Recovery testing is a system test that forces the software to fail in different ways and
verifies that recovery is properly performed.
• If recovery is automatic, then re-initialization, data recovery, and restart are evaluated for
correctness.
• If recovery needs human effort, mean time to repair is evaluated.
• Many computer-based systems must recover from faults and resume processing quickly.
• In some cases, a system must be fault tolerant, i.e. faults must not cause overall system function to stop.
• In other cases, a system failure must be corrected within a specified period of time or severe damage will occur.
2) Security Testing:
• Security testing verifies that the protection mechanisms built into a system will protect it from improper penetration.
• Any system that manages sensitive information is a target for improper or illegal
penetration.
Penetration may be attempted by many kinds of individuals:
• Hackers who attempt to penetrate the system for the challenge of it
• Disgruntled employees who attempt to penetrate for revenge
• Dishonest individuals who penetrate for illicit personal gain
• During security testing, the tester plays the role of the individual who tries to penetrate the system.
• The tester may try to acquire passwords, attack the system, cause system errors, exploit insecure data, or find the key to system entry.
• Given enough time and resources, good security testing will ultimately penetrate a system.
3. Stress Testing:
• Stress testing is performed to check how program deals with abnormal situations.
• Tester who performs stress testing asks: How high can we move this up before it fails?
• Stress testing executes a system that demands resources in abnormal quantity, frequency,
or volume.
• For example :
• 1. Tests are designed that generate ten interrupts per second, when one or two per second is the average rate.
• 2. The input data rate is increased to determine how input functions respond.
• 3. Tests that require maximum memory and resources are executed.
• 4. Test case that causes memory problems is performed.
• A variation of stress testing is called sensitivity testing.
• Sensitivity testing attempts to uncover data combinations that may cause unstable or improper processing.

4) Performance Testing:
• It is designed to test run time performance of software.
• At the unit level, performance of individual module is tested.
• It is often necessary to measure resource utilization.
• Performance testing often requires both hardware and software instrumentation.
• Through instrumentation, the tester can uncover situations that lead to degradation and possible system failure.

THE ART OF DEBUGGING:


• Debugging occurs as a result of successful testing.
• It means, when a test case uncover (find) errors, debugging is an action which removes
these errors.
Debugging Process:
• Debugging is not testing but it occurs as a result of testing.
• Debugging starts with execution of test case.
• The results are assessed, and a lack of correspondence between expected and actual performance is encountered.
• Debugging attempts to match symptom with cause, thereby leading to error correction.
Figure: The Art of Debugging

Debugging always has one of two outcomes:


• 1. The cause will be found and corrected.
• 2. The cause will not be found.
• In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction iteratively.
Debugging Strategies:
1) Brute Force Method.
2) Back Tracking
3) Cause Elimination and
4) Automated debugging
1. Brute force: Most common and least efficient
-- Applied when all else fails
-- Memory dumps are taken
-- Tries to find the cause within the mass of information produced
2. Back tracking: Common debugging approach
-- Useful for small programs
-- Beginning at the site where the symptom has been uncovered, the source code is traced backward until the site of the cause is found.
3. Cause Elimination:
-- Based on the concept of binary partitioning
-- A list of all possible causes is developed and tests are conducted to eliminate each

PRODUCT METRICS:
• Product metrics: Describe the characteristics of the product such as size,
complexity, design features, performance, and quality level.
• Product metrics for computer software helps us to assess quality.
• Measurement: is "The action of measuring something or the size, length, or amount of
something, as established by measuring."
• Key element of any engineering process is measurement.
• Measures are used to assess the quality of products we develop and to understand the
properties of models we create.
• A "measure" is a number that is derived from taking a measurement.
• So measurement relates more to the action of measuring.
• Measurements are the raw data
• Measurement can be used by software engineers to help assess the quality of technical
work products and to assist in decision making as a project proceeds.
• Software measurement is quantified attribute of software product or software process.
• It is a discipline within software engineering.
• The content of software measurement is defined and governed by ISO Standard ISO
15939 (software measurement process)
• Metrics: are "A method of measuring something, or the results obtained from this“.
• Metrics are derived combinations of measurements.
• In contrast, a “metric” is a calculation that relates two or more measures.
• Metrics relates more to the method of measuring.
• In software testing, Metric is a quantitative measure of the degree to which a system,
system component, or process possesses a given attribute.
• In other words, metrics helps estimating the progress, quality and health of a software
testing effort
• A software metric is a standard of measure of a degree to which a software system or
process possesses some property.
• Metric is not a measurement (metrics are functions, while measurements are the
numbers obtained by the application of metrics).
• Still these two terms are used as synonyms sometimes
• It is a calculated or composite indicator based upon two or more measures.
• Metrics are defined as: “standards of measurement” and have been used to indicate a
method of measuring effectiveness and efficiency of particular activity within a project.
• An example of a metric would be that there were only two user-discovered errors in the
first 18 months of operation.
• This provides more meaningful information than a statement that the delivered system is
of top quality.
• This metric indicates the quality of the product under test.
• It can be used as a basis for estimating defects to be addressed in the next phase or the
next release.
• This is an Organizational Measurement.
• Test Metrics is a mechanism to know the effectiveness of the testing that can be
measured quantitatively.
• It is a feedback mechanism to improve the Testing Process that is followed currently.
• Process metrics can be used to improve software development and maintenance
• Metrics can be defined as “STANDARDS OF MEASUREMENT”.
• Software Metrics are used to measure the quality of the project.
• Metric is a unit used for describing an attribute.
• Measure :
• -- Provides a quantitative indication of the extent, amount, dimension, capacity or size of
some attribute of a product or process
Metric (IEEE 93 definition):
• -- A quantitative measure of the degree to which a system, component or process possess
a given attribute
• Indicator :
• -- A metric or a combination of metrics that provide insight into the software process, a
software project or a product itself

SOFTWARE QUALITY:
• Everyone will agree that high-quality software is an important goal.
• But what is quality?
• Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
• Factors that affect software quality can be categorized in two broad groups:
• Factors that can be directly measured (e.g. defects found during testing)
• Factors that can be indirectly measured (e.g. usability or maintainability).
• In each case, measurement should occur.

McCall’s quality factors:


• McCall, Richards and Walters proposed categories of factors that affect software quality.
• It focuses on three aspects of software product :
• Operational characteristics, its ability to undergo changes, and its adaptability to new
environment.
1. Product operation :
a. Correctness: Extent up-to which program satisfies and fulfills customer’s
objectives and requirements.
b. Reliability: Extent up-to which a program can be expected to perform its
functions with required quality.
c. Efficiency: Amount of resources and code required by a program to perform its
functions.
d. Integrity: Extent up-to which use of software or data by unauthorized person can
be controlled.
e. Usability: Effort required to learn, operate, prepare input for and interpret output of a program.
2. Product Revision:
a. Maintainability: Effort required to locate and fix errors in a program .
b. Flexibility: The effort required to modify an operational program.
c. Testability: The effort required to test a program to ensure that it performs its intended function.
3. Product Transition :
a. Portability: Effort required to transfer the program from one hardware and/or software system environment to another.
b. Reusability: Extent to which a program can be reused in other applications.
c. Interoperability: Effort required to couple one system to another.

Figure: McCall’s quality factors

ISO 9126 Quality Factors:


This standard was developed to identify quality attributes for computer software.
The standard identifies six quality attributes:
1. Functionality: The degree up-to which software satisfies needs of customer indicated
by sub attributes like accuracy, security, suitability, interoperability
2. Reliability: The amount of time the software is available for use, as indicated by sub-attributes like maturity, fault tolerance and recoverability.
3. Usability: The degree to which the software is easy to use, as indicated by sub-attributes like understandability, learnability, operability.
4. Efficiency: The degree up-to which software makes best use of system resources as
indicated by sub attributes like time behavior, resource behavior.
5. Maintainability: The ease with which repairs can be made to the software, as indicated by sub-attributes like analyzability, changeability, stability and testability.
6. Portability: The ease with which the software can be transferred from one environment to another, as indicated by sub-attributes like adaptability, replaceability, installability.

METRICS FOR THE ANALYSIS MODEL:


 These metrics examine the analysis model with the intent of predicting the “size” of the resultant system.
 Size is an indicator of design complexity and is almost always an indicator of increased coding, integration and testing effort.
Function – Based Metrics:
 It is also called the function point (FP) metric, first proposed by Albrecht.
 It is used to measure the functionality delivered by the system.
Function Points can be used:
 1. To estimate the cost or effort required to design, code and test the software.
 2. To predict the number of errors that will be encountered during testing.
 3. To forecast the number of components or the number of projected source lines in the system.
Function Point is computed by establishing relationship between those features of software
which are easily measurable.
These function points are listed below:
1) Number of external inputs (EIS): Each external input starts from a user or is given
from other application.
2) Number of external outputs (EOS): Each external output is derived from application
and provides information to the user.
3) Number of external Inquiries (EQS): It is defined as an input that results in the
generation of immediate software response in the form of output.
4) Number of Internal Logical Files (ILF): A logical grouping of data that resides within the application boundary and is maintained via external inputs.
5) Number of external interface files (EIFS): A logical grouping of data that is external to the application but provides data that may be used by the application.
Each parameter is classified as simple, average or complex, and weights are assigned as follows:

Information Domain Count    Simple    Average    Complex
EIS                            3          4          6
EOS                            4          5          7
EQS                            3          4          6
ILFS                           7         10         15
EIFS                           5          7         10

The function point value is then computed as:

FP = count total × [0.65 + 0.01 × ∑(Fi)]

where count total is the sum of all weighted entries obtained from the above table, and Fi (i = 1 to 14) are value adjustment factors. Each Fi is answered using a scale from 0 (not applicable or not important) to 5 (essential) for 14 questions about the system.
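A sketch of the computation follows. For simplicity it assumes every information-domain parameter is rated at the same complexity level and uses made-up counts and adjustment-factor answers; in practice each parameter is classified individually:

# Weighting factors from the table above: (simple, average, complex).
WEIGHTS = {
    "EIS":  (3, 4, 6),    # external inputs
    "EOS":  (4, 5, 7),    # external outputs
    "EQS":  (3, 4, 6),    # external inquiries
    "ILFS": (7, 10, 15),  # internal logical files
    "EIFS": (5, 7, 10),   # external interface files
}

def function_points(counts, complexity, value_adjustment_factors):
    """FP = count_total * [0.65 + 0.01 * sum(Fi)]."""
    idx = {"simple": 0, "average": 1, "complex": 2}[complexity]
    count_total = sum(counts[p] * WEIGHTS[p][idx] for p in counts)
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

# Illustrative counts and fourteen Fi answers (each rated 0..5).
counts = {"EIS": 10, "EOS": 8, "EQS": 6, "ILFS": 4, "EIFS": 2}
fi = [3] * 14  # assume every adjustment question is answered "3"

print(round(function_points(counts, "average", fi), 1))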
METRIC FOR SOURCE CODE:
• Halstead’s theory of Software Science (HSS) proposes primitive measures for source code.
• These primitive measures are derived after the code is generated, or estimated once the design is complete.
The measures are:
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences.
• N2 = the total number of operand occurrence.
Overall program length N and program volume V can be computed:
• N = n1 log2 n1 + n2 log2 n2
• V = N log2 (n1 + n2)
• V will vary with the programming language and represents the volume of information (in bits) required to specify a program.
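A minimal sketch of these two formulas; the operator and operand counts are illustrative, not taken from any particular program:

import math

def halstead_length_and_volume(n1, n2):
    """N = n1*log2(n1) + n2*log2(n2);  V = N * log2(n1 + n2)."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)
    V = N * math.log2(n1 + n2)
    return N, V

# Illustrative counts: 10 distinct operators, 15 distinct operands.
N, V = halstead_length_and_volume(n1=10, n2=15)
print(f"estimated length N = {N:.1f}, volume V = {V:.1f} bits")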

METRIC FOR TESTING:


• Metrics for the testing focus on the process of testing.
• Testers must rely on analysis, design, and code metrics to guide them in design and
execution of test cases.
Testing metrics fall into two categories:
• 1. Metrics that attempt to predict the likely number of tests required at the various testing levels.
• 2. Metrics that focus on test coverage for given component.
Halstead Metrics applied to testing:
• Testing effort can be estimated using metrics derived from Halstead measures.
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences.
• N2 = the total number of operand occurrence.
Using program volume V, the program level PL and the effort e can be calculated, and the percentage of overall testing effort to allocate to a module k can then be estimated:
• PL = 1 / [(n1/2) × (N2/n2)]
• e = V / PL
• Percentage of testing effort (k) = e(k) / ∑e(i), where e(k) is computed for module k using the above equations and the summation is taken over all modules of the system.
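A sketch that continues the Halstead example, computing effort for two hypothetical modules (with invented counts) and then the percentage of testing effort to allocate to each:

import math

def halstead_effort(n1, n2, N2):
    """Program level PL = 1 / [(n1/2) * (N2/n2)]; effort e = V / PL."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)
    V = N * math.log2(n1 + n2)
    PL = 1.0 / ((n1 / 2.0) * (N2 / n2))
    return V / PL

# Two hypothetical modules with illustrative counts.
e = [halstead_effort(n1=10, n2=15, N2=60),
     halstead_effort(n1=12, n2=20, N2=95)]

total = sum(e)
for k, effort in enumerate(e, start=1):
    print(f"module {k}: {100 * effort / total:.1f}% of testing effort")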

METRICS FOR MAINTENANCE:


• Metrics designed for maintenance activities have been proposed.
• An IEEE standard suggests a Software Maturity Index (SMI) that provides an indication of the stability of a software product.
Following information is determined:
• Mt = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added.
• Fd = the number of modules from the preceding release that were deleted in the current
release
The Software Maturity Index, SMI, is defined as:
• SMI = [Mt – (Fa + Fc + Fd)] / Mt
• As SMI approaches 1.0, the product begins to stabilize.
• SMI can be used as a metric for planning software maintenance activities.
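A minimal sketch of the SMI computation with made-up module counts for a current release:

def software_maturity_index(mt, fa, fc, fd):
    """SMI = [Mt - (Fa + Fc + Fd)] / Mt."""
    return (mt - (fa + fc + fd)) / mt

# Illustrative counts: 120 modules, 6 added, 10 changed, 4 deleted.
smi = software_maturity_index(mt=120, fa=6, fc=10, fd=4)
print(f"SMI = {smi:.2f}")   # approaches 1.0 as the product stabilizes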

METRICS FOR DESIGN MODEL:


• It would be unacceptable for the design of a new aircraft, a new computer chip or a new building to be carried out without defining design measures and determining metrics for various aspects of design quality.
• In the same way, software design without measurement is unacceptable.
1. Architectural design metrics:
• These metrics focus on characteristics of the program architecture.
• They emphasize the structure and effectiveness of modules and components.
• The DSQI (Design Structure Quality Index) was proposed by the US Air Force.
• The DSQI ranges in value from 0 to 1.
• Information from the data and architectural design is used to derive the DSQI.
The following values S1 to S7 are determined in order to compute the DSQI:
• S1:Total number of modules
• S2: Number of modules whose function depends on the input data.
• S3: Number of modules whose function depends on prior processing.
• S4:Number of data base items
• S5:Number of unique database items
• S6: Number of database segments(different records or individual objects)
• S7: Number of modules with single entry and exit.
Calculate intermediate values D1 to D6 from S1 to S7 as follows:
• D1 (program structure) = 1 if a standard design method (e.g. OOD, data flow-oriented design) was followed, otherwise D1 = 0
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entrance/exit characteristic) = 1 - (S7/S1)

DSQI is calculated as:

• DSQI = ∑(Wi × Di), where i = 1 to 6, Wi is the relative weight of each intermediate value, and ∑Wi = 1 (if all Di are weighted equally, each Wi = 0.167).
• The DSQI value for previous designs can be compared with that of the current design.
• If the DSQI is significantly lower than the average, further design work and review are indicated.
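A sketch of the DSQI computation; the S1 to S7 counts below are invented for illustration and all six intermediate values are weighted equally:

def dsqi(s1, s2, s3, s4, s5, s6, s7, standard_design=True,
         weights=(0.167,) * 6):
    """Design Structure Quality Index from intermediate values D1..D6."""
    d = [
        1.0 if standard_design else 0.0,  # D1: program structure
        1.0 - s2 / s1,                    # D2: module independence
        1.0 - s3 / s1,                    # D3: modules not dependent on prior processing
        1.0 - s5 / s4,                    # D4: database size
        1.0 - s6 / s4,                    # D5: database compartmentalization
        1.0 - s7 / s1,                    # D6: module entrance/exit characteristic
    ]
    return sum(w * di for w, di in zip(weights, d))

# Illustrative design counts.
print(round(dsqi(s1=50, s2=10, s3=5, s4=200, s5=120, s6=15, s7=40), 3))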

2. Metrics for Object oriented Design:


• For OO systems, nine distinct and measurable characteristics of an OO design are described:
Size: It is defined in terms of four views:
• 1. Population: It is measured by taking static count of OO entities like classes or
operations.
• 2.Volume : It is collected dynamically at a given instant of time
• 3.Length : It is a measure of a chain of interconnected design elements(depth of the
interconnected design elements)
• 4. Functionality: This metric provide indirect indication of value delivered to the
customer.
Complexity: Complexity is viewed by examining how classes of an OO design are related to
one another.
Coupling: Physical connections between elements of OO design(e.g. number of messages
passed between objects)
Sufficiency: The degree to which an abstraction possesses the features required of it or what
properties does this abstraction (class) need to possess to be useful to me?
Primitiveness: The degree to which an operation is atomic, i.e. it cannot be constructed out of a sequence of other operations within the class.
Similarity: Degree to which two or more classes are similar in terms of structure, function,
behavior, purpose.
Volatility: Measures the likelihood that a change will occur.

3. Class oriented Metrics – The CK suite:


4. Class oriented Metrics – The MOOD metrics suite:
5. OO metrics proposed by Lorenz and Kidd:
6. Component level design Metrics:
7. Operation oriented Metrics:
8. User interface Design Metrics:

METRICS FOR PROCESS AND PROJECTS:


Process metrics:
• It can be used to improve software development and maintenance.
• They provide set of process indicators that gives long term software process
improvement.
Project metrics enable software project manager to:
• 1. Assess the status of ongoing project
• 2. Track potential risks
• 3. Uncover problem areas before they become critical
• 4. Adjust work flow or tasks
• 5. Evaluate team’s ability to control quality of software products.

Process Metrics and software process improvement:


• Only way to improve any process is to measure attributes of process, develop meaningful
metrics based on these attributes and then use metrics to provide indicators that will lead
to improvement
• Process is only one of the factors to improve quality and performance of software or
organization.
Process can be seen as a triangle connecting three factors that influence
software quality and performance:
• People, Product, and technology.
• The skill and motivation of people is the single most influential factor in quality and performance.
• The complexity of the product can also have a substantial impact on quality and performance.
• The technology (i.e. SE methods and tools) also has impact.
• Also the triangle include development environment (e.g. CASE tools), business
conditions (e.g. deadlines, rules), and customer characteristics (e.g. communicating,
collaborating).
• We measure the efficiency of software process indirectly.
• It means, we derive metrics based on the outcomes from the process.
• Outcomes include measures of errors found before release, defects reported by users, delivered work products (output), schedule conformance, etc.
• We also derive metrics by measuring characteristics of some particular tasks.
• For e.g. we measure time and effort spent in performing SE activities.
• Process metrics can provide benefit as an organization works to improve its overall level
of maturity.
Grady suggests following software process metrics etiquette for both
managers and practitioners:
• Use common sense when interpreting data.
• Provide regular feedback to individuals and teams who collect measures and metrics.
• Don’t use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and to define the metrics that will be used to achieve those goals.
• Never use metrics to threaten individual or team.
Project Metrics:
The intent of project metrics is:
• To minimize the development schedule by avoiding delays and mitigating potential problems and risks.
• Project metrics are used to assess product quality on ongoing basis and modify the
approach to improve quality.
• As quality improves, defects are minimized, the defect count goes down, rework during the project is reduced, and the overall project cost is reduced.

SOFTWARE MEASUREMENT:
Software measurement can be categorized as:
1) Direct Measure
2) Indirect Measure
Direct Measurement:
 Direct measure of software process includes cost and effort.
 Direct measure of product includes lines of code, Execution speed, memory size, defects
per reporting time period.
Indirect Measurement:
 Indirect measures examine the quality of the software product itself (e.g. functionality, complexity, efficiency, reliability and maintainability).
Reasons for measurement:
1. To gain baseline for comparison with future assessment
2. To determine status with respect to plan
3. To predict the size, cost and duration estimate
4. To improve the product quality and process improvement

The metrics in software Measurement are:


 Size oriented metrics
 Function oriented metrics
 Object oriented metrics
 Web based application metric
1. Size Oriented Metrics:
• It is concerned entirely with measuring the size of the software.
• A software company maintains a simple record for calculating the size of the software.
• It includes LOC, effort, cost ($), pages of documentation, errors, defects and people.
2. Function oriented metrics:
• Measures the functionality derived by the application
• The most widely used function oriented metric is Function point
• Function point is independent of programming language
• Measures functionality from user point of view
3. Object oriented metric:
• Relevant for object oriented programming
• Based on the following :
 Number of scenarios(Similar to use cases)
 Number of key classes
 Number of support classes
 Number of average support class per key class
 Number of subsystem
4. Web based application metric:
• Metrics related to web based application measure the following
1. Number of static pages(NSP)
2. Number of dynamic pages(NDP)
Customization (C) = NSP / (NSP + NDP)
C should approach 1

METRICS FOR SOFTWARE QUALITY:


Measuring Software Quality:
1. Correctness=defects/KLOC
2. Maintainability=MTTC(Mean-time to change)
3. Integrity = ∑ [1 – (threat × (1 – security))]
Threat: Probability that an attack of specific type will occur within a given time.
Security: Probability that an attack of a specific type will be repelled.
4. Usability: Ease of use
5. Defect Removal Efficiency(DRE)
DRE=E/ (E+D)
E is the no. of errors found before delivery and D is no. of defects reported after delivery
Ideal value of DRE is 1.
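A minimal sketch of the correctness and DRE computations with illustrative counts:

def defect_removal_efficiency(errors_before_delivery, defects_after_delivery):
    """DRE = E / (E + D); the ideal value is 1.0."""
    e, d = errors_before_delivery, defects_after_delivery
    return e / (e + d)

def correctness(defects, kloc):
    """Correctness = defects / KLOC."""
    return defects / kloc

# Illustrative project data: 95 errors found before delivery, 5 defects after.
print(f"DRE = {defect_removal_efficiency(95, 5):.2f}")                    # 0.95
print(f"correctness = {correctness(defects=12, kloc=40):.2f} defects/KLOC")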
