Unit 3

Software design transforms user requirements into a suitable form for implementation, focusing on correctness, completeness, efficiency, flexibility, consistency, and maintainability. Key principles include problem partitioning, abstraction, and modularity, which help manage complexity and improve software quality. Coupling and cohesion are critical concepts, with low coupling and high cohesion being desirable for effective software design.


Software Design

Software design is the process of transforming user requirements into a form that helps the
programmer in software coding and implementation. It takes the client's requirements, as
described in the SRS (Software Requirement Specification) document, and represents them in a
form that can be readily implemented in a programming language.

The software design phase is the first step in the SDLC (Software Development Life Cycle) that
moves the focus from the problem domain to the solution domain. In software design, we
consider the system to be a set of components or modules with clearly defined behaviors and
boundaries.

Objectives of Software Design


Following are the purposes of Software design:

1. Correctness: The design should be correct as per the requirements.
2. Completeness: The design should include all components, such as data structures, modules,
and external interfaces.
3. Efficiency: The program should use resources efficiently.
4. Flexibility: The design should be able to accommodate changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained by
other designers.

Software Design Principles


Software design principles are concerned with providing means to handle the complexity of the
design process effectively. Effectively managing the complexity will not only reduce the effort
needed for design but can also reduce the scope of introducing errors during design.

Following are the principles of Software Design

Problem Partitioning

For a small problem, we can handle the entire problem at once, but for a significant problem we
divide and conquer: the problem is broken into smaller pieces so that each piece can be handled
separately.

For software design, the goal is to divide the problem into manageable pieces.

Benefits of Problem Partitioning

1. Software is easy to understand


2. Software becomes simple
3. Software is easy to test
4. Software is easy to modify
5. Software is easy to maintain
6. Software is easy to expand

These pieces cannot be entirely independent of each other as they together form the system. They
have to cooperate and communicate to solve the problem. This communication adds complexity.

Note: As the number of partitions increases, the cost of partitioning and the overall complexity also increase.

Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level
without bothering about the internal details of its implementation. Abstraction can be applied to
existing elements as well as to components being designed.

Here, there are two common abstraction mechanisms

1. Functional Abstraction
2. Data Abstraction

Functional Abstraction

i. A module is specified by the function it performs.


ii. The details of the algorithm to accomplish the functions are not visible to the user of the function.

Functional abstraction forms the basis for Function oriented design approaches.

Data Abstraction
Details of the data elements are not visible to the users of data. Data Abstraction forms the basis
for Object Oriented design approaches.

Modularity

Modularity refers to the division of software into separate modules, which are separately named
and addressed and are later integrated to obtain the complete, functional software. It is the only
property that allows a program to be intellectually manageable: single large programs are
difficult to understand and read due to the large number of reference variables, control paths,
global variables, etc.

The desirable properties of a modular system are:

o Each module is a well-defined system that can be used with other applications.
o Each module has a single, specified objective.
o Modules can be separately compiled and stored in a library.
o Modules should be easier to use than to build.
o Modules are simpler from the outside than from the inside.

Advantages and Disadvantages of Modularity


In this topic, we will discuss the various advantages and disadvantages of modularity.

Advantages of Modularity

There are several advantages of Modularity

o It allows large programs to be written by several different people.
o It encourages commonly used routines to be placed in a library and used by other programs.
o It simplifies the overlay procedure for loading a large program into main storage.
o It provides more checkpoints for measuring progress.
o It provides a framework for complete testing and makes programs more accessible to test.
o It produces well-designed and more readable programs.
Disadvantages of Modularity

There are several disadvantages of Modularity

o Execution time may be, though not necessarily, longer.
o Storage size may be, though not necessarily, increased.
o Compilation and loading time may be longer.
o Inter-module communication problems may be increased.
o More linkage is required, run time may be longer, more source lines must be written, and more
documentation has to be produced.

Modular Design
Modular design reduces design complexity and results in easier and faster implementation by
allowing parallel development of the various parts of a system. We discuss the different aspects
of modular design in detail in this section:

1. Functional Independence: Functional independence is achieved by developing functions that
perform only one kind of task and do not excessively interact with other modules. Independence
is important because it makes implementation easier and faster. Independent modules are easier
to maintain and test, reduce error propagation, and can be reused in other programs as well.
Thus, functional independence is a good design feature that ensures software quality.

It is measured using two criteria:

o Cohesion: It measures the relative functional strength of a module.
o Coupling: It measures the relative interdependence among modules.
2. Information Hiding: The principle of information hiding suggests that modules should be
characterized by design decisions that are hidden from all other modules. In other words,
modules should be specified and designed so that the data contained within a module is
inaccessible to other modules that have no need for such information.

The use of information hiding as a design criterion for modular systems provides its greatest
benefits when modifications are required during testing and, later, during software maintenance.
Because most data and procedures are hidden from other parts of the software, inadvertent
errors introduced during modification are less likely to propagate to other locations within the
software.
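As a sketch of this principle in Python (the Inventory example is hypothetical, not from the text), the module below hides its internal data structure behind a small public interface, so the stored representation can change without affecting clients:

```python
class Inventory:
    """A module whose internal storage is hidden from its clients."""

    def __init__(self):
        # The internal representation (a dict) is a hidden design decision;
        # clients never touch it directly, so it can be replaced freely.
        self._items = {}

    def add(self, name, qty):
        """Record qty more units of an item."""
        self._items[name] = self._items.get(name, 0) + qty

    def quantity(self, name):
        """Report how many units of an item are held (0 if none)."""
        return self._items.get(name, 0)


inv = Inventory()
inv.add("bolt", 10)
inv.add("bolt", 5)
print(inv.quantity("bolt"))  # prints 15
```

Because other modules can only call add and quantity, a later switch from a dict to, say, a database table would not propagate changes anywhere else.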
Strategy of Design

A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, easy to change. Structured design methods help developers deal with
the size and complexity of programs. Analysts generate instructions for developers about how
code should be composed and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves
up the hierarchy, as shown in fig. This approach is suitable in the case of an existing system.

Coupling and Cohesion


Module Coupling

In software engineering, coupling is the degree of interdependence between software modules.
Two modules that are tightly coupled are strongly dependent on each other, whereas two
modules that are loosely coupled are largely independent. Uncoupled modules have no
interdependence at all between them.

The various types of coupling techniques are shown in fig:

A good design is one that has low coupling. Coupling is measured by the number of relations
between modules: coupling increases as the number of calls between modules increases or as the
amount of shared data grows. Thus, it can be said that a design with high coupling will have
more errors.

Types of Module Coupling

1. No Direct Coupling: There is no direct coupling between M1 and M2. In this case, the
modules are subordinate to different modules; therefore, there is no direct coupling.

2. Data Coupling: When data of one module is passed to another module, this is called data
coupling.

3. Stamp Coupling: Two modules are stamp coupled if they communicate using composite data
items such as structures, objects, etc. When a module passes a non-global data structure, or an
entire structure, to another module, they are said to be stamp coupled. For example, passing a
structure variable in C or an object in C++ to a module.

4. Control Coupling: Control coupling exists between two modules if data from one module is
used to direct the order of instruction execution in another.

5. External Coupling: External coupling arises when two modules share an externally imposed
data format, communication protocol, or device interface. This is related to communication with
external tools and devices.

6. Common Coupling: Two modules are common coupled if they share information through
some global data items.
7. Content Coupling: Content Coupling exists among two modules if they share code, e.g., a
branch from one module into another module.
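The difference between common coupling and data coupling can be sketched in Python (the pricing functions are hypothetical examples, not from the text):

```python
# Common coupling (undesirable): modules communicate through a global.
tax_rate = 0.2  # global data item shared by any module that reads it

def net_price_common(price):
    # Hidden dependence on the global: changing tax_rate anywhere
    # silently changes this function's behavior.
    return price * (1 + tax_rate)

# Data coupling (desirable): everything the module needs is passed in.
def net_price_data(price, rate):
    # The call site makes the dependency explicit and testable.
    return price * (1 + rate)

print(net_price_common(100.0))
print(net_price_data(100.0, 0.2))
```

The data-coupled version can be understood, tested, and reused in isolation, while the common-coupled version can only be understood by also tracking every writer of tax_rate.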

Module Cohesion
In computer programming, cohesion refers to the degree to which the elements of a module
belong together. Thus, cohesion measures the strength of the relationships between pieces of
functionality within a given module. For example, in highly cohesive systems, functionality is
strongly related.

Cohesion is an ordinal type of measurement and is generally described as "high cohesion" or


"low cohesion."

Types of Modules Cohesion

1. Functional Cohesion: Functional cohesion is said to exist if the different elements of a module
cooperate to achieve a single function.
2. Sequential Cohesion: A module is said to possess sequential cohesion if its elements form the
components of a sequence, where the output from one component of the sequence is the input to
the next.
3. Communicational Cohesion: A module is said to have communicational cohesion if all tasks of
the module refer to or update the same data structure, e.g., the set of functions defined on an array
or a stack.
4. Procedural Cohesion: A module is said to possess procedural cohesion if its functions are all
parts of a procedure in which a particular sequence of steps has to be carried out to achieve a
goal, e.g., the algorithm for decoding a message.
5. Temporal Cohesion: When a module includes functions that are associated by the fact that all
of them must be executed within the same time span, the module is said to exhibit temporal
cohesion.
6. Logical Cohesion: A module is said to be logically cohesive if all the elements of the module
perform similar operations, e.g., error handling, data input, and data output.
7. Coincidental Cohesion: A module is said to have coincidental cohesion if it performs a set of
tasks that are related to each other very loosely, if at all.
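The strongest and weakest forms can be contrasted in Python (both functions are hypothetical examples): standard_deviation is functionally cohesive because every statement serves one computation, while misc_utilities is coincidentally cohesive, merely bundling unrelated tasks:

```python
# Functional cohesion: every element cooperates to achieve one function.
def standard_deviation(values):
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return variance ** 0.5

# Coincidental cohesion: unrelated tasks thrown into one module/function.
def misc_utilities(text, values):
    banner = text.upper()   # a text-formatting task...
    total = sum(values)     # ...and an arithmetic task, with no relation
    return banner, total

print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9]))  # prints 2.0
```

The cohesive function has one reason to change and one obvious name; the coincidental one can only be described by listing everything it happens to do.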

Differentiate between Coupling and Cohesion


o Coupling is also called Inter-Module Binding; cohesion is also called Intra-Module Binding.
o Coupling shows the relationships between modules; cohesion shows the relationships within a
module.
o Coupling shows the relative independence between the modules; cohesion shows the module's
relative functional strength.
o While creating, you should aim for low coupling, i.e., dependency among modules should be
low; you should aim for high cohesion, i.e., a cohesive component/module focuses on a single
function (i.e., single-mindedness) with little interaction with other modules of the system.
o In coupling, modules are linked to other modules; in cohesion, the module focuses on a single
thing.

Function Oriented Design


Function-oriented design is an approach to software design in which the system is decomposed
into a set of interacting units or modules, each of which has a clearly defined function. Thus, the
system is designed from a functional viewpoint.

Design Notations
Design Notations are primarily meant to be used during the process of
design and are used to represent design or design decisions. For a function-
oriented design, the design can be represented graphically or
mathematically by the following:

Data Flow Diagram


Data-flow design is concerned with designing a series of functional transformations that convert
system inputs into the required outputs. The design is described as data-flow diagrams. These
diagrams show how data flows through a system and how the output is derived from the input
through a series of functional transformations.

Data-flow diagrams are a useful and intuitive way of describing a system. They are generally
understandable without specialized training, notably if control information is excluded. They
show end-to-end processing: the flow of processing from the point where data enters the system
to the point where it leaves can be traced.

Data-flow design is an integral part of several design methods, and most CASE tools support
data-flow diagram creation. Different methods may use different icons to represent data-flow
diagram entities, but their meanings are similar.

The notation which is used is based on the following symbols:


The report generator produces a report which describes all of the named entities in a data-flow
diagram. The user inputs the name of the design represented by the diagram. The report
generator then finds all the names used in the data-flow diagram. It looks up a data dictionary
and retrieves information about each name. This is then collated into a report which is output by
the system.
Data Dictionaries
A data dictionary lists all data elements appearing in the DFD model of a system. The data items
listed include all data flows and the contents of all data stores appearing on the DFDs in the DFD
model of the system.

A data dictionary lists the purpose of all data items and the definition of all composite data
elements in terms of their component data items. For example, a data dictionary entry may
specify that the data item grossPay consists of the components regularPay and overtimePay.

grossPay = regularPay + overtimePay

For the smallest units of data elements, the data dictionary lists their name and their type.

A data dictionary plays a significant role in any software development process because of the
following reasons:

o A data dictionary provides a standard vocabulary for all relevant information, for use by the
engineers working on a project. A consistent vocabulary for data items is essential because, in
large projects, different engineers tend to use different terms to refer to the same data, which
causes unnecessary confusion.
o The data dictionary provides the analyst with a means to determine the definition of various data
structures in terms of their component elements.
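A minimal sketch of how such entries could be held in Python, using the grossPay example above (the dict-based representation is illustrative, not a standard):

```python
# Each entry maps a data item to its definition: composite items list
# their component parts; leaf items record only their type.
data_dictionary = {
    "grossPay":    {"composed_of": ["regularPay", "overtimePay"]},
    "regularPay":  {"type": "decimal"},
    "overtimePay": {"type": "decimal"},
}

def components(item):
    """Return the component data items of an entry ([] for a leaf item)."""
    return data_dictionary[item].get("composed_of", [])

print(components("grossPay"))  # prints ['regularPay', 'overtimePay']
```

An analyst can walk such a structure to expand any composite item down to its smallest units, which is exactly the determination described above.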

Structured Charts
It partitions a system into black boxes. A black box is a system whose functionality is known to
the user without knowledge of its internal design.

Structured Chart is a graphical representation which shows:

o System partitions into modules


o Hierarchy of component modules
o The relation between processing modules
o Interaction between modules
o Information passed between modules
The following notations are used in structured chart:

Pseudo-code
Pseudo-code notations can be used in both the preliminary and detailed design phases. Using
pseudo-code, the designer describes system characteristics in short, concise English-language
phrases that are structured by keywords such as If-Then-Else, While-Do, and End.
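As an illustration (this routine is invented for the example, not taken from the text), a small record-processing step written in this notation might read:

```
Initialize total to 0
While-Do there are unread records
    Read the next record
    If the record is valid Then
        Add the record amount to total
    Else
        Write the record to the error log
    End If
End While
Print total
```

The keywords give the description enough structure to be translated mechanically into code, while the English phrases keep it readable during design reviews.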

Object-Oriented Design
In the object-oriented design method, the system is viewed as a collection of objects (i.e.,
entities). The state is distributed among the objects, and each object handles its state data. For
example, in a Library Automation Software, each library representative may be a separate object
with its own data and functions that operate on that data. The tasks defined for one object cannot
refer to or change the data of other objects. Objects have their own internal data, which
represents their state.
Similar objects create a class. In other words, each object is a member of some class. Classes
may inherit features from the superclass.

The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects. For example,
person, banks, company, and users are considered as objects. Every entity has some
attributes associated with it and has some methods to perform on the attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity
of the target object, the name of the requested operation, and any other information needed
to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essential.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods from
the outside world.
6. Inheritance: OOD allows similar classes to be organized in a hierarchy where the
lower classes, or subclasses, can import, implement, and reuse permitted variables and
functions from their immediate superclasses. This property of OOD is called inheritance.
It makes it easier to define a specialized class and to create generalized classes from
specific ones.
7. Polymorphism: OOD languages provide a mechanism by which methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, and it allows a single interface to perform functions for different types.
Depending upon how the service is invoked, the respective portion of the code gets
executed.

Software Metrics in Software Engineering


A software metric is a measure of software characteristics which are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.

Within the software development process, there are many metrics, and they are all connected.
Software metrics relate to the four functions of management: planning, organization, control,
and improvement.

Classification of Software Metrics


Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:

1. Size and complexity of software.


2. Quality and reliability of software.

These metrics can be computed for different stages of SDLC.

2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.

Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.

External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, such as time and
cost; these estimates are used as a baseline for new software. As the project proceeds, the project
manager checks its progress from time to time and compares the actual effort, cost, and time
with the original estimates. These metrics are also used to decrease development cost, time,
effort, and risk. Project quality can also be improved; as quality improves, the number of errors,
as well as the time and cost required, is reduced.

Advantage of Software Metrics


 Comparative study of various design methodologies of software systems.
 For analysis, comparison, and critical study of different programming languages with
respect to their characteristics.
 In comparing and evaluating the capabilities and productivity of people involved in
software development.
 In the preparation of software quality specifications.
 In the verification of compliance of software systems with requirements and specifications.
 In making inferences about the effort to be put into the design and development of
software systems.
 In getting an idea of the complexity of the code.
 In deciding whether further division of a complex module is required.
 In guiding resource managers toward proper resource utilization.
 In comparing and making design trade-offs between software development and
maintenance costs.
 In providing feedback to software managers about progress and quality during the various
phases of the software development life cycle.
 In the allocation of testing resources for testing the code.

Disadvantage of Software Metrics


 The application of software metrics is not always easy, and in some cases it is difficult
and costly.
 The verification and justification of software metrics are based on historical/empirical
data whose validity is difficult to verify.
 They are useful for managing software products but not for evaluating the performance
of the technical staff.
 The definition and derivation of software metrics are usually based on assumptions that
are not standardized and may depend upon the tools available and the working environment.
 Most of the predictive models rely on estimates of certain variables which are often not
known precisely.

Size Oriented Metrics


LOC Metrics
It is one of the earliest and simplest metrics for calculating the size of a computer program. It is
generally used in calculating and comparing the productivity of programmers. These metrics are
derived by normalizing quality and productivity measures against the size of the product.

Following are the points regarding LOC measures:

1. In size-oriented metrics, LOC is considered to be the normalization value.
2. It is an older method that was developed when FORTRAN and COBOL programming
were very popular.
3. Productivity is defined as KLOC / EFFORT, where effort is measured in person-months.
4. Size-oriented metrics depend on the programming language used.
5. Because productivity is based on KLOC, assembly language code, which needs more
lines for the same functionality, will appear to have higher productivity.
6. The LOC measure requires a level of detail which may not be practically achievable.
7. The more expressive the programming language, the lower the apparent productivity.
8. The LOC method of measurement does not apply well to projects that deal with visual
(GUI-based) programming, since Graphical User Interfaces (GUIs) are built mainly from
forms; the LOC metric is not applicable there.
9. It requires that all organizations use the same method for counting LOC. This is
because some organizations count only executable statements while others also count
comments, so a standard needs to be established.
10. These metrics are not universally accepted.

Based on the LOC/KLOC count of software, many other metrics can be computed:

a. Errors/KLOC.
b. $/KLOC.
c. Defects/KLOC.
d. Pages of documentation/KLOC.
e. Errors/PM.
f. Productivity = KLOC/PM (effort is measured in person-months).
g. $/Page of documentation.
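A sketch of how these derived metrics fall out of the raw counts, using hypothetical project figures (all numbers below are invented for illustration):

```python
# Hypothetical project figures (illustrative only).
loc = 33_600           # total lines of code
effort_pm = 12         # effort in person-months
errors = 134
cost_dollars = 168_000
doc_pages = 310

kloc = loc / 1000
productivity = kloc / effort_pm        # KLOC per person-month
errors_per_kloc = errors / kloc
cost_per_kloc = cost_dollars / kloc
doc_pages_per_kloc = doc_pages / kloc

print(f"Productivity   : {productivity:.2f} KLOC/PM")
print(f"Errors/KLOC    : {errors_per_kloc:.2f}")
print(f"$/KLOC         : {cost_per_kloc:.2f}")
print(f"Doc pages/KLOC : {doc_pages_per_kloc:.2f}")
```

Note how every figure is normalized by KLOC, which is exactly why the comparisons are only meaningful between projects counted the same way in comparable languages.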
Advantages of LOC
1. Simple to measure

Disadvantage of LOC
1. It is defined on the code; for example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length, and takes no account of
functionality or complexity.
3. Bad software design may cause an excessive number of lines of code.
4. It is language dependent.
5. Users cannot easily understand it.

Halstead's Software Metrics


According to Halstead, "A computer program is an implementation of an algorithm considered
to be a collection of tokens which can be classified as either operators or operands."

Token Count
In these metrics, a computer program is considered to be a collection of tokens, which may be
classified as either operators or operands. All software science metrics can be defined in terms
of these basic symbols, which are called tokens.

The basic measures are

n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrences of operands.

In terms of the total tokens used, the size of the program can be expressed as N = N1 + N2.

Halstead metrics are:


Program Volume (V)

The unit of measurement of volume is the standard unit for size "bits." It is the actual size of a
program if a uniform binary encoding for the vocabulary is used.

V = N * log2(n)

Program Level (L)


The value of L ranges between zero and one, with L=1 representing a program written at the
highest possible level (i.e., with minimum size).

L = V* / V

Program Difficulty

The difficulty level or error-proneness (D) of the program is proportional to the number of
unique operators in the program.

D= (n1/2) * (N2/n2)

Programming Effort (E)

The unit of measurement of E is elementary mental discriminations.

E=V/L=D*V

Estimated Program Length

According to Halstead, the first hypothesis of software science is that the length of a well-
structured program is a function only of the number of unique operators and operands.

N=N1+N2

And estimated program length is denoted by N^

N^ = n1 * log2(n1) + n2 * log2(n2)

The following alternate expressions have been published to estimate program length:

o NJ = log2(n1!) + log2(n2!)
o NB = n1 * log2(n2) + n2 * log2(n1)
o NC = n1 * sqrt(n1) + n2 * sqrt(n2)
o NS = (n * log2(n)) / 2
Potential Minimum Volume

The potential minimum volume V* is defined as the volume of the shortest program in which a
problem can be coded.

V* = (2 + n2*) * log2 (2 + n2*)

Here, n2* is the count of unique input and output parameters

Size of Vocabulary (n)


The size of the vocabulary of a program, which consists of the number of unique tokens used to
build a program, is defined as:

n=n1+n2

where

n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands

Language Level (λ) - Shows the level of the programming language in which the algorithm is
implemented. The same algorithm demands additional effort if it is written in a low-level
programming language. For example, it is easier to program in Pascal than in Assembler.

λ = L * V* = L^2 * V = V / D^2
Language levels

Language Language level λ Variance σ

PL/1 1.53 0.92

ALGOL 1.21 0.74

FORTRAN 1.14 0.81

CDC Assembly 0.88 0.42

PASCAL 2.54 -

APL 2.42 -

C 0.857 0.445

Counting rules for C language

1. Comments are not considered.
2. Identifiers and function declarations are not considered.
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program are counted as multiple
occurrences of the same variable.
5. Local variables with the same name in different functions are counted as unique
operands.
6. Function calls are considered as operators.
7. All looping statements, e.g., do {...} while ( ), while ( ) {...}, for ( ) {...}, and all control
statements, e.g., if ( ) {...}, if ( ) {...} else {...}, etc., are considered as operators.
8. In the control construct switch ( ) {case:...}, switch as well as all the case statements
are considered as operators.
9. The reserved words like return, default, continue, break, sizeof, etc., are considered as
operators.
10. All the brackets, commas, and terminators are considered as operators.
11. GOTO is counted as an operator, and the label is counted as an operand.
12. The unary and binary occurrences of "+" and "-" are dealt with separately. Similarly,
the occurrences of "*" as a multiplication operator and as an indirection operator are dealt
with separately.
13. In array variables such as "array-name [index]", "array-name" and "index" are
considered as operands and [ ] is considered an operator.
14. In structure variables such as "struct-name.member-name" or "struct-name ->
member-name", struct-name and member-name are considered as operands and '.' and '->'
are taken as operators. Member names that are the same in different structure variables are
counted as unique operands.
15. All hash (preprocessor) directives are ignored.

Example: Consider the sorting program shown in fig. List the operators and operands
and also calculate the values of the software science measures such as n, N, V, E, λ, etc.
Operators Occurrences Operands Occurrences

int 4 SORT 1

() 5 x 7

, 4 n 3

[] 7 i 8

if 2 j 7

< 2 save 3

; 11 im1 3

for 2 2 2

= 6 1 3

- 1 0 1

<= 2 - -

++ 2 - -

return 2 - -

{} 3 - -

n1=14 N1=53 n2=10 N2=38

Solution: The list of operators and operands is given in the table

Here N1=53 and N2=38. The program length N=N1+N2=53+38=91

Vocabulary of the program n=n1+n2=14+10=24

Volume V = N * log2(n) = 91 x log2(24) = 417 bits.

The estimated program length N^ of the program:

N^ = 14 * log2(14) + 10 * log2(10)
= 14 * 3.81 + 10 * 3.32
= 53.34 + 33.2 = 86.54
Conceptually unique input and output parameters are represented by n2*.

n2* = 3 {x: the array holding the integers to be sorted, used as both input and output;
n: the size of the array to be sorted}

The Potential Volume V* = (2 + 3) * log2(2 + 3) = 5 * log2(5) = 11.6

Since L = V* / V, we may use the alternative estimate of the program level

L^ = (2 / n1) * (n2 / N2) = (2 / 14) * (10 / 38) = 0.038

Estimated potential volume V*^ = V x L^ = 417 x 0.038 = 15.67

E^ = V / L^ = D^ x V = 417 / 0.038 = 10974

Therefore, about 10974 elementary mental discriminations are required to construct the program.

This corresponds to a reasonable amount of time to produce such a simple program.
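The arithmetic above can be checked mechanically. The sketch below recomputes the measures from the token counts in the table; the estimated-level formula L^ = (2/n1) * (n2/N2) is the standard Halstead estimator, and the effort lands close to the rounded hand value:

```python
import math

n1, n2 = 14, 10     # unique operators, unique operands
N1, N2 = 53, 38     # total occurrences of operators and operands

N = N1 + N2                        # program length = 91
n = n1 + n2                        # vocabulary = 24
V = N * math.log2(n)               # volume, about 417 bits
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length, about 86.5

L_hat = (2 / n1) * (n2 / N2)       # estimated level, about 0.038
D = (n1 / 2) * (N2 / n2)           # difficulty = 26.6
E = V / L_hat                      # effort: about 11,098 elementary mental
                                   # discriminations (the hand value 10,974
                                   # uses the rounded V = 417, L^ = 0.038)

print(N, n, round(V), round(N_hat, 2), round(L_hat, 3), round(E))
```

Note that L^ = (2/n1)(n2/N2) is exactly 1/D, so E = V/L^ and E = D*V agree, as the formulas in the text require.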

Functional Point (FP) Analysis


Allan J. Albrecht initially developed Function Point Analysis in 1979 at IBM, and it has since
been further refined by the International Function Point Users Group (IFPUG). FPA is used to
make an estimate of the software project, including its testing, in terms of the functionality or
functional size of the software product. The functional size of the product is measured in
function points, which are a standard unit of measurement for sizing a software application.

Objectives of FPA
The basic and primary purpose of function point analysis is to measure and report the functional size of a software application to the client, customer, and stakeholders on request. It is also used to measure software project development and maintenance consistently throughout the project, irrespective of the tools and technologies used.

Following are the points regarding FPs

1. The FP count of an application is found by counting the number and types of functions used in the application. The various functions can be placed under five types, as shown in the table:

Types of FP Attributes

Measurement Parameter                          Examples
1. Number of External Inputs (EI)              Input screens and tables
2. Number of External Outputs (EO)             Output screens and reports
3. Number of External Inquiries (EQ)           Prompts and interrupts
4. Number of Internal Logical Files (ILF)      Databases and directories
5. Number of External Interface Files (EIF)    Shared databases and shared routines

All these parameters are then individually assessed for complexity.

The FPA functional units are shown in Fig:

2. FP characterizes the complexity of the software system and hence can be used to estimate the project time and the manpower requirement.

3. The effort required to develop the project depends on what the software does.

4. FP is programming language independent.


5. The FP method is used for data processing systems and business systems, such as information systems.

6. The five parameters mentioned above are also known as information domain characteristics.

7. All the parameters mentioned above are assigned some weights that have been experimentally
determined and are shown in Table

Weights of 5-FP Attributes

Measurement Parameter                          Low    Average    High
1. Number of external inputs (EI)               3        4         6
2. Number of external outputs (EO)              4        5         7
3. Number of external inquiries (EQ)            3        4         6
4. Number of internal files (ILF)               7       10        15
5. Number of external interfaces (EIF)          5        7        10

Each function is first assessed for complexity (simple/low, average, or high/complex). The count of each function type is multiplied by the weight corresponding to its complexity, and the resulting values are added up to determine the UFP (Unadjusted Function Point) count of the subsystem.

The Function Point (FP) is thus calculated with the following formula.

FP = Count-total * [0.65 + 0.01 * ∑(fi)]


= Count-total * CAF

where Count-total is obtained from the above Table.

CAF = [0.65 + 0.01 *∑(fi)]

and ∑(fi) is the sum of the answers to all 14 questionnaires (i ranges from 1 to 14); it determines the complexity adjustment value/factor CAF. Usually, a student is provided with the value of ∑(fi).

Also note that ∑(fi) ranges from 0 to 70, i.e.,

0 <= ∑(fi) <=70

and CAF ranges from 0.65 to 1.35 because

a. When ∑(fi) = 0 then CAF = 0.65


b. When ∑(fi) = 70 then CAF = 0.65 + (0.01 * 70) = 0.65 + 0.7 = 1.35

Based on the FP measure of software many other metrics can be computed:

a. Errors/FP
b. $/FP.
c. Defects/FP
d. Pages of documentation/FP
e. Errors/PM.
f. Productivity = FP/PM (effort is measured in person-months).
g. $/Page of Documentation.

8. The LOC of an application can be estimated from its FP count; that is, the two are interconvertible. This process is known as backfiring. For example, 1 FP is equal to about 100 lines of COBOL code.

9. FP metrics are used mostly for measuring the size of Management Information System (MIS) software.

10. The function points obtained above are unadjusted function points (UFPs). The UFPs of a subsystem are further adjusted by considering a set of 14 General System Characteristics (GSCs). The procedure for adjusting UFPs is as follows:
a. The Degree of Influence (DI) of each of the 14 GSCs is assessed on a scale of 0 to 5: if a particular GSC has no influence, its weight is taken as 0, and if it has a strong influence, its weight is 5.
b. The scores of all 14 GSCs are totaled to determine the Total Degree of Influence (TDI).
c. The Value Adjustment Factor (VAF) is then computed from TDI by using the formula: VAF = (TDI * 0.01) + 0.65

Remember that the value of VAF lies within 0.65 to 1.35 because

a. When TDI = 0, VAF = 0.65


b. When TDI = 70, VAF = 1.35

VAF is then multiplied by the UFP to get the final FP count: FP = VAF * UFP
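As a quick check on the bounds above, the VAF computation is a one-liner. The helper below is a hypothetical illustration, not part of any standard tool.

```python
def vaf(gsc_scores):
    """Value Adjustment Factor from the 14 GSC degree-of-influence
    scores, each rated on the 0-5 scale."""
    assert len(gsc_scores) == 14 and all(0 <= s <= 5 for s in gsc_scores)
    tdi = sum(gsc_scores)          # Total Degree of Influence, 0..70
    return tdi * 0.01 + 0.65       # always within [0.65, 1.35]

print(round(vaf([0] * 14), 2))     # 0.65 -- no influence anywhere
print(round(vaf([5] * 14), 2))     # 1.35 -- strong influence everywhere
```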

Example: Compute the function point, productivity, documentation, cost per function for the
following data:

1. Number of user inputs = 24


2. Number of user outputs = 46
3. Number of inquiries = 8
4. Number of files = 4
5. Number of external interfaces = 2
6. Effort = 36.9 p-m
7. Technical documents = 265 pages
8. User documents = 122 pages
9. Cost = $7744/ month

Various processing complexity factors are: 4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5.

Solution:

Measurement Parameter                         Count x Weight = FP
1. Number of external inputs (EI)             24 x 4  =  96
2. Number of external outputs (EO)            46 x 4  = 184
3. Number of external inquiries (EQ)           8 x 6  =  48
4. Number of internal files (ILF)              4 x 10 =  40
5. Number of external interfaces (EIF)         2 x 5  =  10
                                              Count-total = 378

So the sum of all fi (i ← 1 to 14) = 4 + 1 + 0 + 3 + 3 + 5 + 4 + 4 + 3 + 3 + 2 + 2 + 4 + 5 = 43

FP = Count-total * [0.65 + 0.01 *∑(fi)]


= 378 * [0.65 + 0.01 * 43]
= 378 * [0.65 + 0.43]
= 378 * 1.08 = 408.24 ≈ 408

Total pages of documentation = technical document + user document

= 265 + 122 = 387 pages

Documentation = Pages of documentation/FP

= 387/408 ≈ 0.95

Productivity = FP/Effort = 408/36.9 ≈ 11.1 FP per person-month

Cost per function = Cost per person-month/Productivity = 7744/11.1 ≈ $700
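The whole solution can be reproduced in a few lines of Python. The counts, weights, and fi values are taken directly from the example; the helper name is just for illustration.

```python
def function_point(counts, weights, fi):
    """FP = Count-total * CAF, where CAF = 0.65 + 0.01 * sum(fi)."""
    count_total = sum(c * w for c, w in zip(counts, weights))
    caf = 0.65 + 0.01 * sum(fi)
    return count_total * caf

counts  = [24, 46, 8, 4, 2]              # EI, EO, EQ, ILF, EIF
weights = [4, 4, 6, 10, 5]               # complexity weights used above
fi      = [4, 1, 0, 3, 3, 5, 4, 4, 3, 3, 2, 2, 4, 5]   # sums to 43

fp = function_point(counts, weights, fi)
print(round(fp))                         # 408
print(round(fp / 36.9, 1))               # productivity: FP per person-month
print(round((265 + 122) / fp, 2))        # pages of documentation per FP
```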

Differentiate between FP and LOC


FP                                    LOC
1. FP is specification based.         1. LOC is analogy based.
2. FP is language independent.        2. LOC is language dependent.
3. FP is user-oriented.               3. LOC is design-oriented.
4. It is extendible to LOC.           4. It is convertible to FP (backfiring).

Cyclomatic Complexity
Cyclomatic complexity is a software metric used to measure the complexity of a program. Thomas J. McCabe developed this metric in 1976. McCabe interprets a computer program as a strongly connected directed graph: nodes represent parts of the source code having no branches, and arcs represent possible control-flow transfers during program execution. The notion of a program graph has been used for this measure, which is used to measure and control the number of paths through a program. The complexity of a computer program can be correlated with the topological complexity of its graph.

How to Calculate Cyclomatic Complexity?

McCabe proposed the cyclomatic number, V(G), of graph theory as an indicator of software complexity. The cyclomatic number is equal to the number of linearly independent paths through a program in its graph representation. For a program control graph G, the cyclomatic number V(G) is given as:

V(G) = E - N + 2 * P

where

E = the number of edges in graph G
N = the number of nodes in graph G
P = the number of connected components in graph G
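For instance, the control-flow graph of a single if-else has five nodes (entry, the decision, the two branches, and the exit) and five edges in one connected component, giving V(G) = 2: one decision point plus one. A minimal sketch of the formula:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic number: V(G) = E - N + 2 * P."""
    return edges - nodes + 2 * components

# Hypothetical control-flow graph of one if-else statement:
# entry -> decision, decision -> then, decision -> else,
# then -> exit, else -> exit   (5 nodes, 5 edges, 1 component)
print(cyclomatic_complexity(edges=5, nodes=5))   # 2

# Straight-line code (2 nodes joined by 1 edge) has a single path:
print(cyclomatic_complexity(edges=1, nodes=2))   # 1
```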


Properties of Cyclomatic complexity:


Following are the properties of Cyclomatic complexity:

1. V(G) is the maximum number of linearly independent paths in the graph.
2. V(G) >= 1.
3. G will have only one path if V(G) = 1.
4. It is recommended to keep the complexity of a module no greater than 10; modules that exceed this limit should be simplified or split.
