
UNIT-3

SOFTWARE DESIGN
• Software design is a process to transform user requirements into a suitable form, which helps the programmer in software coding and implementation.
• User requirements are captured in an SRS (Software Requirement Specification) document, whereas coding and implementation need more specific and detailed requirements expressed in software terms. The output of the design process can be used directly in implementation in programming languages.
• Software design is the first step in the SDLC (Software Development Life Cycle) that moves the focus from the problem domain to the solution domain. It tries to specify how to fulfil the requirements mentioned in the SRS.

• Software Design Levels


• Software design yields three levels of results:
• Architectural Design - The architectural design is the highest abstract version of the system. It identifies the software as a system with many components interacting with each other. At this level, the designers get an idea of the proposed solution domain.
• High-level Design - The high-level design breaks the ‘single entity-multiple component’ concept of architectural design into a less-abstracted view of sub-systems and modules and depicts their interaction with each other. High-level design focuses on how the system, along with all of its components, can be implemented in the form of modules. It recognizes the modular structure of each sub-system and their relation and interaction with each other.
• Detailed Design - Detailed design deals with the implementation part of what is seen as a system and its sub-systems in the previous two designs. It is more detailed towards modules and their implementations. It defines the logical structure of each module and their interfaces to communicate with other modules.

• Modularization
• Modularization is a technique to divide a software system into multiple discrete and independent modules, which are expected to be capable of carrying out their tasks independently. These modules may work as basic constructs for the entire software. Designers tend to design modules such that they can be executed and/or compiled separately and independently.
• Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and it brings several other benefits, listed below (a small code sketch follows the list).
• Advantage of modularization:
• Smaller components are easier to maintain
• Program can be divided based on functional aspects
• Desired level of abstraction can be brought in the program
• Components with high cohesion can be re-used
• Concurrent execution can be made possible
• Desirable from a security aspect
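• A small illustrative sketch in Python (the payroll example and all names are hypothetical): one program split into parts that, in a real project, would live in separate modules and could be developed, tested and executed independently.

# In a real project each section below would be its own module (file);
# they are shown together here only so the sketch runs as one script.

# --- tax module: tax rules only ---
def income_tax(gross: float, rate: float = 0.2) -> float:
    """Compute income tax for a gross salary."""
    return gross * rate

# --- report module: formatting only, knows nothing about tax rules ---
def pay_slip(name: str, gross: float, tax: float) -> str:
    """Format a pay slip from already-computed values."""
    return f"{name}: gross={gross:.2f}, tax={tax:.2f}, net={gross - tax:.2f}"

# --- main module: wires the independent pieces together ---
if __name__ == "__main__":
    gross = 3000.0
    tax = income_tax(gross)
    print(pay_slip("A. Programmer", gross, tax))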
• Concurrency
• In the past, software was meant to be executed sequentially. By sequential execution we mean that the coded instructions are executed one after another, implying that only one portion of the program is active at any given time. If a software product has multiple modules, then only one of those modules is active at any point of execution.
• In software design, concurrency is implemented by splitting the software into multiple independent units of execution, such as modules, and executing them in parallel. In other words, concurrency gives the software the capability to execute more than one part of the code in parallel.
• It is necessary for the programmers and designers to recognize those modules which can be executed in parallel.

• Example
• The spell-check feature in a word processor is a module of the software that runs alongside the word processor itself.
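• A minimal Python sketch of this idea (the function names are hypothetical, not a real word processor's API): the spell-check module runs in a background thread while the editor keeps doing its own work.

import threading
import time

def spell_check(text: str) -> None:
    """Hypothetical background module: scan the text word by word."""
    for word in text.split():
        time.sleep(0.1)            # simulate per-word checking work
        print(f"[spell-check] checked: {word}")

def editor_loop() -> None:
    """Hypothetical foreground module: the editor keeps accepting input."""
    for i in range(5):
        time.sleep(0.1)            # simulate the user typing
        print(f"[editor] keystroke {i}")

text = "concurrency lets independent modules run in parallel"
checker = threading.Thread(target=spell_check, args=(text,))
checker.start()                    # spell-check runs alongside the editor loop
editor_loop()
checker.join()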

• Coupling and Cohesion


• When a software program is modularized, its tasks are divided into several modules based on some characteristics. As we know, modules are sets of instructions put together in order to achieve some task. Though considered as single entities, they may refer to each other to work together. There are measures by which the quality of the design of modules and their interaction with each other can be assessed. These measures are called coupling and cohesion.

• Cohesion
• Cohesion is a measure that defines the degree of intra-dependability within the elements of a module. The greater the cohesion, the better the program design. (A code sketch contrasting cohesion levels follows the list below.)
• There are seven types of cohesion, namely –
• Co-incidental cohesion - It is unplanned and random cohesion, which might be the result of breaking the program into smaller modules for the sake of modularization. Because it is unplanned, it may cause confusion for the programmers and is generally not accepted.
• Logical cohesion - When logically categorized elements are put together into a module, it is called
logical cohesion.
• Temporal Cohesion - When elements of a module are organized such that they are processed at a similar point in time, it is called temporal cohesion.
• Procedural cohesion - When elements of a module are grouped together which are executed sequentially in order to perform a task, it is called procedural cohesion.
• Communicational cohesion - When elements of a module are grouped together which are executed sequentially and work on the same data (information), it is called communicational cohesion.
• Sequential cohesion - When elements of a module are grouped because the output of one element serves as input to another and so on, it is called sequential cohesion.
• Functional cohesion - It is considered to be the highest degree of cohesion, and it is highly expected.
Elements of module in functional cohesion are grouped because they all contribute to a single well-
defined function. It can also be reused.
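• A small Python sketch (all helper names are hypothetical) contrasting the two extremes: a functionally cohesive module whose elements all serve one well-defined task, versus a coincidentally cohesive one that lumps unrelated routines together.

import re

# Functional cohesion: every element contributes to one task - validating an email address.
def strip_whitespace(address: str) -> str:
    return address.strip()

def has_valid_syntax(address: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

def is_valid_email(address: str) -> bool:
    return has_valid_syntax(strip_whitespace(address))

# Co-incidental cohesion: unrelated routines grouped only for the sake of modularization.
def print_banner() -> None:
    print("*" * 40)

def square_root(x: float) -> float:
    return x ** 0.5

def delete_temp_files() -> None:
    pass  # unrelated housekeeping task

print(is_valid_email("  user@example.com "))   # True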
• Coupling
• Coupling is a measure that defines the level of inter-dependability among modules of a program. It tells
at what level the modules interfere and interact with each other. The lower the coupling, the better the
program.
• There are five levels of coupling, namely -
• Content coupling - When a module can directly access or modify or refer to the content of another
module, it is called content level coupling.
• Common coupling- When multiple modules have read and write access to some global data, it is called
common or global coupling.
• Control coupling- Two modules are called control-coupled if one of them decides the function of the
other module or changes its flow of execution.
• Stamp coupling- When multiple modules share common data structure and work on different part of
it, it is called stamp coupling.
• Data coupling - Data coupling is when two modules interact with each other by means of passing data (as parameters). If a module passes a data structure as a parameter, then the receiving module should use all its components.
• Ideally, no coupling is considered to be the best.
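• A short Python sketch (hypothetical functions) contrasting data coupling, where modules exchange only parameters and return values, with common (global) coupling, where they communicate through shared global data.

# Data coupling: modules interact only through parameters and return values.
def compute_area(width: float, height: float) -> float:
    return width * height

def print_area(area: float) -> None:
    print(f"area = {area}")

print_area(compute_area(3.0, 4.0))

# Common coupling: both modules read and write the same global data,
# so a change in one can silently break the other.
shared_state = {"width": 3.0, "height": 4.0, "area": None}

def compute_area_global() -> None:
    shared_state["area"] = shared_state["width"] * shared_state["height"]

def print_area_global() -> None:
    print(f"area = {shared_state['area']}")

compute_area_global()
print_area_global()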

• Design Verification
• The output of software design process is design documentation, pseudo codes, detailed logic diagrams,
process diagrams, and detailed description of all functional or non-functional requirements.
• The next phase, which is the implementation of software, depends on all outputs mentioned above.
• It then becomes necessary to verify the output before proceeding to the next phase. The earlier a mistake is detected, the better; otherwise, it might not be detected until the testing of the product. If the outputs of the design phase are in formal notation, then the associated verification tools should be used; otherwise, a thorough design review can be used for verification and validation.
• By structured verification approach, reviewers can detect defects that might be caused by overlooking
some conditions. A good design review is important for good software design, accuracy and quality.

• A Structure Chart represents the hierarchical structure of modules. It breaks down the entire system into the lowest functional modules and describes the functions and sub-functions of each module of the system in greater detail. A Structure Chart partitions the system into black boxes (the functionality of the system is known to the users, but the inner details are unknown). Inputs are given to the black boxes and appropriate outputs are generated.
• Modules at the top level call modules at lower levels. Components are read from top to bottom and left to right. When a module calls another, it views the called module as a black box, passing the required parameters and receiving results.
• Symbols used in construction of structured chart
• Module
It represents a process or task of the system. It is of three types.
• Control Module
A control module branches to more than one sub-module.
• Sub Module
A sub-module is a module which is part (a child) of another module.
• Library Module
Library modules are reusable and can be invoked from any module.


• Conditional Call
It represents that a control module can select any of the sub-modules on the basis of some condition.


• Loop (Repetitive call of module)
It represents the repetitive execution of one or more sub-modules by a module.
A curved arrow represents a loop in the module.


• All the sub-modules covered by the loop are executed repeatedly.
• Data Flow
It represents the flow of data between modules. It is represented by a directed arrow with an empty circle at the end.


• Control Flow
It represents the flow of control between modules. It is represented by a directed arrow with a filled circle at the end.


• Physical Storage
Physical storage is where all the information is stored.


• Example : Structure chart for an Email server

• Types of Structure Chart:
• Transform Centered Structure:
This type of structure chart is designed for systems that receive an input which is transformed by a sequence of operations, each carried out by one module.
• Transaction Centered Structure:
This structure describes a system that processes a number of different types of transactions.

• Pseudocodes
• Pseudocode is an informal description of an algorithm written in a made-up, programming-like notation. It aids in the development of algorithms.
• It does not employ the strict syntax of any real programming language; it is a combination of programming-style constructs and the English language.
• The basis for generating pseudocode is the program description and its functions.
• Pseudocode enables exact algorithm specification without worrying about programming language syntax.
• We can spot errors in the program using pseudocode.
• Pseudocode already includes most of the conceptual material and significantly reduces programming time.
• Pseudocode currently lacks a widely recognized standard.
• By making the DFD sufficiently precise, developers and designers can write pseudocode, a hybrid of English and coding. (A small example follows.)
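• For example, an algorithm to find the largest of three numbers might first be sketched in pseudocode and then translated into code. The Python sketch below carries the (hypothetical) pseudocode as comments above its translation.

# Pseudocode:
#   READ a, b, c
#   SET largest = a
#   IF b > largest THEN largest = b
#   IF c > largest THEN largest = c
#   PRINT largest
def largest_of_three(a: int, b: int, c: int) -> int:
    largest = a
    if b > largest:
        largest = b
    if c > largest:
        largest = c
    return largest

print(largest_of_three(7, 12, 5))   # 12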

• FLOW CHART
• A flowchart is a diagram that depicts a process, system or computer algorithm. They
are widely used in multiple fields to document, study, plan, improve and
communicate often complex processes in clear, easy-to-understand diagrams.
Flowcharts, sometimes spelled as flow charts, use rectangles, ovals, diamonds and
potentially numerous other shapes to define the type of step, along with connecting
arrows to define flow and sequence. They can range from simple, hand-drawn
charts to comprehensive computer-drawn diagrams depicting multiple steps and
routes. If we consider all the various forms of flowcharts, they are one of the most
common diagrams on the planet, used by both technical and non-technical people
in numerous fields. Flowcharts are sometimes called by more specialized names
such as Process Flowchart, Process Map, Functional Flowchart, Business Process
Mapping, Business Process Modeling and Notation (BPMN), or Process Flow
Diagram (PFD). They are related to other popular diagrams, such as Data Flow
Diagrams (DFDs) and Unified Modeling Language (UML) Activity Diagrams.

• Software design is a process to conceptualize the software requirements into software implementation.
Software design takes the user requirements as challenges and tries to find optimum solution. While the
software is being conceptualized, a plan is chalked out to find the best possible design for implementing
the intended solution.
• There are multiple variants of software design. Let us study them briefly:

• Structured Design
• Structured design is a conceptualization of problem into several well-organized elements of solution. It
is basically concerned with the solution design. Benefit of structured design is, it gives better
understanding of how the problem is being solved. Structured design also makes it simpler for designer
to concentrate on the problem more accurately.
• Structured design is mostly based on ‘divide and conquer’ strategy where a problem is broken into
several small problems and each small problem is individually solved until the whole problem is solved.
• The small pieces of the problem are solved by means of solution modules. Structured design emphasizes that these modules be well organized in order to achieve a precise solution.
• These modules are arranged in hierarchy. They communicate with each other. A good structured design
always follows some rules for communication among multiple modules, namely -
• Cohesion - grouping of all functionally related elements.
• Coupling - communication between different modules.
• A good structured design has high cohesion and low coupling arrangements.

• Function Oriented Design


• In function-oriented design, the system comprises many smaller sub-systems known as functions. These functions are capable of performing significant tasks in the system. The system is considered as the top view of all functions.
• Function-oriented design inherits some properties of structured design, where the divide and conquer methodology is used.
• This design mechanism divides the whole system into smaller functions, which provide means of abstraction by concealing information and operations. These functional modules can share information among themselves by means of information passing and by using information available globally.
• Another characteristic of functions is that when a program calls a function, the function changes the
state of the program, which sometimes is not acceptable by other modules. Function oriented design
works well where the system state does not matter and program/functions work on input rather than
on a state.

• Design Process
• The whole system is seen as how data flows in the system by means of a data flow diagram.
• The DFD depicts how functions change data and the state of the entire system.
• The entire system is logically broken down into smaller units known as functions on the basis of their operation in the system.
• Each function is then described at large.

• Object Oriented Design


• Object-oriented design works around the entities and their characteristics instead of the functions involved in the software system. This design strategy focuses on entities and their characteristics. The whole concept of the software solution revolves around the engaged entities.
• Let us see the important concepts of Object Oriented Design:
• Objects - All entities involved in the solution design are known as objects. For example, person, banks,
company and customers are treated as objects. Every entity has some attributes associated to it and
has some methods to perform on the attributes.
• Classes - A class is a generalized description of an object. An object is an instance of a class. Class defines
all the attributes, which an object can have and methods, which defines the functionality of the object.
• In the solution design, attributes are stored as variables and functionalities are defined by means of
methods or procedures.
• Encapsulation - In OOD, bundling the attributes (data variables) and the methods (operations on the data) together is called encapsulation. Encapsulation not only bundles the important information of an object together, but also restricts access to the data and methods from the outside world. This is called information hiding.
• Inheritance - OOD allows similar classes to stack up in a hierarchical manner where the lower or sub-classes can import, implement and re-use allowed variables and methods from their immediate super-classes. This property of OOD is known as inheritance. This makes it easier to define a specific class and to create generalized classes from specific ones.
• Polymorphism - OOD languages provide a mechanism where methods performing similar tasks but
vary in arguments, can be assigned same name. This is called polymorphism, which allows a single
interface performing tasks for different types. Depending upon how the function is invoked, respective
portion of the code gets executed.
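• The concepts above can be seen together in a small Python sketch (the class and attribute names are hypothetical): encapsulation bundles data with the methods that operate on it, inheritance lets a sub-class reuse a super-class, and polymorphism lets the same call behave differently for different object types.

class Account:
    """Encapsulation: the balance is bundled with the methods that operate on it."""
    def __init__(self, owner: str, balance: float = 0.0) -> None:
        self.owner = owner
        self._balance = balance        # leading underscore: information hiding by convention

    def deposit(self, amount: float) -> None:
        self._balance += amount

    def describe(self) -> str:
        return f"{self.owner}: balance {self._balance:.2f}"

class SavingsAccount(Account):
    """Inheritance: reuses Account and specializes one behaviour."""
    def describe(self) -> str:         # polymorphism: same interface, different behaviour
        return f"[savings] {super().describe()}"

# Polymorphism in action: the same calls work for both object types.
for acct in (Account("Alice", 100.0), SavingsAccount("Bob", 250.0)):
    acct.deposit(50.0)
    print(acct.describe())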
• Design Process
• The software design process can be perceived as a series of well-defined steps. Though it varies according to the design approach (function-oriented or object-oriented), it may involve the following steps:
• A solution design is created from the requirements or from a previously used system and/or system sequence diagram.
• Objects are identified and grouped into classes on the basis of similarity in attribute characteristics.
• Class hierarchy and relation among them is defined.
• Application framework is defined.

• Software Design Approaches


• Here are two generic approaches for software designing:

• Top Down Design


• We know that a system is composed of more than one sub-system and contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, creating a hierarchical structure in the system.
• Top-down design takes the whole software system as one entity and then decomposes it to achieve
more than one sub-system or component based on some characteristics. Each sub-system or component
is then treated as a system and decomposed further. This process keeps on running until the lowest level
of system in the top-down hierarchy is achieved.
• Top-down design starts with a generalized model of system and keeps on defining the more specific part
of it. When all components are composed the whole system comes into existence.
• Top-down design is more suitable when the software solution needs to be designed from scratch and
specific details are unknown.

• Bottom-up Design
• The bottom-up design model starts with the most specific and basic components. It proceeds by composing higher-level components from the basic or lower-level components. It keeps creating higher-level components until the desired system evolves as one single component. With each higher level, the amount of abstraction increases.
• Bottom-up strategy is more suitable when a system needs to be created from some existing system,
where the basic primitives can be used in the newer system.
• Neither the top-down nor the bottom-up approach is practical on its own. Instead, a good combination of both is used.

• Halstead’s software metrics is a set of measures proposed by Maurice Halstead to evaluate the
complexity of a software program. These metrics are based on the number of distinct operators and
operands in the program, and are used to estimate the effort required to develop and maintain the
program.

• The Halstead metrics include the following:


• Program length (N): This is the total number of operator and operand occurrences in the program.
• Vocabulary size (n): This is the total number of distinct operators and operands in the program.
• Program volume (V): This is the product of program length (N) and logarithm of vocabulary size (n),
i.e., V = N*log2(n).
• Program level (L): This is the ratio of the potential (minimum) volume to the actual volume, i.e., L = V*/V. It is also the reciprocal of the difficulty, L = 1/D.
• Program difficulty (D): This is half the number of distinct operators multiplied by the ratio of total operand occurrences to distinct operands, i.e., D = (n1/2) * (N2/n2), where n1 is the number of distinct operators, N2 is the total number of operand occurrences and n2 is the number of distinct operands.
• Program effort (E): This is the product of program volume (V) and program difficulty (D), i.e., E = V*D.
• Time to implement (T): This is the estimated time required to implement the program, commonly taken as T = E/S seconds, where S is the Stroud number (usually S = 18).
• Halstead’s software metrics can be used to estimate the size, complexity, and effort required to
develop and maintain a software program. However, they have some limitations, such as the
assumption that all operators and operands are equally important, and the assumption that the
same set of metrics can be used for different programming languages and development
environments.
• Overall, Halstead’s software metrics can be a useful tool for software developers and project
managers to estimate the effort required to develop and maintain software programs.
• A computer program is an implementation of an algorithm considered to be a collection of tokens
that can be classified as either operators or operands. Halstead’s metrics are included in a number
of current commercial tools that count software lines of code. By counting the tokens and
determining which are operators and which are operands, the following base measures can be
collected
• n1 = Number of distinct operators.
n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.
• In addition to the above, Halstead defines the following :
• n1* = Number of potential operators.
n2* = Number of potential operands.
• Halstead refers to n1* and n2* as the minimum possible number of operators and operands for a
module and a program respectively. This minimum number would be embodied in the
programming language itself, in which the required operation would already exist (for example, in
C language, any program must contain at least the definition of the function main()), possibly as a
function or as a procedure: n1* = 2, since at least 2 operators must appear for any function or
procedure : 1 for the name of the function and 1 to serve as an assignment or grouping symbol,
and n2* represents the number of parameters, without repetition, which would need to be passed
on to the function or the procedure.

• Halstead metrics –
• Halstead metrics are:
• Halstead Program Length – The total number of operator occurrences and the total number of
operand occurrences.
N = N1 + N2
• And the estimated program length is N^ = n1 * log2(n1) + n2 * log2(n2)
• The following alternate expressions have been published to estimate program length:
• NJ = log2(n1!) + log2(n2!)
• NB = n1 * log2n2 + n2 * log2n1
• NC = n1 * sqrt(n1) + n2 * sqrt(n2)
• NS = (n * log2n) / 2
• Halstead Vocabulary – The total number of unique (distinct) operators and unique (distinct) operands.
n = n1 + n2
• Program Volume – Proportional to program size, it represents the size, in bits, of the space necessary for storing the program. This parameter is dependent on the specific algorithm implementation. The properties V, N, and the number of lines in the code are shown to be linearly connected and equally valid for measuring relative program size.
• V = Size * (log2 vocabulary) = N * log2(n)
• The unit of measurement of volume is the common unit for size, “bits”. It is the actual size of a program if a uniform binary encoding for the vocabulary is used. The estimated number of delivered bugs is B = Volume / 3000.
• Potential Minimum Volume – The potential minimum volume V* is defined as the volume of the
most succinct program in which a problem can be coded.
• V* = (2 + n2*) * log2(2 + n2*)
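• Given the base counts, all of the above metrics can be computed directly. The Python sketch below uses purely illustrative counts (n1 = 10, n2 = 15, N1 = 40, N2 = 60), not figures measured from any particular program.

import math

# Illustrative base measures (not taken from a real program)
n1, n2 = 10, 15          # distinct operators, distinct operands
N1, N2 = 40, 60          # total operator occurrences, total operand occurrences

N = N1 + N2                                        # program length
n = n1 + n2                                        # vocabulary
N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)    # estimated program length
V = N * math.log2(n)                               # volume, in bits
D = (n1 / 2) * (N2 / n2)                           # difficulty
L = 1 / D                                          # program level
E = V * D                                          # effort
T = E / 18                                         # time in seconds (Stroud number S = 18)
B = V / 3000                                       # estimated delivered bugs

print(f"N={N} n={n} N^={N_hat:.1f} V={V:.1f} D={D:.1f} L={L:.3f} E={E:.1f} T={T:.1f}s B={B:.3f}")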
• Function Point Analysis (FPA) is a method or set of rules of Functional Size Measurement. It assesses
the functionality delivered to its users, based on the user’s external view of the functional
requirements. It measures the logical view of an application, not the physically implemented view
or the internal technical view.
• The Function Point Analysis technique is used to analyze the functionality delivered by software and
Unadjusted Function Point (UFP) is the unit of measurement.

• Objectives of FPA:
• The objective of FPA is to measure the functionality that the user requests and receives.
• The objective of FPA is to measure software development and maintenance independently of the
technology used for implementation.
• It should be simple enough to minimize the overhead of the measurement process.
• It should be a consistent measure among various projects and organizations.

• Types of FPA:

• Transactional Functional Type –


• External Input (EI): EI processes data or control information that comes from outside the
application’s boundary. The EI is an elementary process.
• External Output (EO): EO is an elementary process that generates data or control information sent
outside the application’s boundary.

• External Inquiry (EQ): EQ is an elementary process made up of an input-output combination that results in data retrieval.

• Data Functional Type –


• Internal Logical File (ILF): A user identifiable group of logically related data or control information
maintained within the boundary of the application.

• External Interface File (EIF): A user identifiable group of logically related data or control information referenced by the application but maintained within the boundary of another application.
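• Once the five function types are counted, the Unadjusted Function Point total is obtained by multiplying each count by a complexity weight and summing. The Python sketch below uses the commonly cited average IFPUG weights (EI 4, EO 5, EQ 4, ILF 10, EIF 7) and hypothetical counts; real counts would come from an actual requirements analysis, and low/high-complexity items carry different weights.

# Commonly cited average-complexity weights (low/high-complexity weights differ).
average_weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical counts for an application being measured.
counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 6, "EIF": 2}

ufp = sum(counts[kind] * average_weights[kind] for kind in counts)
print(f"Unadjusted Function Points (UFP) = {ufp}")   # 12*4 + 8*5 + 5*4 + 6*10 + 2*7 = 182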

• Benefits of FPA:

• FPA is a tool to determine the size of a purchased application package by counting all the functions
included in the package.
• It is a tool to help users discover the benefit of an application package to their organization by
counting functions that specifically match their requirements.
• It is a tool to measure the units of a software product to support quality and productivity analysis.
• It is a vehicle to estimate the cost and resources required for software development and
maintenance.
• It is a normalization factor for software comparison.

• The drawback of FPA:


• It requires subjective evaluation and involves many judgements.
• Many cost and effort models are based on LOC, so function points need to be converted to LOC.
• Compared to LOC, there is less research data on function points.
• It can only be performed after the design specification has been created.
• Because of the subjective judgement involved, the accuracy of the assessment can be low.
• Due to the long learning curve, it is not easy to gain proficiency.
• It is a very time-consuming method.

• Cyclomatic Complexity
• The cyclomatic complexity of a code section is the quantitative measure of the number of linearly independent paths in it. It is a software metric used to indicate the complexity of a program. It is computed using the Control Flow Graph of the program. The nodes in the graph represent the smallest groups of commands of a program, and a directed edge connects two nodes if the second command might immediately follow the first command.
• For example, if source code contains no control flow statement then its cyclomatic complexity will
be 1 and source code contains a single path in it. Similarly, if the source code contains one if
condition then cyclomatic complexity will be 2 because there will be two paths one for true and
the other for false.
• Mathematically, the control flow graph of a structured program is a directed graph in which an edge joins two basic blocks of the program if control may pass from the first to the second.
So, cyclomatic complexity M is defined as
• M = E – N + 2P
• where,
E = the number of edges in the control flow graph
N = the number of nodes in the control flow graph
P = the number of connected components

• Steps that should be followed in calculating cyclomatic complexity and test cases design are:

• Construction of graph with nodes and edges from code.


• Identification of independent paths.
• Cyclomatic Complexity Calculation
• Design of Test Cases
• Let a section of code as such:

• A = 10
IF B > C THEN
    A = B
ELSE
    A = C
ENDIF
Print A
Print B
Print C

• Control Flow Graph of above code


• The cyclomatic complexity of the above code is calculated from its control flow graph. The graph has seven nodes and seven edges, hence the cyclomatic complexity is M = 7 – 7 + 2(1) = 2.
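• The same calculation can be done mechanically from the graph. The Python sketch below encodes the control flow graph of the code above as an adjacency list (node labels are illustrative) and applies M = E – N + 2P.

# Control flow graph of the example, stored as an adjacency list.
cfg = {
    "A = 10":   ["IF B > C"],
    "IF B > C": ["A = B", "A = C"],     # the decision contributes two outgoing edges
    "A = B":    ["Print A"],
    "A = C":    ["Print A"],
    "Print A":  ["Print B"],
    "Print B":  ["Print C"],
    "Print C":  [],
}

N_nodes = len(cfg)                                   # N = 7
E_edges = sum(len(succ) for succ in cfg.values())    # E = 7
P = 1                                                # one connected component

M = E_edges - N_nodes + 2 * P
print(f"E={E_edges}, N={N_nodes}, P={P}, M={M}")     # M = 2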
• Use of Cyclomatic Complexity:

• Determining the independent path executions has proven to be very helpful for developers and testers.
• It can make sure that every path has been tested at least once.
• It thus helps to focus more on the uncovered paths.
• Code coverage can be improved.
• The risk associated with the program can be evaluated.
• Using these metrics early in the program helps in reducing the risks.
• Advantages of Cyclomatic Complexity:
• It can be used as a quality metric; it gives the relative complexity of various designs.
• It can be computed faster than Halstead’s metrics.
• It is used to measure the minimum effort and the best areas of concentration for testing.
• It is able to guide the testing process.
• It is easy to apply.
• Disadvantages of Cyclomatic Complexity:
• It is a measure of the program’s control complexity and not its data complexity.
• It gives the same value for nested and non-nested conditional structures, even though nested structures are harder to understand.
• In the case of simple comparisons and decision structures, it may give a misleading figure.

• Control Flow Graph (CFG)


• A Control Flow Graph (CFG) is the graphical representation of control flow or computation during
the execution of programs or applications. Control flow graphs are mostly used in static analysis as
well as compiler applications, as they can accurately represent the flow inside of a program unit.
The control flow graph was originally developed by Frances E. Allen.
• Characteristics of Control Flow Graph:
• Control flow graph is process oriented.
• Control flow graph shows all the paths that can be traversed during a program execution.
• Control flow graph is a directed graph.
• Edges in CFG portray control flow paths and the nodes in CFG portray basic blocks.
• There exist 2 designated blocks in Control Flow Graph:
• Entry Block:
Entry block allows the control to enter into the control flow graph.
• Exit Block:
Control flow leaves through the exit block.
• Hence, the control flow graph is comprised of all the building blocks involved in a flow diagram
such as the start node, end node and flows between the nodes.
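• As a small sketch of these characteristics (node labels are illustrative), a control flow graph can be stored as a directed adjacency list with designated entry and exit blocks, and every path from entry to exit can then be enumerated.

# A tiny if-else control flow graph with designated entry and exit blocks.
cfg = {
    "entry":      ["condition"],
    "condition":  ["then_block", "else_block"],
    "then_block": ["exit"],
    "else_block": ["exit"],
    "exit":       [],
}

def all_paths(graph, node, path=None):
    """Enumerate every path from the given node to the exit block (graph must be acyclic)."""
    path = (path or []) + [node]
    if not graph[node]:                 # no successors: this is the exit block
        return [path]
    paths = []
    for successor in graph[node]:
        paths.extend(all_paths(graph, successor, path))
    return paths

for p in all_paths(cfg, "entry"):
    print(" -> ".join(p))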

• General Control Flow Graphs:


Control Flow Graph is represented differently for all statements and loops. Following images
describe it:
• 1. If-else:

• 2. while:

• 3. do-while:

• 4. for:

• Example:
• IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
ENDIF
Print A, B, C
• Flowchart of above example will be:

• Control Flow Graph of above example will be:


• Advantages of CFG:
There are many advantages of a control flow graph. It can easily encapsulate the information for each basic block. It can easily locate unreachable code in a program, and syntactic structures such as loops are easy to find in a control flow graph.
