
Chapter 4

Software Design
4. Software Design
4.1. Objectives:

By the end of the course, students should be able to design software according to the specified requirements.

4.2. Introduction

Software Design is about modelling software systems.


Software design is the first step in the SDLC (Software Development Life Cycle) that moves the focus from the problem domain to the solution domain. It specifies how to fulfil the requirements documented in the SRS.
Modelling a system means identifying its main characteristics, states, and behaviour using a
notation.
Building a system can be seen as a process of reification, in other words, moving from a very abstract
statement of what is wanted to a concrete implementation.
In doing this you move through a sequence of intermediate descriptions which become more and
more concrete.
These intermediate descriptions are models. The process of building a system can be seen as the
process of building a series of progressively more detailed models.

The main aspects of modelling are captured below. Design is a process, not a set of known facts:

– a process of learning about a problem;
– a process of describing a solution;
– at first with many gaps;
– eventually in sufficient detail to build the solution.

4.3. Select Applicable Design Models

Various design models exist. The choice of a model to use in a particular system
development process is constrained by a number of factors that include:
– Cost
– Suitability / Applicability
– Complexity / Ease of use
– Time constraints
– Type of system to be developed.
The following are some of the models that can be used:
1. Waterfall Model: ideal where the problem is well defined and/or experimentation is not feasible.

2. Spiral Model: ideal where there are imminent risks that need to be mitigated.

3. Incremental Model: ideal where the problem can be broken down into independent modules.

4. Unified Software Development Process – USDP (Object-Oriented Model)

USDP is the development process associated with UML (Unified Modelling Language).
– USDP is based on an incremental process.
– Each iteration is like a mini-project that delivers a part of the system:
o It is use-case driven and risk-confronting
o Architecture-centric
o Iterative and incremental
 Iterations and baselines
 Phases and milestones
 Workflows

The Life Cycle Phases and Milestones

1. Inception: Define scope of project and develop business case (What to do)
2. Elaboration: Plan project, specify features and baseline architecture (classes, data
and system structure)
3. Construction: Build product (object interaction, behaviour and state)
4. Transition: Transition product to its users (testing and optimisation)

Milestone acceptance criteria


1. Lifecycle objectives - system scope, key requirements, outline architecture, risk
assessment, business case, feasibility, agreed project objectives with stakeholders


2. Lifecycle architecture - executable architectural baseline, updated risk assessment, project plan to support the bidding process, business case verified against the plan, continued stakeholder agreement
3. Initial operational capability - product ready for beta test in the user environment
4. Product release - beta and acceptance tests completed, defects fixed, and the product released to the user community.

4.4. Make Architectural Design Choices

Software architecture is based on the requirements for the system. Requirements define
what the system should do, whereas the software architecture describes how this is
achieved.
Architectural design is a creative process so the process differs depending on the type of
system being developed. However, a number of common decisions span all design
processes and these decisions affect the non-functional characteristics of the system:
 Is there a generic application architecture that can be used?
 How will the system be distributed?
 What architectural styles are appropriate?
 What approach will be used to structure the system?
 How will the system be decomposed into modules?
 What control strategy should be used?
 How will the architectural design be evaluated?
 How should the architecture be documented?
Systems in the same domain often have similar architectures that reflect domain concepts.
Application product lines are built around a core architecture with variants that satisfy
particular customer requirements. The architecture of a system may be designed around
one or more architectural patterns/styles, which capture the essence of an architecture
and can be instantiated in different ways.
The particular architectural style should depend on the non-functional system
requirements:
 Performance: localize critical operations and minimize communications. Use
large rather than fine-grain components.
 Security: use a layered architecture with critical assets in the inner layers.
 Safety: localize safety-critical features in a small number of sub-systems.
 Availability: include redundant components and mechanisms for fault
tolerance.
 Maintainability: use fine-grain, replaceable components.


How can we document an architecture decision?

The table below shows how to document an architectural decision. Items can be added or removed, and attributes can be added as needed, for example: related principles, related artefacts, related decisions, and others.

ID: A unique identifier for each decision.
Subject Area: The area of concern, for example, data integrity.
Decision Statement: A statement indicating what the decision is, e.g. "Use cloud IaaS".
Status: The decision status: Not Decided or Decided.
Problem Statement: A short description of the problem.
Assumptions: The assumptions made in the context of the problem.
Constraints: The constraints in the context of the problem.
Motivation: Why this decision is important.
Alternatives: A list of the alternatives or options available.
Argument / Decision criteria: The basis on which the option was selected; mainly the selection criteria, such as ease of implementation, resource availability, expandability, and so on.
Decision: The decision that was made.
Justification: Why this decision was made and this option selected.
Implications: The implications that may follow from this decision.
Derived requirements: A list of requirements that are generated by this decision.
Owner/Major Contributors: The major contributors and owners who took this decision.

Example of architecture decision


Note that using the template is not a decision-making process; it only documents decisions. You still need a decision process and a decision matrix to evaluate the options.

ID: AD01
Subject Area: Application structure
Decision Statement: The application should be based on Model, View, and Controller (3-tier architecture).
Status: Decided
Problem Statement: The solution should be maintainable, with a separation of layers that makes the application easier to adapt to new requirements.
Assumptions: The team has proven skills in implementing the MVC model.
Constraints: None
Motivation: The customer asked for maintainable software that can be extended easily and can use supporting tools and plug-ins, based on different technologies, in the different software layers.
Alternatives: SOA architecture; three-tier architecture; client/server architecture.
Argument / Decision criteria: Application maintainability; application adaptability; application expandability; resource capabilities; implementation time.
Decision: Three-tier architecture will be used.

Justification: Based on the customer concerns and the criteria mentioned, we found that an SOA architecture would be costly to implement and would need more time to realize the business value, which can be obtained quickly with three tiers. Moreover, the existing resources are not skilled in SOA architecture.
Implications: None
Derived requirements:
Owner/Major Contributors: Lead Architect, Application Architect, Data Architect, and subject matter experts (name them).

4.5. Architectural Views

David Garlan and Mary Shaw suggest that software architecture is a level of design
concerned with issues: "Beyond the algorithms and data structures of the computation;
designing and specifying the overall system structure emerges as a new kind of problem.
Structural issues include gross organization and global control structure; protocols for
communication, synchronization, and data access; assignment of functionality to design
elements; physical distribution; composition of design elements; scaling and performance;
and selection among design alternatives."
But there is more to architecture than just structure; the IEEE Working Group on
Architecture defines it as "the highest-level concept of a system in its environment". It also
encompasses the "fit" with system integrity, with economical constraints, with aesthetic
concerns, and with style. It is not limited to an inward focus, but takes into consideration the
system as a whole in its user environment and its development environment - an outward
focus.
Each architectural model only shows one view or perspective of the system. It might show
how a system is decomposed into modules, how the run-time processes interact or the
different ways in which system components are distributed across a network. For both
design and documentation, you usually need to present multiple views of the software
architecture.
4+1 view model of software architecture:
 A logical view, which shows the key abstractions in the system as objects or
object classes.
 A process view, which shows how, at run-time, the system is composed of
interacting processes.
 A development view, which shows how the software is decomposed for
development.
 A physical view, which shows the system hardware and how software
components are distributed across the processors in the system.
 The views are related using use cases or scenarios (the "+1").

4.5.1. A Typical Set of Architectural Views


Architecture is represented by a number of different architectural views, which in their
essence are extracts illustrating the "architecturally significant" elements of the models. The
views can include the following:
1. The Use-Case View, which contains use cases and scenarios that encompass
architecturally significant behaviour, classes, or technical risks. It is a subset of the
use-case model.
2. The Logical View, which contains the most important design classes and their
organization into packages and subsystems, and the organization of these packages
and subsystems into layers. It also contains some use-case realizations. It is a subset
of the design model.
3. The Implementation View, which contains an overview of the implementation
model and its organization in terms of modules into packages and layers. The
allocation of packages and classes (from the Logical View) to the packages and
modules of the Implementation View is also described. It is a subset of the
implementation model.
4. The Process View, which contains the description of the tasks (process and threads)
involved, their interactions, and configurations, and the allocation of design objects
and classes to tasks. This view need only be used if the system has a significant
degree of concurrency.
5. The Deployment View, which contains the description of the various physical
nodes for the most typical platform configurations, and the allocation of tasks
(from the Process View) to the physical nodes. This view need only be used if the
system is distributed. It is a subset of the deployment model.
The architectural views are documented in a Software Architecture Document. You can
envision additional views to express different special concerns: user-interface view, security
view, data view, and so on.
In object-oriented design (using UML), the following views are recognized for software
systems:
1. The User View
i. Use Case Diagram. The user view provides a window into the system from
the user's perspective, in that the functionality of the system is modelled in
terms of the user and what the user expects of the system. In UML, the
user of the system is called an actor, which can represent either a human
user or another system. The functionality of the system is defined by
the use cases. The lines connecting the actors and the use cases show that
the actors interact with the functionality provided by the use cases.
ii. Business Use Case Diagram. The business use case diagram is an extension
to the use case diagram and is defined in and supported by UML. The first
step in business modelling using the UML is identifying the interactions
between the business processes and those entities outside the business,
such as customers and suppliers.
2. The Structural View
i. Class Diagram. Class diagrams describe the static structure of a system. The
focus of the class diagram is to describe how a system is structured rather
than how it behaves. Class diagrams are probably the most versatile of the
UML diagrams. Data models, software components, software classes, and
business objects are modelled using the class diagram, each in its own
diagram.
3. Behaviour View
i. The Sequence Diagram. Sequence diagrams describe the behaviour of a
use case by diagramming the classes and the messages exchanged between
them, arranged in chronological order. Sequence diagrams do not describe
object relationships; that view of the system is reserved for collaboration
diagrams. Object and actor instances can be displayed in the sequence
diagram along with how the objects communicate by sending messages to
one another.
ii. Collaboration Diagram. The collaboration diagram is a view of the
interactions of objects; unlike the sequence diagram, which describes the
objects' messaging over time, the collaboration diagram displays the
relationships between objects. Some UML tools can automatically generate
collaboration diagrams from sequence diagrams, and vice versa.
Collaboration and sequence diagrams express similar information but
display it from different perspectives.
iii. Activity Diagram. The activity diagram is a specialization of the state
diagram and it displays the dynamics of a system by showing the workflow.
The activity diagram complements the business use case diagram in that it
shows the process flow behind the use case.
iv. State Diagram. The state diagram describes the sequence of states that an
object goes through during its lifetime. The state diagram displays the
events that act on the object and enable the transitions from one state
to the next.
4. The Implementation View
i. Component Diagram. The component diagram displays the static structure
of the implementation view. The component diagram shows the
organizations and dependencies of the components, subsystems, and their
relationships. The diagram models the interface that can be used to show
the externally visible operations of a class or component.
ii. Deployment Diagram. The deployment diagram shows the configuration of
run-time processing elements and the software components, processes
and objects that execute on them. The deployment diagram can be used
for business modelling where employees and organizations are displayed as
run-time processing elements and the procedures and documents that they
use are displayed as the software components.
Architectural Focus
Although the views above could represent the whole design of a system, the architecture
concerns itself only with some specific aspects:
1. The structure of the model — the organizational patterns, for example, layering.
2. The essential elements — critical use cases, main classes, common mechanisms,
and so on, as opposed to all the elements present in the model.
3. A few key scenarios showing the main control flows throughout the system.
4. The services, to capture modularity, optional features, product-line aspects.
In essence, architectural views are abstractions, or simplifications, of the entire design, in
which important characteristics are made more visible by leaving details aside. These
characteristics are important when reasoning about:
1. System evolution—going to the next development cycle.
2. Reuse of the architecture, or parts of it, in the context of a product line.
3. Assessment of supplementary qualities, such as performance, availability,
portability, and safety.
4. Assignment of development work to teams or subcontractors.
5. Decisions about including off-the-shelf components.
6. Insertion in a wider system.


4.5.2. Architectural Patterns

Patterns are a means of representing, sharing, and reusing knowledge. An architectural pattern is
a stylized description of a good design practice, which has been tried and tested in different
environments. Patterns should include information about when they are and are not useful.
Patterns may be represented using tabular and graphical descriptions.

1. Model-View-Controller
 Serves as a basis of interaction management in many web-based systems.
 Decouples three major interconnected components:
o The model is the central component of the pattern that directly manages the
data, logic and rules of the application. It is the application's dynamic data
structure, independent of the user interface.
o A view can be any output representation of information, such as a chart or a
diagram. Multiple views of the same information are possible.
o The controller accepts input and converts it to commands for the model or
view.
 Supported by most language frameworks.

Pattern name: Model-View-Controller (MVC)
Description: Separates presentation and interaction from the system data. The system is structured into three logical components that interact with each other. The Model component manages the system data and the associated operations on that data. The View component defines and manages how the data is presented to the user. The Controller component manages user interaction (e.g., key presses, mouse clicks) and passes these interactions to the View and the Model.
Problem description: The display presented to the user frequently changes over time in response to input or computation. Different users have different needs for how they want to view the program's information. The system needs to reflect data changes to all users in the way that they want to view them, while making it easy to change the user interface.
Solution description: Separate the data being manipulated from the manipulation logic and the details of display, using three components: Model (a problem-domain component with data and operations, independent of the user interface), View (a data display component), and Controller (a component that receives and acts on user input).


Consequences: Advantages: views and controllers can easily be added, removed, or changed; views can be added or changed during execution; user interface components can be changed, even at runtime. Disadvantages: views and controllers are often hard to separate; frequent updates may slow data display and degrade user interface performance; the MVC style makes user interface components (views, controllers) highly dependent on model components.
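
To make the separation concrete, here is a minimal MVC sketch in Python. The class names, methods, and console-based "view" are illustrative assumptions, not part of the pattern description above.

class Model:
    """Manages the application's data and notifies attached views of changes."""
    def __init__(self):
        self._items = []
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def add_item(self, item):
        self._items.append(item)
        for view in self._views:   # reflect the data change in every view
            view.render(self._items)

class View:
    """Defines how the data is presented; here it simply prints it."""
    def render(self, items):
        print("Current items:", ", ".join(items))

class Controller:
    """Accepts user input and converts it into commands for the model."""
    def __init__(self, model):
        self._model = model

    def handle_input(self, text):
        self._model.add_item(text.strip())

model = Model()
model.attach(View())
Controller(model).handle_input("first entry")   # prints: Current items: first entry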

2. Layered architecture
 Used to model the interfacing of sub-systems.
 Organizes the system into a set of layers (or abstract machines), each of which provides a
set of services.
 Supports the incremental development of sub-systems in different layers. When a layer
interface changes, only the adjacent layer is affected.
 However, it is often artificial to structure systems in this way.

Name: Layered architecture
Description: Organizes the system into layers, with related functionality associated with each layer. A layer provides services to the layer above it, so the lowest-level layers represent core services that are likely to be used throughout the system.
When used: Used when building new facilities on top of existing systems; when the development is spread across several teams, with each team responsible for a layer of functionality; and when there is a requirement for multi-level security.
Advantages: Allows replacement of entire layers as long as the interface is maintained. Redundant facilities (e.g., authentication) can be provided in each layer to increase the dependability of the system.
Disadvantages: In practice, providing a clean separation between layers is often difficult, and a high-level layer may have to interact directly with lower-level layers rather than through the layer immediately below it. Performance can be a problem because of the multiple levels of interpretation of a service request as it is processed at each layer.
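
The sketch below illustrates the layering idea in Python; the three layers and their functions are assumptions made for illustration. Each layer calls only the layer below it.

# Data layer: core services likely to be used throughout the system.
_ACCOUNTS = {"alice": 120, "bob": 45}

def fetch_balance(user: str) -> int:
    return _ACCOUNTS[user]

# Service layer: business logic, built only on the data layer below it.
def balance_report(user: str) -> str:
    return f"{user} has a balance of {fetch_balance(user)}"

# Presentation layer: talks only to the service layer, never to the data layer.
def show(user: str) -> None:
    print(balance_report(user))

show("alice")   # prints: alice has a balance of 120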


3. Repository architecture
 Sub-systems must exchange data. This may be done in two ways:
o Shared data is held in a central database or repository and may be accessed by
all sub-systems;
o Each sub-system maintains its own database and passes data explicitly to other
sub-systems.
 When large amounts of data are to be shared, the repository model of sharing is most
commonly used, as this is an efficient data sharing mechanism.

Name: Repository
Description: All data in a system is managed in a central repository that is accessible to all system components. Components do not interact directly, only through the repository.
When used: Use this pattern when you have a system in which large volumes of information are generated that have to be stored for a long time. You may also use it in data-driven systems where the inclusion of data in the repository triggers an action or tool.
Advantages: Components can be independent; they do not need to know of the existence of other components. Changes made by one component can be propagated to all components. All data can be managed consistently (e.g., backups done at the same time) as it is all in one place.
Disadvantages: The repository is a single point of failure, so problems in the repository affect the whole system. There may be inefficiencies in organizing all communication through the repository. Distributing the repository across several computers may be difficult.
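
Below is a minimal repository sketch in Python; the in-memory store and the callback-based trigger are assumptions (real repositories are usually databases), illustrating only the interaction style.

class Repository:
    """Central data store; components interact only through it."""
    def __init__(self):
        self._data = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def put(self, key, value):
        self._data[key] = value
        for notify in self._subscribers:   # data-driven trigger
            notify(key, value)

    def get(self, key):
        return self._data.get(key)

repo = Repository()
# An indexing component reacts to new data without knowing who produced it.
repo.subscribe(lambda key, value: print(f"indexer saw {key}={value}"))
repo.put("report-1", "quarterly figures")   # triggers the indexer
print(repo.get("report-1"))                 # quarterly figures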


4. Client-server architecture

 Distributed system model which shows how data and processing are distributed across a
range of components, but which can also be implemented on a single computer.
 Set of stand-alone servers which provide specific services such as printing, data
management, etc.
 Set of clients which call on these services.
 Network which allows clients to access servers.

Name: Client-server
Description: In a client-server architecture, the functionality of the system is organized into services, with each service delivered from a separate server. Clients are users of these services and access servers to make use of them.
When used: Used when data in a shared database has to be accessed from a range of locations. Because servers can be replicated, it may also be used when the load on a system is variable.
Advantages: The principal advantage of this model is that servers can be distributed across a network. General functionality (e.g., a printing service) can be available to all clients and does not need to be implemented by all services.
Disadvantages: Each service is a single point of failure, so it is susceptible to denial-of-service attacks or server failure. Performance may be unpredictable because it depends on the network as well as the system. There may be management problems if servers are owned by different organizations.
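
The following sketch runs a toy client and server in one Python process over TCP. The port number, the upper-casing "service", and the one-request protocol are assumptions chosen purely for illustration.

import socket
import threading

HOST, PORT = "127.0.0.1", 9009   # assumed address for the demo
ready = threading.Event()

def server():
    """A stand-alone server offering one service: upper-casing text."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()              # signal that the server is accepting
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())   # deliver the service

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client calls on the service across the network.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello server")
    print(cli.recv(1024))        # b'HELLO SERVER'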


5. Pipe and filter architecture


 Functional transformations process their inputs to produce outputs.
 May be referred to as a pipe and filter model (as in UNIX shell).
 Variants of this approach are very common. When transformations are sequential, this is
a batch sequential model which is extensively used in data processing systems.
 Not really suitable for interactive systems.

Name: Pipe and filter
Description: The processing of the data in a system is organized so that each processing component (filter) is discrete and carries out one type of data transformation. The data flows (as in a pipe) from one component to another for processing.
When used: Commonly used in data processing applications (both batch- and transaction-based) where inputs are processed in separate stages to generate related outputs.
Advantages: Easy to understand and supports transformation reuse. The workflow style matches the structure of many business processes. Evolution by adding transformations is straightforward. Can be implemented as either a sequential or a concurrent system.
Disadvantages: The format for data transfer has to be agreed upon between communicating transformations. Each transformation must parse its input and unparse its output to the agreed form. This increases system overhead and may mean that it is impossible to reuse functional transformations that use incompatible data structures.
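
Python generators give a compact way to sketch the pipe-and-filter style; the three filters below (splitting, blank removal, upper-casing) are arbitrary examples assumed for illustration.

def read_lines(text):
    """Source filter: emit raw records one at a time."""
    for line in text.splitlines():
        yield line

def strip_blanks(lines):
    """Filter: drop empty records."""
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):
    """Filter: carry out one type of transformation."""
    for line in lines:
        yield line.upper()

raw = "alpha\n\nbeta\n"
pipeline = to_upper(strip_blanks(read_lines(raw)))   # composing the "pipes"
print(list(pipeline))                                # ['ALPHA', 'BETA']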

6. Application architectures
Application systems are designed to meet an organizational need. As businesses have much
in common, their application systems also tend to have a common architecture that reflects
the application requirements. A generic application architecture is an architecture for a type
of software system that may be configured and adapted to create a system that meets
specific requirements. Application architectures can be used as a:
 Starting point for architectural design.
 Design checklist.
 Way of organizing the work of the development team.
 Means of assessing components for reuse.
 Vocabulary for talking about application types.
Examples of application types:


1. Data processing applications: Data driven applications that process data in batches
without explicit user intervention during the processing.
2. Transaction processing applications: Data-centred applications that process user
requests and update information in a system database.
3. Event processing systems: Applications where system actions depend on interpreting
events from the system's environment.
4. Language processing systems: Applications where the users' intentions are specified in a
formal language that is processed and interpreted by the system.

4.6. Apply Techniques of Software Engineering

Two distinctly different approaches are available: the traditional design approach and
the object-oriented design approach.
 Traditional design approach: Traditional design consists of two different activities;
first a structured analysis of the requirements specification is carried out where the
detailed structure of the problem is examined. This is followed by a structured
design activity. During structured design, the results of structured analysis are
transformed into the software design.
 Object-oriented design approach: In this technique, various objects that occur in
the problem domain and the solution domain are first identified, and the different
relationships that exist among these objects are identified. The object structure is
further refined to obtain the detailed design.

Software design yields three levels of results:


 Architectural Design - The architectural design is the highest abstract version of the system. It identifies the software as a system with many components interacting with each other. At this level, the designers get an idea of the proposed solution domain.
 High-level Design - The high-level design breaks the 'single entity, multiple components' concept of architectural design into a less-abstracted view of sub-systems and modules, and depicts their interaction with each other. High-level design focuses on how the system, along with all of its components, can be implemented in the form of modules. It identifies the modular structure of each sub-system and the relations and interactions among them.
 Detailed Design - Detailed design deals with the implementation part of what is seen as a system and its sub-systems in the previous two designs. It is more detailed towards modules and their implementations. It defines the logical structure of each module and its interfaces for communicating with other modules. The following are taken into account when making design decisions:
o Modularization: Modularization is a technique to divide a software system
into multiple discrete and independent modules, each expected to be
capable of carrying out its task(s) independently. These modules may work as
basic constructs for the entire software. Designers tend to design modules
such that they can be executed and/or compiled separately and
independently. Modular design follows the 'divide and conquer'
problem-solving strategy, and brings many other benefits besides.
o Concurrency: Historically, all software was meant to be executed
sequentially: coded instructions are executed one after another, with only
one portion of the program active at any given time. If a software system has
multiple modules, then only one of those modules is active at any time of
execution. In software design, concurrency is implemented by splitting the
software into multiple independent units of execution, such as modules, and
executing them in parallel. In other words, concurrency gives the software
the capability to execute more than one part of its code in parallel.
Programmers and designers must recognize the modules that can be
executed in parallel, as in the sketch below.
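
A brief sketch of the idea using Python's concurrent.futures; the two task functions are assumed stand-ins for independent, I/O-bound modules.

from concurrent.futures import ThreadPoolExecutor

def fetch_orders():
    return "orders loaded"   # stands in for independent work

def fetch_stock():
    return "stock loaded"

# The two units have no dependency on each other, so they can run in parallel.
with ThreadPoolExecutor() as pool:
    orders = pool.submit(fetch_orders)
    stock = pool.submit(fetch_stock)
    print(orders.result(), "|", stock.result())   # orders loaded | stock loaded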

o Coupling and Cohesion: When a software program is modularized, its tasks
are divided into several modules based on some characteristics. As we know,
modules are sets of instructions put together in order to achieve some tasks.
Though each module is considered a single entity, modules may refer to one
another in order to work together. There are measures of the quality of a
module design and of the interactions among modules. These measures are
called coupling and cohesion.
 Cohesion is a measure that defines the degree of intra-dependability
within the elements of a module. The greater the cohesion, the
better the program design.
 Coupling is a measure that defines the level of inter-dependability
among the modules of a program. It tells at what level the modules
interfere and interact with each other. The lower the coupling, the
better the program.
A good structured design has high cohesion and low coupling, as the
sketch below illustrates.
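
The sketch contrasts a cohesive module with a loosely coupled consumer of it; the tax-and-invoice example is purely an illustrative assumption.

# Cohesive module: everything here concerns tax calculation only.
class TaxCalculator:
    RATE = 0.15

    def tax_for(self, amount: float) -> float:
        return amount * self.RATE

# Loosely coupled module: depends only on the calculator's small interface,
# not on its internal data or rate representation.
class InvoicePrinter:
    def __init__(self, calculator: TaxCalculator):
        self._calculator = calculator

    def print_invoice(self, amount: float) -> None:
        print(f"net={amount}, tax={self._calculator.tax_for(amount)}")

InvoicePrinter(TaxCalculator()).print_invoice(200.0)   # net=200.0, tax=30.0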
o Structured Design: Structured design is a conceptualization of a problem into
several well-organized elements of solution. It is basically concerned with
the solution design. A benefit of structured design is that it gives a better
understanding of how the problem is being solved. Structured design also
makes it simpler for the designer to concentrate on the problem more
accurately. Structured design is mostly based on the 'divide and conquer'
strategy, where a problem is broken into several small problems and each
small problem is individually solved until the whole problem is solved.
o Function Oriented Design: In function-oriented design, the system is
comprised of many smaller sub-systems known as functions. These
functions are capable of performing significant tasks in the system. The
system is considered as the top view of all functions. Function-oriented design
inherits some properties of structured design, where the divide-and-conquer
methodology is used.
 Design Process
 The whole system is seen as how data flows in the system,
by means of a data flow diagram (DFD).
 The DFD depicts how functions change the data and the state of
the entire system.
 The entire system is logically broken down into smaller
units, known as functions, on the basis of their operation in
the system.
 Each function is then described in detail.


o Object Oriented Design: Object-oriented design works around the entities
and their characteristics instead of the functions involved in the software
system. This design strategy focuses on entities and their characteristics. The
whole concept of the software solution revolves around the engaged entities.

o Important concepts of Object Oriented Design:
 Objects - All entities involved in the solution design are known as
objects. For example, persons, banks, companies, and customers are
treated as objects. Every entity has some attributes associated with it
and has some methods that operate on those attributes.
 Classes - A class is a generalized description of an object. An object
is an instance of a class. A class defines all the attributes which an
object can have, and the methods which define the functionality of the
object. In the solution design, attributes are stored as variables and
functionalities are defined by means of methods or procedures.
 Encapsulation - In OOD, the bundling together of the attributes (data
variables) and methods (operations on the data) is called
encapsulation. Encapsulation not only bundles the important
information of an object together, but also restricts access to the
data and methods from the outside world. This is called information
hiding.
 Inheritance - OOD allows similar classes to stack up in a hierarchical
manner, where the lower classes (sub-classes) can import, implement, and
re-use allowed variables and methods from their immediate super-
classes. This property of OOD is known as inheritance. It makes it
easier to define specialized classes and to create generalized classes from
specific ones.
 Polymorphism - OOD languages provide a mechanism whereby
methods performing similar tasks but varying in arguments can be
assigned the same name. This is called polymorphism, and it allows a
single interface to perform tasks for different types. Depending
upon how the function is invoked, the respective portion of the code
gets executed (see the sketch after this list).
 Design Process: The software design process can be perceived as a series
of well-defined steps. Though it varies according to the design approach
(function-oriented or object-oriented), it may involve the following
steps:
 A solution design is created from the requirements or a previously
used system and/or a system sequence diagram.
 Objects are identified and grouped into classes on the basis of
similarity in attribute characteristics.
 The class hierarchy and the relations among classes are defined.
 The application framework is defined.
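
A compact sketch of the concepts above in Python; the Account classes and their behaviour are assumptions made for illustration.

class Account:
    """Class: a generalized description; each instance is an object."""
    def __init__(self, owner: str, balance: float):
        self._owner = owner       # encapsulated attributes (information hiding)
        self._balance = balance

    def deposit(self, amount: float) -> None:
        self._balance += amount   # method operating on the hidden data

    def describe(self) -> str:
        return f"{self._owner}: {self._balance}"

class SavingsAccount(Account):
    """Inheritance: reuses Account and specializes it."""
    def describe(self) -> str:    # polymorphism: same name, specialized behaviour
        return "savings " + super().describe()

accounts = [Account("ann", 10.0), SavingsAccount("ben", 20.0)]
for acct in accounts:             # one interface, different behaviours
    print(acct.describe())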


 Software Design Approaches: There are two generic approaches for software
designing:
o Top-down Design: We know that a system is composed of more than one
sub-system, and that it contains a number of components. Further, these sub-
systems and components may have their own sets of sub-systems and
components, creating a hierarchical structure in the system. Top-down
design takes the whole software system as one entity and then decomposes
it into more than one sub-system or component, based on some
characteristics.
Each sub-system or component is then treated as a system and decomposed
further. This process keeps running until the lowest level of system in the
top-down hierarchy is reached. Top-down design starts with a generalized
model of the system and keeps on defining its more specific parts. When all
components have been composed, the whole system comes into existence. Top-
down design is more suitable when the software solution needs to be
designed from scratch and specific details are unknown.
o Bottom-up Design: The bottom-up design model starts with the most specific
and basic components. It proceeds by composing higher-level
components from basic or lower-level ones. It keeps creating
higher-level components until the desired system has evolved as one
single component. With each higher level, the amount of abstraction
increases. The bottom-up strategy is more suitable when a system needs to be
created from some existing system, where the basic primitives can be used
in the newer system.
Neither the top-down nor the bottom-up approach is practical on its own. Instead, a
good combination (hybrid) of both is used.


4.7. Software Reuse

A definition of software reuse is the process of creating software systems from predefined
software components.
- Reuse is the use of previously acquired concepts or objects in a new situation, it
involves encoding development information at different levels of abstraction,
storing this representation for future reference, matching of new and old
situations, duplication of already developed objects and actions, and their
adaptation to suit new requirements;
- Reusability is a measure of the ease with which one can use those previous
concepts or objects in the new situations.
There are eleven software reusability approaches (Sommerville, 2004): design
patterns, component-based development, application frameworks, legacy system wrapping,
service-oriented systems, application product lines, COTS integration, program libraries,
program generators, aspect-oriented software development, and configurable vertical
applications.
The reuse landscape
Although reuse is often simply thought of as the reuse of system components, there are
many different approaches to reuse that may be used. Reuse is possible at a range of levels
from simple functions to complete application systems. The reuse landscape covers the
range of possible reuse techniques.

Key factors for reuse planning:

 The development schedule for the software.


 The expected software lifetime.


 The background, skills, and experience of the development team.
 The criticality of the software and its non-functional requirements.
 The application domain.
 The execution platform for the software.

The Software Reuse Types


1. Design patterns: A design pattern is a common, reusable solution to a commonly
arising problem in software design. A design pattern is a description or template for
how to solve a problem that can be adapted to several different circumstances.
Examples include the Observer, Factory Method, and Adapter patterns.
2. Component-based development: Component-Based Software Development (CBSD)
is an emerging technology that emphasizes constructing systems by integrating
existing software components. Reusability, time reduction, cost reduction, and
decreased complexity are some of the benefits of CBSD.
3. Application frameworks: Application frameworks are widely used in order to boost
efficiency and reliability in software development. An application framework is a
reusable software product which delivers reusable design and implementation
common to applications of a specific domain. Frameworks simplify application
development and overcome problems by providing a common, standardized, and
consistent base that enhances the overall development process and increases
the quality of both the product and the process.
4. Legacy system wrapping: Legacy systems can be wrapped by defining a set of
interfaces through which access to the services they offer can be attained. Software
wrapping is a cost-effective, short-term solution with the lowest risk among system
migration strategies. The merits of legacy systems, such as reliability and stability,
can be exploited at minimum cost in the short term. It also reduces the
framework gap between new and legacy systems.
5. Service-oriented systems: Service-orientation delivers unique benefits for
heterogeneous and independently controlled systems. Service-oriented systems
are made up of distributable components using several technologies, working
together to accomplish a common goal or deliver a service.
6. Application product lines: A software product line (SPL) is a set of software systems
which share a similar set of features and are constructed from a similar set of core
resources, in order to enhance product efficiency, reduce cost and time-to-market,
and boost software reusability and mass customization.
7. COTS integration: The software development world has rapidly evolved in the last
decade. Commercial off-the-shelf (COTS) products can now be integrated into
other software products to provide needed functionality.
8. Program libraries: Libraries are written programs and functions embedded in
frameworks or languages to avoid "reinventing the wheel". Program
libraries are ready-to-use pieces of code. Normally, libraries serve routine
functions.
9. Program generators: A program generator is a program that allows an individual to
develop code easily, with minimal programming effort and knowledge. With a
program generator, a developer may only be required to state the phases or rules
needed for his or her program to be created.
10. Aspect-oriented software development: Aspect-oriented software development is
a software development technology that seeks new modularizations of
systems, to segregate supporting functions from the core business logic.
It allows multiple concerns to be expressed separately and
automatically unified into a working system.
11. Configurable vertical applications: A configurable vertical application is a common
system that is developed so that it can be configured to the requirements of
particular customers. An example of a vertical application is a system that assists
scientists in managing their records, patients, and insurance billing. The software is
configurable, anticipates the future needs of users, and is expected to interact with
other software tools and to efficiently create models.
The advantages of software reuse:
 The systematic development of reusable components.
 The systematic reuse of these components as building blocks to create new systems.
 A reusable component may be code, but the bigger benefits of reuse come from a
broader and higher-level view of what can be reused. Software specifications,
designs, test cases, data, prototypes, plans, documentation, frameworks, and
templates are all candidates for reuse.
 Software reuse can cut software development time and costs. The major advantages
for software reuse are to:
 Increase software productivity.
 Shorten software development time.
 Improve software system interoperability.
 Develop software with fewer people.
 Move personnel more easily from project to project.
 Reduce software development and maintenance costs.
 Produce more standardized software.
 Produce better quality software and provide a powerful competitive
advantage.
Problems with software reuse:
 Creating, maintaining, and using a component library: Populating a reusable
component library and ensuring the software developers can use this library can be
expensive. Development processes have to be adapted to ensure that the library is
used.
 Finding, understanding, and adapting reusable components: Software components
have to be discovered in a library, understood and, sometimes, adapted to work in a
new environment. Engineers must be reasonably confident of finding a component
in the library before they include a component search as part of their normal
development process.
 Increased maintenance costs: If the source code of a reused software system or
component is not available then maintenance costs may be higher because the
reused elements of the system may become increasingly incompatible with system
changes.
 Lack of tool support: Some software tools do not support development with reuse.
It may be difficult or impossible to integrate these tools with a component library
system. The software process assumed by these tools may not take reuse into
account. This is particularly true for tools that support embedded systems
engineering, less so for object-oriented development tools.
 Not-invented-here syndrome: Some software engineers prefer to rewrite
components because they believe they can improve on them. This is partly to do
with trust and partly to do with the fact that writing original software is seen as
more challenging than reusing other people's software.

4.8. Metrics to Assess and Improve Software Quality

According to Ian Sommerville (2011: 652), software quality management for software
systems has three principal concerns:
1. At the organizational level, quality management is concerned with establishing a
framework of organizational processes and standards that will lead to high quality
software. This means that the quality management team should take responsibility
for defining the software development processes to be used and standards that
should apply to the software and related documentation, including the system
requirements, design, and code.
2. At the project level, quality management involves the application of specific quality
processes, checking that these planned processes have been followed, and ensuring
that the project outputs are conformant with the standards that are applicable to
that project.
3. Quality management at the project level is also concerned with establishing a quality
plan for a project. The quality plan should set out the quality goals for the project
and define what processes and standards are to be used.
Quality assurance (QA) is the definition of processes and standards that should lead to high-
quality products, and the introduction of quality processes into the manufacturing process.
Quality assurance includes verification and validation and the processes of checking that
quality procedures have been properly applied.
Quality control is the application of these quality processes to weed out products that are
not of the required level of quality.
Quality management provides an independent check on the software development process.
The QA team should be independent from the development team so that it can take an
objective view of the software.
Quality planning is the process of developing a quality plan for a project. The quality plan
should set out the desired software qualities and describe how these are to be assessed.
Software quality attributes include: safety, security, reliability, resilience, robustness,
understandability, testability, adaptability, modularity, complexity, portability, usability,
reusability, efficiency, and learnability.


To determine quality, metrics or measurements must be applied to these attributes. Metrics
give you the ability to identify, resolve, and/or curtail risk issues before they surface.

Typical software measurements focus on:


 Quality: the degree to which a product or service meets previously defined standards.
 Size: size is a critical factor in determining cost, schedule, and effort. Poor size
estimation is one of the main reasons major software-intensive acquisition
programs ultimately fail. Source Lines of Code (SLOC) and Function Points are
common size metrics.
 Complexity: a measure of how easy or difficult a system is to build, use, and
maintain for its intended users.
A software metric is a characteristic of a software system, system documentation, or
development process that can be objectively measured. Examples of metrics include:
1. the size of a product in lines of code;
2. the Fog index (Gunning, 1962), which is a measure of the readability of a passage
of written text;
3. the number of reported faults in a delivered software product;
4. the number of person-days required to develop a system component.
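
As an illustration of the second example, here is a hedged sketch of the Gunning Fog index, computed as 0.4 × (average sentence length + percentage of words with three or more syllables). The vowel-group syllable counter is a crude assumption, so the result is only approximate.

import re

def syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

sample = "Software design is modelling. It moves from problem to solution."
print(round(fog_index(sample), 1))   # 14.0 for this sample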
Software metrics may be either control metrics or predictor metrics. As the names imply,
control metrics support process management, and predictor metrics help you predict
characteristics of the software. Control metrics are usually associated with software
processes; examples are the average effort and the time required to repair reported
defects. Predictor metrics are associated with the software itself and are sometimes
known as 'product metrics'; an example is the cyclomatic complexity of a module.

Predictor and control measurements (Sommerville)


Product metrics
Product metrics fall into two classes:
1. Dynamic metrics, which are collected by measurements made of a program in execution.
These metrics can be collected during system testing or after the system has gone into
use. An example might be the number of bug reports or the time taken to complete a
computation.
2. Static metrics, which are collected by measurements made of representations of
the system, such as the design, program, or documentation. Examples of static
metrics are the code size and the average length of identifiers used.

Fan-in/Fan-out: Fan-in is a measure of the number of functions or methods that call another function or method (say X). Fan-out is the number of functions that are called by function X. A high value for fan-in means that X is tightly coupled to the rest of the design and changes to X will have extensive knock-on effects. A high value for fan-out suggests that the overall complexity of X may be high because of the complexity of the control logic needed to coordinate the called components.
Length of code: This is a measure of the size of a program. Generally, the larger the size of the code of a component, the more complex and error-prone that component is likely to be. Length of code has been shown to be one of the most reliable metrics for predicting error-proneness in components.
Cyclomatic complexity: This is a measure of the control complexity of a program. This control complexity may be related to program understandability. (See the sketch after this table.)
Length of identifiers: This is a measure of the average length of identifiers (names for variables, classes, methods, etc.) in a program. The longer the identifiers, the more likely they are to be meaningful and hence the more understandable the program.
Depth of conditional nesting: This is a measure of the depth of nesting of if-statements in a program. Deeply nested if-statements are hard to understand and potentially error-prone.
Fog index: This is a measure of the average length of words and sentences in documents. The higher the value of a document's Fog index, the more difficult the document is to understand.

Static software product metrics
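
As a sketch of the cyclomatic complexity idea: McCabe defines it over the control-flow graph as V(G) = E - N + 2P (edges, nodes, connected components), and for a single structured routine this reduces to the number of decision points plus one. The keyword-counting approximation below is an assumption that suits only simple Python-like source.

import re

DECISIONS = r"\b(if|elif|for|while|and|or|case|except)\b"

def cyclomatic_complexity(source: str) -> int:
    # Decision points + 1, per the structured-program simplification.
    return len(re.findall(DECISIONS, source)) + 1

sample = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(sample))   # 3: two decisions + 1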
Software Component Analysis
Each system component can be analyzed separately using a range of metrics. The values of
these metrics may then be compared for different components and, perhaps, with historical
measurement data collected on previous projects. Anomalous measurements, which deviate
significantly from the norm, may imply that there are problems with the quality of these
components.
Chidamber and Kemerer's suite (sometimes called the CK suite) of six object-oriented
metrics:
Weighted methods per class (WMC): This is the number of methods in each class, weighted by the complexity of each method. Therefore, a simple method may have a complexity of 1, and a large and complex method a much higher value. The larger the value for this metric, the more complex the object class. Complex objects are more likely to be difficult to understand. They may not be logically cohesive, so they cannot be reused effectively as superclasses in an inheritance tree.
Depth of inheritance tree (DIT): This represents the number of discrete levels in the inheritance tree where subclasses inherit attributes and operations (methods) from superclasses. The deeper the inheritance tree, the more complex the design. Many object classes may have to be understood to understand the object classes at the leaves of the tree.
Number of children (NOC): This is a measure of the number of immediate subclasses of a class. It measures the breadth of a class hierarchy, whereas DIT measures its depth. A high value for NOC may indicate greater reuse. It may mean that more effort should be made in validating base classes because of the number of subclasses that depend on them.
Coupling between object classes (CBO): Classes are coupled when methods in one class use methods or instance variables defined in a different class. CBO is a measure of how much coupling exists. A high value for CBO means that classes are highly dependent, and therefore it is more likely that changing one class will affect other classes in the program.
Response for a class (RFC): RFC is a measure of the number of methods that could potentially be executed in response to a message received by an object of that class. Again, RFC is related to complexity. The higher the value for RFC, the more complex a class and hence the more likely it is that it will include errors.
Lack of cohesion in methods (LCOM): LCOM is calculated by considering pairs of methods in a class. LCOM is the difference between the number of method pairs without shared attributes and the number of method pairs with shared attributes. The value of this metric has been widely debated and it exists in several variations. It is not clear if it really adds any additional, useful information over and above that provided by other metrics.

The CK object-oriented metrics suite
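
Two of the CK metrics, DIT and NOC, can be sketched for Python classes with run-time introspection. The Shape hierarchy is an assumed example, and the MRO-based depth count below only suits single inheritance.

class Shape: pass
class Polygon(Shape): pass
class Triangle(Polygon): pass
class Circle(Shape): pass

def depth_of_inheritance(cls) -> int:
    """DIT: ancestor levels below the root of the tree (object excluded)."""
    return len(cls.__mro__) - 2

def number_of_children(cls) -> int:
    """NOC: immediate subclasses of the class."""
    return len(cls.__subclasses__())

print(depth_of_inheritance(Triangle))   # 2 (Triangle -> Polygon -> Shape)
print(number_of_children(Shape))        # 2 (Polygon and Circle)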


4.9. Use of CASE (Computer-Aided Software Engineering) Tools

CASE tools are a class of software that automate many of the activities involved in various
life cycle phases. For example, when establishing the functional requirements of a proposed
application, prototyping tools can be used to develop graphic models of application screens
to assist end users to visualize how an application will look after development.
Subsequently, system designers can use automated design tools to transform the
prototyped functional requirements into detailed design documents. Programmers can then
use automated code generators to convert the design documents into code. Automated
tools can be used collectively, or individually.
Existing CASE tools can be classified along 4 different dimensions:
1. Life-cycle support
2. Integration dimension
3. Construction dimension
4. Knowledge-based CASE dimension
Life-Cycle Based CASE Tools
This dimension classifies CASE Tools on the basis of the activities they support in the
information systems life cycle. They can be classified as Upper or Lower CASE tools.
 Upper CASE Tools support strategic planning and construction of concept-level
products and ignore the design aspect. They support traditional diagrammatic
languages such as ER diagrams, Data flow diagram, Structure charts, Decision Trees,
Decision tables, etc.
 Lower CASE Tools concentrate on the back end activities of the software life cycle,
such as physical design, debugging, construction, testing, component integration,
maintenance, reengineering and reverse engineering.
Integration dimension
Three main CASE Integration dimensions have been proposed:
1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)
Workbenches
Workbenches integrate several CASE tools into one application to support specific software-
process activities. Hence they achieve:
 a homogeneous and consistent interface (presentation integration).
 easy invocation of tools and tool chains (control integration).


 access to a common data set managed in a centralized way (data integration).


CASE workbenches can be further classified into the following 8 classes:
1. Business planning and modelling
2. Analysis and design
3. User-interface development
4. Programming
5. Verification and validation
6. Maintenance and reverse engineering
7. Configuration management
8. Project management

Environments
An environment is a collection of CASE tools and workbenches that supports the software
process. CASE environments are classified based on the focus/basis of integration:
1. Toolkits
2. Language-centred
3. Integrated
4. Fourth generation
5. Process-centred
Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating
different tools and workbenches. Typically, the support provided by a toolkit is limited to
programming, configuration management, and project management. The toolkit itself is an
environment extended from a basic set of operating system tools, for example, the Unix
Programmer's Work Bench and the VMS VAX Set. In addition, the loose integration of toolkits
requires users to activate tools by explicit invocation or simple control mechanisms. The
resulting files are unstructured and could be in different formats, so accessing a file
from different tools may require explicit file format conversion. However, since the only
constraint for adding a new component is the format of the files, toolkits can be easily and
incrementally extended.
Language-centred
The environment itself is written in the programming language for which it was developed,
thus enabling users to reuse, customize and extend the environment. Integration of code in
different languages is a major issue for language-centred environments. Lack of process and
data integration is also a problem. The strengths of these environments include good level of
presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of
language-centred environments.
Integrated
These environments achieve presentation integration by providing uniform, consistent, and
coherent tool and workbench interfaces. Data integration is achieved through
the repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environment are IBM AD/Cycle
and DEC Cohesion.
Fourth-generation
Fourth-generation environments were the first integrated environments. They are sets of
tools and workbenches supporting the development of a specific class of program: electronic
data processing and business-oriented applications. In general, they include programming


tools, simple configuration management tools, document handling facilities and, sometimes,
a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into
this category.
Process-centred
Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centred environment operates by interpreting a
process model created by specialized tools. They usually consist of tools handling two
functions:
 Process-model execution
 Process-model production
Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.
Making a case for and against CASE Tools
For:
- Helps standardization of notations and diagrams
- Helps communication between development team members
- Automatically checks the quality of the models
- Reduction of time and effort
- Enhances reuse of models or models' components
Against:
- Limitations in the flexibility of documentation
- May lead to restriction to the tool's capabilities
- Major danger: completeness and syntactic correctness does NOT mean compliance with requirements
- Costs associated with the use of the tool: purchase + training
- Staff resistance to CASE tools


4.10. Use of Valid Software Cost Estimation Models

Cost estimation models are necessary to determine the viability of software processes
and products, both in the short and long term. It is therefore necessary that valid and
effective cost estimation models be used to determine cost effectiveness.

Software Cost Estimation Accuracy versus Phase [BOEHM81]

Cost to fix a defect, by the time it was introduced versus the time it was detected:

Time Introduced     Requirements   Architecture   Construction   System Testing
Requirements        1x             3x             5-10x          10x
Architecture        -              1x             10x            15x
Construction        -              -              1x             10x

The magnitude of cost is relative to the phase (time) at which the defect is uncovered.

The following are some of the proven methods used to measure size of software:


Function Points versus Lines-of-Code

FUNCTION POINTS                                 SOURCE LINES-OF-CODE
Specification-based                             Analogy-based
Language independent                            Language dependent
User-oriented                                   Design-oriented
Variations a function of counting conventions   Variations a function of languages
Expandable to source lines-of-code              Convertible to function points
Most SLOC estimates count all executable instructions and data declarations but exclude
comments, blanks, and continuation lines. SLOC can be used to estimate size through
analogy— by comparing the new software’s functionality to similar functionality found in
other historic applications.

Estimation Models
1. The COCOMO (COnstructive COst MOdel)
 COCOMO is based on a physical measure (source lines of code)
 Estimations become more precise as development proceeds
 Estimation errors:
 Initial estimations can be wrong by a factor of 4x
 As the development process advances, estimations become more precise
(and the model takes into account more detailed parameters)

General Structure
OUTPUT = A * (size)^B * M
 OUTPUT can be effort or time
 The fundamental measure is code size (expressed in source lines of code)
 Code size has an exponential effect on effort and time (although the
exponent is very close to 1)
 Various adjustment factors (M) are used to make the model more precise
Combination of three models with different levels of detail and complexity:
- BASIC: quick estimation, early development stage
- INTERMEDIATE: more accurate, needs some product characteristics, more
mature development stage
- ADVANCED: most detailed, requires more information

– In all COCOMO models:


o 1 person-month = 152 work-hours
o SLOC is measured in DSI (delivered source instructions): only the code
delivered to the client counts (e.g. unit-testing code, conversion code,
utilities, ... do not count)
Types of Projects
COCOMO 81 distinguishes among three different types of projects:
– ORGANIC - small teams, familiar environment, well-understood applications,
simple non-functional requirements (EASY)


– SEMI-DETACHED - the project team may have a mixture of experience, the system
may have more significant non-functional constraints, and the
organization may have less familiarity with the application (HARDER)
– EMBEDDED - tight constraints, including local regulations and operational
procedures; unusual for the team to have deep application
experience (HARD)

1. Basic Model
PM = A_PM * (KSLOC)^B_PM
TDEV = A_TDEV * (PM)^B_TDEV
where
– KSLOC: thousands of delivered source lines of code
– M is equal to 1 (and therefore it does not appear in the formulae)

Application Example
Estimation of 50 KDSI for an organic project:
– PM = 2.4 * (50)^1.05 ~= 146 person-months
– TDEV = 2.5 * (146)^0.38 ~= 16 months
– Team = 146 / 16 ~= 9 people
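
The worked example above can be reproduced with a few lines of code. The following is a minimal Java sketch of the Basic COCOMO formulas using the standard organic-mode coefficients (2.4, 1.05, 2.5, 0.38):

// Minimal sketch of the Basic COCOMO 81 formulas for an organic project,
// reproducing the 50-KDSI worked example above.
public class BasicCocomo {
    public static void main(String[] args) {
        double kdsi = 50.0;                        // size in thousands of DSI
        double pm   = 2.4 * Math.pow(kdsi, 1.05);  // effort in person-months (~146)
        double tdev = 2.5 * Math.pow(pm, 0.38);    // schedule in months (~16.6)
        double team = pm / tdev;                   // average staffing (~9 people)
        System.out.printf("PM = %.0f, TDEV = %.1f months, team = %.0f people%n",
                pm, tdev, team);
    }
}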

(Figure: the effect of different project parameters.)


Intermediate COCOMO
It uses a finer-grained characterization, with attributes (effort
multipliers) that take into account:
o functional and non-functional requirements
o project attributes
 The effort multipliers are organized in 4 classes and 15 sub-items.
 The importance of each attribute is qualitatively rated on a
six-level scale from 1 (very low) to 6 (extra high)
 Each rating corresponds to a multiplier, in the range [0.70, 1.66]
(a multiplier < 1 implies reduced cost)

 All the values are multiplied together to modulate effort

PM_nominal = A_PM * (KSLOC)^B_PM
PM = PM_nominal * ∏(i=1..15) EM_i
TDEV = A_TDEV * (PM)^B_TDEV

Intermediate COCOMO Parameters


Attributes
– PRODUCT = RELY * DATA * CPLX
– COMPUTER = TIME * STOR * VIRT * TURN
– PERSONNEL = ACAP * AEXP * PCAP * VEXP * LEXP
– PROJECT = MODP * TOOL * SCED
o The impact of the parameters is between [0.09, 73.28]
o The PM (or the team) estimates the values of the parameters to
predict actual effort
Example:
o If the “required software reliability” is low, the predicted effort
is 0.88 of the one computed with the basic formula
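
As a sketch of how the multipliers modulate the nominal estimate, the following Java fragment assumes hypothetical ratings (RELY low = 0.88, as in the example above, and CPLX high = 1.15) with all other multipliers left nominal:

// Minimal sketch of Intermediate COCOMO: the nominal effort is modulated by
// the product of the effort multipliers (EAF). Ratings here are illustrative.
public class IntermediateCocomo {
    public static void main(String[] args) {
        double kdsi = 50.0;
        double pmNominal = 3.2 * Math.pow(kdsi, 1.05); // organic-mode intermediate coefficients
        double[] multipliers = {0.88, 1.15};           // RELY low = 0.88, CPLX high = 1.15
        double eaf = 1.0;                              // remaining 13 multipliers assumed nominal (1.0)
        for (double em : multipliers) eaf *= em;
        System.out.printf("EAF = %.3f, adjusted PM = %.0f%n", eaf, pmNominal * eaf);
    }
}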
COCOMO 81: Detailed Model
The detailed model:
– has more detailed multipliers for each development phase
– organizes the parameters hierarchically, to simplify the computation of
systems made of several modules
 Projects are organized in four phases:
 Requirements Planning and Product Design (PRD)
 Detailed Design (DD)
 Code and Unit Test (CUT)
 Integration Test (IT)
 EM are given and estimated per phase
 Phase data is then aggregated to get the total estimation

Maintenance Phase
- The COCOMO model can also be applied to predict effort during system
maintenance (system maintenance = small updates and repairs during the
operational life of a system)
- Most of the development parameters apply both to development and
maintenance (some do not: SCED, RELY, MODP)
- One essential input is an estimation of the ACT (annual change traffic)


Maintenance example
ACT = (%Added + %Modified) / 100
PM_maint = ACT * PM_nominal * EAF_maint
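
A minimal sketch of this maintenance estimate, using hypothetical change rates and the development estimate from the earlier example:

// Minimal sketch of the COCOMO maintenance estimate using annual change traffic.
public class MaintenanceEffort {
    public static void main(String[] args) {
        double pctAdded = 10, pctModified = 5;         // hypothetical yearly change rates
        double act = (pctAdded + pctModified) / 100.0; // ACT = (%Added + %Modified) / 100
        double pmNominal = 146;                        // development estimate from the example above
        double eafMaint = 1.0;                         // maintenance effort adjustment factor
        double pmMaint = act * pmNominal * eafMaint;   // annual maintenance person-months
        System.out.printf("Annual maintenance effort: %.1f PM%n", pmMaint);
    }
}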

Putnam's model and SLIM


Putnam derives his model from the Norden/Rayleigh manpower distribution and his
findings from analyzing many completed projects. The central part of Putnam's model is
called the software equation:

S = E * (Effort)^(1/3) * td^(4/3)

where td is the software delivery time and E is an environment factor that reflects the
development capability, which can be derived from historical data using the software
equation. The size S is in LOC and the Effort is in person-years. Another important relation
found by Putnam is

Effort = D0 * td^3

where D0 is a parameter called manpower build-up, which ranges from 8 (entirely new
software with many interfaces) to 27 (rebuilt software). Combining this equation
with the software equation, we obtain the power function form:

Effort = D0^(4/7) * (S/E)^(9/7)

Putnam's model is also widely used in practice, and SLIM is a software tool based on this
model for cost estimation and manpower scheduling.
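
Solving the two equations for td and Effort gives the closed forms used by SLIM-style tools. The following Java sketch uses hypothetical values for S, E and D0:

// Minimal sketch of Putnam's model: combining S = E * Effort^(1/3) * td^(4/3)
// with Effort = D0 * td^3 gives td = (S^3 / (E^3 * D0))^(1/7).
public class PutnamSlim {
    public static void main(String[] args) {
        double s  = 100_000;  // size in LOC (hypothetical)
        double e  = 5_000;    // environment factor from historical data (hypothetical)
        double d0 = 15;       // manpower build-up parameter
        double td = Math.pow(Math.pow(s, 3) / (Math.pow(e, 3) * d0), 1.0 / 7.0);
        double effort = d0 * Math.pow(td, 3);          // effort in person-years
        System.out.printf("td = %.2f years, effort = %.1f person-years%n", td, effort);
    }
}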
Function Points:
This is a measurement based on the functionality of the program and was first introduced
by Albrecht. The total number of function points depends on the counts of distinct (in
terms of format or processing logic) types in the following five classes:
1. User-input types: data or control user-input types
2. User-output types: output data types to the user that leaves the system
3. Inquiry types: interactive inputs requiring a response
4. Internal file types: files (logical groups of information) that are used and shared
inside the system
5. External file types: files that are passed or shared between the system and other
systems

Each of these types is individually assigned one of three complexity levels of {1 = simple,
2 = medium, 3 = complex} and given a weighting value that varies from 3 (for simple
input) to 15 (for complex internal files).


The unadjusted function-point count (UFC) is given as:

UFC = Σ (i = 1..5) Σ (j = 1..3) Nij * Wij

where Nij and Wij are respectively the number and weight of types of class i with
complexity j.
For example, if the raw function-point counts of a project are 2 simple inputs (Wij = 3), 2
complex outputs (Wij = 7) and 1 complex internal file (Wij = 15),
then UFC = 2*3 + 2*7 + 1*15 = 35.

This initial function-point count is either directly used for cost estimation or is further
modified by factors whose values depend on the overall complexity of the project. This
will take into account the degree of distributed processing, the amount of reuse, the
performance requirement, etc. The final function-point count is the product of the UFC
and these project complexity factors. The advantage of the function-point measurement
is that it can be obtained based on the system requirement specification in the early
stage of software development.
The UFC is also used for code-size estimation using the following linear formula:
LOC = a * UFC + b
The parameters a, b can be obtained using linear regression and previously completed
project data.
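
A minimal Java sketch of the UFC computation and the linear LOC estimate follows; the weight matrix holds the standard Albrecht weights, while a and b are hypothetical regression parameters:

// Minimal sketch: unadjusted function-point count plus a regression-based
// LOC estimate.
public class FunctionPointCount {
    static int ufc(int[][] counts, int[][] weights) {
        int total = 0;
        for (int i = 0; i < counts.length; i++)         // 5 classes
            for (int j = 0; j < counts[i].length; j++)  // 3 complexity levels
                total += counts[i][j] * weights[i][j];
        return total;
    }

    public static void main(String[] args) {
        // Rows: inputs, outputs, inquiries, internal files, external files.
        // Columns: simple, medium, complex (standard Albrecht weights).
        int[][] weights = {{3, 4, 6}, {4, 5, 7}, {3, 4, 6}, {7, 10, 15}, {5, 7, 10}};
        int[][] counts  = {{2, 0, 0}, {0, 0, 2}, {0, 0, 0}, {0, 0, 1}, {0, 0, 0}};
        int points = ufc(counts, weights);              // 2*3 + 2*7 + 1*15 = 35
        double a = 60, b = 100;                         // hypothetical: LOC = a * UFC + b
        System.out.printf("UFC = %d, estimated LOC = %.0f%n", points, a * points + b);
    }
}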
Extensions of function point: Feature point extends the function points to include
algorithms as a new class. An algorithm is defined as the set of rules which must be
completely expressed to solve a significant computational problem. For example, a
square root routine can be considered as an algorithm. Each algorithm used is given a
weight ranging from 1 (elementary) to 10 (sophisticated algorithms) and the feature point
is the weighted sum of the algorithms plus the function points. This measurement is
especially useful for systems with few input/output and high algorithmic complexity, such
as mathematical software, discrete simulations, and military applications.
Another extension of function points is full function point (FFP) for measuring real-time
applications, by also taking into consideration the control aspect of such applications. FFP
introduces two new control data function types and four new control transactional function types.

4.11. Systematic Use of Reviews and Inspections

The following are some of the methods used to conduct systematic design reviews and
inspections.
Software design reviews are a systematic, comprehensive, and well-documented inspection
of design that aims to check whether the specified design requirements are adequate and
the design meets all the specified requirements. In addition, they also help in identifying the
problems (if any) in the design process.
IEEE defines software design review as 'a formal meeting at which a system's preliminary or
detailed design is presented to the user, customer, or other interested parties for comment
and approval.'


These reviews are held at the end of the design phase to resolve issues (if any) related to
software design decisions, that is, architectural design and detailed design (component-level
and interface design) of the entire software or a part of it (such as a database).
Types of Software Design Reviews
Generally, the review process is carried out in three steps, which correspond to the steps
involved in the software design process. First, a preliminary design review is conducted with
the customers and users to ensure that the conceptual design (which gives the user an idea
of what the system will look like) satisfies their requirements. Next, a critical design
review is conducted with analysts and other developers to check the technical design (which
is used by the developers to specify how the system will work) in order to critically evaluate
its technical merits. Finally, a program design review is conducted with the
programmers in order to get feedback before the design is implemented.

Preliminary Design Review


During preliminary design review, the high-level architectural design is reviewed to
determine whether the design meets all the stated requirements as well as the non-
functional requirements. This review is conducted to serve the following purposes.
1. To ensure that the software requirements are reflected in the software
architecture
2. To specify whether effective modularity is achieved
3. To define interfaces for modules and external system elements
4. To ensure that the data structure is consistent with the information domain
5. To ensure that maintainability has been considered
6. To assess the quality factors.
In this review, it is verified that the proposed design includes the required hardware and
interfaces with the other parts of the computer-based system. This review team
comprises the following individuals:
1. Customers: Responsible for defining the software's requirements.
2. Moderator: Presides over the review. The moderator encourages discussions,
maintains the main objective throughout the review, settles disputes and gives
unbiased observations. In short, he is responsible for the smooth functioning of
the review.
3. Secretary: A silent observer who does not take part in the review process but
records the main points of the review.
4. System designers: Includes people involved in designing not only the software
but also the entire computer-based system.
5. Other stakeholders (developers) not involved in the project: Provide an
outsider's idea on the proposed design. This is beneficial as they can infuse 'fresh
ideas', address issues of correctness, consistency, and good design practice.
If errors are noted in the review process then the faults are assessed on the basis of their
severity. That is, if there is a minor fault it is resolved by the review team. However, if
there is a major fault, the review team may agree to revise the proposed conceptual
design. Note that preliminary design review is again conducted to assess the
effectiveness of the revised (new) design.
Critical Design Review


Once the preliminary design review is successfully completed and the customer(s) is
satisfied with the proposed design, a critical design review is conducted. This review is
conducted to serve the following purposes.
1. To assure that there are no defects in the technical and conceptual designs
2. To verify that the design being reviewed satisfies the design requirements
established in the architectural design specifications
3. To assess the functionality and maturity of the design critically
4. To justify the design to the outsiders so that the technical design is more clear,
effective and easy to understand
In this review, diagrams and data (sometimes both) are used to evaluate alternative
design strategies and how and why the major design decisions have been taken. In
addition to the team members involved in the preliminary design review, the review
team comprises the following individuals.
1. System tester: Understands the technical issues of the design and compares them
with the design created for similar projects.
2. Analyst: Responsible for writing system documentation.
3. Program designer for this project: Understands the design in order to derive
detailed program designs.
Similar to a preliminary design review, if discrepancies are noted in the critical design
review process the faults are assessed on the basis of their severity. A minor fault is
resolved by the review team. If there is a major fault, the review team may agree to
revise the proposed technical design. Note that a critical design review is conducted again
to assess the effectiveness of the revised (new) design.
Note: The critical design review team does not involve customers.
Program Design Review
Once the critical design review is successfully completed, a program design review is
conducted to obtain feedback before the implementation (coding) of the design. This
review is conducted to serve the following purposes.
1. To assure the feasibility of the detailed design
2. To assure that the interface is consistent with the architectural design
3. To specify whether the design is compatible with the implementation language
4. To ensure that structured programming constructs are used throughout
5. To ensure that the implementation team is able to understand the proposed
design.
A review team comprising system designers, a system tester, moderator, secretary and
analyst is formed to conduct the program design review. The review team also includes
program designers and developers. The program designers, after completing the program
designs, present their plans to a team of other designers, analysts and programmers for
comments and suggestions. Note that a successful program design review presents
considerations relating to coding plans before coding begins.

4.12. Software Design Review Process

Design reviews are considered important as in these reviews the product is logically
viewed as the collection of various entities/components and use-cases. These reviews are
conducted at all software design levels and cover all parts of the software units.
Generally, the review process comprises three criteria, as listed below.
Entry criteria: Software design is ready for review.


Activities: This criterion involves the following steps.


1. Select the members for the software design review team, assign them their roles,
and prepare schedules for the review.
2. Distribute the software design review package to all the reviewing participants.
3. Participants check the completeness and conformance of the design to the
requirements in addition to the efficiency of the design. They also check the
software for defects and, if defects are found, they discuss those defects with one
another. The recorder documents the defects along with the suggested action
items and recommendations.
4. The design team rectifies the defects (if any) in design and makes the required
changes in the appropriate design review material.
5. The software development manager obtains the approval of the software design
from the software project manager.
 Exit criteria: The software design is approved.

Evaluating Software Design Reviews


The software design review process is beneficial for everyone as faults can be
detected at an early stage, thereby reducing the cost of detecting errors and reducing the
likelihood of missing a critical issue. Every review team member examines the integrity of
the design and not the persons involved in it (that is, the designers), which in turn
emphasizes that the common objective of developing a highly rated design is achieved.
To check the effectiveness of the design, the review team members should address the
following questions.
1. Is the solution achieved with the developed design?
2. Is the design reusable?
3. Is the design well structured and easy to understand?
4. Is the design compatible with other platforms?
5. Is it easy to modify or enlarge the design?
6. Is the design well documented?
7. Does the design use suitable techniques in order to handle faults and
prevent failures?
8. Does the design reuse components from other projects, wherever necessary?

In addition to these questions, if the proposed system is developed using a phased
development approach (like the waterfall and incremental models), then the phases should
be interfaced sufficiently so that an easy transition can take place from one phase to the
other.

Code Review
1. Code review is systematic examination (often as peer review) of computer source
code.
2. Pair programming is a type of code review where two persons develop code
together at the same workstation.
3. Inspection is a very formal type of peer review where the reviewers are following
a well-defined process to find defects.
4. Walkthrough is a form of peer review where the author leads members of the
development team and other interested parties through a software product and
the participants ask questions and make comments about defects.


5. Technical review is a form of peer review in which a team of qualified personnel
examines the suitability of the software product for its intended use and identifies
discrepancies from specifications and standards.

4.13. Outline Procedural Oriented Programs Design Principles

Procedural programming is the most natural way of telling a computer what to do, as the
computer processor's own language, machine code, is procedural. It is also referred to as
structured or modular programming. Procedural programming is performed by telling the
computer what to do and how to do it through a list of step-by-step instructions. Procedural
programming therefore involves procedures, which implies that there are steps that need
to be followed to complete a specific task.
Characteristics of Procedural oriented programming:-
- It focuses on process rather than data.
- It takes a problem as a sequence of things to be done such as reading,
calculating and printing. Hence, a number of functions are written to solve a
problem.
- A program is divided into a number of functions and each function has clearly
defined purpose.
- Most of the functions share global data.
- Data moves openly around the system from function to function.
Well-designed procedural programs have certain characteristics. You will design better
procedural programs if you employ the following principles when factoring a structure chart
into methods.
Macro Procedural Design Principles: these apply to the structure chart as a whole
1. Keep the structure chart reasonably balanced
2. Place “managers” above “workers”
3. Employ information hiding (use “need to know” principle)
4. Minimize couplings
Micro Procedural Design Principles: which apply to each method within the structure chart
1. Make each as portable as possible
2. Make each as strong as possible
3. Make each as weakly coupled as possible
4. Make each have single area of responsibility


4.14. Apply Object Oriented Design Principles

In the world of object-oriented programming (OOP), there are many design guidelines,
patterns or principles. Five of these principles are usually grouped together and are known
by the acronym SOLID. While each of these five principles describes something specific, they
overlap as well such that adopting one of them implies or leads to adopting another.
Robert Martin, who's credited with writing down the SOLID principles, points out some
symptoms of rotting design due to improperly managed dependencies across modules:
 Rigidity: Implementing even a small change is difficult since it's likely to translate
into a cascade of changes.
 Fragility: Any change tends to break the software in many places, even in areas
not conceptually related to the change.
 Immobility: We're unable to reuse modules from other projects or within the
same project because those modules have lots of dependencies.
 Viscosity: When code changes are needed, developers will prefer the easier
route even if it breaks the existing design.

Antipatterns and improper understanding of design principles can lead to STUPID code:
 Singleton
 Tight Coupling
 Untestability
 Premature Optimization
 Indescriptive Naming
 and Duplication.
SOLID can help developers stay clear of these.
The essence of SOLID is managing dependencies. This is done via interfaces and abstractions.
Modules and classes should not be tightly coupled.

SOLID Principles of OOP


1. Single Responsibility Principle (SRP): The Single Responsibility Principle
represents the "S" in the SOLID acronym. As per SRP, there should not be more
than one reason for a class to change; that is, a class should always handle a
single functionality. The key benefit of this principle is that it reduces
coupling between the individual components of the software. For example, a
Book class has properties to store its own name, author and text, but the task
of printing the book must belong to a BookPrinter class (see the sketch after
this list).
2. Open Closed Design Principle (OCP): According to this OOP design principle, "Classes,
methods or functions should be Open for extension (new functionality) and Closed
for modification". The key benefit of this design principle is that already tried and
tested code is not touched, which means it won't break. Functions or base class
methods should not get polluted with the details of subclasses. For example, a Car
class might have methods AccelerateAudi, AccelerateBMW, and so on. This is a
violation of OCP. Instead, we should have a Car interface with a method Accelerate,
and each car subclass can implement this interface.
3. Liskov Substitution Principle (LSP): According to the Liskov Substitution Principle,
subtypes must be substitutable for their supertypes; that is, methods or functions
which use a superclass type must be able to work with objects of any subclass
without any issue. LSP is closely related to the Single Responsibility Principle and
the Interface Segregation Principle. If a superclass has more functionality than a
subclass can support, the subclass violates LSP.
4. Interface Segregation Principle (ISP): The Interface Segregation Principle states
that a client should not be forced to implement an interface if it doesn't use it.
This happens mostly when one interface contains more than one functionality, and the
client only needs one functionality and no other.
5. Dependency Injection or Inversion Principle (DIP): Don't ask for a dependency; it
will be provided to you by the framework. This has been very well implemented in the
Spring framework, one of the most popular Java frameworks for writing real-world
applications. The beauty of this design principle is that any class which is injected
by a DI framework is easy to test with mock objects and easier to maintain, because
object creation code is centralized in the framework and client code is not
littered with it.
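
The following minimal Java sketch illustrates the SRP and OCP examples mentioned above (class and method names are merely illustrative):

// SRP: Book only stores its own data; printing is a separate responsibility.
class Book {
    private final String name;
    private final String author;
    private final String text;

    Book(String name, String author, String text) {
        this.name = name;
        this.author = author;
        this.text = text;
    }

    String getText() { return text; }
}

class BookPrinter {
    void print(Book book) { System.out.println(book.getText()); }
}

// OCP: new car types extend the abstraction; existing code is never modified.
interface Car {
    void accelerate();
}

class Audi implements Car {
    public void accelerate() { System.out.println("Audi accelerating"); }
}

class Bmw implements Car {
    public void accelerate() { System.out.println("BMW accelerating"); }
}

Adding a new car brand now means adding a new class that implements Car, leaving Book, BookPrinter and the existing Car implementations untouched.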

4.15. Implement Application Architectures

The purpose of system architecture activities is to define a comprehensive solution based on
principles, concepts, and properties logically related and consistent with each other. The
solution architecture has features, properties, and characteristics satisfying, as far as
possible, the problem or opportunity expressed by a set of system requirements (traceable
to mission/business and stakeholder requirements) and life cycle concepts (e.g., operational,
support) and are implementable through technologies (e.g., mechanics, electronics,
hydraulics, software, services, procedures, human activity).
System Architecture is abstract, conceptualization-oriented, global, and focused to achieve
the mission and life cycle concepts of the system. It also focuses on high‐level structure in
systems and system elements. It addresses the architectural principles, concepts, properties,
and characteristics of the system-of-interest. It may also be applied to more than one
system, in some cases forming the common structure, pattern, and set of requirements for
classes or families of similar or related systems.

Layered (n-tier) architecture

This approach is probably the most common because it is usually built around the database,
and many applications in business naturally lend
themselves to storing information in tables.

This is something of a self-fulfilling prophecy.


Many of the biggest and best software
frameworks—like Java EE, Drupal, and Express—
were built with this structure in mind, so many of
the applications built with them naturally come
out in a layered architecture.


The code is arranged so the data enters the top layer and works its way down each layer
until it reaches the bottom, which is usually a database. Along the way, each layer has a
specific task, like checking the data for consistency or reformatting the values to keep them
consistent. It’s common for different programmers to work independently on different
layers.

The Model-View-Controller (MVC) structure, which is the standard software development
approach offered by most of the popular web frameworks, is clearly a layered architecture.
Just above the database is the model layer, which often contains business logic and
information about the types of data in the database. At the top is the view layer, which is
often CSS, JavaScript, and HTML with dynamic embedded code. In the middle, you have the
controller, which has various rules and methods for transforming the data moving between
the view and the model.
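
As an illustration, a minimal Java sketch of such a layered call chain (all names hypothetical) might look like this, with each layer talking only to the layer directly below it:

class OrderRepository {                      // data layer
    String findOrder(int id) { return "order-" + id; }
}

class OrderController {                      // business/controller layer
    private final OrderRepository repository = new OrderRepository();

    String getOrderForDisplay(int id) {
        // business rule: reformat the value before it reaches the view
        return repository.findOrder(id).toUpperCase();
    }
}

class OrderView {                            // presentation layer
    private final OrderController controller = new OrderController();

    void render(int id) { System.out.println(controller.getOrderForDisplay(id)); }
}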

The advantage of a layered architecture is the separation of concerns, which means that
each layer can focus solely on its role. This makes it:

 Maintainable
 Testable
 Easy to assign separate "roles"
 Easy to update and enhance layers separately
Proper layered architectures will have isolated layers that aren’t affected by certain changes
in other layers, allowing for easier refactoring. This architecture can also contain additional
open layers, like a service layer, that can be used to access shared services only in the
business layer but also get bypassed for speed.
Slicing up the tasks and defining separate layers is the biggest challenge for the architect.
When the requirements fit the pattern well, the layers will be easy to separate and assign to
different programmers.
Caveats:
 Source code can turn into a “big ball of mud” if it is unorganized and the modules
don’t have clear roles or relationships.
 Code can end up slow thanks to what some developers call the “sinkhole anti-
pattern.” Much of the code can be devoted to passing data through layers without
using any logic.
 Layer isolation, which is an important goal for the architecture, can also make it hard
to understand the architecture without understanding every module.
 Coders can skip past layers to create tight coupling and produce a logical mess full of
complex interdependencies.
 Monolithic deployment is often unavoidable, which means small changes can
require a complete redeployment of the application.
Best for:
 New applications that need to be built quickly


 Enterprise or business applications that need to mirror traditional IT departments and processes
 Teams with inexperienced developers who don’t understand other architectures
yet
 Applications requiring strict maintainability and testability standards

Event-driven architecture
Many programs spend most of their time waiting for something to happen. This is especially
true for computers that work directly with humans, but it’s also common in areas like
networks. Sometimes there’s data that needs processing, and other times there isn’t.
The event-driven architecture helps manage this by building a central unit that accepts all
data and then delegates it to the separate modules that handle the particular type. This
handoff is said to generate an “event,” and it is delegated to the code assigned to that type.
Programming a web page with JavaScript involves writing the small modules that react to
events like mouse clicks or keystrokes. The browser itself orchestrates all of the input and
makes sure that only the right code sees the right events. Many different types of events are
common in the browser, but the modules interact only with the events that concern them.
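
A minimal Java sketch of the central unit delegating events to subscribed modules (a toy event bus, not any particular framework's API) could look like this:

import java.util.*;
import java.util.function.Consumer;

// Toy event bus: the central unit accepts all events and delegates each one
// only to the modules that subscribed to that event type.
public class EventBus {
    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    void subscribe(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void publish(String eventType, String payload) {
        handlers.getOrDefault(eventType, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe("click", p -> System.out.println("click module got: " + p));
        bus.subscribe("keypress", p -> System.out.println("key module got: " + p));
        bus.publish("click", "button-42");   // only the click module reacts
    }
}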
This is very different from the layered architecture where all data will typically pass through
all layers. Overall, event-driven architectures:
 Are easily adaptable to complex, often chaotic environments
 Scale easily
 Are easily extendable when new event types appear
Caveats:
 Testing can be complex if the modules can affect each other. While individual
modules can be tested independently, the interactions between them can only be
tested in a fully functioning system.
 Error handling can be difficult to structure, especially when several modules must
handle the same events.
 When modules fail, the central unit must have a backup plan.
 Messaging overhead can slow down processing speed, especially when the central
unit must buffer messages that arrive in bursts.
 Developing a system-wide data structure for events can be complex when the
events have very different needs.
 Maintaining a transaction-based mechanism for consistency is difficult because the
modules are so decoupled and independent.
Best for:
 Asynchronous systems with asynchronous data flow
 Applications where the individual data blocks interact with only a few of the many
modules
 User interfaces
Microkernel architecture
Many applications have a core set of operations that are used again and again in different
patterns that depend upon the data and the task at hand. The popular development tool
Eclipse, for instance, will open files, annotate them, edit them, and start up background
processors. The tool is famous for doing all of these jobs with Java code and then, when a
button is pushed, compiling the code and running it.

Compiled by C. Uta [email protected]


ND Software Engineering
Chapter 4 - Software Design

In this case, the basic routines for displaying a file and editing it are part of the microkernel.
The Java compiler is just an extra part that’s bolted on to support the basic features in the
microkernel. Other programmers have extended Eclipse to develop code for other languages
with other compilers. Many don’t even use the Java compiler, but they all use the same
basic routines for editing and annotating files.
The extra features that are layered on top are often called plug-ins. Many call this extensible
approach a plug-in architecture instead.
Richards likes to explain this with an example from the insurance business: “Claims
processing is necessarily complex, but the actual steps are not. What makes it complex are
all of the rules.”
The solution is to push some basic tasks—like asking for a name or checking on payment—
into the microkernel. The different business units can then write plug-ins for the different
types of claims by knitting together the rules with calls to the basic functions in the kernel.
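
A minimal Java sketch of this plug-in structure, loosely modelled on the claims example (all names hypothetical):

import java.util.*;

// The kernel exposes a small set of basic routines; plug-ins register with it
// and combine those routines with their own rules.
interface ClaimPlugin {
    boolean supports(String claimType);
    void process(Kernel kernel, String claim);
}

class Kernel {
    private final List<ClaimPlugin> plugins = new ArrayList<>();

    void register(ClaimPlugin plugin) { plugins.add(plugin); }   // handshaking step

    void checkPayment(String claim) {                            // basic kernel routine
        System.out.println("payment checked for " + claim);
    }

    void handle(String claimType, String claim) {
        for (ClaimPlugin p : plugins)
            if (p.supports(claimType)) p.process(this, claim);
    }
}

class AutoClaimPlugin implements ClaimPlugin {
    public boolean supports(String claimType) { return claimType.equals("auto"); }

    public void process(Kernel kernel, String claim) {
        kernel.checkPayment(claim);           // reuse the kernel's basic routine
        System.out.println("auto-specific rules applied to " + claim);
    }
}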
Caveats:
 Deciding what belongs in the microkernel is often an art. It ought to hold the code
that’s used frequently.
 The plug-ins must include a fair amount of handshaking code so the microkernel is
aware that the plug-in is installed and ready to work.
 Modifying the microkernel can be very difficult or even impossible once a number of
plug-ins depend upon it. The only solution is to modify the plug-ins too.
 Choosing the right granularity for the kernel functions is difficult to do in advance
but almost impossible to change later in the game.

Best for:
 Tools used by a wide variety of people
 Applications with a clear division between basic routines and higher order rules
 Applications with a fixed set of core routines and a dynamic set of rules that must be
updated frequently
Microservices Architecture
Software can be like a baby elephant: It is cute and fun when it’s little, but once it
gets big, it is difficult to steer and resistant to change. The microservice architecture
is designed to help developers avoid letting their babies grow up to be unwieldy,
monolithic, and inflexible. Instead of building one big program, the goal is to create a
number of different tiny programs and then create a new little program every time
someone wants to add a new feature.

4.16. Information Systems – Transaction Processing Applications

When you purchase a book from an online bookstore, you exchange money (in the form of
credit) for a book. If your credit is good, a series of related operations ensures that you get
the book and the bookstore gets your money. However, if a single operation in the series
fails during the exchange, the entire exchange fails. You do not get the book and the
bookstore does not get your money.
The technology responsible for making the exchange balanced and predictable is called
transaction processing. Transactions ensure that data-oriented resources are not


permanently updated unless all operations within the transactional unit complete
successfully. By combining a set of related operations into a unit that either completely
succeeds or completely fails, you can simplify error recovery and make your application
more reliable.
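
A minimal Java/JDBC sketch of such an all-or-nothing exchange follows; it assumes an in-memory H2 database with pre-existing accounts and inventory tables (all names hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// All-or-nothing exchange: either both updates commit, or neither takes effect.
public class BookPurchase {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:shop")) {
            con.setAutoCommit(false);                    // start a transactional unit
            try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE accounts SET balance = balance - ? WHERE id = ?");
                 PreparedStatement ship = con.prepareStatement(
                         "UPDATE inventory SET stock = stock - 1 WHERE book_id = ?")) {
                debit.setBigDecimal(1, new java.math.BigDecimal("19.99"));
                debit.setInt(2, 42);
                debit.executeUpdate();
                ship.setInt(1, 7);
                ship.executeUpdate();
                con.commit();                            // both operations succeed together
            } catch (SQLException e) {
                con.rollback();                          // ...or the whole exchange is undone
                throw e;
            }
        }
    }
}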
Transaction processing systems consist of computer hardware and software hosting a
transaction-oriented application that performs the routine transactions necessary to
conduct business. Examples include systems that manage sales order entry, airline
reservations, payroll, employee records, manufacturing, and shipping.
Transaction processing is a style of computing, typically performed by large server
computers, that supports interactive applications. In transaction processing, work is divided
into individual, indivisible operations, called transactions. By contrast, batch processing is a
style of computing in which one or more programs processes a series of records (a batch)
with little or no action from the user or operator.
A transaction processing system allows application programmers to concentrate on writing
code that supports the business, by shielding application programs from the details of
transaction management:
1. It manages the concurrent processing of transactions.
2. It enables the sharing of data.
3. It ensures the integrity of data.
4. It manages the prioritization of transaction execution.
Overall transaction processing, also known as data processing, reflects the principal business
activities of a firm.
The principal transaction processing subsystems in a firm are those supporting:
1. Sales
2. Production
3. Inventory
4. Purchasing, Shipping, Receiving
5. Accounts payable & receivable
6. Billing
7. Payroll
8. General ledger
Systems design of TPS
To build an OLTP system, a designer must ensure that the large number of concurrent users
does not interfere with the system's performance. To increase the performance of an OLTP
system, a designer must avoid excessive use of indexes and clusters. The following elements
are crucial for the performance of OLTP systems:
1. Rollback segments: Rollback segments are the portions of the database that record the
actions of transactions so that a transaction can be rolled back. Rollback
segments support read consistency, transaction rollback, and recovery of the
database.
2. Clusters: A cluster is a schema that contains one or more tables that have one or
more columns in common. Clustering tables in a database improves the
performance of join operations.
3. Discrete transactions: A discrete transaction defers all change to the data until the
transaction is committed. It can improve the performance of short, non-distributed
transactions.
4. Block size: The data block size should be a multiple of the operating system's block
size within the maximum limit to avoid unnecessary I/O.


5. Buffer cache size: SQL statements should be tuned to use the database buffer cache
to avoid unnecessary resource consumption.
6. Dynamic allocation of space to tables and rollback segments
7. Transaction processing monitors and the multi-threaded server: A transaction
processing monitor is used for coordination of services. It is like an operating system
and does the coordination at a high level of granularity and can span multiple
computing devices.
8. Partition (database): Partition use increases performance for sites that have regular
transactions while still maintaining availability and security.
9. Database tuning: With database tuning, an OLTP system can maximize its
performance as efficiently and rapidly as possible.
The following features are desirable in a database system used in transaction processing
systems:
1. Good data placement: The database should be designed to match the access patterns of
many simultaneous users.
2. Short transactions: Short transactions enable quick processing and avoid concurrency
bottlenecks.
3. Real-time backup: Backups should be scheduled during periods of low activity to prevent
lag on the server.
4. High normalization: This lowers redundant information to increase the speed and improve
concurrency; it also improves backups.
5. Archiving of historical data: Uncommonly used data is moved into other databases or
backed-up tables. This keeps tables small and also improves backup times.
6. Good hardware configuration: Hardware must be able to handle many users and provide
quick response times.

4.17. Language Processing Systems

Natural Language Processing (NLP) is a subfield of artificial intelligence that helps computers
understand human language. Using NLP, machines can make sense of unstructured online
data so that we can gain valuable insights. NLP helps computers read and respond by
simulating the human ability to understand the everyday language that people use to
communicate.
Today, we can ask Siri or Google or Cortana to help us with simple questions or tasks, but
much of their actual potential is still untapped. The reason why involves language.
This is where natural language processing (NLP) comes into play in artificial intelligence
applications. Without NLP, an artificial intelligence can at best answer simple, literal
questions; it is not able to understand the meaning of words in context. Natural language
processing applications allow users to communicate with a computer in their own words,
i.e. in natural language.
Examples
1. Facebook announced its M service that promises to become your personal assistant
(with the public launch date tbd): “M can do anything a human can.” When you request
something that M can’t do on its own, it sends a message to a Facebook worker and, as
they work with the software, the AI begins to learn.
2. Another interesting application of natural language processing is Skype Translator,
which offers on-the-fly translation to interpret live speech in real time across a number
of languages. Skype Translator uses AI to help facilitate conversation among people who
speak different languages.
3. Customer Review: Natural language processing in artificial intelligence applications
makes it easy to gather product reviews from a website and understand what


consumers are actually saying, as well as their sentiment in reference to a specific
product. Companies with a large volume of reviews can actually understand them and
use the data collected to recommend new products or services based on customer
preferences. This application helps companies discover relevant information for their
business, improve customer satisfaction, suggest more relevant products or services,
and better understand the customer's needs.
4. Virtual digital assistants: Thanks to smartphones, virtual digital assistant (VDA)
technologies (automated software applications or platforms that assist the human user
by understanding natural language) are currently the most well-known type of artificial
intelligence. VDAs are able to assist consumers with transaction activities or
optimize call centre operations to offer a better customer experience and reduce
operational costs.



4.18. Chapter 4: Questions

1. Software Design is about modelling a solution to a problem. Comment. [6]


2. Software design is both an art and a science. Argue on this proposition. [10]
3. Describe how each of the following factors impact the software design process:
a. Cost [3]
b. Suitability / Applicability [3]
c. Complexity / Ease of use [3]
d. Time constraints [3]
e. Type of system to be developed. [3]
4. Discuss the differences between the Waterfall Model and the Spiral Model. [10]
5. Explain three benefits that may arise when an Incremental Model is used to
develop software. [6]
6. The choice of Design Models is the responsibility of the software developer only.
Illustrate the consequences of adopting this approach.
7. When using the UML modelling language to develop software, there are four (4)
distinct phases that are followed. Explain the activities carried out and the
deliverables produced at each of the four phases. [12]
8. In making Architectural Design Choices, the following questions may be asked:
a. Is there a generic application architecture that can be used?
b. How will the system be distributed?
c. What architectural styles are appropriate?
d. What approach will be used to structure the system?
e. How will the system be decomposed into modules?
f. What control strategy should be used?
g. How will the architectural design be evaluated?
h. How should the architecture be documented?
Provide relevant answers to the questions above in your capacity as the team
leader in a software development project. [24]
9. An Architecture decision is documented. Explain any five elements contained in the
Architecture decision documentation [10]
10. Explain the 4+1 view model of software architecture [5]
11. Explain the aspects falling under each of the views below:
a. The User View
i. Use Case Diagram. [3]
ii. Business Use Case Diagram. [3]
b. The Structural View
i. Class Diagram. [3]
c. Behaviour View
i. The Sequence Diagram. [3]
ii. Collaboration Diagram. [3]
iii. Activity Diagram. [3]
iv. State Diagram. [3]
d. The Implementation View
i. Component Diagram. [3]
ii. Deployment Diagram. [3]


12. Describe each of the following architectures:


a. MVC architecture [10]
b. Layered architecture [10]
c. Repository architecture [10]
d. Client-server architecture [10]
e. Pipe and filter architecture [10]
13. Explain the following examples of applications
a. Data processing
b. Transaction processing
c. Event processing systems
d. Language processing systems
14. Describe the importance of the following in software development?
a. High Level Design [6]
b. Modularization [6]
c. Coupling [6]
d. Cohesion [6]
15. Describe the following concepts:
a. Component-based development [6]
b. Application frameworks [6]
c. Legacy system wrapping [6]
d. Service-oriented systems [6]
e. Application product lines [6]
f. COTS integration [6]
g. Program libraries [6]
h. Program generators [6]
16. Define Software Quality Management. [3]
17. Why are metrics important in software quality management? [5]
18. Explain the following approaches that can be used to enhance software quality:
a. COCOMO Model [20]
b. Putnam Model [10]
c. Function Point Analysis [15]
19. Write brief notes on the following CASE Tools used during SDLC:
a. Upper Case Tools [5]
b. Lower Case Tools [5]
c. Workbenches [5]
d. IDEs [5]
e. Environments [5]
f. Toolkits [5]
g. Process-centred tools [5]
20. Spell out three (3) aspects of Object-Oriented Programming that differentiate it
from Procedure Oriented Programming. [12]
21. Explain the significance of the SOLID Principles of OOP in developing computer applications. [15]
22. Describe an Event-Driven-Architecture in the context of its application. [10]
23. Give two examples of Microservices Architecture. [4]
24. Describe safeguards that must be taken into account when engineering
Transaction Processing Systems. [10]
25. Why are rollback, backup, logging, and recovery important aspects of a database-
driven transaction processing system? [20]
26. Using examples where applicable, discuss the pros and cons of Language
Processing Systems in the business world. [20]
