
Software Design Principles

Software design principles provide means to handle the complexity of the design process effectively. Managing this complexity well not only reduces the effort needed for design but also reduces the likelihood of introducing errors during design.

The following are the key principles of software design.

Problem Partitioning

Problem partitioning in software engineering is the process of dividing a complex problem into smaller, manageable subproblems. This approach simplifies development by focusing on individual parts of the problem, which can be solved independently before integrating them into the larger solution. Problem partitioning is particularly useful in designing large systems, as it allows teams to address specific functionalities and enables parallel development.

Here’s a breakdown of the typical steps involved in problem partitioning:

1. Identify Requirements: Start by thoroughly understanding and gathering all the requirements. This includes understanding the system's overall goals, user needs, constraints, and performance requirements. Clear requirements help in dividing the problem logically.
2.​ Divide into Subproblems: Once requirements are understood, break down
the main problem into smaller subproblems, each of which should
represent a logical component or feature of the system. For example, in a
web application, subproblems might include user authentication, data
storage, and user interface design.
3.​ Define Interfaces and Dependencies: After dividing the problem, specify
how each subproblem will interact with others. Define the interfaces for
data exchange between components and identify dependencies. For
example, an authentication component may rely on a database
component for storing user data.
4.​ Assign Responsibilities: Clearly assign each subproblem to a development
team or team member. This allows for parallel development, enabling
different teams to work on different components independently, which
can significantly reduce development time.
5.​ Implement and Integrate: Each subproblem is implemented individually
according to the specified requirements and interfaces. Once all
components are developed, they are integrated into the full system.
Testing is crucial at this stage to ensure components work together
seamlessly.
6.​ Testing and Validation: Test the integrated system thoroughly to ensure
that all components function as expected and that the overall system
meets the initial requirements. This stage often involves both unit testing
of individual components and system testing for the integrated solution.

Problem partitioning improves modularity, maintainability, and scalability, making it an essential strategy in software engineering for managing complex systems efficiently.

For software design, the goal is to divide the problem into manageable pieces.
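
To make the steps above concrete, here is a minimal, hypothetical Python sketch of a web application partitioned into a data-storage subproblem and an authentication subproblem, wired together through explicit interfaces (the class and method names are illustrative, not from the text):

```python
# Hypothetical sketch: two subproblems of a web application, each behind a
# small interface so separate teams can build them independently.

class UserStore:
    """Data-storage subproblem: hides how user records are kept."""

    def __init__(self):
        self._users = {}  # in-memory store; a real system might use a database

    def save_user(self, username, password_hash):
        self._users[username] = password_hash

    def get_password_hash(self, username):
        return self._users.get(username)


class Authenticator:
    """Authentication subproblem: depends on UserStore only via its interface."""

    def __init__(self, store):
        self._store = store

    def login(self, username, password_hash):
        return self._store.get_password_hash(username) == password_hash


# Integration: components developed separately are wired together at the end.
store = UserStore()
store.save_user("alice", "hash123")
print(Authenticator(store).login("alice", "hash123"))  # True
```

Because each subproblem is reached only through its interface, either component can be reimplemented (for example, swapping the in-memory store for a database) without touching the other.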

Benefits of Problem Partitioning


1.​ Software is easy to understand
2.​ Software becomes simple
3.​ Software is easy to test
4.​ Software is easy to modify
5.​ Software is easy to maintain
6.​ Software is easy to expand
These pieces cannot be entirely independent of each other as they together form
the system. They have to cooperate and communicate to solve the problem. This
communication adds complexity.

Note: As the number of partitions increases, the cost of partitioning and the complexity of inter-partition communication also increase.

Abstraction
An abstraction is a tool that enables a designer to consider a component at an abstract level without being concerned with the internal details of its implementation. Abstraction can be applied to existing elements as well as to the component being designed.

There are two common abstraction mechanisms:

1. Functional Abstraction
2. Data Abstraction

Functional Abstraction
1. A module is specified by the function it performs.
2. The details of the algorithm used to accomplish the function are not visible to the user of the function.

Functional abstraction forms the basis of function-oriented design approaches.

Data Abstraction
Details of the data elements are not visible to the users of the data. Data abstraction forms the basis of object-oriented design approaches.
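
As a rough illustration of the two mechanisms, the following Python sketch (names and details are illustrative) shows a functional abstraction, where callers rely only on what the function does, and a data abstraction, where callers never see how the data is stored:

```python
# Functional abstraction: callers rely on what sort_scores() does, not how.
def sort_scores(scores):
    return sorted(scores)  # the algorithm is an invisible implementation detail

# Data abstraction: callers use push/pop without seeing how items are stored.
class Stack:
    def __init__(self):
        self._items = []  # internal representation, not part of the interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

print(sort_scores([3, 1, 2]))  # [1, 2, 3]
s = Stack()
s.push(10)
print(s.pop())  # 10
```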

Modularity
Modularity specifies the division of software into separately named and addressable modules that are integrated later to obtain the complete, functional software. It is the single attribute that allows a program to be intellectually manageable. A single large program is difficult to understand and read due to its large number of reference variables, control paths, global variables, and so on.

The desirable properties of a modular system are:

○ Each module is a well-defined system that can be used with other applications.
○ Each module has a single, specified objective.
○ Modules can be separately compiled and saved in a library.
○ Modules should be easier to use than to build.
○ Modules are simpler from the outside than from the inside.

Advantages and Disadvantages of Modularity

In this section, we discuss the various advantages and disadvantages of modularity.

Advantages of Modularity

There are several advantages of modularity:

○ It allows large programs to be written by several different people.
○ It encourages commonly used routines to be placed in a library and used by other programs.
○ It simplifies the overlay procedure of loading a large program into main storage.
○ It provides more checkpoints to measure progress.
○ It provides a framework for complete testing and makes programs more accessible to test.
○ It produces well-designed and more readable programs.

Disadvantages of Modularity

There are several disadvantages of modularity:

○ Execution time may be, though not necessarily, longer.
○ Storage size may be, though not necessarily, increased.
○ Compilation and loading time may be longer.
○ Inter-module communication problems may be increased.
○ More linkage is required, run time may be longer, more source lines must be written, and more documentation has to be produced.

Modular Design
Modular design reduces design complexity and results in easier and faster implementation by allowing parallel development of various parts of a system. We discuss the different aspects of modular design in detail in this section:

1. Functional Independence: Functional independence is achieved by developing functions that perform only one kind of task and do not interact excessively with other modules. Independence is important because it makes implementation easier and faster. Independent modules are easier to maintain and test, reduce error propagation, and can be reused in other programs as well. Thus, functional independence is a good design feature that ensures software quality.

It is measured using two criteria:

○ Cohesion: It measures the relative functional strength of a module.
○ Coupling: It measures the relative interdependence among modules.

2. Information Hiding: The principle of information hiding suggests that modules should be characterized by design decisions that are hidden from all other modules. In other words, modules should be specified and designed so that the data contained within a module is inaccessible to other modules that have no need for such information.

The use of information hiding as a design criterion for modular systems provides its most significant benefits when modifications are required during testing and, later, during software maintenance. Because most data and procedures are hidden from other parts of the software, inadvertent errors introduced during modification are less likely to propagate to other locations within the software.
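
A minimal sketch of information hiding in Python, using a hypothetical Account module: the balance is hidden behind the operations that need it, so code elsewhere cannot corrupt it directly.

```python
class Account:
    """Hypothetical module: the balance is hidden; only the operations that
    need it can touch it, so outside changes cannot corrupt it."""

    def __init__(self):
        self.__balance = 0  # name-mangled attribute, hidden from other modules

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    @property
    def balance(self):
        return self.__balance  # read-only view of the hidden state

acct = Account()
acct.deposit(100)
print(acct.balance)      # 100
# print(acct.__balance)  # AttributeError: the hidden state is not reachable
```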

Strategy of Design
A good system design strategy is to organize the program modules in such a way that they are easy to develop and, later, to change. Structured design methods help developers deal with the size and complexity of programs. Analysts generate instructions for developers about how code should be composed and how pieces of code should fit together to form a program.

To design a system, there are two possible approaches:

1. Top-down Approach
2. Bottom-up Approach

1. Top-down Approach: This approach starts with identifying the main components and then decomposes them into more detailed sub-components.

2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves up the hierarchy. This approach is suitable when building on an existing system.

Coupling and Cohesion

Module Coupling

In software engineering, coupling is the measure of interdependence between modules. Tightly coupled modules are highly dependent on each other, while loosely coupled modules have minimal dependency. Uncoupled modules are entirely independent.

A good design aims for low coupling, which is evaluated by the number of
interconnections and the amount of shared data between modules. As the
degree of coupling increases, so does the likelihood of errors.

Types of Module Coupling:

1.​ No Direct Coupling: Modules are unrelated and do not directly interact.
2.​ Data Coupling: Modules pass only simple data values to each other.
3.​ Stamp Coupling: Modules share complex data structures, such as objects
or structures.
4.​ Control Coupling: Data from one module controls the flow of another
module.
5.​ External Coupling: Modules share externally imposed data formats or
communication protocols.
6.​ Common Coupling: Modules share data through global variables.
7.​ Content Coupling: One module directly accesses or modifies another
module’s data or behavior.
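
To illustrate two ends of this spectrum, here is a small, hypothetical Python sketch contrasting data coupling (modules exchange only the simple values they need) with common coupling (modules communicate through a shared global):

```python
# Data coupling (desirable): modules exchange only the simple values they need.
def compute_tax(amount, rate):
    return amount * rate

def billing_total(subtotal):
    return subtotal + compute_tax(subtotal, 0.08)  # explicit, visible inputs

# Common coupling (undesirable): modules communicate through a shared global,
# so any module can silently break the others by changing it.
TAX_RATE = 0.08  # global shared state

def compute_tax_global(amount):
    return amount * TAX_RATE  # hidden dependency on the global

print(billing_total(100.0))       # 108.0
print(compute_tax_global(100.0))  # 8.0, but fragile if TAX_RATE mutates
```
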
Module Cohesion

Cohesion measures how closely related the tasks within a single module are.
High cohesion indicates that the module’s functionality is tightly related, making
it more reliable and maintainable.

Cohesion is generally described as "high cohesion" (desirable) or "low cohesion."

Types of Module Cohesion:

1. Functional Cohesion: Elements work together to achieve a single, specific function.
2.​ Sequential Cohesion: Elements form a sequence where the output of one
part serves as input to the next.
3.​ Communicational Cohesion: All elements operate on the same data
structure.
4.​ Procedural Cohesion: Elements are part of a procedure that must follow
specific steps to achieve a goal.
5.​ Temporal Cohesion: Elements are related by timing and must execute
together.
6.​ Logical Cohesion: Elements perform similar tasks, like error handling or
data input/output.
7.​ Coincidental Cohesion: Elements have no meaningful relationship and are
grouped arbitrarily.
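
The following hypothetical Python sketch contrasts the two extremes, functional cohesion and coincidental cohesion:

```python
# Functional cohesion (desirable): every statement serves one specific purpose.
def mean(values):
    return sum(values) / len(values)

# Coincidental cohesion (undesirable): unrelated tasks grouped arbitrarily,
# with no shared purpose binding them together.
def misc_utilities(values, path, name):
    average = sum(values) / len(values)  # statistics
    open(path, "w").close()              # file handling
    greeting = "Hello, " + name          # string formatting
    return average, greeting

print(mean([2, 4, 6]))  # 4.0
```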

Comparison of Coupling and Cohesion

● Coupling refers to inter-module connections, aiming for low coupling to reduce dependency.
● Cohesion focuses on intra-module relationships, where high cohesion is preferred for stronger, single-purpose modules.

| Aspect | Coupling | Cohesion |
|---|---|---|
| Definition | Degree of interdependence between modules | Degree of relatedness within elements of a module |
| Also Known As | Inter-Module Binding | Intra-Module Binding |
| Focus | Relationships between different modules | Relationships within the elements of a single module |
| Goal | Aim for low coupling to minimize dependencies | Aim for high cohesion for focused functionality |
| Impact on Design | High coupling increases complexity and error rates | High cohesion makes modules easier to understand, test, and maintain |
| Type of Independence | Shows relative independence between modules | Shows a module's relative functional strength |
| Desirable Level | Low | High |
| Behavior | Linked to other modules | Focuses on a single function or purpose |
| Examples | Control, data, stamp, content, and common coupling | Functional, sequential, communicational, and procedural cohesion |
| Effect on System | High coupling can lead to more error propagation | High cohesion leads to more reliable and reusable modules |

In software design, low coupling and high cohesion are preferred as they
contribute to a modular, maintainable, and robust system.

Top-Down and Bottom-Up design are two primary approaches in software development, each with its own strategy for organizing and implementing a system.

Top-Down Design

In Top-Down Design, development begins with identifying the high-level components of the system, and then each component is broken down into smaller, more detailed subcomponents. This approach emphasizes the overall structure and ensures that the main functions and goals of the system are addressed first before getting into finer details. It is especially useful for projects where a clear understanding of the system's goals and primary structure is needed from the beginning.

Advantages of Top-Down Design:

● Provides a clear, high-level overview of the system early in development.
● Helps in maintaining focus on the main objectives.
● Facilitates straightforward decomposition, which can simplify project management.

Bottom-Up Design

In Bottom-Up Design, development starts at the detailed component level and builds upward, combining these components into higher-level systems until the complete system is formed. This approach is effective when there is existing code or reusable components, as it focuses on building complex systems by combining tested, reliable parts.

Advantages of Bottom-Up Design:

● Encourages reuse of existing components and tested code.
● Allows early testing of individual modules, promoting reliability.
● Useful for enhancing or expanding existing systems.

Both approaches can also be combined as needed, with Top-Down helping to define structure and Bottom-Up refining individual parts to meet specific requirements.

Comparison between top-down and bottom-up approaches:

| Aspect | Top-Down Design | Bottom-Up Design |
|---|---|---|
| Approach | Begins with the high-level structure and breaks it down into detailed subcomponents | Starts with detailed components and integrates them into higher-level structures |
| Focus | System architecture and main functions | Individual components and their integration |
| Development Sequence | Moves from general to specific | Moves from specific to general |
| Best For | Projects requiring a clear, overall structure | Systems built from reusable or pre-existing components |
| Main Advantage | Provides a clear overview of the entire system early in development | Enables reuse of existing components, enhancing reliability |
| Testing | Testing is often deferred until higher-level design is complete | Allows for early testing of individual modules |
| Flexibility | Less flexible if major changes are needed later | More flexible for incremental or modular improvements |
| Examples of Use | Ideal for new, large projects with clearly defined goals | Suitable for enhancing or expanding existing systems |
| Complexity Handling | Manages complexity by breaking down functions hierarchically | Manages complexity by building up from manageable, testable components |
| Drawbacks | Requires thorough initial understanding of the overall system | May lack initial focus on high-level system structure |

In practice, both Top-Down and Bottom-Up approaches can be combined to leverage the strengths of each, creating a well-structured and flexible development process.

Putnam Resource Allocation Model

The Lawrence Putnam model describes the time and effort required to finish a software project of a specified size. Putnam makes use of the so-called Norden/Rayleigh curve to estimate project effort, schedule, and defect rate.

Putnam noticed that software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observations about productivity levels to derive the software equation:

L = Ck · K^(1/3) · td^(4/3)

The various terms of this expression are as follows:

K is the total effort expended (in PM) on product development, and L is the product size in KLOC.

td corresponds to the time of system and integration testing; therefore, td can reasonably be considered the time required to develop the product.

Ck is the state-of-technology constant and reflects the constraints that impede the progress of the program. Typical values of Ck are:

Ck = 2 for a poor development environment

Ck = 8 for a good software development environment

Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used).

The exact value of Ck for a specific task can be computed from the historical data
of the organization developing it.

Putnam proposed that the optimal staff build-up on a project should follow the Rayleigh curve. Only a small number of engineers are required at the beginning of a project to carry out planning and specification tasks. As the project progresses and more detailed work becomes necessary, the number of engineers reaches a peak. After implementation and unit testing, the number of project staff falls.

Effect of a Schedule Change on Cost

By rearranging the software equation, Putnam derived the following expression:

K = L^3 / (Ck^3 · td^4)

where K is the total effort expended (in PM) on product development, L is the product size in KLOC, td corresponds to the time of system and integration testing, and Ck is the state-of-technology constant, reflecting the constraints that impede the progress of the program.

Using the above expression, it follows that, for the same product size, C = L^3 / Ck^3 is a constant, so

K = C / td^4

That is, effort varies in inverse proportion to the fourth power of the development time, and since project development effort is directly proportional to project development cost, the same holds for cost: even a small compression of the schedule causes a substantial increase in effort and cost.
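
A minimal sketch of this relationship, assuming the rearranged equation K = L^3 / (Ck^3 · td^4) with illustrative values for L and Ck (units must be consistent with how Ck was calibrated):

```python
# Minimal sketch of the rearranged software equation: K = L^3 / (Ck^3 * td^4).
# Units must be consistent with how Ck was calibrated; values are illustrative.

def putnam_effort(size_kloc, ck, td):
    """Total development effort K for size L (KLOC), technology constant Ck,
    and development time td."""
    return size_kloc ** 3 / (ck ** 3 * td ** 4)

L, Ck = 100.0, 8.0  # hypothetical: 100 KLOC in a good environment
for td in (2.0, 1.8, 1.6):
    print(f"td = {td:.1f} -> K = {putnam_effort(L, Ck, td):.1f}")
# Compressing the schedule from 2.0 to 1.6 (20% shorter) multiplies effort
# by (2.0/1.6)^4 ≈ 2.44, illustrating K ∝ 1/td^4.
```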

What is the outcome of the Rayleigh curve? Explain each step.

The Rayleigh curve is a graphical representation that describes the ideal distribution of effort and staffing over time in a software project. It is widely used in project management and software engineering to understand how resources should be allocated at each phase of a project to optimize efficiency. In the context of the Putnam model, it shows how workforce levels change over time.

Key Outcomes of the Rayleigh Curve

1. Gradual Increase in Staffing (Initial Phase):
○ Explanation: In the beginning, only a small number of resources (engineers or staff) are needed. The focus is on planning, requirement analysis, and initial design work.
○ Outcome: This phase ensures that the project's foundation is solid, minimizing the likelihood of significant design changes later. Staffing increases slowly as more clarity emerges about the project requirements.
2.​ Peak Staffing Level (Middle Phase):
○​ Explanation: As the project progresses into the detailed design,
coding, and testing stages, the staffing level reaches its peak. This
phase demands the most resources because it involves the bulk of
the development effort.
○​ Outcome: The project sees maximum productivity, as most
resources are actively working on implementation and testing. This
phase requires the highest effort in terms of time, cost, and staffing.
3.​ Gradual Decrease in Staffing (Final Phase):
○​ Explanation: After the primary implementation and testing are
complete, fewer resources are needed for final integration, system
testing, and debugging. The staffing level begins to drop as the
project nears completion.
○​ Outcome: Resource demand decreases as only a few engineers are
needed to perform integration testing, final adjustments, and
maintenance. This minimizes costs toward the end of the project.

Summary of the Rayleigh Curve's Impact

The Rayleigh curve shows that a gradual increase in resources early on, a peak at mid-project, and a decrease as the project nears completion lead to an efficient allocation of effort. By aligning staffing levels with the project's needs at each phase, this curve helps optimize productivity, minimize costs, and avoid resource wastage.
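
For reference, the Norden/Rayleigh staffing profile is commonly written as m(t) = 2·K·a·t·e^(−a·t²) with a = 1/(2·td²); this form is standard in the literature rather than quoted from the text above. The sketch below (illustrative values only) evaluates it to show the rise, peak near t = td, and tail-off described in the three phases:

```python
import math

# Sketch of the Norden/Rayleigh staffing profile (standard form, not quoted
# from this text): m(t) = 2*K*a*t*exp(-a*t^2), with a = 1/(2*td^2), so the
# profile rises from zero, peaks at t = td, and then tails off.

def rayleigh_staffing(t, total_effort, td):
    a = 1.0 / (2.0 * td ** 2)
    return 2.0 * total_effort * a * t * math.exp(-a * t ** 2)

K, td = 120.0, 2.0  # illustrative: total effort 120, peak at td = 2 years
for t in (0.5, 1.0, 2.0, 3.0, 4.0):
    print(f"t = {t:.1f} -> staffing ≈ {rayleigh_staffing(t, K, td):.1f}")
# Staffing climbs through the initial phase, peaks near t = td (the middle
# phase), and declines in the final phase, matching the steps above.
```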
PERT Charts

A PERT (Program Evaluation and Review Technique) chart is a project management tool used to visualize the timeline and dependencies of a project. It helps in planning and scheduling tasks, assessing the minimum time required to complete a project, and identifying critical paths and potential bottlenecks. PERT charts are particularly useful for complex projects with multiple interdependent tasks.

Components of a PERT Chart

1. Nodes (or Circles): Represent tasks or milestones in the project.
2. Arrows: Show dependencies between tasks, indicating the sequence and flow of the project.
3. Time Estimates:
○ Optimistic Time (O): The minimum time required to complete the task.
○ Most Likely Time (M): The most probable time required to complete the task.
○ Pessimistic Time (P): The maximum time the task might take.
○ Expected Time (TE): A calculated estimate based on the formula: TE = (O + 4M + P) / 6

How to Create a PERT Chart

1.​ List All Tasks: Identify all tasks required to complete the project.
2.​ Determine Dependencies: Determine which tasks depend on the
completion of other tasks.
3.​ Estimate Time for Each Task: Use optimistic, most likely, and pessimistic
times to calculate the expected time for each task.
4.​ Draw the Chart: Represent each task as a node and use arrows to show
dependencies.
5.​ Identify the Critical Path: The longest path through the network (in terms
of time) is the critical path, showing the shortest time to complete the
project.

Example of a PERT Chart

Let’s say we have a project to launch a new product, and the tasks involved are as
follows:
| Task | Description | Dependencies | Optimistic (O) | Most Likely (M) | Pessimistic (P) |
|---|---|---|---|---|---|
| A | Market Research | None | 1 week | 2 weeks | 4 weeks |
| B | Product Design | A | 2 weeks | 4 weeks | 6 weeks |
| C | Prototype Development | B | 2 weeks | 3 weeks | 5 weeks |
| D | Product Testing | C | 1 week | 2 weeks | 3 weeks |
| E | Marketing Strategy | A | 1 week | 3 weeks | 4 weeks |
| F | Final Review & Launch | D, E | 1 week | 2 weeks | 3 weeks |

1. Calculate Expected Times (TE) for Each Task:
○ Task A: TE = (1 + 4(2) + 4) / 6 = 2.17 weeks
○ Task B: TE = (2 + 4(4) + 6) / 6 = 4 weeks
○ Task C: TE = (2 + 4(3) + 5) / 6 = 3.17 weeks
○ Task D: TE = (1 + 4(2) + 3) / 6 = 2 weeks
○ Task E: TE = (1 + 4(3) + 4) / 6 = 2.83 weeks
○ Task F: TE = (1 + 4(2) + 3) / 6 = 2 weeks
2. Draw the PERT Chart:
○ Start with Task A, which has no dependencies.
○ Tasks B and E depend on Task A.
○ Task C depends on Task B.
○ Task D depends on Task C, and Task F depends on both D and E.
3. Identify the Critical Path:
○ The critical path is the sequence of dependent tasks that has the longest duration, as it determines the minimum project completion time.
○ In this case, the path A → B → C → D → F is the critical path, with a total time of 2.17 + 4 + 3.17 + 2 + 2 = 13.34 weeks.
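
The same expected-time calculations and path comparison can be reproduced with a short script; this sketch simply encodes the example's O/M/P values and sums the two possible paths:

```python
# Sketch reproducing the expected-time calculations and path comparison above.
tasks = {
    # task: (optimistic, most_likely, pessimistic) in weeks
    "A": (1, 2, 4), "B": (2, 4, 6), "C": (2, 3, 5),
    "D": (1, 2, 3), "E": (1, 3, 4), "F": (1, 2, 3),
}

def expected_time(o, m, p):
    return (o + 4 * m + p) / 6  # PERT beta-distribution estimate

te = {name: expected_time(*omp) for name, omp in tasks.items()}
for name, value in te.items():
    print(f"Task {name}: TE = {value:.2f} weeks")  # note Task E: 2.83 weeks

# The network has two paths; the longer one is the critical path.
print("A-B-C-D-F:", round(sum(te[t] for t in "ABCDF"), 2))  # 13.33 exactly
# (13.34 when the per-task values are rounded first, as in the text)
print("A-E-F:   ", round(sum(te[t] for t in "AEF"), 2))     # 7.0
```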

Benefits of a PERT Chart

● Enhanced Planning: Helps visualize the project schedule and dependencies.
● Time Estimation: Provides a better understanding of how long the project will take.
● Critical Path Analysis: Identifies key tasks that impact the overall project duration.
● Risk Management: Allows for assessment of possible delays and time variations, enabling proactive measures.

Using a PERT chart, project managers can make more informed decisions about
resource allocation, timelines, and identifying tasks that need additional focus to
avoid delays.

Explain the critical path in detail and calculate the critical path of the above example.

The critical path in project management is the sequence of tasks that determines the minimum time needed to complete a project. If any task on the critical path is delayed, the project completion time is directly affected. Understanding the critical path is crucial because it highlights the tasks that cannot be delayed without impacting the overall schedule.

Steps to Determine the Critical Path


1. Identify All Tasks and Dependencies: List all project tasks and determine which tasks depend on the completion of others.
2. Estimate Duration for Each Task: Calculate the expected duration (TE) for each task using the formula TE = (O + 4M + P) / 6, where O is the optimistic time, M is the most likely time, and P is the pessimistic time.
3. Draw the Network Diagram: Represent each task as a node, and use arrows to connect dependent tasks.
4. Calculate the Earliest and Latest Start and Finish Times:
○ Earliest Start (ES): The earliest time a task can start after the preceding tasks have been completed.
○ Earliest Finish (EF): EF = ES + TE
○ Latest Start (LS): The latest time a task can start without delaying the project.
○ Latest Finish (LF): LF = LS + TE
5. Determine the Float (Slack) for Each Task:
○ Float is the amount of time a task can be delayed without delaying the project.
○ Float = LS − ES or Float = LF − EF
○ Tasks with zero float are critical tasks and form the critical path.
6. Identify the Critical Path: The path with the longest duration from start to finish, whose tasks all have zero float.

Example Calculation of the Critical Path

Using the example tasks and durations provided:

| Task | Description | Dependencies | Optimistic (O) | Most Likely (M) | Pessimistic (P) | Expected Time (TE) |
|---|---|---|---|---|---|---|
| A | Market Research | None | 1 week | 2 weeks | 4 weeks | 2.17 weeks |
| B | Product Design | A | 2 weeks | 4 weeks | 6 weeks | 4 weeks |
| C | Prototype Development | B | 2 weeks | 3 weeks | 5 weeks | 3.17 weeks |
| D | Product Testing | C | 1 week | 2 weeks | 3 weeks | 2 weeks |
| E | Marketing Strategy | A | 1 week | 3 weeks | 4 weeks | 2.83 weeks |
| F | Final Review & Launch | D, E | 1 week | 2 weeks | 3 weeks | 2 weeks |

1. Calculate Expected Times (TE) (already provided in the table).
2. Draw the Network Diagram:
○ Start with Task A.
○ Tasks B and E depend on A.
○ Task C depends on B.
○ Task D depends on C, and Task F depends on both D and E.
3. Calculate Earliest Start (ES) and Earliest Finish (EF):
○ Task A: ES = 0; EF = 2.17
○ Task B: ES = 2.17; EF = 2.17 + 4 = 6.17
○ Task C: ES = 6.17; EF = 6.17 + 3.17 = 9.34
○ Task D: ES = 9.34; EF = 9.34 + 2 = 11.34
○ Task E: ES = 2.17; EF = 2.17 + 2.83 = 5.00
○ Task F: ES = max(EF of D, EF of E) = 11.34; EF = 11.34 + 2 = 13.34
4. Calculate the Critical Path:
○ The critical path is the longest path from start to finish, with no slack.
○ From the calculations, path A → B → C → D → F has the longest duration: 2.17 + 4 + 3.17 + 2 + 2 = 13.34 weeks.
5. Conclusion:
○ The critical path is A → B → C → D → F, and the total time to complete the project along this path is 13.34 weeks.
○ Tasks on this path have zero float, meaning any delay in these tasks will delay the entire project.

Summary

In this example, the critical path ensures that project managers focus on the
sequence A → B → C → D → F as it determines the project’s completion time.
Monitoring these tasks closely will help in managing time effectively and keeping
the project on track.
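
As a closing illustration, here is a small sketch of the forward and backward passes over the example network, computing ES/EF, float, and hence the critical path (task data as in the tables above):

```python
# Sketch of the forward and backward passes over the example network,
# using the rounded TE values from the table above.
te = {"A": 2.17, "B": 4.0, "C": 3.17, "D": 2.0, "E": 2.83, "F": 2.0}
deps = {"A": [], "B": ["A"], "C": ["B"], "D": ["C"], "E": ["A"], "F": ["D", "E"]}
order = ("A", "B", "C", "D", "E", "F")  # topological order

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for t in order:
    es[t] = max((ef[d] for d in deps[t]), default=0.0)
    ef[t] = es[t] + te[t]

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(ef.values())
lf, ls = {}, {}
for t in reversed(order):
    successors = [s for s in order if t in deps[s]]
    lf[t] = min((ls[s] for s in successors), default=project_end)
    ls[t] = lf[t] - te[t]

for t in order:
    slack = ls[t] - es[t]
    tag = "  <- critical" if abs(slack) < 1e-9 else ""
    print(f"{t}: ES={es[t]:5.2f}  EF={ef[t]:5.2f}  float={slack:5.2f}{tag}")
# Prints zero float for A, B, C, D, F (the critical path, 13.34 weeks) and
# a float of about 6.34 weeks for Task E.
```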
