CSC 206 lecture note

The document discusses the history and principles of structured programming, which emerged in the 1960s to address the complexities of programming by introducing organized methods for writing code. It defines structured programming as a technique that minimizes complexity through top-down analysis, modular programming, and structured code, emphasizing the importance of control structures and design principles like coupling and cohesion. The document also outlines the benefits of structured programming, including improved maintainability and reusability of code.


CSC 206: STRUCTURED PROGRAMMING

HISTORY/RATIONALE OF STRUCTURED PROGRAMMING

A computer program is simply a set of instructions that directs the computer in its calculations and movement of data. Before the introduction of structured programming, programs were developed with no specific structure or pattern: programmers simply started writing code, and when a program refused to work, they would "patch" (fix) problems error by error until they had found them all.

With this approach, errors continued to appear even after a program had been released. Studies showed that programmers averaged only 5 to 10 completed, debugged source statements (program lines) per day. The reason was that too much time was spent debugging errors introduced at the time the programs were written. Most of the time went to correcting what might be called planning errors and logical errors in the overall design and organization of the program. Prior to the advent of structured programming, this state of affairs was considered inevitable; it was simply part and parcel of computer programming. Such an approach to programming was tolerable on first- and second-generation computers through the 1950s and 1960s.

During the 1950s and 1960s, few people thought about developing a general method for organizing a program. By the mid-1960s, hardware had become more powerful and required more powerful software to run it. At this point, the "hit-and-miss" approach to programming could not keep pace.

The history of structured programming began in 1964 at a conference held in Israel. There, Corrado Böhm and Giuseppe Jacopini presented a paper (in Italian) that proved mathematically that only three control structures were necessary to write any program. The turning point, however, occurred in 1968, when Edsger Dijkstra of the Netherlands published a letter to the editor in the Communications of the ACM, titled "Go To Statement Considered Harmful". Dijkstra crusaded for a better way of writing and organizing programs, called Structured Programming.

This programming style did not receive widespread acceptance until the tremendous success of the now-famous New York Times project, completed in 1972. It was the first major successful project to use structured programming. The project, developed by a programming team at IBM under the direction of Harlan Mills, automated the newspaper's clipping file. Using a list of index terms, users could browse through abstracts of all the paper's articles and then retrieve the full-length articles of their choice from microfiche for display on a terminal screen. These results shocked the programming community, and software developers began to pay attention to what Dijkstra had been saying and writing.

By the mid-1970s, structured programming was being used for everything from home computers to multi-million dollar defence projects. It is safe to say that today virtually all practitioners of the art of programming at least acknowledge the merits of the discipline of Structured Programming.

DEFINITION OF STRUCTURED PROGRAMMING

Many people define structured programming as a programming style that avoids the GOTO statement. This is partly right, because structured programming does discourage frequent and indiscriminate use of the GOTO statement, but there is more to it than that.

One major problem structured programming addressed was complexity. Most programs that do anything significant in the real world are rather long, and software such as word processors, compilers and operating systems staggers the imagination. In fact, some computer scientists claim that these very large software systems are the most logically complex things humans have ever invented. Complexity is precisely the problem that structured programming addresses.

One author defined structured programming as a method of designing computer system components and their relationships so as to minimize complexity. Structured programming uses three elements to minimize complexity, and these will serve as our full working definition:

Structured programming is a method of writing a computer program that uses top-down analysis for problem solving, modularization for program structure and organization, and structured code for individual modules.

STRUCTURED PROGRAMMING ELEMENTS

1. Top – down analysis


Top-down analysis, or decomposition, is the process of breaking the overall procedure (task) into component parts (modules) and then subdividing each component module until the lowest level of detail has been reached.

In top-down design, you start "at the top" with the general problem and design specific solutions to its subproblems. The essential idea is to subdivide a large problem into several smaller tasks or parts. To obtain an effective solution for the main task, the subtasks (subproblems) should be independent of each other, and each subproblem should be solved and tested by itself.

There is no single formula for subdividing a complex task into smaller, manageable tasks. The strategy involves top-down reduction of the processing until a level is reached where each individual process consists of one self-contained task, which is relatively easy to understand and can be programmed using a few instructions. Once a task is broken down to a manageable size, each subtask can be solved, tested for errors in logic, and corrected or modified without affecting other subtasks or subproblems.
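The decomposition idea above can be sketched in code. The task, data and function names below are hypothetical, invented purely for illustration: an overall job of reporting the average of some scores is split into three independent subtasks, each small enough to write and test on its own.

```python
# A hypothetical top-down decomposition: the overall task "report the
# average of some scores" is broken into three independent subtasks.

def read_scores(raw):
    """Subtask 1: turn raw text like '10 20 30' into numbers."""
    return [float(s) for s in raw.split()]

def compute_average(scores):
    """Subtask 2: the core calculation."""
    return sum(scores) / len(scores)

def format_report(avg):
    """Subtask 3: presentation of the result."""
    return f"Average score: {avg:.1f}"

def report(raw):
    """Top level: merely composes the subtasks."""
    return format_report(compute_average(read_scores(raw)))

print(report("10 20 30"))  # Average score: 20.0
```

Because the subtasks are independent, `compute_average` can be tested on its own without involving input parsing or output formatting.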

2. Modular Programming
Modularity is the single attribute of software that allows a program to be intellectually manageable. Large programs are broken down into separate, smaller sections called modules, subroutines or subprograms. Each module has a specified job to do and is relatively easy to write. Thus modular programming simplifies the task of programming by making use of a highly structured organizational plan. There is, of course, a direct correlation between the subdivisions of the problem obtained through top-down analysis and these modules: each subdivision corresponds to a module in the program. Modular structure also simplifies programming by greatly reducing the need for the GOTO statement, which, when used frequently, tends to obscure program organization and introduce errors.
The advantage of modular programming is the ability to write and test each module independently and, in some cases, to reuse modules in other programs.

Why modularize a system?

 Management: Partition the overall development effort – divide and conquer.

 Evolution: Decouple parts of a system so that changes to one part are isolated from changes to other parts.

o Principle of directness: clear allocation of requirements to modules; ideally one (or more) requirements map to one module.
o Principle of continuity/locality: a small change in requirements triggers a change to one module only.

 Understanding: Permit the system to be understood
- as a composition of mind-sized chunks
- with one issue at a time, e.g., the principles of locality, encapsulation and separation of concerns.

3. Structured Code
If programs are broken down into modules, into what are modules subdivided?
Obviously, each consists of a set of instructions to the computer. But are these
instructions organized in any special way? That is, are they grouped and executed in
any clearly definable patterns? In structured programming, they are. They are
organized within various control structures.

A control structure represents a unique pattern of execution for a specific set of instructions. It determines the precise order in which that set of instructions is executed.

We have three (3) basic control structures we may encounter in solving a particular
task: Sequence, Selection and Repetition Control Structure.

- Sequence Control Structure:
o Steps of an algorithm are carried out in a sequential manner, with each step executed exactly once.

Eg: We need to obtain a temperature expressed in degrees Fahrenheit and convert it to degrees Celsius.

Step 1: Start
Step 2: Read temperature in Fahrenheit
Step 3: Convert to Celsius
Step 4: Display result in Celsius
Step 5: Stop
Note that these steps follow a particular order with each statement
executed only once.
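The five-step sequence above can be rendered as a straight-line program: each statement executes exactly once, in order. The sample temperature is hard-coded rather than read in, so the sketch stays self-contained.

```python
def fahrenheit_to_celsius(fahrenheit):
    # Step 3: convert Fahrenheit to Celsius
    return (fahrenheit - 32) * 5 / 9

temp_f = 98.6                            # Step 2: "read" the temperature
temp_c = fahrenheit_to_celsius(temp_f)   # Step 3: convert
print(f"{temp_c:.1f} degrees Celsius")   # Step 4: display (37.0 degrees Celsius)
```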

- Selection Control Structure:
o Only one of a number of alternative steps is executed.

Eg 1: Deciding whether a student passes an examination

Step 1: Start
Step 2: Read score
Step 3: if (score >= 40) then student passes
Step 4: Stop

Eg 2: Deciding whether a student passes or fails an examination

Step 1: Start
Step 2: Read score
Step 3: if (score >= 40)
then student passes
Step 4: else
student fails
Step 5: Stop

In this case a slightly more complex decision is made: if the condition is true the student passes; otherwise the student fails.

Eg 3: Deciding a student's grade based on the score obtained in an examination

Step 1: Start
Step 2: Read score
Step 3: if (score >= 70)
then grade = A
Step 4: elseif (score >= 60 and score < 70)
then grade = B
Step 5: elseif (score >= 50 and score < 60)
then grade = C
Step 6: elseif (score >= 40 and score < 50)
then grade = D
Step 7: else
grade = F
Step 8: Stop
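Eg 3 translates directly into an if/elif/else chain. Note that because the branches are tested in descending order, the "and score < 70" style guards in the pseudocode become unnecessary: each branch is only reached once the earlier conditions have failed.

```python
def grade(score):
    # Selection: exactly one branch executes for any score
    if score >= 70:
        return "A"
    elif score >= 60:
        return "B"
    elif score >= 50:
        return "C"
    elif score >= 40:
        return "D"
    else:
        return "F"

print(grade(65))  # B
```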

- Repetition Control Structure:


o One or more steps are performed repeatedly

Eg: Compute the average of ten numbers


Step 1: Start
Step 2: total = 0, count = 1
Step 3: Read number
Step 4: total = total + number
Step 5: count = count + 1
Step 6: If count <= 10 then step 3
Step 7: Average = total/10
Step 8: Stop
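The repetition structure above, with the "if count <= 10 then step 3" back-branch expressed as a loop. A sample list stands in for the ten Read steps, since the sketch has no input source.

```python
numbers = [3, 7, 1, 9, 4, 6, 2, 8, 5, 10]  # stand-in for ten "Read number" steps

total = 0
for number in numbers:       # Steps 3-6: repeated ten times
    total = total + number
average = total / 10         # Step 7
print(average)               # 5.5
```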
Sequence, Selection and Repetition are simple by themselves, but put together they can construct any algorithm that we can imagine.

STRUCTURED DESIGN PRINCIPLES

There are two basic design principles in structured programming – coupling and cohesion.

Coupling
Coupling is a measure of interconnection among modules: the degree to which one module depends on others. Coupling measures how strongly one element is connected to, has knowledge of, or relies on other elements. An element with low (or weak) coupling does not depend on many other elements.

Levels of Coupling
We have five levels of coupling

5. Data (best)
4. Stamp
3. Control
2. Common
1. Content (worst)

1. Content Coupling - Definition: a module directly references the content of another module:
- Module p modifies a statement of module q
- Module p refers to local data of module q (in terms of a numerical displacement)
- Module p branches to a local label of module q

● Why is this bad?
Content-coupled modules are inextricably interlinked:
– A change to module p requires a change to module q (including recompilation)
– Reusing module p requires using module q as well
– Typically only possible in assembly languages

2. Common Coupling - All modules have read/write access to a global data block, and modules exchange data using the global data block instead of arguments.
Note that a single module with write access while all other modules have read access is not common coupling.

 Why is this bad?


– Have to look at many modules to determine the current state of a variable
– Side effects require looking at all the code in a function to see if there are any global effects
– A change to the declaration in one module requires changes in all other modules
– An identical list of global variables must be declared for the module to be reused
– The module is exposed to more data than it needs
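A minimal sketch of common coupling and its remedy, using invented functions and data. In the coupled version every function reads and writes one shared global block; in the decoupled version the same computation depends only on its argument, so its behaviour can be understood and tested in isolation.

```python
# --- common coupling: modules communicate through a shared global block ---
state = {"price": 0.0, "tax": 0.0}

def set_price(p):
    state["price"] = p                    # writes the global block

def add_tax():
    state["tax"] = state["price"] * 10 / 100   # reads AND writes the block

# --- data coupling: data travels through arguments and return values ---
def tax_for(price):
    return price * 10 / 100               # depends only on its parameter

set_price(100.0)
add_tax()
print(state["tax"], tax_for(100.0))  # 10.0 10.0
```

To know what `add_tax` does you must know the current state of the global block, which any other function may have changed; `tax_for` needs no such context.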

3. Control Coupling – Two components are control coupled if one passes the other a piece of information intended to control the other's internal logic.

May be either good or bad, depending on situation.


a. Bad when component must be aware of internal structure and logic of another
module
b. Good if parameters allow factoring and reuse of functionality

Example:

• Acceptable: Module p calls module q, and q returns a flag that indicates an error (if any)
• Not acceptable: Module p calls module q, and q returns a flag telling p that it must output the error message "I goofed up"
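The acceptable and unacceptable forms above can be sketched as follows; the function names and the square-root task are invented for illustration. In the acceptable form q reports a status and the caller decides how to respond; in the unacceptable form q dictates the caller's output.

```python
def q_acceptable(x):
    """Returns (result, error_flag); the caller owns the response."""
    if x < 0:
        return None, True
    return x ** 0.5, False

def q_unacceptable(x):
    """Hands back the exact message the caller must print: bad coupling."""
    if x < 0:
        return None, "I goofed up"
    return x ** 0.5, ""

result, failed = q_acceptable(-1)
if failed:
    print("p decides how to report the error")  # p controls its own behaviour
```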

4. Stamp Coupling - A component passes a whole data structure to another component that needs only part of it. This requires the second component to know how to manipulate the data structure (e.g., to know about its implementation).

 Why is this bad?


- affects understanding: it is not clear, without reading the entire module, which fields of the record are accessed or changed
- unlikely to be reusable: other products have to use the same higher-level data structures
- passes more data than necessary: e.g., uncontrolled data access can lead to computer crime

5. Data Coupling - Every argument is either a simple argument or a data structure all of whose elements are used by the called module. Data coupling is good, if it can be achieved.

Example
display time of arrival (flight number)
get job with highest priority (job queue)

 Why is this good?


- maintenance is easier
- good design has high cohesion & weak coupling
- easy to write contracts for this and modify component independently.
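The contrast between stamp coupling and data coupling can be sketched with a small invented example: a pay calculation that in one version receives a whole employee record, and in the other receives only the two values it actually uses.

```python
employee = {"name": "Ada", "rate": 20.0, "hours": 8, "address": "..."}

def pay_stamp(emp_record):
    # Stamp coupling: must know the record's layout, and is exposed
    # to fields (name, address) it never needs.
    return emp_record["rate"] * emp_record["hours"]

def pay_data(rate, hours):
    # Data coupling: every argument is used; nothing extra is exposed.
    return rate * hours

print(pay_stamp(employee), pay_data(20.0, 8))  # 160.0 160.0
```

`pay_data` is the easier function to reuse: any caller with a rate and an hour count can use it, whereas `pay_stamp` requires callers to build a record in exactly the expected shape.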

Cohesion
Cohesion is the degree of interaction within a module: the degree to which all elements of a component are directed towards a single task and all elements directed towards that task are contained in a single component.

Levels of Cohesion
We have seven levels of cohesion:

7. Functional (best)
6. Informational
5. Communicational
4. Procedural
3. Temporal
2. Logical
1. Coincidental (worst)

1. Coincidental Cohesion - The module performs multiple, completely unrelated actions. This can happen when there are rigid organizational rules about module size.

Example
A module prints the next line, reverses the characters of the 2nd argument, and adds 7 to the 3rd argument.

 Why is this bad?


– no reusability
– difficult corrective maintenance or enhancement
– Elements needed to achieve some functionality are scattered throughout the
system.

Easy to fix. How?

- Break the module into separate modules, each performing one task

2. Logical Cohesion - The module performs a series of related actions, one of which is selected by the calling module. Several logically related elements sit in the same component, and one of them is selected by the caller.

 Why is this bad?


– interface difficult to understand
– code for more than one action may be intertwined
– difficult to reuse

3. Temporal Cohesion - The module performs a series of actions related in time.

Initialization example
Open the old db, new db, transaction db and print db; initialize the sales district table; read the first transaction record; read the first old db record.

 Why is this bad?


– actions weakly related to one another, but strongly related to actions in other
modules
– code spread out - not maintainable or reusable

Initialization example fix

– define each initializer in its proper module, then have an initialization module call each of them

4. Procedural Cohesion - The module performs a series of actions related by the procedure to be followed by the product. The elements in the component make up a single control sequence in which control flows from each activity to the next, yet the elements are involved in different and potentially unrelated activities.

Example
a function which checks file permissions and then opens the file

 Why is this bad?

– the actions are still only weakly related to one another
– not reusable
– the elements of the component are related only to ensure a particular order of execution

Solution
Break it up!

5. Communicational Cohesion - The module performs a series of actions related by the procedure to be followed by the product, but in addition all the actions operate on the same data.

Example
update record in db and write it to audit trail

 Why is this bad?

– it still leads to less reusability, so break it up

6. Informational Cohesion - module performs a number of actions, each with its own
entry point, with independent code for each action, all performed on the same data
structure.

7. Functional Cohesion - module performs exactly one action

Examples
– get temperature of furnace
– compute orbital of electron
– calculate sales commission

 Why is this good?


– more reusable

– corrective maintenance easier
– fault isolation
– reduced regression faults
– easier to extend product
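Two of the functional-cohesion examples listed above, sketched as Python functions. The specific rules (a flat 5% commission, a furnace that reports its latest reading) are assumptions for illustration, not rules from the text; the point is that each function performs exactly one action, so each can be reused and tested in isolation.

```python
def sales_commission(sales_total, rate=5):
    """One action: compute the commission on a sales total (assumed 5% flat)."""
    return sales_total * rate / 100

def furnace_temperature(readings):
    """One action: report the latest furnace reading."""
    return readings[-1]

print(sales_commission(1000.0))              # 50.0
print(furnace_temperature([450, 475, 460]))  # 460
```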

Properties of a good structured design


i. Minimize coupling between modules
a. Modules don’t need to know much about one another to interact.
b. Low coupling makes future change easier.
ii. Maximize cohesion within modules
a. The contents of each module are strongly inter-related
b. High cohesion makes a module easier to understand.

ABSTRACTION AND MODULARITY

Major issues of modern software are its size and complexity, and its major problems involve
finding effective techniques and tools for organization and maintenance. Controlling software
development and maintenance has always involved managing the intellectual complexity of
programs. Not only must the systems be created; they must also be tested, maintained, and extended. As a result, many different people must understand and modify them at various times during their lifetimes.

Modern programming's key concept for controlling complexity is abstraction, that is, selective emphasis on detail. A dominant theme in the evolution of methodologies and languages is the development of tools for dealing with abstractions. An abstraction is a simplified description, or specification, of a system that emphasizes some of the system's details or properties while suppressing others. A good abstraction is one in which information that is significant to the reader (i.e., the user) is emphasized while details that are immaterial or diversionary, at least for the moment, are suppressed.

What we call "abstraction" in programming systems corresponds closely to what is called "analytic modelling" in many other fields. It shares many of the same problems: deciding which characteristics of the system are important, what variability (i.e., parameters) should be included, which descriptive formalism to use, how the model can be validated, and so on. As in many other fields, we often define hierarchies of models in which lower-level models provide more detailed explanations for the phenomena that appear in higher-level models. Our models also share the property that the description is sufficiently different from the underlying system to require explicit validation. We refer to the abstract description of a model as its specification and to the next lower-level model in the hierarchy as its implementation. The validation that the specification is consistent with the implementation is called verification. The abstractions we use for software tend to emphasize functional properties, emphasizing what results are to be obtained and suppressing details about how this is to be achieved.

A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning; computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from machine language to assembly language to high-level languages. Each stage can be used as a stepping stone for the next stage. Language abstraction continues, for example, in scripting languages and domain-specific programming languages.

Abstraction can apply to control or to data: Control abstraction is the abstraction of actions
while data abstraction is that of data structures.

 Control abstraction involves the use of subprograms and related control-flow concepts.
 Data abstraction allows handling pieces of data in meaningful ways. For example, it is the basic motivation behind the concept of a datatype.

Control abstraction

Programming languages offer control abstraction as one of the main purposes of their use. Computers understand operations at a very low level, such as moving some bits from one memory location to another and producing the sum of two sequences of bits. Programming languages allow this to be expressed at a higher level. For example, consider this statement written in a Pascal-like fashion:

a := (1 + 2) * 5

To a human, this seems a fairly simple and obvious calculation ("one plus two is three, times
five is fifteen"). However, the low-level steps necessary to carry out this evaluation, and
return the value "15", and then assign that value to the variable "a", are actually quite subtle
and complex. The values need to be converted to binary representation (often a much more
complicated task than one would think) and the calculations decomposed (by the compiler or
interpreter) into assembly instructions (again, which are much less intuitive to the
programmer: operations such as shifting a binary register left, or adding the binary
complement of the contents of one register to another, are simply not how humans think
about the abstract arithmetical operations of addition or multiplication). Finally, assigning the
resulting value of "15" to the variable labelled "a", so that "a" can be used later, involves
additional 'behind-the-scenes' steps of looking up a variable's label and the resultant location
in physical or virtual memory, storing the binary representation of "15" to that memory
location, etc.

Without control abstraction, a programmer would need to specify all the register/binary-level
steps each time she simply wanted to add or multiply a couple of numbers and assign the
result to a variable. Such duplication of effort has two serious negative consequences:

1. it forces the programmer to constantly repeat fairly common tasks every time a
similar operation is needed
2. it forces the programmer to program for the particular hardware and instruction set
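A concrete way to peek beneath this abstraction is Python's standard-library `dis` module, which shows the lower-level instructions an expression compiles to. The exact opcodes printed vary between Python versions, so the sketch only asserts the computed result.

```python
import dis

def assign():
    a = (1 + 2) * 5   # the "simple" high-level statement
    return a

dis.dis(assign)  # prints the underlying bytecode, e.g. LOAD_CONST / STORE_FAST
print(assign())  # 15
```

Even for this tiny statement, the disassembly reveals the load/store machinery the programmer never has to write by hand.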

Data abstraction

Data abstraction enforces a clear separation between the abstract properties of a data type and
the concrete details of its implementation. The abstract properties are those that are visible to
client code that makes use of the data type—the interface to the data type—while the
concrete implementation is kept entirely private, and indeed can change, for example to incorporate efficiency improvements over time. The idea is that such changes are not
supposed to have any impact on client code, since they involve no difference in the abstract
behaviour.

For example, one could define an abstract data type called lookup table which uniquely
associates keys with values, and in which values may be retrieved by specifying their
corresponding keys. Such a lookup table may be implemented in various ways: as a hash
table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client
code is concerned, the abstract properties of the type are the same in each case.
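The lookup-table example can be sketched directly: client code sees only `put` and `get`, while the implementation may be a hash table (here a Python dict) or a plain linear list of (key, value) pairs, without clients noticing any difference in abstract behaviour.

```python
class DictTable:
    """Lookup table backed by a hash table (a concrete detail)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class ListTable:
    """Same abstract behaviour, backed by a linear list of pairs."""
    def __init__(self):
        self._pairs = []
    def put(self, key, value):
        # replace any existing binding for key, then append the new one
        self._pairs = [(k, v) for k, v in self._pairs if k != key]
        self._pairs.append((key, value))
    def get(self, key):
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)

for table in (DictTable(), ListTable()):  # clients cannot tell them apart
    table.put("pi", 3.14159)
    print(table.get("pi"))
```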

Structured programming involves splitting complex program tasks into smaller pieces with clear flow control and interfaces between components, reducing complexity and the potential for side effects.

In a simple program, this may aim to ensure that loops have single or obvious exit points and
(where possible) to have single exit points from functions and procedures.

In a larger system, it may involve breaking down complex tasks into many different modules.
Consider a system which handles payroll on ships and at shore offices:

 The uppermost level may feature a menu of typical end-user operations.


 Within that could be standalone executables or libraries for tasks such as signing on
and off employees or printing checks.
 Within each of those standalone components there could be many different source
files, each containing the program code to handle a part of the problem, with only
selected interfaces available to other parts of the program. A sign on program could
have source files for each data entry screen and the database interface (which may
itself be a standalone third party library or a statically linked set of library routines).
 Either the database or the payroll application also has to initiate the process of exchanging data between ship and shore, and that data-transfer task will often contain many other components.

These layers produce the effect of isolating the implementation details of one component and
its assorted internal methods from the others. Object-oriented programming embraced and
extended this concept.

Levels of abstraction

Computer science commonly presents levels (or, less commonly, layers) of abstraction,
wherein each level represents a different model of the same information and processes, but
uses a system of expression involving a unique set of objects and compositions that apply
only to a particular domain. Each relatively abstract, "higher" level builds on a relatively
concrete, "lower" level, which tends to provide an increasingly "granular" representation. For
example, gates build on electronic circuits, binary on gates, machine language on binary,
programming language on machine language, applications and operating systems on
programming languages. Each level is embodied, but not determined, by the level beneath it,
making it a language of description that is somewhat self-contained.

Let us take a look at database systems. Since many users of database systems lack in-depth familiarity with computer data structures, database developers often hide complexity through the following levels:

Data abstraction levels of a database system

Physical level: The lowest level of abstraction describes how a system actually stores data.
The physical level describes complex low-level data structures in detail.

Logical level: The next higher level of abstraction describes what data the database stores and what relationships exist among those data. The logical level thus describes an entire database in terms of a small number of relatively simple structures. Although implementation of the simple structures at the logical level may involve complex physical-level structures, the user of the logical level does not need to be aware of this complexity; this is referred to as physical data independence. Database administrators, who must decide what information to keep in a database, use the logical level of abstraction.

View level: The highest level of abstraction describes only part of the entire database. Even
though the logical level uses simpler structures, complexity remains because of the variety of
information stored in a large database. Many users of a database system do not need all this
information; instead, they need to access only a part of the database. The view level of
abstraction exists to simplify their interaction with the system. The system may provide many
views for the same database.

The ability to provide a design of different levels of abstraction can

 simplify the design considerably


 enable different role players to effectively work at various levels of abstraction
 support the portability of software artefacts (model-based ideally)

STEPWISE REFINEMENT
Stepwise refinement is a form of top-down design. A problem is refined into a sequence of high-level commands, typically written in pseudo-code. Each command is then in turn refined into additional steps, and the process is repeated until all commands are implemented via existing procedures or a programming language. Stepwise refinement encourages structured programming.

Stepwise refinement can be seen as the breaking down of a complex problem into a number of simpler steps, each of which can be solved by an algorithm that is smaller and simpler than the one required to solve the overall problem.

Refinement means replacing an existing step with a new version that fills in more detail.

Example: Making tea. Suppose we have a robot which carries out household tasks. We wish
to program the robot to make a cup of tea. An initial attempt at an algorithm might be:

• 1. Put tea leaves in pot

• 2. Boil water

• 3. Add water to pot

• 4. Wait 5 minutes

• 5. Pour tea into cup

These steps are probably not detailed enough for the robot. We therefore refine each step
into a sequence of smaller steps:

1. Put tea leaves in pot might be refined to

1.1 Open box of tea

1.2 Extract one spoonful of tea leaves

1.3 Tip spoonful into pot

1.4 Close box of tea

2. Boil water might be refined to

2.1. Fill kettle with water

2.2 Switch on kettle

2.3 Wait until water is boiled

2.4 Switch off kettle

5. Pour tea into cup might be refined to

5.1. Pour tea from pot into cup until cup is full

Some of the sub-algorithms need further refinement. For example, the step

2.1. Fill kettle with water could be refined to

2.1.1. Put kettle under tap

2.1.2. Turn on tap

2.1.3. Wait until kettle is full

2.1.4. Turn off tap

The above algorithm consists of a sequence of steps, each of which will be executed exactly
once and in order – termination of the last step implies termination of the algorithm.
However, algorithms with only sequences of steps can’t do much…
— Example: What happens if the tea-box is empty?

If the tea-box is empty we wish to specify an extra step:


• Get new box of tea from cupboard

We can express this by rewriting step 1.1 as


1.1.1. Take tea box from shelf
1.1.2. If box is empty
then get new box from cupboard
1.1.3. Remove lid from box

(More complicated conditions can use AND, OR, NOT)
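The refined tea algorithm maps naturally onto nested function calls: each numbered step becomes a function, and refinement replaces a one-line "step" with a body of smaller steps. The robot's actions are simulated here as log entries, an assumption made so the sketch can run without hardware.

```python
log = []

def do(action):
    log.append(action)   # stand-in for a real robot action

def put_tea_leaves_in_pot(box_empty=False):
    do("take tea box from shelf")             # 1.1.1
    if box_empty:                             # 1.1.2: selection inside a step
        do("get new box from cupboard")
    do("remove lid from box")                 # 1.1.3
    do("extract one spoonful of tea leaves")  # 1.2
    do("tip spoonful into pot")               # 1.3
    do("close box of tea")                    # 1.4

def boil_water():
    do("fill kettle with water")              # 2.1
    do("switch on kettle")                    # 2.2
    do("wait until water is boiled")          # 2.3
    do("switch off kettle")                   # 2.4

def make_tea(box_empty=False):
    put_tea_leaves_in_pot(box_empty)          # 1
    boil_water()                              # 2
    do("add water to pot")                    # 3
    do("wait 5 minutes")                      # 4
    do("pour tea from pot into cup")          # 5

make_tea(box_empty=True)
print(len(log), "robot actions performed")
```

Each level of refinement in the text corresponds to one function body here; `make_tea` reads like the initial five-step algorithm, while the details live one level down.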

Example 2

Consider this partial brainstormed list of things involved in getting a chicken dinner ready.

Cook. Clean house. Find table cloth. Set table. Get plates out. Choose recipe. Make shopping
list. Shop for ingredients. Go to store. Chop vegetables. Preheat oven to 400. Vacuum dining
room. Tidy up living room. Put out glassware. Put out silverware. Napkins. Roast chicken.
Put chicken in pan. Leave chicken for 90 minutes.

As is common, the brainstorm has produced ideas (actually, things for our to-do list) at different levels of detail: some of them are "lower down" than others, or, we might say, some contain others. Let's arrange these hierarchically, identifying which steps are part of another step. One way to do this is to think of each action as a set of actions:

(1) Clean house = {Vacuum dining room. Tidy up living room.}
(2) Cook = {Choose recipe. Shop for ingredients. Roast chicken. Do veggies.}
(3) Set table = {Find table cloth. Get plates out. Put out glassware. Put out silverware.
    Napkins.}
(4) Roast chicken = {Preheat oven to 400. Put chicken in pan. Leave chicken in 400 oven
    for 90 minutes.}
(5) Do veggies = {Chop vegetables. Cook vegetables.}
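The hierarchy of sets (1)-(5) can also be written as a nested data structure. In the illustrative Python sketch below (names are ours), a value of `None` marks a primitive step, a nested dictionary marks a step that contains sub-steps, and a small recursive function walks the tree to recover the flat list of to-do items:

```python
# The chicken-dinner hierarchy as nested dictionaries.
dinner = {
    "clean house": {"vacuum dining room": None, "tidy up living room": None},
    "set table": {
        "find table cloth": None, "get plates out": None,
        "put out glassware": None, "put out silverware": None,
        "napkins": None,
    },
    "cook": {
        "choose recipe": None,
        "shop for ingredients": None,
        "roast chicken": {
            "preheat oven to 400": None,
            "put chicken in pan": None,
            "leave chicken in oven for 90 minutes": None,
        },
        "do veggies": {"chop vegetables": None, "cook vegetables": None},
    },
}

def flatten(task_tree):
    """Depth-first list of the primitive steps, in order."""
    steps = []
    for name, subtasks in task_tree.items():
        if subtasks is None:
            steps.append(name)                  # a leaf: an actual to-do item
        else:
            steps.extend(flatten(subtasks))     # recurse into sub-steps
    return steps
```

Calling `flatten(dinner)` yields every primitive step, which shows how the hierarchy "contains" the original brainstormed list.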

Level zero is just "Make chicken dinner for guests" (bigger circle). Below this, at the first
level, we have "clean house," "cook," "set table," and "eat drink and be merry." At the next
level, things like vacuum dining room, choose recipe, find table cloth and roast chicken. At
the third level, we have the steps that go into roasting the chicken: put it in the pan, preheat
oven, etc.

Our zeroth order flowchart would contain just the first steps — they are the first level of
stepwise refinement.

For our next refinement, we can look more closely at the first step, breaking it down into its
sub-level process components.

And, next, we can take a second level 1 step and break it down into its lower level, level 2
sub-processes.

Review

So what did we do here? First we brainstormed — just thinking about having folks over for
dinner sets off all manner of task-thinking in our heads and so we don't fight it. But then we
notice that there is some structure, some levels of generality/detail in these things. In fact, we
can say that some of them "contain" others — some are steps within others. And when we
group everything into a hierarchy of collections we can start to describe the process at the
very top level (here it was 1) clean, 2) cook, 3) table, 4) enjoy).

We then treated each of these as if it were a "black box" (a process whose inside we didn't
need to know) and simply arranged them in the right sequence. Even though we knew there
were details to be filled in, we decided that the big prep tasks (cleanup, cooking, set-up)
could be thought of as the three main steps.

Then, when it came time to think about cooking, we broke it up into four parts, one of which,
"roast chicken," we knew had some other steps in it and so we just treated it as a black box,
deferring details until our next step of refinement.

There are two important "skills" here — one is being able to recognize "levels" and the other
is being comfortable leaving things in black boxes until it becomes necessary to unpack them.

Practice Problem

Consider your "morning routine" — all the things you need to do to get from being asleep to
being in class. Brainstorm and list 20-25 things you do to accomplish this transition. Be
deliberately "sloppy" and put things at different levels of detail on the list.

1. Arrange the items hierarchically. You can use the set technique above, or the circle
diagram, or just make an outline.
2. Draw (on paper) a series of four refinements of a flow chart that represents this
process. The first should use three sequential process rectangles. Each successive
refinement should "blow up" one step from the previous refinement. You may have to
"invent" new sub-processes if your original list doesn't have four levels (note that this
is a place where we would use the word analysis; its original meaning is "to break
up into parts").

Hint: Get up. Drive to school. Go to class. Start car. Follow usual route. If heavy traffic
follow alternative route. Drink OJ. Eat breakfast. Shower. Wash hair. Brush teeth. Make
coffee. Fill coffee maker. Perform ablutions.

Example 3

Suppose that our process is offer hospitality by which we mean to serve beverages — coffee
or tea. The simplest flow chart would look like this:

Notice that we are deferring detail here. This might be called a "zeroth order" flowchart. It
has an entry/beginning and an exit/end and there is an action step in between. Now let's
imagine how we can turn up the magnification one notch and ask what that action step consists
of. In plain language: we ask whether our guest would like coffee or tea and then we prepare
and serve accordingly.

Again, we are deliberately avoiding going into detail here. This is a general principle: defer
detail. Don't worry, we are going to produce several iterations of most flow charts and we'll
add appropriate levels of detail as we go along.

Any step of a process is subject to further refinement, of being represented in greater detail.
What would be the next level of detail for the process of making coffee?

Some candidates: measure coffee, decide on beans to use, brew the coffee, grind beans,
prepare dry coffee, boil water, add boiling water, get filter, add milk, add sugar. But these are
not at the same level of refinement or detail. Some of them "contain" others. For example:

(6) Brew coffee = {prepare dry coffee, boil water, add boiling water}
(7) Prepare dry coffee = {decide on beans, grind beans, measure coffee, get filter}

If we think only about one level of refinement beyond "make coffee" (remember: defer detail
whenever possible) we might list

• brew coffee
• pour
• add sugar
• add milk
• serve

But do we always want to add sugar and milk? No, only if the coffee drinker wants them.
Therefore, our flow chart is going to have to represent these contingencies. Here is the above
flow chart with the single step, "make coffee," blown up into its next level of refinement:
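The "make coffee" step, blown up one level with its sugar and milk contingencies, can be sketched as code. This is an illustrative Python sketch (the step names and function name are ours); each contingency becomes an if-statement, so the step is executed only when the drinker wants it:

```python
# "Make coffee" at its next level of refinement, with contingencies.
def make_and_serve_coffee(wants_sugar, wants_milk):
    steps = ["brew coffee", "pour"]
    if wants_sugar:                 # only if the coffee drinker wants it
        steps.append("add sugar")
    if wants_milk:
        steps.append("add milk")
    steps.append("serve")
    return steps
```

With both contingencies declined, the routine reduces to the plain sequence brew, pour, serve.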

TAKEAWAYS

1. Start from the outside and move in. Top-down. Stepwise refinement. Defer detail.
2. Strive for consistent "levels"
3. Every unit/module has single entry and exit point.
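Takeaway 3 can be shown in code. In the hedged sketch below (function names are illustrative), each module is a function with a single entry (the call) and a single exit (one return at the end); the bodies of `make_coffee` and `make_tea` are deferred detail, i.e. black boxes to be refined later:

```python
# Each module: single entry, single exit; details deferred.
def make_coffee():
    return "coffee"     # black box: to be refined later

def make_tea():
    return "tea"        # black box: to be refined later

def offer_hospitality(choice):
    if choice == "coffee":
        beverage = make_coffee()
    else:
        beverage = make_tea()
    return beverage     # the single exit point of this module
```

Because every module has one way in and one way out, any of them can later be expanded without disturbing the surrounding flow.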

Practice Problem

Draw a flow chart representing the task of getting home from work by car: (1) leave; (2)
drive; (3) arrive.

It is easy to get lost in the complexity of nuance and micro-considerations when describing a
process, the more so when you know a lot about something. The skilled flow charter is able
to defer consideration of detail in an orderly manner. She might, for example, draw a first
flow chart of the above process that looks like this:

And then she might think: "What is the NEXT level of detail here?" and think about the fact
that getting home consists of three phases. One is getting to the car, putting books and such in
the back seat, turning on engine, buckling up, selecting a radio station, and making her way to
the campus exit. Then there is the getting home part. This involves selecting a route, perhaps
making some stops, being subject to delays and such thrown at her by the rest of the world.
And finally there is arriving at home, finding a parking spot, grabbing all of her stuff, locking
the car, finding the car keys, etc. She has just articulated all manner of detail but defers
thinking about these, instead refining the task into just three major sub-processes:

And what will be her next refinement? A good practice is to further refine just one thing at a
time. Here she selects the "drive home" step because it's the one that has some real variation
from day to day. Specifically, it involves listening to the traffic report on the radio and then
picking a route home based on where the traffic seems to be worse. She doesn't detail her
decision making process yet, but shows where it comes in the sequence of things.

Finally, she'll refine the "take the best route" sub-process. It involves using the information
she gets from listening to the radio to decide between one of two routes. The question or
condition she evaluates is: "Is 580 backed up?" Her "protocol" is "take highway 13 if it is,
otherwise take highway 580."
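Her protocol is a single selection block. The sketch below is illustrative (the traffic report is reduced to a boolean parameter of our own naming):

```python
# The "take the best route" sub-process as one selection block.
def choose_route(is_580_backed_up):
    if is_580_backed_up:
        route = "highway 13"
    else:
        route = "highway 580"
    return route    # single exit, matching the single-exit discipline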

The figure below illustrates these four phases of stepwise refinement. Two things to notice.
First, we refined one thing at a time. The light trapezoids show how a step in a diagram to the
left is refined (expanded) in the next flow chart to the right.

Second, the new flow chart elements "fit completely inside" the element they are a refinement
of. For example, in the third chart, the box "take best route" has a single entry point and a
single exit point. At a certain point in the process we "enter" this step and then when it is
done we exit. In the third chart, just what happens during the step is not specified; it is a
so-called "black box." In the fourth chart this step is expanded, but notice that the "if block" it
expands into has a begin circle at the entry point and an end circle at the exit point. These
correspond to the single entry and single exit in the previous step.

Here we want to learn how to read and make flowcharts, understand the concepts of stepwise
refinement, top-down design, and "black-boxing," and how to translate verbal (and
ethnographic) descriptions into flow charts.

STRUCTURED DESIGN TECHNIQUES

Various design approaches have been suggested. They all attempt to tackle the design task by
a direct onslaught. In step-wise refinement the starting point is an abstract program which, if
it could be implemented, would solve the whole problem: subsequent steps involve refining
the statements of this abstract program into further programs and the statements of those
programs into further programs still. In functional decomposition the starting point is a
dissection of the whole problem into a number of functions, each of which may then be
decomposed in turn. In programming by action clusters, the starting point is the recognition
of clusters of associated actions which guarantee that specific requirements of the problem
will always be met. Broadly, we may characterise the first two approaches as top-down,
while the third is bottom-up. But all of them tackle the problem directly by considering the
functional specification.

The structured programming task can be categorized into structured analysis and structured design techniques.
Structured analysis is a front-end methodology that allows users and/or systems analysts to
convert a real-world problem into a pictorial diagram or other logical representation that can
subsequently be used by the systems developers and/or programmers to design an
information system. Structured design is concerned with physical design based on the results
of structured analysis. More generally, structured analysis transforms the abstract problem
into a feasible logical design, while structured design concentrates on converting the logical
design into a physical information system.

The major steps in structured design techniques are outlined in the chart below.

Construct a structure chart

As the name implies, the purpose of this step is to construct a structure chart that shows the
hierarchical relationship and structure of all the data flows identified during structured
analysis. In addition, control flows are added to the model to facilitate subsequent systems
development.

Examine the coupling (interdependency) relationships

A key objective of structured design is to define loosely coupled, independent modules.
Generally, a module's degree of independence is inversely proportional to the number of data
elements (or composites) that flow between the module and the rest of the system.
Consequently, the focus of this step is to increase module independence by identifying and
restructuring modules with excessive data flows.
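The coupling idea can be made concrete with a hypothetical sketch (the order record and function names are ours). The tightly coupled module depends on the layout of a whole shared record; the loosely coupled one receives only the data elements it needs, so fewer things flow across its interface:

```python
# Hypothetical illustration of tight vs loose coupling.
def line_total_tight(order):
    # Tight: any change to the structure of the `order` record breaks this.
    return order["qty"] * order["price"] * (1 - order["discount"])

def line_total_loose(qty, price, discount):
    # Loose: only three data elements cross the module's interface.
    return qty * price * (1 - discount)
```

Both compute the same result, but the loosely coupled version can be reused and tested without knowing anything about how orders are stored.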

Examine module cohesion

A second objective of structured design is to define cohesive modules that perform a single,
complete function. The focus of this step is on combining modules that perform common
functions, consolidating functions to reduce the number of interfaces, and relocating modules
to increase system efficiency.
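Cohesion can likewise be sketched in code. In this hypothetical example (all names ours), the first function mixes validation, computation, and formatting in one unit (low cohesion); the refactored modules each perform a single, complete function (high cohesion):

```python
# Hypothetical illustration of low vs high cohesion.
def process_low_cohesion(value):
    # Mixes three unrelated jobs in one module.
    if value < 0:
        raise ValueError("negative input")
    doubled = value * 2
    return "result = " + str(doubled)

def validate(value):
    # One complete job: input checking.
    if value < 0:
        raise ValueError("negative input")
    return value

def double(value):
    # One complete job: the computation.
    return value * 2

def format_result(result):
    # One complete job: output formatting.
    return "result = " + str(result)
```

The cohesive version produces the same result via `format_result(double(validate(x)))`, but each piece can now be reused, replaced, or tested on its own.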

Refine the structure chart

Using the results of the previous two steps, a final version of the
structure chart is prepared.

Perform transform analysis

The purpose of transform analysis is to group together the modules (or processes) that
manipulate a particular set of data or a particular data structure. For example, the processes
that accept inventory transaction data, modify inventory levels, and update the master
inventory data are probably related. The afferent (input), efferent (output), transform (data
modification), and coordinate (controlling) modules are identified first. Grouping the
modules to form a control structure might involve designating one module as the master
(promoting a boss) or creating a new master (hiring a new boss). The subordinate modules
are called slaves.

Perform transaction analysis

The purpose of transaction analysis is to group all modules (or processes) triggered by the
same transaction to form a transaction center. For example, all the tasks performed in
response to the arrival of an order from a supplier are related. Often, the transaction center
serves as a control module.

Create module specifications

The primitives defined in the data flow diagram are specified in terms of logical sequence,
selection, and repetition blocks.
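The three logical blocks (sequence, selection, repetition) are exactly the control structures of a structured language; the short illustrative routine below (our own example) uses all three:

```python
# Sequence, repetition, and selection in one primitive's specification.
def sum_of_positives(numbers):
    total = 0                 # sequence: statements executed in order
    for n in numbers:         # repetition: a loop block
        if n > 0:             # selection: a conditional block
            total += n
    return total
```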

Package the physical modules

The key purpose of this step is to ensure that the parent-child relationships between the
modules are preserved when the procedures are grouped to form physical load modules for
efficient execution on a computer. Often, a procedural analysis is performed to determine
which procedures must be grouped within the same load module to avoid severe execution
and/or testing errors.

Difference between Structured and Unstructured Programming Languages

Key Difference: The main difference between structured and unstructured programming
languages is that a structured programming language allows a programmer to code a program
by dividing the whole program into smaller units or modules. In an unstructured programming
language, the program must be written as a single continuous (i.e. nonstop or unbroken) block.

When it comes to programming, there are two main types: Structured and Unstructured
Programming. Each has its own languages. Unstructured Programming is historically the
earliest type of programming that was capable of creating Turing-complete algorithms. As it
was the earliest, it had its own set of advantages and disadvantages. Eventually, unstructured
programming morphed and evolved into structured programming, which was easier to use.
Structured programming eventually evolved into procedural programming and then object-
oriented programming. Again, all with their own set of advantages and disadvantages.

With reference to programming, the main difference between structured and unstructured
programming languages is that a structured programming language allows a programmer to
code a program by dividing the whole program into smaller units or modules. This makes it
easier to code, as the programmer can work on one segment of the code at a time. It also
allows the programmer to check each module individually before combining it with the
program. Hence, it becomes easier to modify and debug, as the programmer can check and
modify a single module while leaving the rest of the program as is.
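The modular style can be sketched as follows (an illustrative example of our own): the program is divided into three small units, and each can be checked on its own before the units are combined.

```python
# A program divided into modules, each testable individually.
def parse_numbers(raw):        # module 1: input handling
    return [int(x) for x in raw.split()]

def average(values):           # module 2: computation
    return sum(values) / len(values)

def report(avg):               # module 3: output formatting
    return "average = " + format(avg, ".1f")

# Each module can be verified alone, then combined into the program:
summary = report(average(parse_numbers("2 4 6")))
```

If `average` had a bug, only that one module would need to be inspected and fixed, exactly as the text describes.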

In an unstructured programming language, however, the program must be written as a single
continuous (i.e. nonstop or unbroken) block. This makes it more complicated, as the whole
program is taken as one unit. It also becomes harder to modify and debug: if there is a bug in
the program, and there always is, the programmer must check the code of the entire program,
as opposed to just one module.

Additionally, unstructured programming languages allow only basic data types, such as
numbers, strings and arrays (numbered sets of variables of the same type), which is not the
case with structured programming languages. However, unstructured programming languages
are often touted for giving programmers the freedom to program as they want. Structured
programming languages make extensive use of subroutines, block structures, and for and
while loops, whereas unstructured programming languages rely on simple tests and jumps
such as the GOTO statement, which can lead to "spaghetti code". Spaghetti code is highly
difficult to follow and to maintain, which is why many people prefer not to use unstructured
programming languages.

Comparison between Structured and Unstructured Programming Language:

Also known as
  Structured: Modular programming
  Unstructured: Non-structured programming

Purpose
  Structured: To enforce a logical structure on the program being written, making it more
  efficient and easier to understand and modify.
  Unstructured: Just to code.

Programming
  Structured: Divides the program into smaller units or modules.
  Unstructured: The entire program must be coded in one continuous block.

Code
  Structured: Produces readable code.
  Unstructured: Produces hardly-readable ("spaghetti") code.

For projects
  Structured: Usually considered a good approach for creating major projects.
  Unstructured: Sometimes considered a bad approach for creating major projects.

Freedom
  Structured: Has some limitations.
  Unstructured: Offers programmers the freedom to program as they want.

Allowed data types
  Structured: Allows a variety of data types.
  Unstructured: Allows only basic data types, such as numbers, strings and arrays
  (numbered sets of variables of the same type).

Modify and debug
  Structured: Easy to modify and to debug.
  Unstructured: Very difficult to modify and to debug.

Languages
  Structured: C, C++, C#, Java, Perl, Ruby, PHP, ALGOL, Pascal, PL/I and Ada.
  Unstructured: Early versions of BASIC (such as MSX BASIC and GW-BASIC), JOSS, FOCAL,
  MUMPS, TELCOMP, COBOL, machine-level code, early assembler systems (without procedural
  metaoperators), assembler debuggers, and some scripting languages such as the MS-DOS
  batch file language.
