Module 2 MTE 504
2 Discuss how the software crisis manifested itself in the early days of software engineering.
Unit 1 Overview of Software Development
1.0 Introduction
In the last unit, you learnt about the software crisis: its manifestation, its causes, and the solutions to it. In this unit, we are going to look at an overview of software development. You will specifically learn about the various stages involved in software development. After studying this unit you are expected to have achieved the objectives listed below.
2.0 Objectives
Software development is the set of activities that results in software products. Software
development may include research, new development, modification, reuse, re-
engineering, maintenance, or any other activities that result in software products.
In particular, the first phase in the software development process may involve many departments, including marketing, engineering, research and development, and general management.
The term software development may also refer to computer programming, the process of
writing and maintaining the source code.
There are several different approaches to software development. While some take a more
structured, engineering-based approach, others may take a more incremental approach,
where software evolves as it is developed piece-by-piece. In general, methodologies
share some combination of the following stages of software development:
Market research
Gathering requirements for the proposed business solution
Analyzing the problem
Devising a plan or design for the software-based solution
Implementation (coding) of the software
Testing the software
Deployment
Maintenance and bug fixing
These stages are collectively referred to as the software development lifecycle (SDLC).
These stages may be carried out in different orders, depending on the approach to software development. The time devoted to different stages may also vary, as may the detail of the documentation produced at each stage. In a "waterfall"-based approach, the stages are carried out in turn, whereas in a more "extreme" approach they may be repeated over various cycles or iterations. It is important to note that a more "extreme" approach usually involves less time spent on planning and documentation and more time spent on coding and developing automated tests. More "extreme" approaches also encourage continuous testing throughout the development lifecycle, which helps keep the product close to defect-free at all times. The "waterfall"-based approach attempts to assess the majority of risks and to develop a detailed plan for the software before implementation (coding) begins; it thereby avoids significant design changes and re-coding in the later stages of the software development lifecycle.
Each methodology has its merits and demerits. The choice of an approach to solving a problem using software depends on the type of problem. If the problem is well understood and a solution can be effectively planned out ahead of time, the more "waterfall"-based approach may be the best choice. On the other hand, if the problem is unique (at least to the development team) and the structure of the software solution cannot be easily pictured, then a more "extreme" incremental approach may work best.
4.0 Conclusion
This unit has introduced you to software development. You have been informed of the various stages of software development.
5.0 Summary
Unit 2 Software Life Cycle Models
1.0 Introduction
The last unit exposed you to an overview of software development. In this unit you will learn about the various lifecycle models (the phases of the software life cycle) in general. You will also specifically learn about the requirements and design phases.
2.0 Objectives
3.1 General Model
Software life cycle models describe phases of the software cycle and the order in which
those phases are executed. There are a lot of models, and many companies adopt their
own, but all have very similar patterns. According to Raymond Lewallen (2005), the general,
basic model is shown below:
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
Each phase produces deliverables needed by the next phase in the life cycle.
Requirements are converted into design. Code is generated during implementation that is
driven by the design. Testing verifies the deliverable of the implementation phase against
requirements.
3.2 Waterfall Model
This is the most common life cycle model, also referred to as the linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin. At the end of each phase, there is always a review to ascertain whether the project is on the right track and whether or not to continue or abandon the project. Unlike the general model, phases do not overlap in a waterfall model.
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
3.2.1 Advantages
3.2.2 Disadvantages
3.3 V-Shaped Model
Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
Requirements begin the life cycle model just like the waterfall model. Before
development is started, a system test plan is created. The test plan focuses on meeting the
functionality specified in the requirements gathering.
The high-level design phase focuses on system architecture and design. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.
The low-level design phase is where the actual software components are designed, and
unit tests are created in this phase as well.
The implementation phase is, again, where all coding takes place. Once coding is
complete, the path of execution continues up the right side of the V where the test plans
developed earlier are now put to use.
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
3.3.1 Advantages
Simple and easy to use.
Each phase has specific deliverables.
Higher chance of success over the waterfall model due to the development of test
plans early on during the life cycle.
Works well for small projects where requirements are easily understood.
3.3.2 Disadvantages
3.4 Incremental Model
The first iteration produces a working version of the software, and this makes it possible to have working software early in the software life cycle. Subsequent iterations build on the initial software produced during the first iteration.
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
3.4.1 Advantages
Generates working software quickly and early during the software life cycle.
More flexible – inexpensive to change scope and requirements.
Easier to test and debug during a smaller iteration.
Easier to manage risk because risky pieces are identified and handled during their own iterations.
Each iteration is an easily managed milestone.
3.4.2 Disadvantages
3.5 Spiral Model
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases, namely Planning, Risk Analysis, Engineering and Evaluation. A software project continually goes through these phases in iterations, which are called spirals. In the baseline spiral, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
Requirements are gathered during the planning phase. In the risk analysis phase, a process is carried out to identify risks and alternative solutions. A prototype is produced at the end of the risk analysis phase.
Software is produced in the engineering phase, along with testing at the end of the phase. The evaluation phase provides the customer with the opportunity to evaluate the output of the project to date before the project continues to the next spiral.
In the spiral model, the angular component denotes progress, and the radius of the spiral
denotes cost.
Fig 5 Spiral Life Cycle Model
Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx.
3.5.1 Merits
3.5.2 Demerits
3.6 Requirements Phase
Business requirements are gathered in this phase. This phase is the main center of attention for the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements. The general questions that require answers during a requirements gathering phase are: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? A list of functionality that the system should provide is produced at this point; it describes the functions the system should perform, the business logic that processes data, what data is stored and used by the system, and how the user interface should work. The requirements development phase may have been preceded by a feasibility study, or a conceptual analysis phase of the project. The requirements phase may be divided into requirements elicitation (gathering the requirements from stakeholders), analysis (checking them for consistency and completeness), specification (documenting the requirements) and validation (making sure the specified requirements are correct).
A maintainability requirement, for example, may be decomposed into restrictions on software constructs or limits on lines of code.
3.6.5.1 Fix system boundaries
This is the initial step. It helps in identifying how the new application fits into the business processes and into the larger picture, as well as its capacity and limitations.
3.6.5.2 Identify the customer
This focuses on identifying who the 'users' or 'customers' of an application are, that is to say, knowing the group or groups of people who will be directly or indirectly impacted by the new application. This allows the requirements analyst to know in advance where to look for answers.
User interfaces
3.7 Requirements Management
Requirements management is the all-inclusive process that covers all aspects of software requirements analysis and ensures the verification, validation and traceability of requirements. Effective requirements management practices ensure that all system requirements are stated unambiguously, that omissions and errors are corrected, and that evolving specifications can be incorporated later in the project lifecycle.
3.8 Design Phase
The software system design is formed from the results of the requirements phase. This is where the details of how the system will work are produced. Deliverables in this phase include the hardware and software architecture, the communication design and the software design.
The design process is very important. A builder, for example, would not attempt to build a house without an approved blueprint, so as not to risk the structural integrity of the house and customer satisfaction. In the same way, the approach to building software products is no different. The emphasis in design is on quality. It is pertinent to note that this is the only phase in which the customer's requirements can be precisely translated into a finished software product or system. As such, software design serves as the foundation for all software engineering steps that follow, regardless of which process model is being employed.
During the design process the software specifications are changed into design models that
express the details of the data structures, system architecture, interface, and components.
Each design product is re-examined for quality before moving to the next phase of
software development. At the end of the design process a design specification document
is produced. This document is composed of the design models that describe the data,
architecture, interfaces and components.
Architectural design – defines the relationships among the major structural elements of the software, the "design patterns" that can be used to attain the requirements that have been defined for the system, and the constraints that affect the way in which the architectural patterns can be applied. It is derived from the system specification, the analysis model, and the subsystem interactions defined in the analysis model (DFD).
Interface design – explains how the software elements communicate with each other, with other systems, and with human users. Much of the necessary information is provided by the data flow and control flow diagrams.
Component-level design – converts the structural elements defined by the software architecture into procedural descriptions of software components, using information acquired from the process specification (PSPEC), control specification (CSPEC), and state transition diagram (STD).
In order to assess the quality of a design representation, the yardstick for a good design should be established. Such a design should:
These criteria are not acquired by chance. The software design process promotes good
design through the application of fundamental design principles, systematic methodology
and through review.
The design process is a series of steps that allow the designer to describe all aspects of the software to be built. However, it is not merely a recipe book; for a competent and successful design, the designer must use creative skill, past experience, a sense of what makes "good" software, and have a commitment to quality.
The following principles have been established to help the software engineer direct the design process:
The design process should not suffer from tunnel vision – a good designer should consider alternative approaches, judging each based on the requirements of the problem, the resources available to do the job and any other constraints.
The design should be traceable to the analysis model – because a single element of the design model often traces to multiple requirements, it is necessary to have a means of tracking how the requirements have been satisfied by the model.
The design should not reinvent the wheel – systems are constructed using a suite of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention. Design time should be spent expressing truly fresh ideas and incorporating patterns that already exist.
The design should reduce intellectual distance between the software and the
problem as it exists in the real world – This means that, the structure of the
software design should imitate the structure of the problem domain.
The design should show uniformity and integration – a design is uniform if it
appears that one person developed the whole thing. Rules of style and format
should be defined for a design team before design work begins. A design is
integrated if care is taken in defining interfaces between design components.
The design should be structured to degrade gently, even when bad data, events, or operating conditions are encountered – well-designed software should never "bomb"; it should be designed to accommodate unusual circumstances, and if it must terminate processing, to do so in a graceful manner.
The design should be reviewed to minimize conceptual (semantic) errors – there is sometimes a tendency to focus on minute details when the design is reviewed, missing the forest for the trees. The design team should ensure that the major conceptual elements of the design have been addressed before worrying about the syntax of the design model.
Design is not coding, coding is not design – even when detailed designs are created for program components, the level of abstraction of the design model is higher than that of source code. The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.
The design should be structured to accommodate change
The design should be assessed for quality as it is being created
With proper application of design principles, the design displays both external and internal quality factors. External quality factors are those that can readily be observed by the user (e.g. speed, reliability, correctness, usability). Internal quality factors have to do with technical quality, especially the quality of the design itself. To achieve internal quality, the designer must understand basic design concepts.
3.12 Fundamental Software Design Concepts
Over the past four decades, a set of fundamental software design concepts has evolved, each providing the software designer with a foundation from which more sophisticated design methods can be applied. Each concept assists the software engineer to answer the following questions:
Are there uniform criteria that define the technical quality of a software
design?
4.0 Conclusion
Software life cycle models describe phases of the software cycle and the order in which
those phases are executed.
5.0 Summary
Software life cycle models describe phases of the software cycle and the order in which those phases are executed.
In the general model, each phase produces deliverables required by the next phase in the life cycle. Requirements are translated into design. Code is produced during implementation, driven by the design. Testing verifies the deliverable of the implementation phase against requirements.
In a waterfall model, each phase must be completed in its entirety before the next phase can begin. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project. Unlike the general model, phases do not overlap in a waterfall model.
Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing is emphasized in this model more than in the waterfall model. The testing procedures are developed early in the life cycle, before any coding is done, during each of the phases preceding implementation.
The incremental model is an intuitive approach to the waterfall model.
Multiple development cycles take place here, making the life cycle a "multi-waterfall" cycle. Cycles are divided up into smaller, more easily managed
iterations. Each iteration passes through the requirements, design,
implementation and testing phases.
The spiral model is similar to the incremental model, with more emphasis placed on risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering and Evaluation. A software project repeatedly passes through these phases in iterations (called spirals in this model). In the baseline spiral, starting in the planning phase, requirements are gathered and risk is assessed. Each subsequent spiral builds on the baseline spiral.
In the requirements phase, business requirements are gathered; this phase is the main focus of the project managers and stakeholders.
The software system design is produced from the results of the requirements phase; it is the phase where the details of how the system will work are produced.
1 What is a software life cycle model?
2 Explain the general model.
3 Compare and contrast the general and waterfall models.
4 Explain the V-shaped life cycle model.
5 Explain the incremental model.
6 Compare and contrast the incremental and spiral models.
7 Discuss the requirements and design phases.
Unit 3 Modularity
1.0 Introduction
In Unit 2 we discussed software lifecycle models in general, and the requirements and design phases of software development in detail. In this unit we will look at modularity in programming.
2.0 Objectives
What is Modularity?
The concept of modularity in computer software has been promoted for about five decades. In essence, the software is divided into separately named and addressable components, called modules, that are integrated to satisfy problem requirements. It is important to note that a reader cannot easily understand a large program written as a single module. The number of variables, control paths and the sheer complexity make understanding almost impossible. A modular approach therefore allows the software to be intellectually manageable. However, software cannot be subdivided indefinitely so as to make the effort required to understand or develop it negligible. Although the effort needed to develop each individual module falls as the number of modules rises, the effort needed to integrate the modules and their interfaces grows, so beyond some point further subdivision increases the total cost.
A logical module exposes a defined set of functionalities to the outside world while hiding its implementation details.
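As a minimal sketch of this idea in Python (the module name, function names and tax rate are illustrative, not taken from this course material), a logical module can expose a single public function while hiding its helpers behind the leading-underscore convention:

# pricing.py - exposes one function; everything else is an implementation detail.

_TAX_RATE = 0.075  # hidden internal detail (illustrative value)

def _apply_tax(amount):
    # Internal helper: the leading underscore marks it, by convention,
    # as private to the module.
    return amount * (1 + _TAX_RATE)

def final_price(unit_price, quantity):
    # Public interface: the only functionality exposed to the outside world.
    return _apply_tax(unit_price * quantity)

A caller imports the module and uses only final_price; the tax computation can later change without affecting any caller.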
Functionally Scalable: depending on the level of sophistication of your modular
design, it's possible to introduce new functionalities with little or no change to
existing modules. This allows your software system to scale in functionality
without becoming brittle and a burden on developers.
Process-oriented design
This approach places the emphasis on the process with the objective being to design
modules that have high cohesion and low coupling. (Data flow analysis and data flow
diagrams are often used.)
Data-oriented design
In this approach the data comes first. That is, the structure of the data is determined first, and then the procedures are designed to fit the structure of the data.
Object-oriented design
In this approach, the objective is to first identify the objects and then build the product around them. In essence, this technique is both data- and process-oriented.
3.19 Attributes of a good Module
Evaluate the first iteration of the program structure to reduce coupling and improve cohesion. Once the program structure has been developed, modules may be exploded or imploded with the aim of improving module independence (a code sketch follows this list).
o An exploded module becomes two or more modules in the final program
structure.
o An imploded module is the result of combining the processing implied by
two or more modules.
An exploded module normally results when common processing exists in two or more
modules and can be redefined as a separate cohesive module. When high coupling is
expected, modules can sometimes be imploded to reduce passage of control, reference to
global data and interface complexity.
Attempt to minimise structures with high fan-out; strive for fan-in as structure depth increases. The structure shown inside the cloud in Fig. 6 does not make effective use of factoring.
Fig 6 Example of a program structure
Keep the scope of effect of a module within the scope of control for that module.
o The scope of effect of a module is defined as all other modules that are affected by a decision made by that module. For example, the scope of control of module e is all modules that are subordinate to it, i.e. modules f, g, h, n, p and q.
Define modules whose function is predictable and not overly restrictive (e.g. a
module that only implements a single task).
o A module is predictable when it can be treated as a black box; that is, the
same external data will be produced regardless of internal processing
details. Modules that have internal "memory" can be unpredictable unless
care is taken in their use.
o A module that restricts processing to a single task exhibits high cohesion
and is viewed favourably by a designer.
Strive for controlled entry modules; avoid pathological connections (e.g. branches into the middle of another module)
o This warns against content coupling. Software is easier to understand and
maintain if the module interfaces are constrained and controlled.
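Here is the sketch promised above, with invented names: common processing that originally existed in two modules is "exploded" into a separate, cohesive module, whose single task also makes it a predictable black box:

def _valid_stock_number(stock_number):
    # The exploded module: common processing redefined as one cohesive,
    # predictable (black-box) task.
    return stock_number.isdigit() and len(stock_number) == 6

def process_sale(stock_number, quantity):
    # Originally duplicated the validation; now calls the shared module.
    if not _valid_stock_number(stock_number):
        raise ValueError("invalid stock number")
    print("sold", quantity, "of", stock_number)

def process_return(stock_number, quantity):
    if not _valid_stock_number(stock_number):
        raise ValueError("invalid stock number")
    print("returned", quantity, "of", stock_number)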
Languages that formally support the module concept include IBM/360 Assembler,
COBOL, RPG and PL/1, Ada, D, F, Fortran, Haskell, OCaml, Pascal, ML, Modula-2,
Erlang, Perl, Python and Ruby. The IBM System i also uses Modules in RPG, COBOL
and CL, when programming in the ILE environment. Modular programming can be
performed even where the programming language lacks explicit syntactic features to
support named modules.
Software tools can create modular code units from groups of components. Libraries of
components built from separately compiled modules can be combined into a whole by
using a linker.
Top-down programming
The method of writing a program using the top-down approach is to write a main procedure that names all the major functions it will need. After that, the programming team examines the requirements of each of those functions and repeats the process. These compartmentalized sub-routines eventually will perform actions so straightforward that they can be easily and concisely coded. The program is done when all the various sub-routines have been coded.
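A minimal top-down skeleton in Python (all names invented for illustration): the main procedure names the major functions first, and each stub is then refined in later passes:

def main():
    # Top-down: the main procedure names all the major functions it needs.
    order = take_order()
    total = price_order(order)
    print_receipt(order, total)

def take_order():
    # Stub: refined in a later pass of the top-down process.
    return ["apple", "bread"]

def price_order(order):
    # Stub: eventually simple enough to code concisely.
    prices = {"apple": 0.50, "bread": 2.00}
    return sum(prices[item] for item in order)

def print_receipt(order, total):
    print(len(order), "items, total", total)

main()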
Merits of top-down programming:
Separating the low-level work from the higher-level abstractions leads to a modular design.
Modular design means development can be self-contained.
Having "skeleton" code illustrates clearly how low-level modules integrate.
Fewer operational errors.
Much less time-consuming (each programmer is only concerned with a part of the big project).
A very optimized way of processing (each programmer applies their own knowledge and experience to their own modules, so the project becomes optimized).
Easy to maintain (if an error occurs in the output, it is easy to identify which module of the entire program generated it).
Bottom-up programming
In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then connected together to form bigger subsystems, which are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, whereby the beginnings are small but eventually grow in complexity and completeness.
This bottom-up approach has one drawback: a good deal of intuition is needed to decide the functionality that is to be provided by each module. This approach is more suitable if a system is to be developed from an existing system, because it starts from some existing modules. Modern software design approaches usually mix both top-down and bottom-up approaches.
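A bottom-up counterpart to the earlier sketch (again, all names invented): the detailed base elements are written and tested first, then connected into a bigger subsystem:

def tokenize(text):
    # Base element, specified first in detail.
    return text.lower().split()

def count_items(items):
    # Another base element.
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

def word_frequencies(text):
    # Higher-level subsystem formed by connecting the base elements.
    return count_items(tokenize(text))

print(word_frequencies("the cat saw the dog"))  # {'the': 2, 'cat': 1, ...}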
4.0 Conclusion
5.0 Summary
Modularity is a general systems concept: the degree to which a system's components may be separated and recombined. It refers both to the tightness of coupling between components and to the degree to which the "rules" of the system architecture enable (or prohibit) the mixing and matching of components.
Physical Modularity is probably the earliest form of modularity introduced in
software creation. Physical modularity consists of two main components namely:
(1) a file that contains compiled code and other resources, and (2) an executing environment that understands how to execute the file. Developers build and
assemble their modules into compiled assets that can be distributed as single or
multiple files.
Logical Modularity is concerned with the internal organization of code into
logically-related units.
Modular programming is beneficial in that it allows for scalable development, facilitates code testing, helps in building robust systems, and allows for easier modification and maintenance.
The three basic approaches to designing a modular program are: process-oriented design, data-oriented design and object-oriented design.
Criteria for using modular design include: modular decomposability, modular composability, modular understandability, modular continuity, and modular protection.
Attributes of a good Module include: Functional independence, Cohesion, and
Coupling
Steps to creating an effective module include: Evaluate the first iteration of the
program structure to reduce coupling and improve cohesion, Attempt to minimise
structures with high fan-out; strive for fan-in as structure depth increases, Define
modules whose function is predictable and not overly restrictive (e.g. a module
that only implements a single task), Strive for controlled entry modules, avoid
pathological connection (e.g. branches into the middle of another module)
Top-down is a programming style, the core of traditional procedural languages, in
which design begins by specifying complex pieces and then dividing them into
successively smaller pieces. Finally, the components are precise enough to be
coded and the program is written.
In a bottom-up approach the individual base elements of the system are first
specified in great detail. These elements are then connected together to form
bigger subsystems, which are linked, sometimes in many levels, until a complete
top-level system is formed
What is modularity?
Differentiate between logical and physical modularity.
What are the benefits of modular design?
Explain the approaches to writing a modular program.
What are the criteria for using modular design?
Outline the attributes of a good module.
Outline the steps to creating an effective module.
Differentiate between the top-down and bottom-up programming approaches.
Unit 4 Pseudo Code
1.0 Introduction
In the last unit, you learnt about modularity in programming: its benefits, design approaches and criteria, the attributes of a good module, and the steps to creating an effective module. You also learnt about the top-down and bottom-up approaches to programming. This unit ushers you into pseudo code, a way to create a logical structure describing the actions that will be executed by the application. After studying this unit you are expected to have achieved the objectives listed below.
2.0 Objectives
Here are a few general guidelines for writing your pseudo code:
Mimic good code and good English. Using aspects of both systems means adhering to the style rules of both to some degree. It is still important that variable names be mnemonic, comments be included where useful, and English phrases be comprehensible (full sentences are usually not necessary).
Ignore unnecessary details. If you are worrying about the placement of commas, you are using too much detail. It is a good idea to use some convention to group statements (begin/end, brackets, or whatever else is clear), but you shouldn't obsess about syntax.
Don't belabor the obvious. In many cases, the type of a variable is clear from context; unless it is critical that it is specified to be an integer or real, it is often unnecessary to make it explicit.
Take advantage of programming shorthands. Using if-then-else or looping structures is more concise than writing out the equivalent in English; general constructs that are not peculiar to a small number of languages are good candidates for use in pseudocode. Using parameters in specifying procedures is concise, clear, and accurate, and hence should not be omitted from pseudocode.
Consider the context. If you are writing an algorithm for quicksort, the statement "use quicksort to sort the values" is hiding too much detail; if you have already studied quicksort in a class and later use it as a subroutine in another algorithm, the statement would be appropriate to use.
Don't lose sight of the underlying model. It should be possible to "see through" your pseudocode to the model below; if not (that is, you are not able to analyze the algorithm easily), it is written at too high a level.
Check for balance. If the pseudocode is hard for a person to read or difficult to translate into working code (or worse yet, both!), then something is wrong with the level of detail you have chosen to use.
Example 1 - Computing Value Added Tax (VAT): Pseudo-code the task of computing the final price of an item after figuring in sales tax. Note the three types of instructions: input (get), process/calculate (=) and output (display).
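The numbered pseudo-code itself is missing from this copy of the text; a reconstruction consistent with the variables listed below is:

1. get price of item
2. get sales tax rate
3. sales tax = price of item * sales tax rate
4. final price = price of item + sales tax
5. display final price
6. halt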
Variables: price of item, sales tax rate, sales tax, final price
Note that the operations are numbered and each operation is unambiguous and effectively computable. We also extract and list all variables used in our pseudo-code. This will be useful when translating pseudo-code into a programming language.
Example 2 - Computing Weekly Wages: Gross pay depends on the pay rate and the
number of hours worked per week. However, if you work more than 50 hours, you get
paid time-and-a-half for all hours worked over 50. Pseudo-code the task of computing
gross pay given pay rate and hours worked.
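The first five numbered lines of the pseudo-code are missing here; a reconstruction that matches the conditional structure and line numbers discussed below is:

1. get hours worked
2. get pay rate
3. if hours worked <= 50 then
3.1 gross pay = pay rate * hours worked
4. else
4.1 gross pay = pay rate * 50 + 1.5 * pay rate * (hours worked - 50)
5. display gross pay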
6. halt
This example presents the conditional control structure. On the basis of the true/false question asked in line 3, line 3.1 is executed if the answer is True; otherwise, if the answer is False, the line subordinate to line 4 (i.e. line 4.1) is executed. In both cases the pseudo-code resumes at line 5.
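The statement and pseudo-code of the third example are missing from this copy; the following hypothetical example (computing a class average) fits the loop over lines 4 to 4.3 that the next paragraph describes:

Example 3 - Computing a Class Average: Pseudo-code the task of averaging n test scores.
1. get number of scores, n
2. count = 0
3. total = 0
4. while count < n
4.1 get next score
4.2 total = total + score
4.3 count = count + 1
5. average = total / n
6. display average
7. halt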
This example presents an iterative control statement. As long as the condition in line 4 is
True, we execute the subordinate operations 4.1 - 4.3. When the condition is False, we
return to the pseudo-code at line 5.
For looping and selection, the keywords to be used include: Do While...EndDo; Do Until...EndDo; Case...EndCase; If...EndIf; Call ... with (parameters); Call; Return ...; Return; When. Always use scope terminators for loops and iteration.
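For instance, a counting loop written with these keywords and its scope terminator might look like this (illustrative only):

Do While count < 10
   add 1 to count
   display count
EndDo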
As verbs, use words such as Generate, Compute and Process. Words such as set, reset, increment, compute, calculate, add, sum, multiply, print, display, input, output, edit and test, used with careful indentation, tend to foster desirable pseudocode.
Do not include data declarations in your pseudo code.
Activity I: Write a pseudo code to find the average of the even numbers between 1 and 20.
4.0 Conclusion
5.0 Summary
In this unit, you have learnt about the essence of pseudo code in program design.
Unit 5 Programming Environment, CASE Tools and HIPO Diagrams
1.0 Introduction
In the last unit, you have learnt about pseudo code. In this unit you will be exposed to
Programming Environment, CASE Tools & HIPO Diagrams. After studying this unit you
are expected to have achieved the following objectives listed below.
2.0 Objectives
By the end of this unit, you should be able to:
Explain Programming Environment
Discuss CASE tools.
Explain HIPO diagrams.
The history of software tools began with the first computers in the early 1950s, which used linkers, loaders, and control programs. In the early 1970s the tools became prominent with Unix, with tools like grep, awk and make that were meant to be combined flexibly with pipes. The term "software tools" came from the book of the same name by Brian Kernighan and P. J. Plauger. Originally, tools were simple and lightweight. As some tools have been maintained, they have been integrated into more powerful integrated development environments (IDEs). These environments combine functionality into one place, sometimes increasing simplicity and productivity, other times sacrificing flexibility and extensibility. The workflow of IDEs is routinely contrasted with alternative approaches, such as the use of Unix shell tools with text editors like Vim and Emacs.
The distinction between tools and applications is blurred. For example, developers use simple databases (such as a file containing a list of important values) all the time as tools. However, a full-blown database is usually thought of as an application in its own right.
For many years, computer-assisted software engineering (CASE) tools were preferred.
CASE tools emphasized design and architecture support, such as for UML. But the most
successful of these tools are IDEs.
The ability to use a variety of tools productively is one quality of a skilled software
engineer.
Software development tools can be roughly divided into the following categories:
correctness checking tools
memory usage tools
application build tools
integrated development environment
Debuggers: gdb, GNU Binutils, valgrind. Debugging tools are used in the process of debugging code, and can also be used to create code that is more standards-compliant and portable than it would otherwise be.
Source code formatting
Source code generation tools
Static code analysis: C++test, Jtest, lint, Splint, PMD, Findbugs, .TEST
Text editors: emacs, vi, vim
Integrated development environments (IDEs) merge the features of many tools into one complete package. They are usually simpler and make it easier to do simple tasks, such as searching for content only in files in a particular project. IDEs are often used for development of enterprise-level applications. Some examples of IDEs are:
Delphi
C++ Builder (CodeGear)
Microsoft Visual Studio
EiffelStudio
GNAT Programming Studio
Xcode
IBM Rational Application Developer
Eclipse
NetBeans
IntelliJ IDEA
WinDev
Code::Blocks
Lazarus
CASE tools are a class of software that automates many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development. Subsequently, system designers can use automated design tools to
transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents
into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get
passed to design technicians who convert the requirements into detailed designs in a
traditional manner using flowcharts and narrative documents, without the assistance of
automated design software.
CASE is the scientific application of a set of tools and methods to a software system, and is meant to result in high-quality, defect-free, and maintainable software products. It also refers to methods for the development of information systems together with automated tools that can be used in the software development process.
3.6 Types of CASE Tools
Many CASE tools not only yield code but also generate other output typical of various
systems analysis and design methodologies such as:
The term CASE was originally formulated by the software company Nastec Corporation of Southfield, Michigan in 1982, with their original integrated graphics and text editor GraphiText, which was also the first microcomputer-based system to use hyperlinks to cross-reference text strings in documents. Under the direction of Albert F. Case, Jr., vice president for product management and consulting, and Vaughn Frick, director of product management, the DesignAid product suite was expanded to support analysis of a wide range of structured analysis and design methodologies, notably those of Ed Yourdon and Tom DeMarco, Chris Gane and Trish Sarson, Ward-Mellor (real-time) SA/SD and Warnier-Orr (data driven).
The next competitor into the market was Excelerator from Index Technology in
Cambridge, Mass. While DesignAid ran on Convergent Technologies and later
Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/
AT platform. While, at the time of launch, and for several years, the IBM platform did
not support networking or a centralized database as did the Convergent Technologies or
Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence.
Hot on the heels of Excelerator were a rash of offerings from companies such as
Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instrument's
IEF and Accenture's FOUNDATION toolset (METHOD/1, DESIGN/1, INSTALL/1,
FCP).
CASE tools were at their peak in the early 1990s. At the time, IBM had proposed AD/Cycle, which was an alliance of software vendors centered on IBM's software repository using IBM DB2 on the mainframe and OS/2:
The application development tools can be from several sources: from IBM, from vendors,
and from the customers themselves. IBM has entered into relationships with Bachman
Information Systems, Index Technology Corporation, and Knowledgeware, Inc. wherein
selected products from these vendors will be marketed through an IBM complementary
marketing program to provide offerings that will help to achieve complete life-cycle
coverage.
With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening
the market for the mainstream CASE tools of today. Interestingly, nearly all of the
leaders of the CASE market of the early 1990s ended up being purchased by Computer
Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett
Management Systems (LBMS).
Workbenches and environments are generally built as collections of tools. Tools can
therefore be either stand alone products or components of workbenches and
environments.
Toolkits
Language-centered
Integrated
Fourth generation
Process-centered
3.9.1 Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating different tools and workbenches. Typically, the support provided by a toolkit is limited to programming, configuration management and project management. The toolkits themselves are environments extended from basic sets of operating system tools, for example the Unix Programmer's Work Bench and the VMS VAX Set. In addition, a toolkit's loose integration requires users to activate tools by explicit invocation or simple control mechanisms. The resulting files are unstructured and could be in different formats; therefore, accessing a file from different tools may require explicit file format conversion. However, since the only constraint for adding a new component is the format of the files, toolkits can be easily and incrementally extended.
3.9.2 Language-centered
The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code in different languages is a major issue for language-centered environments, and lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.
3.9.3 Integrated
These environments achieve presentation integration by providing uniform, consistent,
and coherent tool and workbench interfaces. Data integration is achieved through the
repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environment are IBM AD/Cycle
and DEC Cohesion.
3.9.5 Process-centered
Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centered environment operates by interpreting a
process model created by specialized tools. They usually consist of tools handling two
functions:
Examples are East, Enterprise II, ProcessWise, ProcessWeaver, and Arcadia.
All aspects of the software development life cycle can be supported by software tools,
and so the use of tools from across the spectrum can, arguably, be described as CASE;
from project management software through tools for business and functional analysis,
system design, code storage, compilers, translation tools, test software, and so on.
However, it is the tools that are concerned with analysis and design, and with using
design information to create parts (or all) of the software product, that are most
frequently thought of as CASE tools. CASE applied, for instance, to a database software
product, might normally involve:
Weak repository controls: failure to adequately control access to CASE repositories may result in security breaches or damage to the work documents, system designs, or code modules stored in the repository. Controls include protecting the repositories with appropriate access, version, and backup controls.
The HIPO (Hierarchy plus Input-Process-Output) technique is a tool for planning and/or documenting a computer program. A HIPO model consists of a hierarchy chart that graphically represents the program's control structure and a set of IPO (Input-Process-Output) charts that describe the inputs to, the outputs from, and the functions (or processes) performed by each module on the hierarchy chart.
Using the HIPO technique, designers can evaluate and refine a program's design, and correct flaws prior to implementation. Given the graphic nature of HIPO, users and managers can easily follow a program's structure. The hierarchy chart serves as a useful planning and visualization document for managing the program development process. The IPO charts define for the programmer each module's inputs, outputs, and algorithms. In theory, HIPO provides valuable long-term documentation. However, the "text plus flowchart" nature of the IPO charts makes them difficult to maintain, so the documentation often does not represent the current state of the program.
By its very nature, the HIPO technique is best used to plan and/or document a
hierarchically structured program.
The HIPO technique is often used to plan or document a structured program. A variety of tools, including pseudocode and structured English, can be used to describe processes on an IPO chart. System flowcharting symbols are sometimes used to identify physical input, output, and storage devices on an IPO chart.
A completed HIPO package has two parts. A hierarchy chart is used to represent the top-
down structure of the program. For each module depicted on the hierarchy chart, an IPO
(Input-Process-Output) chart is used to describe the inputs to, the outputs from, and the
process performed by the module.
3.14.1 The hierarchy chart
[Figure: a hierarchy chart. The top-level control module, Manage inventory (1.0), sits over three subordinates: Update stock (2.0), with lower-level modules Process sale, Process return and Process shipment; Generate report (3.0), with lower-level modules Respond to query and Display status report; and Maintain inventory data (4.0), with lower-level modules Modify record, Add record and Delete record.]
Figure 7 A hierarchy chart for an interactive inventory control program.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
At the top of Figure 7 is the main control module, Manage inventory (module 1.0). It
accepts a transaction, determines the transaction type, and calls one of its three
subordinates (modules 2.0, 3.0, and 4.0).
Lower-level modules are identified relative to their parent modules; for example, modules 2.1, 2.2, and 2.3 are subordinates of module 2.0, modules 2.1.1, 2.1.2, and 2.1.3 are subordinates of 2.1, and so on. The module names consist of an active verb followed by a subject that suggests the module's function.
The objective of the module identifiers is to uniquely identify each module and to
indicate its place in the hierarchy. Some designers use Roman numerals (level I, level II)
or letters (level A, level B) to designate levels. Others prefer a hierarchical numbering
scheme; e.g., 1.0 for the first level; 1.1, 1.2, 1.3 for the second level; and so on. The key
is consistency.
The box at the lower-left of Figure 7 is a legend that explains how the arrows on the
hierarchy chart and the IPO charts are to be interpreted. By default, a wide clear arrow
represents a data flow, a wide black arrow represents a control flow, and a narrow arrow
indicates a pointer.
An IPO chart is prepared to document each of the modules on the hierarchy chart.
3.14.2.1 Overview diagrams
An overview diagram is a high-level IPO chart that summarizes the inputs to, processes or tasks performed by, and outputs from a module. The figure cited below, for example, shows an overview diagram for process 2.0, Update stock. Where appropriate, system flowcharting symbols are used to identify the physical devices that generate the inputs and accept the outputs.
The processes are typically described in brief paragraph or sentence form. Arrows show
the primary input and output data flows.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
Overview diagrams are primarily planning tools. They often do not appear in the
completed documentation package.
A detail diagram is a low-level IPO chart that shows how specific input and output data elements or data structures are linked to specific processes. In effect, the designer integrates a system flowchart into the overview diagram to show the flow of data and control through the module.
Figure 7.2 shows a detail diagram for module 2.0, Update stock. The process steps are
written in pseudocode. Note that the first step writes a menu to the user screen and input
data (the transaction type) flows from that screen to step 2. Step 3 is a case structure. Step
4 writes a transaction complete message to the user screen.
The solid black arrows at the top and bottom of the process box show that control flows
from module 1.0 and, upon completion, returns to module 1.0. Within the case structure
(step 3) are other solid black arrows.
Following case 0 is a return (to module 1.0). The two-headed black arrows following
cases 1, 2, and 3 represent subroutine calls; the off-page connector symbols (the little home plates) identify each subroutine's module number. Note that each subroutine is
documented in a separate IPO chart. Following the default case, the arrow points to an
on-page connector symbol numbered 1. Note the matching on-page connector symbol
pointing to the select structure. On-page connectors are also used to avoid crossing
arrows on data flows.
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
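Pulling the description above together, the process steps of module 2.0 amount to pseudo-code along the following lines (a sketch inferred from the text, not the original chart):

1. display transaction menu on user screen
2. get transaction type from screen
3. case of transaction type
3.1 case 0: return to module 1.0
3.2 case 1: call Process sale (module 2.1)
3.3 case 2: call Process return (module 2.2)
3.4 case 3: call Process shipment (module 2.3)
3.5 default: go back to the case selection (on-page connector 1)
4. display "transaction complete" message on user screen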
Often, detailed notes and explanations are written on an extended description that is
attached to each detail diagram. The notes might specify access methods, data types, and
so on.
A further detail diagram covers process 2.1. The module writes a template to the user screen, reads a stock number and a quantity from the screen, uses the stock number as a key to access an inventory file, and updates the stock on hand. Note that the logic
repeats the data entry process if the stock number does not match an inventory record. A
real IPO chart is likely to show the error response process in greater detail.
Some designers simplify the IPO charts by eliminating the arrows and system flowchart symbols and showing only the text. Often, the input and output blocks are moved above the process block (Fig 7.3), yielding a form that fits better on a standard 8.5 × 11 (portrait orientation) sheet of paper. Some programmers insert modified IPO charts similar to Fig 7.3 directly into their source code as comments. Because the documentation is closely linked to the code, it is often more reliable than stand-alone HIPO documentation, and more likely to be maintained.
Fig 7.3 Simplified HIPO diagram
Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
Detail diagram —
A low-level IPO chart that shows how specific input and output data elements or
data structures are linked to specific processes.
Hierarchy chart —
A diagram that graphically represents a program's control structure.
HIPO (Hierarchy plus Input-Process-Output) —
A tool for planning and/or documenting a computer program that utilizes a hierarchy chart to graphically represent the program's control structure and a set of IPO (Input-Process-Output) charts to describe the inputs to, the outputs from, and the functions performed by each module on the hierarchy chart.
IPO (Input-Process-Output) chart —
A chart that describes or documents the inputs to, the outputs from, and the
functions (or processes) performed by a program module.
Overview diagram —
A high-level IPO chart that summarizes the inputs to, processes or tasks
performed by, and outputs from a module.
Visual Table of Contents (VTOC) —
A more formal name for a hierarchy chart.
3.15 Software
In the 1970s and early 1980s, HIPO documentation was typically prepared by hand using a template. Some CASE products and charting programs include HIPO support. Some forms generation programs can be used to generate HIPO forms. The examples in this section were prepared using Visio.
4.0 Conclusion
5.0 Summary
Gane, C. and Sarson, T. (1979). Structured Systems Analysis: Tools and Techniques. Englewood Cliffs, NJ: Prentice-Hall.