
1 What is a software crisis?

2 Discuss how the software crisis manifested itself in the early days of software
engineering.

3 Explain the causes of software crisis.

7.0 Further Reading And Other Resources

Brooks, Frederick P. (1987). No Silver Bullet: Essence and Accidents of Software
Engineering. (Reprinted in the 1995 edition of The Mythical Man-Month.)

Dijkstra, Edsger (originally published March 1968; re-published January 2008). "(A
Look Back at) Go To Statement Considered Harmful". Association for Computing
Machinery (ACM). http://mags.acm.org/communications/200801/?pg=9. Retrieved
2008-06-12.

MODULE 2: Software Development

Unit 1: Overview of Software Development

1.0 Introduction

In the last unit, you learnt about the software crisis: its manifestations, its causes, and
the solutions to it. In this unit, we are going to look at an overview of software
development. You will learn specifically about the various stages involved in
software development. After studying this unit you are expected to have achieved the
objectives listed below.

2.0 Objectives

By the end of this unit, you should be able to:


 Clearly define software development.
 Clearly list the stages of software development.

3.0 Definition of Software Development

Software development is the set of activities that results in software products. Software
development may include research, new development, modification, reuse, re-
engineering, maintenance, or any other activities that result in software products.
The first phase in particular may involve many departments, including marketing,
engineering, research and development, and general management.

The term software development may also refer to computer programming, the process of
writing and maintaining the source code.

3.1 Stages of Software Development

There are several different approaches to software development. While some take a more
structured, engineering-based approach, others may take a more incremental approach,
where software evolves as it is developed piece-by-piece. In general, methodologies
share some combination of the following stages of software development:

 Market research
 Gathering requirements for the proposed business solution
 Analyzing the problem
 Devising a plan or design for the software-based solution
 Implementation (coding) of the software
 Testing the software
 Deployment
 Maintenance and bug fixing

These stages are collectively referred to as the software development life cycle (SDLC).
The stages may be carried out in different orders, depending on the approach to software
development; the time devoted to each stage may vary, and the detail of the
documentation produced at each stage may differ. In a "waterfall"-based approach, the
stages are carried out in turn, whereas in a more "extreme" approach they may be
repeated over various cycles or iterations. It is important to note that a more "extreme"
approach usually involves less time spent on planning and documentation, and more time
spent on coding and on developing automated tests. More "extreme" approaches also
encourage continuous testing throughout the development life cycle, which helps keep
the product working at all times. The "waterfall"-based approach, by contrast, attempts to
assess the majority of risks and to develop a detailed plan for the software before
implementation (coding) begins, thereby avoiding significant design changes and
re-coding in the later stages of the software development life cycle.

Each methodology has its merits and demerits. The choice of an approach to solving a
problem with software depends on the type of problem. If the problem is well
understood and a solution can be effectively planned out ahead of time, the more
"waterfall"-based approach may be the best choice. On the other hand, if the problem
is unique (at least to the development team) and the structure of the software solution
cannot easily be pictured, then a more "extreme" incremental approach may work best.

Activity F: What do you think determines the choice of approach in software
development?

4.0 Conclusion

This unit has introduced you to software development. You have been informed of the
various stages of software development.

5.0 Summary

In this unit, we have learnt that:

 Software development is the set of activities that results in software products.


 Most methodologies share some combination of the following stages of
software development: market research, gathering requirements for the
proposed business solution, analyzing the problem, devising a plan or design
for the software-based solution, implementation (coding) of the software,
testing the software, deployment, and maintenance and bug fixing.

6.0 Tutor Marked Assignment

1 What is software development?

2 Briefly explain the various stages of software development.

7.0 Further Reading And Other Resources

A.M. Davis (2005). Just enough requirements management: where software development
meets marketing.

Edward Hasted. (2005). Software That Sells : A Practical Guide to Developing and
Marketing Your Software Project.

John W. Horch (1995). "Two Orientations On How To Work With Objects". In: IEEE
Software, vol. 12, no. 2, pp. 117-118, March 1995.

Karl E. Wiegers (2005). More About Software Requirements: Thorny Issues and
Practical Advice.

Robert K. Wysocki (2006). Effective Software Project Management.

Unit 2: Software Development Life Cycle Model

1.0 Introduction

The last unit exposed you to an overview of software development. In this unit you
will learn about the various life cycle models (the phases of the software life cycle) in
general. You will also specifically learn about the requirements and design phases.

2.0 Objectives

By the end of this unit, you should be able to:


 Define software life cycle model
 Explain the general model
 Explain Waterfall Model
 Explain V-Shaped Life Cycle Model
 Explain Incremental Model
 Explain Spiral Model
 Discuss the requirements and design phases

3.0 Definition of Life Cycle Model

Software life cycle models describe the phases of the software cycle and the order in
which those phases are executed. There are many models, and many companies adopt
their own, but all have very similar patterns. According to Raymond Lewallen (2005),
the general, basic model is shown below:

3.1 The General Model


Fig 1: The General Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx

Each phase produces deliverables needed by the next phase in the life cycle.
Requirements are converted into design. Code is generated during implementation that is
driven by the design. Testing verifies the deliverable of the implementation phase against
requirements.

3.2 Waterfall Model

This is the most common life cycle model, also referred to as a linear-sequential life
cycle model. It is very simple to understand and use. In a waterfall model, each phase
must be completed before the next phase can begin. At the end of each phase, there is
always a review to ascertain whether the project is on the right track and whether to
continue or abandon it. Unlike the general model, phases do not overlap in a
waterfall model.


Fig 2: Waterfall Life Cycle

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx

3.2.1 Advantages

 Simple and easy to use.


 Easy to manage due to the rigidity of the model – each phase has specific
deliverables and a review process.
 Phases are processed and completed one at a time.
 Works well for smaller projects where requirements are very well understood.

3.2.2 Disadvantages

 Adjusting scope during the life cycle can kill a project


 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Poor model for complex and object-oriented projects.
 Poor model for long and ongoing projects.
 Poor model where requirements are at a moderate to high risk of changing.

3.3 V-Shaped Model

Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of
processes. Each phase must be completed before the next phase begins. Testing is
emphasized in this model more than in the waterfall model. The testing procedures are
developed early in the life cycle, before any coding is done, during each of the phases
preceding implementation.

Requirements begin the life cycle model just like the waterfall model. Before
development is started, a system test plan is created. The test plan focuses on meeting the
functionality specified in the requirements gathering.

The high-level design phase focuses on system architecture and design. An integration
test plan is also created in this phase, in order to test the ability of the pieces of the
software system to work together.

The low-level design phase is where the actual software components are designed, and
unit tests are created in this phase as well.

The implementation phase is, again, where all coding takes place. Once coding is
complete, the path of execution continues up the right side of the V where the test plans
developed earlier are now put to use.

Fig 3: V-Shaped Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx

3.3.1 Advantages

 Simple and easy to use.
 Each phase has specific deliverables.
 Higher chance of success over the waterfall model due to the development of test
plans early on during the life cycle.
 Works well for small projects where requirements are easily understood.

3.3.2 Disadvantages

 Very rigid, like the waterfall model.


 Little flexibility and adjusting scope is difficult and expensive.
 Software is developed during the implementation phase, so no early prototypes of
the software are produced.
 The model doesn't provide a clear path for problems discovered during the testing phases.

3.4 Incremental Model

The incremental model is an intuitive refinement of the waterfall model: multiple
development cycles take place, making it a kind of "multi-waterfall" cycle. Cycles are
broken into smaller, more easily managed iterations. Each of the iterations goes through
the requirements, design, implementation and testing phases.

The first iteration produces a working version of the software, which makes it possible to
have working software early in the software life cycle. Subsequent iterations build on
the initial software produced during the first iteration.


Fig 4: Incremental Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx

3.4.1 Advantages

 Generates working software quickly and early during the software life cycle.

 More flexible – less costly to change scope and requirements.
 Easier to test and debug during a smaller iteration.
 Easier to manage risk, because risky pieces are identified and handled during
their own iterations.
 Each iteration is an easily managed milestone.

3.4.2 Disadvantages

 The phases within each iteration are rigid and do not overlap.

 Problems with the system architecture may arise because not all requirements
are gathered up front for the entire software life cycle.

3.5 Spiral Model

The spiral model is similar to the incremental model, with more emphasis placed on risk
analysis. The spiral model has four phases, namely Planning, Risk Analysis, Engineering
and Evaluation. A software project continually goes through these phases in iterations,
which are called spirals. In the baseline spiral, requirements are gathered and risk is
assessed. Each subsequent spiral builds on the baseline spiral.

Requirements are gathered during the planning phase. In the risk analysis phase, a
process is carried out to identify risks and alternative solutions. A prototype is produced
at the end of the risk analysis phase.

Software is produced in the engineering phase, along with testing at the end of the
phase. The evaluation phase provides the customer with the opportunity to evaluate the
output of the project to date before the project continues to the next spiral.

In the spiral model, the angular component denotes progress, and the radius of the spiral
denotes cost.

Fig 5: Spiral Life Cycle Model

Source: http://codebetter.com/blogs/raymond.lewallen/archive/2005/07/13/129114.aspx

3.5.1 Merits

 High amount of risk analysis


 Good for large and mission-critical projects.
 Software is produced early in the software life cycle.

3.5.2 Demerits

 Can be a costly model to use.


 Risk analysis requires highly specific expertise.
 The project's success is highly dependent on the risk analysis phase.
 Doesn't work well for smaller projects.

3.6 Requirements Phase

Business requirements are gathered in this phase. This phase is the main focus of
attention for project managers and stakeholders. Meetings with managers, stakeholders
and users are held in order to determine the requirements. The general questions
that require answers during a requirements gathering phase are: Who is going to use the
system? How will they use the system? What data should be input into the system?
What data should be output by the system? A list of the functionality that the system
should provide is produced at this point; it describes the functions the system should
perform, the business logic that processes data, what data is stored and used by the
system, and how the user interface should work. The requirements development phase
may have been preceded by a feasibility study, or a conceptual analysis phase of the
project. The requirements phase may be divided into requirements elicitation (gathering
the requirements from stakeholders), analysis (checking for consistency and
completeness), specification (documenting the requirements) and validation (making
sure the specified requirements are correct).

In systems engineering, a requirement can be a description of what a system must do,
referred to as a Functional Requirement. This type of requirement specifies something
that the delivered system must be able to do. Another type of requirement specifies
something about the system itself, and how well it performs its functions. Such
requirements are often called Non-functional Requirements, 'performance requirements'
or 'quality of service requirements.' Examples of such requirements include usability,
availability, reliability, supportability, testability, maintainability, and (if defined in a
way that is verifiably measurable and unambiguous) ease of use.

3.6.1 Types of Requirements

Requirements are categorised as:

 Functional requirements, which describe the functionality that the system is to
execute; for example, formatting some text or modulating a signal.
 Non-functional requirements, which act to constrain the solution.
Non-functional requirements are sometimes known as quality requirements or
constraint requirements. No matter how the problem is solved, the constraint
requirements must be adhered to.

It is important to note that functional requirements can be directly implemented in
software. The non-functional requirements are controlled by other aspects of the system.
For example, in a computer system, reliability is related to hardware failure rates, and
performance is controlled by CPU and memory. Non-functional requirements can in
some cases be decomposed into functional requirements for software. For example, a
system-level non-functional safety requirement can be decomposed into one or more
functional requirements. In addition, a non-functional requirement may be converted
into a process requirement when the requirement is not easily measurable. For example,
a system-level maintainability requirement may be decomposed into restrictions on
software constructs or limits on lines of code.

3.6.2 Requirements analysis

Requirements analysis, in systems engineering and software engineering, consists of
those activities that go into determining the needs or conditions to be met by a new or
altered product, taking account of the possibly conflicting requirements of the various
stakeholders, such as beneficiaries or users.

Requirements analysis is critical to the success of a development project. Requirements
must be actionable, measurable, testable, related to identified business needs or
opportunities, and defined to a level of detail sufficient for system design.

3.6.3 The Need for Requirements Analysis


Studies reveal that insufficient attention to Software Requirements Analysis at the
beginning of a project is the major reason for critically weak projects that often do not
fulfil basic tasks for which they were designed. Software companies are now spending
time and resources on effective and streamlined Software Requirements Analysis
Processes as a condition to successful projects that support the customer‘s business goals
and meet the project‘s requirement specifications.
3.6.4 Requirements Analysis Process: Requirements Elicitation, Analysis And
Specification
Requirements Analysis is the process of understanding the client's needs and
expectations of a proposed system or application. It is a well-defined stage in the
Software Development Life Cycle model.

Requirements are a description of how a system should behave, in other words, a
description of system properties or attributes. Considering the numerous levels of
interaction between users, business processes and devices in worldwide corporations
today, there are immediate and composite requirements on a single application, from
different levels within an organization and outside it.

The Software Requirements Analysis Process involves the complex task of eliciting and
documenting the requirements of all customers, modelling and analyzing these
requirements, and documenting them as a foundation for system design.

The requirements analysis process is usually assigned to a specialized Requirements
Analyst, although the function may also come under the scope of a Project Manager,
Program Manager or Business Analyst, depending on the organizational hierarchy.

3.6.5 Steps in the Requirements Analysis Process

3.6.5.1 Fix system boundaries
This is the initial step; it helps in identifying how the new application fits into the
business processes and the larger picture, as well as its capacity and limitations.
3.6.5.2 Identify the customer
This step focuses on identifying who the 'users' or 'customers' of the application are,
that is to say, the group or groups of people who will be directly or indirectly impacted
by the new application. This allows the Requirements Analyst to know in advance
where to look for answers.

3.6.5.3 Requirements elicitation


Here, information is gathered from the multiple stakeholders identified. The
Requirements Analyst elicits from each of these groups what their requirements of the
application are and what they expect the application to achieve. Given the multiple
stakeholders involved, the list of requirements gathered in this manner could run into
pages. The level of detail of the requirements list depends on the number and size of the
user groups, the degree of complexity of the business processes, and the size of the
application.

3.6.5.3.1 Problems faced in Requirements Elicitation


 Ambiguous understanding of processes
 Inconsistency within a single process by multiple users
 Insufficient input from stakeholders
 Conflicting stakeholder interests
 Changes in requirements after project has begun
3.6.5.3.2 Tools used in Requirements Elicitation
Tools used in Requirements Elicitation include stakeholder interviews and focus group
studies. Other methods like flowcharting of business processes and the use of existing
documentation like user manuals, organizational charts, process models and systems or
process specifications, on-site analysis, interviews with end-users, market research and
competitor analysis are also used widely in Requirements Elicitation.
There are, of course, modern tools that are better equipped to handle the complex and
multilayered process of Requirements Elicitation. Some of the current Requirements
Elicitation tools in use are:
 Prototypes
 Use cases
 Data flow diagrams
 Transition process diagrams

 User interfaces

3.6.5.4 Requirements Analysis


Once all stakeholder requirements have been gathered, a structured analysis of these can
be done after modelling the requirements. Some of the software requirements analysis
techniques used are requirements animation, automated reasoning, knowledge-based
critiquing, consistency checking, and analogical and case-based reasoning.
3.6.5.5 Requirements Specification
After requirements have been elicited, modelled and analyzed, they should be
documented in clear, definite terms. A written requirements document is crucial, and it
should be circulated among all stakeholders, including the client, user groups, and the
development and testing teams. It has been observed that a well-designed, clearly
documented Requirements Specification is vital and serves as a:
 Base for validating the stated requirements and resolving stakeholder conflicts, if any
 Contract between the client and development team
 Basis for systems design for the development team
 Bench-mark for project managers for planning project development lifecycle and
goals
 Source for formulating test plans for QA and testing teams
 Resource for requirements management and requirements tracing
 Basis for evolving requirements over the project life span
Software requirements specification involves scoping the requirements so that they meet
the customer's vision. It is the result of teamwork between the end-user, who is usually
not a technical expert, and a Technical/Systems Analyst, who is expected to approach
the situation in technical terms.

The software requirements specification is a document that lists the stakeholders' needs
and communicates these to the technical community that will design and build the
system. It is a real challenge to communicate a well-written requirements specification
to both these groups and all the sub-groups within them. To overcome this,
Requirements Specifications may be documented separately as:
 User Requirements - written in clear, precise language with plain text and use cases,
for the benefit of the customer and end-user
 System Requirements - expressed as a programming or mathematical model, meant
to address the Application Development Team and QA and Testing Team.
The Requirements Specification serves as a starting point for software, hardware and
database design. It describes the function (functional and non-functional specifications)
of the system, the performance of the system, and the operational and user-interface
constraints that will govern system development.

3.6.6 Requirements Management
Requirements Management is the all-inclusive process that includes all aspects of
software requirements analysis and ensures the verification, validation and traceability
of requirements. Effective requirements management practices ensure that all system
requirements are stated unambiguously, that omissions and errors are corrected, and that
evolving specifications can be incorporated later in the project life cycle.

3.7 Design Phase

The software system design is produced from the results of the requirements phase. This
is where the details of how the system will work are worked out. Deliverables in this
phase include the hardware, software and communication designs.

3.8 Definition of software design

A software design is a meaningful engineering representation of some software product
that is to be built. A design can be traced to the customer's requirements and can be
assessed for quality against predefined criteria. In the software engineering context,
design focuses on four major areas of concern: data, architecture, interfaces and
components.

The design process is very important. A builder, for example, would not attempt to build
a house without an approved blueprint, so as not to risk the structural integrity and
customer satisfaction. In the same way, the approach to building software products is no
different. The emphasis in design is on quality. It is pertinent to note that this is the only
phase in which the customer's requirements can be precisely translated into a finished
software product or system. As such, software design serves as the foundation for all the
software engineering steps that follow, regardless of which process model is being
employed.

During the design process, the software specifications are transformed into design
models that describe the details of the data structures, system architecture, interfaces,
and components. Each design product is reviewed for quality before moving to the next
phase of software development. At the end of the design process a design specification
document is produced. This document is composed of the design models that describe
the data, architecture, interfaces and components.

3.9 Design Specification Models

 Data design – created by transforming the analysis information model (data
dictionary and ERD) into the data structures needed to implement the software.
Part of the data design may occur in conjunction with the design of the software
architecture; more detailed data design occurs as each software component is
designed.
 Architectural design – defines the relationships among the major structural
elements of the software, the "design patterns" that can be used to achieve the
requirements that have been defined for the system, and the constraints that
affect the way in which the architectural patterns can be applied. It is derived
from the system specification, the analysis model, and the subsystem interactions
defined in the analysis model (DFD).
 Interface design – describes how the software elements communicate with each
other, with other systems, and with human users. Much of the necessary
information is provided by the data flow and control flow diagrams.
 Component-level design – converts the structural elements defined by the
software architecture into procedural descriptions of software components, using
information acquired from the process specification (PSPEC), control
specification (CSPEC), and state transition diagram (STD).

3.10 Design Guidelines

In order to assess the quality of a design (representation), the yardstick for a good design
should first be established. Such a design should:

 exhibit good architectural structure


 be modular
 contain distinct representations of data, architecture, interfaces, and components
(modules)
 lead to data structures that are appropriate for the objects to be implemented and
be drawn from recognizable design patterns
 lead to components that exhibit independent functional characteristics
 lead to interfaces that reduce the complexity of connections between modules and
with the external environment
 be derived using a reputable method that is driven by information obtained during
software requirements analysis

These criteria are not acquired by chance. The software design process promotes good
design through the application of fundamental design principles, systematic methodology
and through review.

3.11 Design Principles

Software design can be seen as both a process and a model.

The design process is a series of steps that allow the designer to describe all aspects of
the software to be built. However, it is not merely a recipe book; for a competent and
successful design, the designer must use creative skill, past experience, a sense of what
makes "good" software, and a commitment to quality.

The set of principles which have been established to help the software engineer direct
the design process are:

 The design process should not suffer from tunnel vision – a good designer
should consider alternative approaches, judging each based on the
requirements of the problem, the resources available to do the job and any
other constraints.
 The design should be traceable to the analysis model – because a single
element of the design model often traces to multiple requirements, it is
necessary to have a means of tracking how the requirements have been
satisfied by the model.
 The design should not reinvent the wheel – systems are constructed using a
suite of design patterns, many of which have likely been encountered
before. These patterns should always be chosen as an alternative to
reinvention. Design time should be spent expressing truly fresh ideas and
incorporating patterns that already exist.
 The design should reduce intellectual distance between the software and the
problem as it exists in the real world – This means that, the structure of the
software design should imitate the structure of the problem domain.
 The design should show uniformity and integration – a design is uniform if it
appears that one person developed the whole thing. Rules of style and format
should be defined for a design team before design work begins. A design is
integrated if care is taken in defining interfaces between design components.
 The design should be structured to degrade gently, even when bad data, events,
or operating conditions are encountered – well-designed software should
never "bomb"; it should be designed to accommodate unusual circumstances,
and if it must terminate processing, it should do so in a graceful manner.
 The design should be reviewed to minimize conceptual (semantic) errors –
there is sometimes a tendency to focus on minute details when the design is
reviewed, missing the forest for the trees. The design team should ensure
that the major conceptual elements of the design have been addressed before
worrying about the syntax of the design model.
 Design is not coding, and coding is not design – even when detailed designs are
created for program components, the level of abstraction of the design model
is higher than source code. The only design decisions made at the coding level
address the small implementation details that enable the procedural design to
be coded.
 The design should be structured to accommodate change
 The design should be assessed for quality as it is being created

With the proper application of design principles, the design displays both external and
internal quality factors. External quality factors are those that can readily be observed
by the user (e.g. speed, reliability, correctness, usability). Internal quality factors have to
do with technical quality, especially the quality of the design itself. To achieve internal
quality, the designer must understand basic design concepts.

3.12 Fundamental Software Design Concepts

Over the past four decades, a set of fundamental software design concepts has evolved,
each providing the software designer with a foundation from which more sophisticated
design methods can be applied. Each concept helps the software engineer to answer the
following questions:

 What criteria can be used to partition software into individual components?

 How is function or data structure detail separated from a conceptual
representation of the software?

 Are there uniform criteria that define the technical quality of a software
design?

The fundamental design concepts are:

 Abstraction - allows designers to focus on solving a problem without being
concerned about irrelevant lower-level details (procedural abstraction - a named
sequence of events; data abstraction - a named collection of data objects)
 Refinement - process of elaboration where the designer provides successively
more detail for each design component
 Modularity - the degree to which software can be understood by examining its
components independently of one another
 Software architecture - overall structure of the software components and the
ways in which that structure provides conceptual integrity for a system
 Control hierarchy or program structure - represents the module organization
and implies a control hierarchy, but does not represent the procedural aspects of
the software (e.g. event sequences)
 Structural partitioning - horizontal partitioning defines three partitions (input,
data transformations, and output); vertical partitioning (factoring) distributes
control in a top-down manner (control decisions in top level modules and
processing work in the lower level modules).
 Data structure - representation of the logical relationship among individual data
elements (requires at least as much attention as algorithm design)
 Software procedure - precise specification of processing (event sequences,
decision points, repetitive operations, data organization/structure)
 Information hiding - information (data and procedure) contained within a
module is inaccessible to modules that have no need for such information (a
small sketch follows this list)
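
As a small, hypothetical Python illustration of information hiding (the class and its
members are invented for this sketch, not taken from the source), internal data is kept
inaccessible, and other modules interact with it only through published operations:

    # Information hiding: _balance is internal to the module; other code
    # interacts only through deposit() and balance(), never with the data.
    class Account:
        def __init__(self) -> None:
            self._balance = 0.0          # hidden internal state

        def deposit(self, amount: float) -> None:
            self._balance += amount      # the only way to modify the state

        def balance(self) -> float:
            return self._balance         # the only way to observe the state

Because callers never touch _balance directly, its representation could change (for
example, to a list of transactions) without affecting any other module.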

Activity G: 1. What are the steps in the requirements analysis process?
2. What are the fundamental design concepts?

4.0 Conclusion
Software life cycle models describe phases of the software cycle and the order in which
those phases are executed.

5.0 Summary

In this unit, we have learnt that:

 Software life cycle models describe the phases of the software cycle and the
order in which those phases are executed.
 In the general model, each phase produces deliverables required by the next
phase in the life cycle. Requirements are translated into design. Code is
produced during implementation, driven by the design. Testing verifies the
deliverables of the implementation phase against the requirements.
 In a waterfall model, each phase must be completed in its entirety before the
next phase can begin. At the end of each phase, a review takes place to
determine whether the project is on the right path and whether to continue or
discard it. Unlike in the general model, phases do not overlap in a waterfall
model.
 Just like the waterfall model, the V-Shaped life cycle is a sequential path of
execution of processes. Each phase must be completed before the next phase
begins. Testing is emphasized in this model more so than the waterfall model
though. The testing procedures are developed early in the life cycle before
any coding is done, during each of the phases preceding implementation.
 The incremental model is an intuitive approach to the waterfall model.
Multiple development cycles take place here, making the life cycle a ―multi-
waterfall‖ cycle. Cycles are divided up into smaller, more easily managed
iterations. Each iteration passes through the requirements, design,
implementation and testing phases.
 The spiral model is similar to the incremental model, with more emphasis
placed on risk analysis. The spiral model has four phases: Planning, Risk
Analysis, Engineering and Evaluation. A software project repeatedly passes
through these phases in iterations (called spirals in this model). In the baseline
spiral, starting in the planning phase, requirements are gathered and risk is
assessed. Each subsequent spiral builds on the baseline spiral.
 In the requirements phase, business requirements are gathered; this phase is
the main focus of project managers and stakeholders.
 The software system design is produced from the results of the requirements
phase; this is the phase where the details of how the system will work are
worked out.

6.0 Tutor Marked Assignment

1 What is software life cycle model?
2 Explain the general model
3 Compare and contrast General and Waterfall Models
4 Explain V-Shaped Life Cycle Model
5 Explain Incremental Model
6 Compare and contrast Incremental and Spiral Models
7 Discuss the requirements and design phases

7.0 Further Reading And Other Resources

Blanchard, B. S., & Fabrycky, W. J. (2006). Systems Engineering and Analysis (4th ed.).
New Jersey: Prentice Hall.

Haag, S., & Cummings, M. (2006). Management Information Systems for the
Information Age. Toronto: McGraw-Hill Ryerson.

Unit 3: Modularity

1.0 Introduction

In Unit 2 we discussed software life cycle models in general, and the requirements
and design phases of software development in detail. In this unit we will look at
modularity in programming.

2.0 Objectives

By the end of this unit, you should be able to:


 Define Modularity
 Differentiate between logical and physical modularity
 Explain the benefits of modular design
 Explain the approaches to writing modular programs
 Explain the criteria for using modular design
 Outline the attributes of a good module
 Outline the steps to creating effective modules
 Differentiate between the top-down and bottom-up programming approaches

3.13 What is Modularity?

Modularity is a general systems concept: the degree to which a system's components
may be separated and recombined. It refers both to the tightness of coupling between
components and to the degree to which the "rules" of the system architecture enable (or
prohibit) the mixing and matching of components.

The concept of modularity in computer software has been promoted for about five
decades. In essence, the software is divided into separately named and addressable
components, called modules, that are integrated to satisfy problem requirements. It is
important to note that a reader cannot easily understand a large program written as a
single module: the number of variables, control paths and sheer complexity make
understanding almost impossible. A modular approach therefore allows the software to
be intellectually manageable. However, software cannot be subdivided indefinitely so as
to make the effort required to understand or develop it negligible: although the effort
needed to develop each individual module falls as the number of modules rises, the
effort needed to integrate the modules grows.

3.14 Logical Modularity

Generally, in software, modularity can be categorized as logical or physical. Logical
modularity is concerned with the internal organization of code into logically-related
units. In modern high-level languages, logical modularity usually starts with the class,
the smallest code group that can be defined. In languages such as Java and C#, classes
can be further combined into packages, which allow developers to organize code into
groups of related classes. Depending on the environment, a module can be implemented
as a single class, several classes in a package, or an entire API (a collection of packages).
You should be able to describe the functionality of your module in a single sentence
(e.g. "this module calculates tax per zip code"), regardless of the implementation scale
of the module. Your module should expose its functionality as simple interfaces that
shield callers from all implementation details: the functionality of a module should be
accessible through a published interface that allows the module to expose its
functionality to the outside world while hiding its implementation details, as the sketch
below illustrates.
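
To make this concrete, below is a minimal Python sketch of the "calculates tax per zip
code" module mentioned above; the module name, rate table and function are all
hypothetical, not from the source. Callers use one published function, while the rate
lookup stays hidden as an implementation detail:

    # taxmodule.py - a hypothetical module whose purpose fits in one
    # sentence: "this module calculates tax per zip code".

    _RATES = {"10001": 0.08875, "94105": 0.08625}   # internal detail, hidden from callers
    _DEFAULT_RATE = 0.05                            # assumed fallback rate

    def tax_for(zip_code: str, amount: float) -> float:
        """Published interface: return the tax due on amount for zip_code."""
        return amount * _RATES.get(zip_code, _DEFAULT_RATE)

A caller simply imports the module and calls tax_for(); the rate table could later move
to a database without any change to the calling code.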

3.15 Physical Modularity

Physical modularity is probably the earliest form of modularity introduced in software
creation. Physical modularity consists of two main components, namely: (1) a file that
contains compiled code and other resources, and (2) an executing environment that
understands how to execute the file. Developers build and assemble their modules into
compiled assets that can be distributed as single or multiple files. In Java, for example,
the jar file is the unit of physical modularity for code distribution (.NET has the
assembly). The file and its associated metadata are designed to be loaded and executed
by a runtime environment that understands how to run the compiled code.

Physical modularity can also be affected by the context and scale of abstraction. Within
Java, for instance, the developer community has created and accepted several physical
modularity strategies to address different aspects of enterprise development: 1) WAR
for web components; 2) EJB for distributed enterprise components; 3) EAR for
enterprise application components; 4) vendor-specific modules such as the JBoss
Service Archive (SAR). These are usually variations of the JAR file format with special
metadata targeting the intended runtime environment. The current trend of adoption
seems to be pointing to OSGi as a generic physical module format. OSGi provides the
Java environment with additional functionality that should allow developers to model
their modules to scale from small embeddable components to complex enterprise
components (a lofty goal indeed).

3.16 Benefits of Modular Design

 Scalable Development: a modular design allows a project to be naturally
subdivided along the lines of its modules. A developer (or group of developers)
can be assigned a module to implement independently, which allows an
asynchronous project flow.
 Testable Code Units: when your code is partitioned into functionally-related
chunks, testing each module independently becomes easier. With the proper
testing framework, developers can exercise each module (and its constituents)
without having to bring up the entire project.
 Robust Systems: in a monolithic software design, as your system grows in
complexity, so does its propensity to be brittle (changes in one section cause
failures in another). Modularity lets you build a complex system composed of
smaller parts that can be independently managed and maintained. Fixes in one
portion of the code do not necessarily affect the entire system.
 Easier Modification and Maintenance: post-production system maintenance is
another crucial benefit of modular design. Developers can fix and make
non-infrastructural changes to a module without affecting other modules. The
updated module can independently go through the build and release cycle
without the need to re-build and redeploy the entire system.
 Functionally Scalable: depending on the level of sophistication of your modular
design, it is possible to introduce new functionality with little or no change to
existing modules. This allows your software system to scale in functionality
without becoming brittle and a burden on developers.

3.17 Approaches of writing Modular program

The three basic approaches to designing a modular program are:

 Process-oriented design

This approach places the emphasis on the process with the objective being to design
modules that have high cohesion and low coupling. (Data flow analysis and data flow
diagrams are often used.)

 Data-oriented design

In this approach the data comes first: the structure of the data is determined first, and
then the procedures are designed to fit that structure.

 Object-oriented design

In this approach, the objective is to first identify the objects and then build the product
around them. In essence, this technique is both data- and process-oriented.

3.18 Criteria for using Modular Design

 Modular decomposability – if the design method provides a systematic
means for breaking the problem into sub-problems, it will reduce the
complexity of the overall problem, thereby achieving a modular solution.
 Modular composability – if the design method enables existing (reusable)
design components to be assembled into a new system, it will yield a modular
solution that does not reinvent the wheel.
 Modular understandability – if a module can be understood as a stand-alone
unit (without reference to other modules), it will be easier to build and
easier to change.
 Modular continuity – if small changes to the system requirements result in
changes to individual modules, rather than system-wide changes, the impact
of change-induced side-effects will be minimised.
 Modular protection – if an abnormal condition occurs within a module and
its effects are constrained within that module, the impact of error-induced
side-effects will be minimised.

3.19 Attributes of a good Module

 Functional independence - modules have high cohesion and low coupling.
 Cohesion - a qualitative indication of the degree to which a module focuses on
just one thing.
 Coupling - a qualitative indication of the degree to which a module is connected
to other modules and to the outside world. A short sketch contrasting these
attributes follows.
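
As a hypothetical Python sketch of these attributes (the functions and names are
invented for illustration, not from the source), compare a function with high cohesion
and low coupling against one that mixes responsibilities and depends on global state:

    # High cohesion, low coupling: one job, all data passed in explicitly.
    def weekly_gross_pay(rate: float, hours: float) -> float:
        return rate * hours

    # Low cohesion, high coupling: payroll, logging and shared global state
    # are tangled together in a single routine.
    company_total = 0.0

    def pay_and_log(employee: dict) -> None:
        global company_total
        gross = employee["rate"] * employee["hours"]
        company_total += gross           # hidden dependence on a global
        print("Paid", employee["name"])  # unrelated output responsibility

The first function can be tested and reused in isolation; the second cannot be exercised
without setting up the global and capturing its printed output.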

3.20 Steps to Creating Effective Module

 Evaluate the first iteration of the program structure to reduce coupling and
improve cohesion. Once the program structure has been developed, modules may
be exploded or imploded with the aim of improving module independence.
o An exploded module becomes two or more modules in the final program
structure.
o An imploded module is the result of combining the processing implied by
two or more modules.

An exploded module normally results when common processing exists in two or more
modules and can be redefined as a separate cohesive module. When high coupling is
expected, modules can sometimes be imploded to reduce the passage of control,
references to global data, and interface complexity.

 Attempt to minimise structures with high fan-out; strive for fan-in as structure
depth increases. The structure shown inside the cloud in Fig 6 does not make
effective use of factoring.

Fig 6: Example of a program structure

 Keep the scope of effect of a module within the scope of control of that module.
o The scope of effect of a module is defined as all other modules that are
affected by a decision made in that module. For example, the scope of
control of module e is all the modules that are subordinate to it, i.e.
modules f, g, h, n, p and q.

 Evaluate module interfaces to reduce complexity, reduce redundancy, and


improve consistency.
o Module interface complexity is a prime cause of software errors.
Interfaces should be designed to pass information simply and should be
consistent with the function of the module. Interface inconsistency (i.e.
seemingly unrelated data passed via an argument list or other technique) is
an indication of low cohesion; the module in question should be
re-evaluated.

 Define modules whose function is predictable and not overly restrictive (e.g. a
module that only implements a single task).
o A module is predictable when it can be treated as a black box; that is, the
same external data will be produced regardless of internal processing
details. Modules that have internal "memory" can be unpredictable unless
care is taken in their use.
o A module that restricts processing to a single task exhibits high cohesion
and is viewed favourably by a designer.

 Strive for controlled-entry modules; avoid pathological connections (e.g.
branches into the middle of another module).
o This warns against content coupling. Software is easier to understand and
maintain if module interfaces are constrained and controlled.

3.21 Programming Languages that formally support module concept

Languages that formally support the module concept include IBM/360 Assembler,
COBOL, RPG, PL/1, Ada, D, F, Fortran, Haskell, OCaml, Pascal, ML, Modula-2,
Erlang, Perl, Python and Ruby. The IBM System i also uses modules in RPG, COBOL
and CL when programming in the ILE environment. Modular programming can be
performed even where the programming language lacks explicit syntactic features to
support named modules.

Software tools can create modular code units from groups of components. Libraries of
components built from separately compiled modules can be combined into a whole by
using a linker.

3.22 Module Interconnection Languages

Module interconnection languages (MILs) provide formal grammar constructs for
describing the various module interconnection specifications required to assemble a
complete software system. MILs enable the separation between programming-in-the-
small and programming-in-the-large. Coding a module represents programming in the
small, while assembling a system with the help of a MIL represents programming in the
large. An example of a MIL is MIL-75.

3.23 Top-Down Design

Top-down is a programming style, the core of traditional procedural languages, in which
design begins by specifying complex pieces and then dividing them into successively
smaller pieces. Finally, the components are precise enough to be coded and the program
is written. It is the exact opposite of the bottom-up programming approach, which is
common in object-oriented languages such as C++ or Java.

The method of writing a program using the top-down approach is to write a main
procedure that names all the major functions it will need. The programming team then
examines the requirements of each of those functions and repeats the process. These
compartmentalized sub-routines eventually perform actions so straightforward that they
can be easily and concisely coded. The program is done when all the various
sub-routines have been coded. A sketch of such a skeleton is shown below.
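
A hypothetical Python sketch of such a "main procedure first" skeleton (the function
names are illustrative only, not from the source) might look like this:

    # Top-down skeleton: main() is written first and names the major
    # functions; each stub below it is refined in a later pass.
    def main() -> None:
        data = read_input()
        results = process(data)
        write_report(results)

    def read_input():
        raise NotImplementedError("refined in a later pass")

    def process(data):
        raise NotImplementedError("refined in a later pass")

    def write_report(results):
        raise NotImplementedError("refined in a later pass")

    if __name__ == "__main__":
        main()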

Merits of top-down programming:

 Separating the low-level work from the higher-level abstractions leads to a
modular design.
 Modular design means development can be self-contained.
 Having "skeleton" code illustrates clearly how low-level modules integrate.
 Fewer operational errors.
 Much less time-consuming (each programmer is only concerned with a part of
the big project).
 A very optimized way of processing (each programmer applies their own
knowledge and experience to their part (module), so the project becomes an
optimized one).
 Easy to maintain (if an error occurs in the output, it is easy to identify which
module of the entire program generated the error).

3.24 Bottom-up approach

In a bottom-up approach the individual base elements of the system are first specified in
great detail. These elements are then connected together to form bigger subsystems,
which are linked, sometimes in many levels, until a complete top-level system is formed.
This strategy often resembles a "seed" model, whereby the beginnings are small, but
eventually grow in complexity and completeness.

Object-oriented programming (OOP) is a programming paradigm that uses "objects" to
design applications and computer programs.

The bottom-up approach has one drawback: considerable insight is needed to decide the
functionality that each module should provide. The approach is more suitable when a
system is to be developed from an existing system, because it starts from existing
modules. Modern software design approaches usually mix the top-down and bottom-up
approaches; a small bottom-up sketch follows.
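
For contrast, here is a hypothetical bottom-up sketch in Python (the functions are
invented for illustration): the small base elements are fully specified first and only then
composed into a larger subsystem:

    # Bottom-up: the base elements are specified in detail first ...
    def celsius_to_fahrenheit(c: float) -> float:
        return c * 9 / 5 + 32

    def average(values: list) -> float:
        return sum(values) / len(values)

    # ... and only then connected into a bigger subsystem.
    def weekly_report(daily_celsius: list) -> str:
        avg_f = celsius_to_fahrenheit(average(daily_celsius))
        return "Average temperature this week: %.1f F" % avg_f

Each base element is useful and testable on its own, which is exactly the "seed" quality
the bottom-up strategy relies on as the system grows.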

Activity H: What are the steps to creating effective modules?

4.0 Conclusion

The benefits of modular programming cannot be overemphasised. Among other things,
it allows for scalable development, facilitates code testing, helps in building robust
systems, and allows for easier modification and maintenance.

5.0 Summary

In this unit, we have learnt that:

 Modularity is a general systems concept: the degree to which a system's
components may be separated and recombined. It refers both to the tightness of
coupling between components and to the degree to which the "rules" of the
system architecture enable (or prohibit) the mixing and matching of components.
 Physical modularity is probably the earliest form of modularity introduced in
software creation. It consists of two main components, namely: (1) a file that
contains compiled code and other resources, and (2) an executing environment
that understands how to execute the file. Developers build and assemble their
modules into compiled assets that can be distributed as single or multiple files.
 Logical modularity is concerned with the internal organization of code into
logically-related units.
 Modular programming is beneficial in that it allows for scalable development,
facilitates code testing, helps in building robust systems, and allows for easier
modification and maintenance.
 The three basic approaches to designing modular programs are: process-oriented
design, data-oriented design and object-oriented design.
 Criteria for using modular design include: modular decomposability, modular
composability, modular understandability, modular continuity, and modular
protection.
 Attributes of a good module include: functional independence, cohesion, and
coupling.
 Steps to creating effective modules include: evaluate the first iteration of the
program structure to reduce coupling and improve cohesion; attempt to minimise
structures with high fan-out and strive for fan-in as structure depth increases;
define modules whose function is predictable and not overly restrictive (e.g. a
module that only implements a single task); and strive for controlled-entry
modules, avoiding pathological connections (e.g. branches into the middle of
another module).
 Top-down is a programming style, the core of traditional procedural languages, in
which design begins by specifying complex pieces and then dividing them into
successively smaller pieces. Finally, the components are precise enough to be
coded and the program is written.
 In a bottom-up approach the individual base elements of the system are first
specified in great detail. These elements are then connected together to form
bigger subsystems, which are linked, sometimes in many levels, until a complete
top-level system is formed

6.0 Tutor Marked Assignment

 What is modularity?
 Differentiate between logical and physical modularity.
 What are the benefits of modular design?
 Explain the approaches to writing modular programs.
 What are the criteria for using modular design?
 Outline the attributes of a good module.
 Outline the steps to creating effective modules.
 Differentiate between the top-down and bottom-up programming approaches.

7.0 Further Reading And Other Resources

Laplante, Phil (2009). Requirements Engineering for Software and Systems (1st ed.).
Boca Raton, FL: CRC Press. ISBN 1-42006-467-3.
http://beta.crcpress.com/product/isbn/9781420064674.

McConnell, Steve (1996). Rapid Development: Taming Wild Software Schedules (1st
ed.). Redmond, WA: Microsoft Press. ISBN 1-55615-900-5.
http://www.stevemcconnell.com/.

Wiegers, Karl E. (2003). Software Requirements 2: Practical Techniques for Gathering
and Managing Requirements Throughout the Product Development Cycle (2nd ed.).
Redmond: Microsoft Press. ISBN 0-7356-1879-8.

Andrew Stellman and Jennifer Greene (2005). Applied Software Project Management.
Cambridge, MA: O'Reilly Media. ISBN 0-596-00948-8.
http://www.stellman-greene.com.

Unit 4: Pseudo-code

1.0 Introduction

In the last unit, you learnt about modularity in programming: its benefits, its design
approaches and criteria, the attributes of a good module, and the steps to creating
effective modules. You also learnt about the top-down and bottom-up approaches to
programming. This unit ushers you into pseudo-code, a way to create a logical structure
describing the actions that will be executed by the application. After studying this unit
you are expected to have achieved the objectives listed below.

2.0 Objectives

By the end of this unit, you should be able to:


 Define pseudo-code
 Explain the general guidelines for writing pseudo-code
 Give examples of pseudo-code

3.25 Definition of Pseudo-code


Pseudo-code is a non-formal language, a way to create a logical structure describing the actions that will be executed by the application. Using pseudo-code, the developer expresses the application logic in plain, everyday language, without applying the structural rules of a specific programming language. The big advantage of pseudo-code is that the application logic can be easily comprehended by any developer in the development team. In addition, when the application algorithm is expressed in pseudo-code, it is very easy to convert the pseudo-code into real code (using any programming language).
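To illustrate how directly pseudo-code converts, here is a minimal sketch (ours, in Python; any language would do) in which each pseudo-code line becomes one line of real code. The discount rule is invented purely for the example.

# 1. get price of item
price = float(input("Price of item: "))
# 2. if price is greater than 100 then reduce price by 5 percent
if price > 100:
    price = price * 0.95
# 3. display price
print(price)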

3.1 General guidelines for writing Pseudo code

Here are a few general guidelines for writing your pseudo-code:

 Mimic good code and good English. Using aspects of both systems means adhering to the style rules of both to some degree. It is still important that variable names be mnemonic, comments be included where useful, and English phrases be comprehensible (full sentences are usually not necessary).
 Ignore unnecessary details. If you are worrying about the placement of commas, you are using too much detail. It is a good idea to use some convention to group statements (begin/end, brackets, or whatever else is clear), but you shouldn't obsess about syntax.
 Don't belabor the obvious. In many cases, the type of a variable is clear from context; unless it is critical that it is specified to be an integer or real, it is often unnecessary to make it explicit.
 Take advantage of programming shorthands. Using if-then-else or looping structures is more concise than writing out the equivalent in English; general constructs that are not peculiar to a small number of languages are good candidates for use in pseudo-code. Using parameters in specifying procedures is concise, clear, and accurate, and hence should not be omitted from pseudo-code.
 Consider the context. If you are writing an algorithm for quicksort, the statement "use quicksort to sort the values" is hiding too much detail; if you have already studied quicksort in a class and later use it as a subroutine in another algorithm, the statement would be appropriate to use.
 Don't lose sight of the underlying model. It should be possible to "see through" your pseudo-code to the model below; if not (that is, you are not able to analyze the algorithm easily), it is written at too high a level.
 Check for balance. If the pseudo-code is hard for a person to read or difficult to translate into working code (or, worse yet, both!), then something is wrong with the level of detail you have chosen to use.

3.2 Examples of Pseudo-code

Example 1 - Computing Value Added Tax (VAT): Pseudo-code the task of computing the final price of an item after figuring in the tax. Note the three types of instructions: input (get), process/calculate (=) and output (display).

1. get price of item
2. get VAT rate
3. VAT = price of item times VAT rate
4. final price = price of item plus VAT
5. display final price
6. stop

Variables: price of item, VAT rate, VAT, final price

Note that the operations are numbered and each operation is unambiguous and effectively computable. We also extract and list all variables used in our pseudo-code. This will be useful when translating the pseudo-code into a programming language.
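As an illustration, the pseudo-code of Example 1 might translate into Python as follows (a sketch under our own naming; the prompts and the language choice are not part of the exercise):

price_of_item = float(input("Price of item: "))     # 1. get price of item
vat_rate = float(input("VAT rate, e.g. 0.05: "))    # 2. get VAT rate
vat = price_of_item * vat_rate                      # 3. VAT = price of item times VAT rate
final_price = price_of_item + vat                   # 4. final price = price of item plus VAT
print("Final price:", final_price)                  # 5. display final price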

Example 2 - Computing Weekly Wages: Gross pay depends on the pay rate and the
number of hours worked per week. However, if you work more than 50 hours, you get
paid time-and-a-half for all hours worked over 50. Pseudo-code the task of computing
gross pay given pay rate and hours worked.

1. get hours worked
2. get pay rate
3. if hours worked ≤ 50 then
3.1 gross pay = pay rate times hours worked
4. else
4.1 gross pay = pay rate times 50 plus 1.5 times pay rate times (hours worked minus 50)
5. display gross pay
6. halt

Variables: hours worked, pay rate, gross pay

This example presents the conditional control structure. On the basis of the true/false question asked in line 3, line 3.1 is executed if the answer is True; otherwise, if the answer is False, the lines subordinate to line 4 (i.e. line 4.1) are executed. In both cases execution resumes at line 5.
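A minimal Python rendering of this conditional (again a sketch of ours, not part of the exercise):

hours_worked = float(input("Hours worked: "))    # 1
pay_rate = float(input("Pay rate: "))            # 2
if hours_worked <= 50:                           # 3
    gross_pay = pay_rate * hours_worked          # 3.1
else:                                            # 4
    gross_pay = pay_rate * 50 + 1.5 * pay_rate * (hours_worked - 50)  # 4.1
print("Gross pay:", gross_pay)                   # 5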

Example 3 - Computing a Question Average: Pseudo-code a routine to calculate your question average.

1. get number of questions
2. sum = 0
3. count = 0
4. while count < number of questions
4.1 get question grade
4.2 sum = sum + question grade
4.3 count = count + 1
5. average = sum / number of questions
6. display average
7. stop

Variables: number of questions, sum, count, question grade, average

This example presents an iterative control statement. As long as the condition in line 4 is
True, we execute the subordinate operations 4.1 - 4.3. When the condition is False, we
return to the pseudo-code at line 5.

This is an example of a top-test (while ... do) iterative control structure. There is also a bottom-test (repeat ... until) iterative control structure, which executes a block of statements repeatedly until the condition tested at the end of the block becomes True.
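The difference can be sketched in Python (our illustration; Python has no built-in repeat-until, so the bottom-test form is emulated with break):

# Top-test (while ... do): the condition is checked before each pass,
# so the body may execute zero times.
count = 0
while count < 3:
    print("top-test pass", count)
    count = count + 1

# Bottom-test (repeat ... until): the body always executes at least once;
# the condition is tested at the end of the block.
count = 0
while True:
    print("bottom-test pass", count)
    count = count + 1
    if count >= 3:   # "until count >= 3"
        break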

Some Keywords That Should be Used

For looping and selection, the keywords to be used include: Do While...EndDo; Do Until...EndDo; Case...EndCase; If...EndIf; Call...with (parameters); Call; Return...; Return; When. Always use scope terminators for loops and iteration.

As verbs, use words such as generate, compute, process, set, reset, increment, calculate, add, sum, multiply, print, display, input, output, edit and test; such verbs, combined with careful indentation, tend to foster readable pseudo-code.

Do not include data declarations in your pseudo code.

Activity I: Write a pseudo-code to find the average of the even numbers between 1 and 20.

4.0 Conclusion

The role of pseudo-code in program design cannot be overstated. When it is used, not only can the logic of the application be easily understood, but the pseudo-code can also be easily converted into real code.

5.0 Summary

In this unit, you have learnt about the essence of pseudo-code in program design.

6.0 Tutor Marked Assignment

 What is pseudo-code?
 Explain the general guidelines for writing pseudo-code.
 Write a pseudo-code to find the average of the even numbers between 1 and 20.

7.0 Further Reading And Other Resources

Robertson, L. A. (2003). Simple Program Design: A Step-by-Step Approach (4th ed.). Melbourne: Thomson.

Unit 5: Programming Environment, CASE Tools & HIPO Diagrams

1.0 Introduction

In the last unit, you have learnt about pseudo-code. In this unit you will be exposed to programming environments, CASE tools and HIPO diagrams. After studying this unit you are expected to have achieved the following objectives listed below.

2.0 Objectives
By the end of this unit, you should be able to:
 Explain programming environments.

 Discuss CASE tools.
 Explain HIPO diagrams.

3.0 Definition of Programming Environment

Programming environments give the basic tools and Application Programming Interfaces (APIs) necessary to construct programs. Programming environments help in the creation, modification, execution and debugging of programs. The goal of integrating a programming environment is more than simply building tools that share a common database and provide a consistent user interface: altogether, the programming environment appears to the programmer as a single tool, with no firewalls separating the various functions provided by the environment.

3.1 History of Programming Environment

The history of software tools began with the first computers in the early 1950s, which used linkers, loaders, and control programs. In the early 1970s, tools became prominent with Unix, with utilities like grep, awk and make that were meant to be combined flexibly with pipes. The term "software tools" came from the book of the same name by Brian Kernighan and P. J. Plauger. Originally, tools were simple and lightweight. As some tools have been maintained, they have been integrated into more powerful integrated development environments (IDEs). These environments combine functionality into one place, sometimes increasing simplicity and productivity, at other times sacrificing flexibility and extensibility. The workflow of IDEs is routinely contrasted with alternative approaches, such as the use of Unix shell tools with text editors like Vim and Emacs.

The distinction between tools and applications is blurred. For example, developers use simple databases (such as a file containing a list of important values) as tools all the time. However, a full-blown database is usually thought of as an application in its own right.

For many years, computer-assisted software engineering (CASE) tools were the favoured approach; they emphasised design and architecture support, such as for UML. However, the most successful of these tools have been IDEs.

The ability to use a variety of tools productively is one quality of a skilled software
engineer.

3.2 Types of Programming Environment

Software development tools can be roughly divided into the following categories:

 performance analysis tools


 debugging tools
 static analysis and formal verification tools

 correctness checking tools
 memory usage tools
 application build tools
 integrated development environment

3.3 Forms of Software tools

Software tools come in many forms, namely:

 Bug Databases: Bugzilla, Trac, Atlassian Jira, LibreSource, SharpForge


 Build Tools: Make, automake, Apache Ant, SCons, Rake, Flowtracer, cmake,
qmake
 Code coverage: C++test, GCT, Insure++, Jtest, CCover
 Code Sharing Sites: Freshmeat, Krugle, Sourceforge. See also Code search
engines.
 Compilation and linking tools: GNU toolchain, gcc, Microsoft Visual Studio,
CodeWarrior, Xcode, ICC

 Debuggers: gdb, GNU Binutils, valgrind. Debugging tools are used in the process of debugging code, and can also help produce code that is more standards-compliant and portable than it would otherwise be.

 Disassemblers: Generally reverse-engineering tools.


 Documentation generators: Doxygen, help2man, POD, Javadoc, Pydoc/Epydoc,
asciidoc
 Formal methods: Mathematically-based techniques for specification, development
and verification
 GUI interface generators
 Library interface generators: Swig
 Integration Tools

 Memory use/leaks/corruption detection: dmalloc, Electric Fence, duma, Insure++. In the C programming language, for instance, memory leaks are not easily detected; software tools called memory debuggers are often used to find them, enabling the programmer to locate these problems much more efficiently than by inspection alone.

 Parser generators: Lex, Yacc


 Performance analysis or profiling
 Refactoring Browser
 Revision control: Bazaar, Bitkeeper, Bonsai, ClearCase, CVS, Git, GNU arch,
Mercurial, Monotone, Perforce, PVCS, RCS, SCM, SCCS, SourceSafe, SVN,
LibreSource Synchronizer
 Scripting languages: Awk, Perl, Python, REXX, Ruby, Shell, Tcl
 Search: grep, find
 Source-Code Clones/Duplications Finding

 Source code formatting
 Source code generation tools
 Static code analysis: C++test, Jtest, lint, Splint, PMD, Findbugs, .TEST
 Text editors: emacs, vi, vim

3.4 Integrated development environments

Integrated development environments (IDEs) merge the features of many tools into one complete package. They make it easier to do simple tasks, such as searching for content only in the files of a particular project. IDEs are often used for the development of enterprise-level applications. Some examples of IDEs are:

 Delphi
 C++ Builder (CodeGear)
 Microsoft Visual Studio
 EiffelStudio
 GNAT Programming Studio
 Xcode
 IBM Rational Application Developer
 Eclipse
 NetBeans
 IntelliJ IDEA
 WinDev
 Code::Blocks
 Lazarus

3.5 What are CASE Tools?

CASE tools are a class of software that automates many of the activities involved in
various life cycle phases. For example, when establishing the functional requirements of
a proposed application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will look after
development. Subsequently, system designers can use automated design tools to
transform the prototyped functional requirements into detailed design documents.
Programmers can then use automated code generators to convert the design documents
into code. Automated tools can be used collectively, as mentioned, or individually. For
example, prototyping tools could be used to define application requirements that get
passed to design technicians who convert the requirements into detailed designs in a
traditional manner using flowcharts and narrative documents, without the assistance of
automated design software.

CASE is the scientific application of a set of tools and methods to a software system, meant to result in high-quality, defect-free, and maintainable software products. It also refers to methods for the development of information systems together with the automated tools that can be used in the software development process.

3.6 Types of CASE Tools

Some typical CASE tools are:

 Configuration management tools


 Data modeling tools
 Model transformation tools
 Program transformation tools
 Refactoring tools
 Source code generation tools, and
 Unified Modeling Language

Many CASE tools not only yield code but also generate other output typical of various
systems analysis and design methodologies such as:

 data flow diagram


 entity relationship diagram
 logical schema
 Program specification
 SSADM.
 User documentation

3.7 History of CASE

The term CASE was originally formulated by the software company Nastec Corporation of Southfield, Michigan, in 1982, with their original integrated graphics and text editor GraphiText, which was also the first microcomputer-based system to use hyperlinks to cross-reference text strings in documents. Under the direction of Albert F. Case, Jr., vice president for product management and consulting, and Vaughn Frick, director of product management, the DesignAid product suite was expanded to support analysis of a wide range of structured analysis and design methodologies, notably those of Ed Yourdon and Tom DeMarco, Chris Gane and Trish Sarson, Ward-Mellor (real-time) SA/SD, and Warnier-Orr (data driven).

The next competitor into the market was Excelerator from Index Technology in
Cambridge, Mass. While DesignAid ran on Convergent Technologies and later
Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/
AT platform. While, at the time of launch, and for several years, the IBM platform did
not support networking or a centralized database as did the Convergent Technologies or
Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence.
Hot on the heels of Excelerator came a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instruments' IEF, and Accenture's FOUNDATION toolset (METHOD/1, DESIGN/1, INSTALL/1, FCP).

CASE tools were at their peak in the early 1990s. At the time, IBM had proposed AD/Cycle, an alliance of software vendors centered around IBM's software repository, which used IBM DB2 on the mainframe and OS/2:

The application development tools can be from several sources: from IBM, from vendors,
and from the customers themselves. IBM has entered into relationships with Bachman
Information Systems, Index Technology Corporation, and Knowledgeware, Inc. wherein
selected products from these vendors will be marketed through an IBM complementary
marketing program to provide offerings that will help to achieve complete life-cycle
coverage.

With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening
the market for the mainstream CASE tools of today. Interestingly, nearly all of the
leaders of the CASE market of the early 1990s ended up being purchased by Computer
Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett
Management Systems (LBMS).

3.8 Categories of CASE Tools

CASE tools can be classified into three categories:

 Tools support only specific tasks in the software process.


 Workbenches support only one or a few activities.
 Environments support (a large part of) the software process.

Workbenches and environments are generally built as collections of tools. Tools can
therefore be either stand alone products or components of workbenches and
environments.

3.9 CASE Environment

An environment is a collection of CASE tools and workbenches that supports the software process. CASE environments are classified based on their focus/basis of integration:

 Toolkits
 Language-centered
 Integrated
 Fourth generation
 Process-centered

3.9.1 Toolkits
Toolkits are loosely integrated collections of products easily extended by aggregating different tools and workbenches. Typically, the support provided by a toolkit is limited to programming, configuration management and project management. A toolkit is itself an environment extended from a basic set of operating-system tools, for example the Unix Programmer's Work Bench and the VMS VAX Set. In addition, the loose integration of a toolkit requires users to activate tools by explicit invocation or simple control mechanisms. The resulting files are unstructured and could be in different formats, so accessing a file from different tools may require explicit file-format conversion. However, since the only constraint for adding a new component is the format of the files, toolkits can be easily and incrementally extended.

3.9.2 Language-centered
The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code in different languages is a major issue for language-centered environments, and lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration. Interlisp, Smalltalk, Rational, and KEE are examples of language-centered environments.

3.9.3 Integrated
These environments achieve presentation integration by providing uniform, consistent,
and coherent tool and workbench interfaces. Data integration is achieved through the
repository concept: they have a specialized database managing all information produced
and accessed in the environment. Examples of integrated environment are IBM AD/Cycle
and DEC Cohesion.

3.9.4 Fourth generation


Fourth-generation environments were the first integrated environments. They are sets of tools and workbenches supporting the development of a specific class of program: electronic data processing and business-oriented applications. In general, they include programming tools, simple configuration management tools, document handling facilities and, sometimes, a code generator to produce code in lower-level languages. Informix 4GL and Focus fall into this category.

3.9.5 Process-centered
Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centered environment operates by interpreting a
process model created by specialized tools. They usually consist of tools handling two
functions:

 Process-model execution, and


 Process-model production

Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.

3.10 Application areas of CASE Tools

All aspects of the software development life cycle can be supported by software tools,
and so the use of tools from across the spectrum can, arguably, be described as CASE;
from project management software through tools for business and functional analysis,
system design, code storage, compilers, translation tools, test software, and so on.

However, it is the tools that are concerned with analysis and design, and with using
design information to create parts (or all) of the software product, that are most
frequently thought of as CASE tools. CASE applied, for instance, to a database software
product, might normally involve:

 Modeling business/real world processes and data flow


 Development of data models in the form of entity-relationship diagrams
 Development of process and function descriptions
 Production of database creation SQL and stored procedures

3.11 CASE Risk

Common CASE risks and associated controls include:

 Inadequate Standardization: Linking CASE tools from different vendors (design


tool from Company X, programming tool from Company Y) may be difficult if
the products do not use standardized code structures and data classifications. File
formats can be converted, but usually not economically. Controls include using
tools from the same vendor, or using tools based on standard protocols and
insisting on demonstrated compatibility. Additionally, if organizations obtain
tools for only a portion of the development process, they should consider
acquiring them from a vendor that has a full line of products to ensure future
compatibility if they add more tools.

 Unrealistic Expectations: Organizations often implement CASE technologies to


reduce development costs. Implementing CASE strategies usually involves high
start-up costs. Generally, management must be willing to accept a long-term
payback period. Controls include requiring senior managers to define their
purpose and strategies for implementing CASE technologies.

 Quick Implementation: Implementing CASE technologies can involve a


significant change from traditional development environments. Typically,
organizations should not use CASE tools the first time on critical projects or
projects with short deadlines because of the lengthy training process.
Additionally, organizations should consider using the tools on smaller, less
complex projects and gradually implementing the tools to allow more training
time.

 Weak Repository Controls: Failure to adequately control access to CASE repositories may result in security breaches or damage to the work documents, system designs, or code modules stored in the repository. Controls include protecting the repositories with appropriate access, version, and backup controls.

3.12 HIPO Diagrams

The HIPO (Hierarchy plus Input-Process-Output) technique is a tool for planning and/or documenting a computer program. A HIPO model consists of a hierarchy chart that graphically represents the program's control structure and a set of IPO (Input-Process-Output) charts that describe the inputs to, the outputs from, and the functions (or processes) performed by each module on the hierarchy chart.

3.13 Strengths, weaknesses, and limitations

Using the HIPO technique, designers can evaluate and refine a program's design, and correct flaws prior to implementation. Given the graphic nature of HIPO, users and managers can easily follow a program's structure. The hierarchy chart serves as a useful planning and visualization document for managing the program development process. The IPO charts define for the programmer each module's inputs, outputs, and algorithms.

In theory, HIPO provides valuable long-term documentation. However, the "text plus flowchart" nature of the IPO charts makes them difficult to maintain, so the documentation often does not represent the current state of the program.

By its very nature, the HIPO technique is best used to plan and/or document a hierarchically structured program.

The HIPO technique is often used to plan or document a structured program. A variety of tools, including pseudo-code and structured English, can be used to describe processes on an IPO chart. System flowcharting symbols are sometimes used to identify physical input, output, and storage devices on an IPO chart.

3.14 Components of HIPO

A completed HIPO package has two parts. A hierarchy chart is used to represent the top-
down structure of the program. For each module depicted on the hierarchy chart, an IPO
(Input-Process-Output) chart is used to describe the inputs to, the outputs from, and the
process performed by the module.

3.14.1 The hierarchy chart

The hierarchy chart summarises the primary tasks to be performed by an interactive inventory program. Figure 7 shows one possible hierarchy chart (or visual table of contents) for that program. Each box represents one module that can call its subordinates and return control to its higher-level parent.

The set of tasks to be performed by an interactive inventory program is:

 Manage inventory
 Update stock
 Process sale
 Process return
 Process shipment
 Generate report
 Respond to query
 Display status report
 Maintain inventory data
 Modify record
 Add record
 Delete record

Figure 7 A hierarchy chart for an interactive inventory control program.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

At the top of Figure 7 is the main control module, Manage inventory (module 1.0). It
accepts a transaction, determines the transaction type, and calls one of its three
subordinates (modules 2.0, 3.0, and 4.0).

Lower-level modules are identified relative to their parent modules; for example, modules 2.1, 2.2, and 2.3 are subordinates of module 2.0; modules 2.1.1, 2.1.2, and 2.1.3 are subordinates of 2.1; and so on. The module names consist of an active verb followed by a subject that suggests the module's function.

The objective of the module identifiers is to uniquely identify each module and to
indicate its place in the hierarchy. Some designers use Roman numerals (level I, level II)
or letters (level A, level B) to designate levels. Others prefer a hierarchical numbering
scheme; e.g., 1.0 for the first level; 1.1, 1.2, 1.3 for the second level; and so on. The key
is consistency.
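The numbering scheme can be pictured as a tree. Here is a sketch (ours, in Python) of the Figure 7 hierarchy as nested dictionaries keyed by module identifier; the assignment of the listed tasks to particular numbers below 2.0, 3.0 and 4.0 is our assumption from the task list.

# The interactive inventory hierarchy chart as a nested dictionary.
hierarchy = {
    "1.0 Manage inventory": {
        "2.0 Update stock": {
            "2.1 Process sale": {},
            "2.2 Process return": {},
            "2.3 Process shipment": {},
        },
        "3.0 Generate report": {
            "3.1 Respond to query": {},
            "3.2 Display status report": {},
        },
        "4.0 Maintain inventory data": {
            "4.1 Modify record": {},
            "4.2 Add record": {},
            "4.3 Delete record": {},
        },
    },
}

# The subordinates of module 1.0:
print(list(hierarchy["1.0 Manage inventory"].keys()))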

The box at the lower-left of Figure 7 is a legend that explains how the arrows on the
hierarchy chart and the IPO charts are to be interpreted. By default, a wide clear arrow
represents a data flow, a wide black arrow represents a control flow, and a narrow arrow
indicates a pointer.

3.14.2 The IPO charts

An IPO chart is prepared to document each of the modules on the hierarchy chart.

3.14.2.1 Overview diagrams

An overview diagram is a high-level IPO chart that summarizes the inputs to, the processes or tasks performed by, and the outputs from a module. For example, Figure 7.1 shows an overview diagram for process 2.0, Update stock. Where appropriate, system flowcharting symbols are used to identify the physical devices that generate the inputs and accept the outputs. The processes are typically described in brief paragraph or sentence form. Arrows show the primary input and output data flows.

Figure 7.1 An overview diagram for process 2.0.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html

Overview diagrams are primarily planning tools. They often do not appear in the
completed documentation package.

3.14.2.2 Detail diagrams

A detail diagram is a low-level IPO chart that shows how specific input and output data elements or data structures are linked to specific processes. In effect, the designer integrates a system flowchart into the overview diagram to show the flow of data and control through the module.

The detail diagram for module 2.0, Update stock, shows its process steps written in pseudo-code. Note that the first step writes a menu to the user screen, and input data (the transaction type) flows from that screen to step 2. Step 3 is a case structure. Step 4 writes a transaction-complete message to the user screen.

The solid black arrows at the top and bottom of the process box show that control flows
from module 1.0 and, upon completion, returns to module 1.0. Within the case structure
(step 3) are other solid black arrows.

Following case 0 is a return (to module 1.0). The two-headed black arrows following cases 1, 2, and 3 represent subroutine calls; the off-page connector symbols (the little home plates) identify each subroutine's module number. Note that each subroutine is documented in a separate IPO chart. Following the default case, the arrow points to an on-page connector symbol numbered 1. Note the matching on-page connector symbol pointing to the select structure. On-page connectors are also used to avoid crossing arrows on data flows.

Figure 7.2 A detail diagram for process 2.1.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
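The control flow just described can be sketched in code. The following Python fragment (ours; the subroutine names, menu text and behaviour are hypothetical) mirrors the menu, the case structure with its calls to subordinate modules, and the completion message:

def process_sale():        # module 2.1 (stub for the sketch)
    pass

def process_return():      # module 2.2 (stub)
    pass

def process_shipment():    # module 2.3 (stub)
    pass

def update_stock():        # module 2.0, called from module 1.0
    print("1) Sale  2) Return  3) Shipment  0) Done")  # step 1: write menu
    transaction_type = input("Transaction type: ")      # step 2: read type
    if transaction_type == "0":       # step 3: case structure
        return                        # case 0: return to module 1.0
    elif transaction_type == "1":
        process_sale()                # subroutine call to module 2.1
    elif transaction_type == "2":
        process_return()              # subroutine call to module 2.2
    elif transaction_type == "3":
        process_shipment()            # subroutine call to module 2.3
    else:
        return update_stock()         # default case: back to the menu
    print("Transaction complete")     # step 4: completion message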

Often, detailed notes and explanations are written on an extended description that is
attached to each detail diagram. The notes might specify access methods, data types, and
so on.

Figure 7.2 shows a detail diagram for process 2.1. The module writes a template to the user screen, reads a stock number and a quantity from the screen, uses the stock number as a key to access an inventory file, and updates the stock on hand. Note that the logic repeats the data entry process if the stock number does not match an inventory record. A real IPO chart is likely to show the error response process in greater detail.

3.14.2.3 Simplified IPO charts

Some designers simplify the IPO charts by eliminating the arrows and system flowchart symbols and showing only the text. Often, the input and output blocks are moved above the process block (Figure 7.3), yielding a form that fits better on a standard 8.5 × 11 (portrait orientation) sheet of paper. Some programmers insert modified IPO charts similar to Figure 7.3 directly into their source code as comments (see the sketch after the figure). Because the documentation is closely linked to the code, it is often more reliable than stand-alone HIPO documentation, and more likely to be maintained.

Figure 7.3 A simplified HIPO diagram.

Source: www.hit.ac.il/staff/leonidM/information-systems/ch64.html
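As an illustration of embedding an IPO chart in source code, here is a minimal Python sketch (the function and field names are our own, not from the text):

def update_stock(stock_number, quantity, inventory):
    """Module 2.1 - Update stock.

    INPUT:   stock_number, quantity (from the user screen)
    PROCESS: use stock_number as a key into the inventory file and
             add quantity to the stock on hand
    OUTPUT:  the updated stock-on-hand figure
    """
    inventory[stock_number] = inventory.get(stock_number, 0) + quantity
    return inventory[stock_number]

# Usage, with a small in-memory stand-in for the inventory file:
inventory = {"A100": 5}
print(update_stock("A100", 3, inventory))   # 8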

Detail diagram —
A low-level IPO chart that shows how specific input and output data elements or data structures are linked to specific processes.
Hierarchy chart —
A diagram that graphically represents a program's control structure.
HIPO (Hierarchy plus Input-Process-Output) —
A tool for planning and/or documenting a computer program that utilizes a hierarchy chart to graphically represent the program's control structure and a set of IPO (Input-Process-Output) charts to describe the inputs to, the outputs from, and the functions performed by each module on the hierarchy chart.
IPO (Input-Process-Output) chart —
A chart that describes or documents the inputs to, the outputs from, and the functions (or processes) performed by a program module.
Overview diagram —
A high-level IPO chart that summarizes the inputs to, processes or tasks performed by, and outputs from a module.
Visual Table of Contents (VTOC) —
A more formal name for a hierarchy chart.

3.15 Software

In the 1970s and early 1980s, HIPO documentation was typically prepared by hand using a template. Some CASE products and charting programs include HIPO support, and some forms-generation programs can be used to generate HIPO forms. The examples in this unit were prepared using Visio.

Activity J: Discuss the historical development of CASE tools.

4.0 Conclusion

Programming tools are essential for effective program design.

5.0 Summary

In this unit, you have learnt that:

 Programming environments give the basic tools and Application Programming Interfaces (APIs) necessary to construct programs.
 Using the HIPO technique, designers can evaluate and refine a program's design, and correct flaws prior to implementation.
 CASE tools are a class of software that automates many of the activities involved in various life cycle phases.

6.0 Tutor Marked Assignment

 Explain programming environments.
 What are CASE tools? Enumerate the different categories of CASE tools.
 What is the HIPO technique?
 With the aid of well-labelled diagrams, discuss the components of HIPO.

7.0 Further Reading And Other Resources

Gane, C. and Sarson, T., Structured Systems Analysis: Tools and Techniques, Prentice-
Hall, Englewood Cliffs, NJ, 1979.
