
Software Engineering

Chapter One
Overview of Software Engineering
Chapter Objectives

At the end of the chapter students will be able to:

 Discuss the overview of software engineering in terms of its history, components, characteristics, goals, roles, principles, and relationships with other disciplines, and why it is needed.
 Identify the software development life cycle.
 Discuss the software process models.

Chapter Contents
1.1. Introduction
(Definition of software, Characteristics of software, Components of software, Software
applications, Definition of software engineering, History of software engineering,
Software complexity, Why software is inherently complex, The need for an engineering
approach, Goal of software engineering, Role of software engineer, Principles of
software engineering, Relationship between software engineering and computer
science, Relationship between software engineering and other disciplines)
1.2. Software Life Cycle
(Feasibility study, Requirement analysis and specification, Design and specification,
Coding and module testing, Integration and system testing, Delivery and maintenance)
1.3. Software Process Models
(Waterfall model, Prototyping model, Spiral model, RAD model)
1.1. Introduction
1.1.1. Definition of Software
Software can be described as:
(i) Instructions (computer programs) that, when executed, provide the desired function and
performance.
(ii) Data structures that enable the programs to adequately manipulate information for the
required results.
(iii) Documents that describe the operation and use of the programs so that users can better
understand and work with them.
1.1.2. Characteristics of Software
Software is a logical element, in contrast to hardware, which is a physical element. It has
characteristics that are considerably different from those of hardware.

a. Software is developed or engineered, it is not manufactured


Although some similarities exist between software development and hardware manufacturing,
the two activities are fundamentally different.
 In both activities, high quality is achieved through good design, but the manufacturing
phase for hardware can introduce quality problems that are nonexistent (or easily
corrected) for software.

 Both activities are dependent on people, but the relationship between people applied
and work accomplished is entirely different.
 Both activities require the construction of a “product”, but the approaches are different.
Software costs are concentrated in engineering, i.e., software projects cannot be
managed as if they were manufacturing projects.
b. Software does not “Wear Out”
Hardware has a tendency to wear out with time, but software does not wear out. Software does, however,
deteriorate with time, as new bugs are discovered in it. This introduces
patch work to the software in order to remove these bugs, which leads to entropy, i.e., the
software deteriorates in the conceptual sense and thus becomes outdated for an organization.

Hardware and Software Failure Rates


Fig 1.1 shows the failure rate as a function of time for hardware. The relationship, often
called the “bathtub curve”, indicates that hardware exhibits relatively high failure rates early in
its life (these failures are often attributable to design or manufacturing defects); defects are
corrected and the failure rate drops to a steady-state level (ideally, quite low) for some period
of time. As time passes, however, the failure rate rises again as hardware components suffer
from the cumulative effects of dust, vibration, abuse, temperature extremes, and other
environmental maladies. Stated simply, the hardware begins to wear out. Software is not
susceptible to the environmental maladies that cause hardware to wear out.

In theory, therefore, the failure rate curve for software should take the form of the “idealized
curve” shown in Fig 1.2. Undiscovered defects will cause high failure rates early in the life of
a program. However, these are corrected (ideally, without introducing other errors) and the
curve flattens as shown. The idealized curve is a gross oversimplification of actual failure
models for software. However, the implication is clear: software doesn't wear out, but it does
deteriorate.

This seeming contradiction can best be explained by considering the “actual curve” shown in
Fig 1.2. During its life, software will undergo change (e.g. maintenance). As the changes are
made, it is likely that some new defects will be introduced, causing the failure rate curve to
spike as shown in Fig 1.2. Before the curve can return to the original steady state failure rate,
another change is requested, causing the curve to spike again. Slowly, the minimum failure
rate level begins to rise, i.e., the software is deteriorating due to change.

Fig 1.1. Hardware failure curve (the “bathtub curve”: failure rate vs. time, showing the infant mortality and wear-out regions)
Fig 1.2. Software idealized and actual failure curves (failure rate vs. time; the actual curve spikes with each change)

c. Software is custom built.
In hardware, a design engineer can draw a simple schematic of a component, and the component is
implemented so that it can be reused in many different designs. In software, reusable scientific
subroutine libraries have been built for a broad array of engineering and scientific applications,
but most software is still custom built, as it generally needs to be developed according to the
specific requirements of the user.

1.1.3. Components of Software


A software component should be designed and implemented so that it can be reused in different
programs. The following are the important elements/components of software:
 Methods: are procedures for producing some result; a method is also referred to as a technique.
They involve some formal notations and processes.
 Procedures: combine tools and/or methods to produce a particular product.
Algorithms are examples of procedures.
 Tools: are automated systems that increase the accuracy, efficiency, productivity, or quality
of the end product. A tool that automates the preparation of a data flow diagram can
increase accuracy, efficiency, and productivity.
 Paradigms: refer to particular approaches or philosophies for building software. Different
paradigms have their own pros and cons that make them suitable over one another in a
given situation.

1.1.4. Software Applications


When looking at software application areas, information content and determinacy are the
important factors in determining the nature of a software application.
 Content refers to the meaning and form of incoming and outgoing information. For example, (i)
business applications use highly structured input data and produce formatted reports;
(ii) software that controls an automated machine accepts discrete data items with
limited structure and produces individual machine commands in rapid succession.
 Determinacy refers to the predictability of the order and timing of information. For
example, an engineering analysis program accepts data that have a pre-defined order, executes
analysis algorithms without interruption, and produces a resultant data report. An example of an
indeterminate application is a multi-user OS, which accepts inputs that have varied content and arbitrary
timing, executes algorithms that can be interrupted by external conditions, and produces
output that varies as a function of environment and time.
Based on the above two factors, the following software areas indicate breadth of potential
applications:
1) System Software – collection of programs and utilities for providing service to other
program.
 Some system software processes complex but determinate information structures (e.g., compilers,
editors, file management utilities).
 Other system applications process largely indeterminate data (e.g., OS components,
drivers, telecommunications processors).
 The system software area is characterized by heavy interaction with computer hardware and
usage by multiple users.
 It also involves concurrent operation that requires scheduling, resource sharing, and sophisticated
process management, complex data structures, and multiple external interfaces.

2) Real-time Software – software that analyzes, monitors, and controls real-world events as they
occur in real time. Its elements include:
 Data gathering component that collects and formats information from an external
environment.
 Analysis component that transforms information as required by the application.
 Control / output component that responds to the external environment.
 Monitoring component that co-ordinates all other components so that real-time
response can be maintained (ranging from 1 millisecond to 1 second)
3) Business Software – this is the largest single software application area.
 Discrete systems like payroll, accounts receivable/payable, and inventory have evolved
into MIS (Management Information Systems) that access one or more large
databases containing business information.
 Applications in this area restructure existing data in a way that facilitates business
operations or management decision-making.
 Also contain interactive computing (eg., point-of-sale processing)
4) Engineering & Scientific Software – has been characterized by “number crunching” algorithms.
 Applications range from
 Astronomy to volcanology
 Automotive stress analysis to space shuttle orbital dynamics
 Molecular biology to automated manufacturing
 Computer–aided design, system simulation and other interactive applications have
begun to take on real-time and system software characteristics.
5) Embedded Software – resides in read-only memory and is used to control products
and systems for the consumer and industrial markets.
 It can perform very limited and esoteric functions (e.g., keypad control for a microwave
oven).
 It can also provide significant function and control capability (e.g., digital functions in an
automobile such as fuel control, dashboard displays, and braking systems).
6) Personal Computer Software – this has taken the world into a new era.
 Word processing, spreadsheets, computer graphics, multimedia, entertainment,
database management and database access, personal and business financial
applications, and external network access.

7) Web-based Software – web pages retrieved by a browser are software that incorporates
executable instructions (e.g., HTML, Java, Perl) and data (e.g., hypertext and a variety of
visual and audio formats).
 The network becomes a massive computer providing an almost unlimited software
resource that can be accessed by anyone.
8) Artificial Intelligence Software – makes use of non-numerical algorithms to solve
complex problems that are not amenable to straightforward analysis.
 Examples include expert systems (also called knowledge-based systems), pattern recognition (image
and voice), games, and artificial neural networks.
1.1.5. Definition of software engineering
Software Engineering is a branch of computer science that deals with the construction of high
quality software systems by a team of professionals. A good definition of software engineering
which captures its essence is given by Parnas as “a multi-person construction of multi-version

software.” Or it is a systematic approach to the development, operation, maintenance and
retirement of software.
(i) The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software, i.e., the application of engineering to software; and
(ii) the study of approaches as in (i).
Software engineering is the establishment and use of sound engineering principles in order to
obtain, economically, software that is reliable and works efficiently on real machines.

1.1.6. History of Software Engineering


In the early days of computing, software development was mainly a single-person task. The
problem to be solved, very often of a mathematical nature, was well understood, and there was no
distinction between the programmer and the end user of the application. The end user, very often
a scientist or an engineer, developed the application as a support to his or her own activity. The
application, by today's standards, was rather simple. Thus, software development consisted only
of coding in some low-level language.
The model used in these early days may be called the code-and-fix model. Basically, this term
denotes a development process that is neither precisely formulated nor carefully controlled; rather,
software production consists of the iteration of two steps: (1) write code and (2) fix it to eliminate
errors, enhance existing functionality, or add new features. The code-and-fix model has been the
source of many difficulties and deficiencies. In particular, after a sequence of changes, the code
structure becomes so messy that subsequent fixes become harder to apply and the results become
less reliable. These problems, however, were mitigated by the fact that applications were rather
simple and both the application and the software were well understood by the engineer.
As hardware capacity grew, the desire to apply computers in more and more application domains,
such as business administration, led to software being used in less and less understood
environments. A sharp separation arose between software developers and end users. End users
with little or no technical background in science and mathematics, such as sales agents and
personnel administrators, could not be expected to develop their own applications, due both to the
intrinsic complexity of the application's design and implementation and to the users' lack of
technical background required to master the complexity of computer systems.
In today's environment, software is developed not for personal use, but for people with little or no
background in computers. Sometimes, software is developed in response to a request from a
specific customer; other times, it is developed for the general market. Both of these situations add
new dimensions to the software that were not present in the previous age. Now software is a
product that must be marketed, sold, and installed on different machines at different sites. Users
must be trained in its use and must be assisted when something unexpected happens.
Thus, economic, as well as organizational and psychological, issues become important. In addition,
demand has increased for much higher levels of quality in applications. For example, reliability
requirements have become more stringent. One reason is that end users are not as tolerant of
system failures as are system designers. Another reason is that computer-based systems are
increasingly applied in areas such as banking operations and plant control, where system failures
may have severe consequences.
Another sharp difference from the previous age is that software development has become a group
activity. Group work requires carefully thought-out organizational structures and standard
practices, in order to make it possible to predict and control developments.
The code-and-fix process model was inadequate to deal with the new software age. First, the
increased size of the systems being developed made it difficult to manage their complexity in an
unstructured way. This problem was made worse by the turnover of software personnel working on
projects. Adding new people to an ongoing project was extremely difficult because of the poor
documentation available to guide them in the task of understanding the application properly.
Fixing code was difficult because no anticipation of change was taken into account before the start
of coding. Similarly, it was difficult to remove errors that required major restructuring of the
existing code. These problems underscored the need for a design phase prior to coding.
The second reason behind the inadequacy of the code-and-fix model was the frequent discovery,
after development of the system, that the software did not match the user's expectations. So the
product either was rejected or had to be redeveloped to achieve the desired goals. Software
development became a sort of never-ending activity. As a result, the development process was
unpredictable and uncontrollable, and products were completed over schedule and over budget and
did not meet quality expectations. Consequently, it was realized that a more detailed and careful
analysis of the requirements was necessary before design and coding could start.
The failure of the code-and-fix process model led to the recognition of the so-called software crisis
and, in turn, to the birth of software engineering as a discipline. In particular, the recognition of a
lack of methods in the software production process led to the concept of the software life cycle.

1.1.7. Software complexity


In the earlier days of computing, programming was primarily a personal activity. The problems handled
were mainly well-defined scientific problems. For instance, a physicist may want to
solve a differential equation he formulated in his research. So, he may develop
the necessary algorithm or select one among the already developed ones and
code a program using one programming language. The physicist then debugs
the program, fixes the bugs encountered, executes it and gets the result he is
interested in. Then the developer may keep the program for future use or
discard it right away. In this type of situation, the problem is well understood
by the developer. The developer himself does the design and coding and
hence identification of bugs (or errors) and fixing them is not a difficult task.
As the developer is also the user of the system, no problem is encountered in
using the program. Development of such software is not that difficult and the
problem is not so complex to go beyond an individual understanding power.
However, today, the majority of problems requiring software solution are not
as simple as the ones discussed above. In these problems, we find
applications that exhibit a very rich set of behaviors, as for example, in
reactive systems that drive or are driven by events in the physical world, and
for which time and space are scarce resources; applications that maintain the
integrity of hundreds of thousands of records of information while allowing
concurrent updates and queries; and systems for the command and control of
real world entities, such as airplanes, space shuttles, etc. These systems tend
to have a long life span, and over time many users come to depend upon their
proper functioning.

The development of such industrial strength software systems is so complex


that it is beyond the intellectual capacity of any single individual. Such systems are
developed by a team of professionals, and it is intensely difficult, if not
impossible, for an individual developer to comprehend all the subtleties of
their design. Interestingly, the complexity seems to be an essential property of
all large software systems, that is to say, it may be mastered, but can never
be eliminated.

1.1.8. Why software is inherently complex


According to Grady Booch, the inherent complexity of software arises from the
following four elements:
The complexity of the problem domain- quite often, the problems seeking a
software solution involve elements of unavoidable complexity, in which a myriad
of competing or even contradictory requirements are found. Consider, for
instance, the requirements for the electronic systems of a multi-engine
aircraft. Besides its functional requirements, we also find non-functional
requirements such as usability, performance, cost, survivability, and reliability.
These requirements pose a great deal of unconstrained complexity. The
complexities are even made worse due to communication gap between the
users of the system and its developers. To complicate matters, the
requirements of the software system often change during its development, as
both the developers and the users better understand the problem through the
development process.
The difficulty of managing the development process- the development
of large software systems involves a team of professionals whose size may
range from a few dozen to hundreds. This requires smooth communication
among the developers and a good coordination of the team to lead to the
specified goal. Each member of the team may participate only in a portion of
the software system, but his/her work may affect the work of quite a large
number of other participants. While the communication and coordination
process by themselves are sources of great complexity, some developers may
leave the team in the middle of the development and new ones may join the
developing team.
The flexibility possible through software- as software offers the ultimate
flexibility, it is possible for the developer to express almost any kind of
abstraction. On the other hand, this flexibility also forces the developer to
develop all the primitive building blocks of the high level abstraction. Because of
this, unlike other areas such as the electronics industry, the software industry
has not been able to establish well developed standards.
The problem of characterizing the behavior of discrete systems- our
application, which may encompass hundreds or even thousands of variables
and more than one thread of control, runs on a digital computer. Therefore, the
application is modeled in discrete form. Unlike analog systems, in which small
variations in inputs or in certain intermediate states do not normally result in
unexpectedly large variations in the output, discrete systems may be quite
unpredictable. A small change in an input or event can change the state of the
system completely. If the system is not properly designed, this change may result in quite
unexpected output.
1.1.9. The need for an engineering approach
It was in the middle to late 1960s that truly large software systems were first
attempted commercially. The complex nature of these systems and how to
tackle them were not well understood. Attempts to scale up the
techniques used by individual programmers on small systems could not
work for large systems. Large software systems were over budget and behind
schedule. Even after long delays, they were found to contain a substantial number of
bugs and failed to meet the expected requirements. This situation gave rise to the term
software crisis.

A number of international conferences were held to discuss the difficulties
faced during software development, and hence the software crisis, and many
solutions were proposed. Some suggested better development tools, including
programming languages. Others argued for better management techniques.
Still others proposed different team organizations. Many called for
organization-wide standards such as uniform coding conventions. There was
no lack of ideas. However, the final consensus was that software development
should be approached in the same way as engineers had built other large
complex systems such as computers, airplanes, bridges, factories and ships.
This has given rise to software engineering.
Software engineering is an outgrowth of system and hardware engineering. It is
concerned with designing high quality software and mastering complexity. To
master complexities, software engineers employ the engineering approach
which requires defining the problem clearly and then applying standard
methodologies, tools and techniques for solving it. Defining the problem may
take two forms: analyzing the problem to determine its nature and then
synthesizing a solution based on the analysis. In short, the engineering
approach requires design, management, organization, tools, theories,
methodologies, and techniques.
1.1.10. Goal of software engineering
 To take SW development closer to science and engineering, and away
from ad hoc development approaches whose outcomes are not
predictable.
 To systematically develop software to satisfy the needs of clients (the
people whose needs are to be satisfied by the software).
1.1.11. Role of software engineer
The evolution of software engineering has defined the role of the software
engineer and the required experience and education. A software engineer
must be:

 a good programmer, fluent in one or more programming languages
 well-versed in data structures and algorithms
The above are requirements for “programming in the small”. But a software
engineer is involved in “programming in the large”, which requires:
 familiarity with several design approaches
 ability to translate vague requirements and desires into precise
specifications,
 ability to converse with the user of a system in terms of the
application rather than in “computerese.”
 flexibility and openness to grasp, and become conversant with,
the essentials of different applications
 the ability to move among several levels of abstraction at different
stages of the project, from specific application procedures and
requirements, to abstractions for the software system, to specific
design for the system, and finally to the detailed coding level
 the ability of modeling. The software engineer must be able to build
and use a model of the application to guide choices of the many
trade-offs that he or she will face. The model is used to answer
questions about the behavior of the system and its performance.
Preferably, the model can be used by the engineer as well as the
user.
 communication and interpersonal skills, as the engineer works in a
team
 ability to schedule work, both his or her own and that of others
(i.e., management skills)
Generally, a software engineer is responsible for many things and is different
from a programmer. “Scientists measure their results against nature.
Engineers measure their results against human needs. Programmers ... don't
measure their results.”

1.1.12. Principles of software engineering


In order to achieve the software qualities, software engineering must have sound principles to
base itself upon. These principles apply both to the process of software engineering and its final
product, the software.
The seven most important principles that have to be applied during software development are: rigor
and formality, separation of concerns, modularity, abstraction, anticipation of change, generality,
and incrementality.

Rigor and formality


Software engineering is a creative design activity, but it must be practiced systematically. Rigor is a
necessary complement to creativity that increases our confidence in our developments.
Rigor is a very important principle to be followed because it is only through a rigorous
approach that we can produce more reliable products, control their costs, and increase our
confidence in their reliability.
Various degrees of rigor may be achieved, and the highest degree is called formality. Thus, formality is a
stronger requirement than rigor: it requires the software process to be driven and evaluated by
mathematical laws. An example of a formal process in software development is programming. Formality
should also be applied to other phases. It may not always be necessary to be formal, but the
engineer must know when and how to be formal so as to apply formality when the need arises.
Separation of concerns
It is the most important (or even the only) principle applied to master complexity. It allows us to
deal with different individual aspects of the problem so that we can concentrate on each separately.
It may be applied in a number of ways. One may separate concerns in terms of time, quality,
views, or parts.
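As a small illustration (a hedged Python sketch; the function names and data format are invented for this example), parsing, validation, and calculation are kept as separate concerns so that each can be understood, tested, and changed on its own:

def parse_order(line):
    # Concern 1: turning raw text into structured data.
    item, qty, unit_price = line.split(",")
    return {"item": item, "qty": int(qty), "unit_price": float(unit_price)}

def validate_order(order):
    # Concern 2: business rules, independent of parsing and pricing.
    return order["qty"] > 0 and order["unit_price"] >= 0

def order_total(order):
    # Concern 3: the actual computation.
    return order["qty"] * order["unit_price"]

order = parse_order("pencil,3,0.50")
if validate_order(order):
    print(order_total(order))   # prints 1.5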
Modularity
A complex system must be divided into simpler pieces called modules and a system that is
composed of modules is called modular.
Modularity allows the principle of separation of concerns to be applied in two phases: when
dealing with the details of each module in isolation (and ignoring details of other modules) and
when dealing with the overall characteristics of all modules and their relationships in order to
integrate them into a coherent system.
Modularity in software development tries to achieve three goals: capability of decomposing a
complex system, of composing it from existing modules, and of understanding the system in
pieces.
To achieve these goals, modules must have high cohesion and low coupling. If the elements of a
module are related to each other strongly, then we say that the module exhibits high
cohesion. Coupling is a property that exists between two modules. If two modules depend upon
each other very heavily, then there is high coupling between them.
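As an illustration (a minimal Python sketch; the module and function names are hypothetical), the two modules below each group strongly related operations (high cohesion) and interact only through a narrow, explicit interface (low coupling):

# pay_calculation.py (hypothetical module): every function concerns pay
# computation only, so the module has high cohesion.
def gross_pay(hours_worked, hourly_rate):
    return hours_worked * hourly_rate

def net_pay(gross, tax_rate):
    return gross * (1.0 - tax_rate)

# pay_slip.py (hypothetical module): concerned only with presentation; it
# depends on the other module solely through the plain numbers it receives,
# so the coupling between the two modules is low.
def format_pay_slip(name, net):
    return "%s: %.2f" % (name, net)

# A change to how tax is computed stays inside pay_calculation.py and does
# not force any change to pay_slip.py.
print(format_pay_slip("Abebe", net_pay(gross_pay(40, 10.0), 0.15)))   # Abebe: 340.00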
Abstraction
Abstraction is another case of separation of concerns. It is a process by which we concentrate on
important aspects of a phenomenon, leaving aside its details. It is a very important principle
applied to master complexity.
The level of abstraction depends upon the purpose of the abstraction. For instance, a computer
system will present different levels of abstractions for the end user, the application
programmer, the systems programmer, the hardware maintainer and its designer. In software
development, one needs to move at different levels of abstraction.
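For example (a minimal Python sketch; the class is hypothetical), a Stack abstraction lets its users push and pop items while ignoring how the items are stored internally:

class Stack:
    # Users of the abstraction see only push, pop and is_empty;
    # how the items are stored is a hidden detail.
    def __init__(self):
        self._items = []          # internal representation, hidden from users

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # the list could later be replaced by a
                                  # linked structure without affecting users

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(10)
s.push(20)
print(s.pop())   # 20 - the caller reasons only at the level of the abstraction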
Anticipation of change
One important quality of software discussed is evolvability. To achieve this quality, software must be
designed and developed by anticipating changes likely to occur. When the likely changes are identified,
the software must be developed in such a way that future changes will be made easily.
Reusability is another quality which is greatly affected by anticipation of change. Before a
component is reused, it may need to be modified. If the change is anticipated and the component is
developed accordingly, then making the modification will be quite easy.
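A small Python sketch makes the idea concrete (the names and the 25% rate are purely illustrative): a value that is likely to change, such as a tax rate, is isolated in one place so that a future change touches a single definition rather than every computation that uses it.

# The tax rate is a value that is likely to change, so it is isolated in one
# place; a future change touches this single definition only.
VAT_RATE = 0.25   # hypothetical figure used for illustration

def price_with_tax(net_price):
    # Every tax computation goes through the single definition above.
    return net_price * (1.0 + VAT_RATE)

print(price_with_tax(100.0))   # 125.0 under the assumed rate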

Generality
The principle of generality states that whenever you are asked to solve a certain problem, try to focus on
the discovery of a more general problem that may be hidden behind the problem at hand. It may happen
that the generalized problem is not more complex than the original problem; it may even be simpler.
Being more general, it is likely that the solution to the problem has more potential for being
reused. It may even happen that the solution is already provided by some off-the-shelf package. Also, it
may happen that by generalizing a problem, you end up designing a module that is invoked at more than
one point of the application, rather than having several specialized solutions. Generality is an important
principle if our goal is to develop general tools or packages for the market.
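For instance (a hedged Python sketch; the function and data names are invented), instead of writing one routine that finds the cheapest product and another that finds the product with the shortest name, the more general problem “find the minimum of a sequence under some key” can be solved once and reused at both points:

def minimum_by(items, key):
    # General solution: the smallest element of items according to key.
    best = items[0]
    for item in items[1:]:
        if key(item) < key(best):
            best = item
    return best

products = [("notebook", 2.5), ("pen", 12.0), ("bag", 7.0)]
cheapest = minimum_by(products, key=lambda p: p[1])            # specialised use 1
shortest_name = minimum_by(products, key=lambda p: len(p[0]))  # specialised use 2
print(cheapest, shortest_name)   # ('notebook', 2.5) ('pen', 12.0)

In this particular case the generalized solution already exists off the shelf as Python's built-in min(items, key=...), which illustrates the point about reuse.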
Incrementality
Incrementality characterizes a process that proceeds in a step-wise fashion, in increments. The desired
goal is reached by successively closer approximations to it. Each approximation is reached by an increment
over the previous one.
Summary of software principle
The software engineering principles just discussed in the previous section are general principles
that should be applied to all phases of the software development process. Therefore, all of them are
applicable to the design phase.
The principle of rigor and formality inspires us to adopt appropriate notations for describing the resulting
design. Separation of concerns, modularity, and abstraction are used to tame the complexity of the
design activity and produce designs that are characterized by a high level of understandability, enhancing
our confidence in the correctness of our solutions. Finally, anticipation of change and incrementality
allow us to design systems that can evolve easily as requirements change, or systems that can be
enriched progressively in their functions, starting from a small initial version with only limited
functionality.
1.1.13. Relationship between software engineering and computer science
Standing now on its own as a discipline, software engineering has emerged as an important field of
computer science. Indeed, there is a synergistic relationship between it and many other areas in
computer science:
a. Programming languages: these are the central tools used in software development. The most
notable example of software engineering's influence on programming languages is the inclusion of modularity
features, such as separate and independent compilation, and the separation of specification
from implementation, in order to support the team development of large software. Programming is
a subset of software engineering: programming remains largely an individual activity, whereas
software engineering is a process involving many people and many other resources as well.
b. Operating systems: The influence of operating systems on software engineering is quite strong
primarily because operating systems were the first really large software systems built, and
therefore, they were the first instances of software that needed to be engineered. Many of the
initial software design ideas originated from early attempts at building operating systems.
c. Databases: databases represent another class of large software systems whose development has
influenced software engineering through the discovery of new design techniques. The most
important influence of the database field on software engineering is through the notion of “data
independence and integrity”, which is yet another instance of the separation of specification
from implementation.

d. Artificial intelligence: many software systems built in the artificial intelligence research
community are large and complex systems. But they have been different from other software
systems in significant ways. Many of them were built with only a vague notion of how the
system was going to work. The term “exploratory development” has been used for the process
followed in building these systems.
e. Theoretical models: theoretical computer science has developed a number of models that have become useful
tools in software engineering. For example, finite state machines have served both as the basis
of techniques for software specification and as models for software design and structure.
Communication protocols and language analyzers use finite state machines as their processing
models.
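As an illustration of the finite state machine model (a minimal Python sketch; the states and events are hypothetical), a simple door controller can be described purely as states, events, and transitions:

# A door controller described as states, events, and a transition table.
TRANSITIONS = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    # Return the next state; events not defined for a state leave it unchanged.
    return TRANSITIONS.get((state, event), state)

state = "closed"
for event in ["open", "close", "lock"]:
    state = step(state, event)
print(state)   # locked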
1.1.14. Relationship between software engineering and other disciplines
Software engineering cannot be practiced in a vacuum. There are many problems that are not
specific to software engineering and have been solved in other fields.
a. Management Science: As in any kind of large, multiperson endeavor, we need to do project
estimation, project scheduling, human resource planning, task decomposition and assignment,
and project tracking. Additional personnel issues involve hiring personnel, motivating people,
and assigning the right people to the right tasks.
b. System engineering: this field is concerned with studying complex systems. The underlying
hypothesis is that certain laws govern the behavior of any complex systems composed of many
components with complex relationships. System engineering is useful when one is interested in
concentrating on the system as a whole, as opposed to its individual components. Systems engineering tries
to discover common themes that apply to diverse systems, for example, chemical plants,
buildings, and bridges.
1.2. Software Development Life Cycle
Starting from the inception of the idea, until it is implemented and delivered and even after
that, a software system undergoes a gradual development and evolution process. This is
said to be the life cycle of the software and it is composed of different phases. Each phase
results in the development of either part of the software or something associated with it.
The earliest code-and-fix approach to software development could not help. In fact, it led to
the recognition of the software crisis and thus the birth of software engineering. Since then,
software engineering has tried to define different models for representing the software life
cycle. Software development life cycle comprises the following phases.
 Feasibility study. Before the beginning of a development process of large software, one
has to conduct a feasibility study in order to assess and evaluate the costs and
benefits of the proposed application. Feasibility study tries to anticipate future
scenarios of the software development. Its result is a document that should contain at least
the following items: 1) a definition of the problem, 2) alternative solutions and their
expected benefits, 3) required resources, costs, and delivery dates in each proposed
alternative solution.

 Requirement analysis and specification. Requirement analysis is usually the first step
of any large-scale software development done after a feasibility study. In this phase,
the functional and non-functional requirements of the software system are identified,
analyzed and then specified. The requirements in here are specified in end user terms.

 Design and specification. The next step after requirement analysis and specification is to
design the software system which meets the specified requirements. This phase may be
divided into two sub phases: architectural or high-level design and detailed design.
In the architectural design, the engineer concentrates on the overall modular
structure of the system. The different modules that comprise the system are
defined without elaborating their internal details. In the detailed design phase, each module
is designed by specifying its internal details.

 Coding and module testing. Once all modules comprising the system are designed, each
will then be coded and tested independently. This is the phase, which produces the actual
code to be submitted to the customer.

 Integration and system testing. The different modules developed and tested in the
previous phase are integrated and then tested in this phase. The integration and testing may
be done step-by-step in which some modules will first be integrated and tested, and when
successful, more modules will be added. Another way is to put all modules together and
then test the whole system.
 Delivery and maintenance. Once the system passes all the test phases, it is delivered to the
customer and enters a new phase, the maintenance phase. This is the phase in which
errors are corrected and the software is adapted to changing requirements and
environments. In this phase, the software is also perfected in terms of its
performance.

1.3. Software Process Models


The goals of software process models are to determine the order of stages involved in software
development and evolution, and to establish the transition criteria for progressing from one stage
to the next. These include completion criteria for the current stage plus choice criteria and
entrance criteria for the next stage.
According to this viewpoint, process models have a twofold effect:
1. They provide guidance to software engineers on the order in which the various technical
activities should be carried out within a project;
2. They provide a framework for managing development and maintenance, in that they enable
us to estimate resources, define intermediate milestones, monitor progress, etc.
1.3.1. Waterfall Model
The Waterfall life cycle model was introduced by Winston Royce in 1970 and is currently the most
commonly used model for software development. It is based on developing software in a set of
sequential phases.

The waterfall lifecycle model is the most straightforward model and it is document driven. It views
software development as a series of processes that are expected to be done only once for the
project. They are to be done in the proper order and the results documented for input to subsequent
processes.

One of the most critical features of the Waterfall Model is that the phases do not overlap. Design
cannot begin until analysis is completed, and testing cannot begin until coding is completed.

Fig 1.3. Waterfall model (original waterfall model): Feasibility Study → Requirement Analysis
and Specification → Design and Specification → Coding and Module Testing → Integration and
System Testing → Delivery → Maintenance

i. Feasibility Study
 Main aim is to determine whether developing the product is financially and technically

feasible. The purpose is to produce a feasibility study document that evaluates the costs
and benefits of the proposed application. It is highly dependent on the type of SW
developer and the application.
 Document should contain

 Definition of the problem


 Formulation of different solution strategies
 Alternative solutions and their expected benefits
 Cost/benefit analysis performed to determine which solution is the best
 Identification of any solution that is not feasible due to high cost, resource constraints, etc.
 Required resources, costs, delivery dates in each proposed alternative solution
ii. Requirement Analysis and Specification (What to solve)
 Studied by the customer, developer and the marketing organization.

 New project – much interaction is required between the user and the developer

 Functional Requirements: what the product does.

 Non-Functional Requirements: these may be classified into the following categories:
reliability (availability, integrity, etc.), accuracy, performance, operating constraints,
human-computer interface issues, portability issues, etc.
 Requirements on the development and maintenance process: system test procedures,

priorities of the required functions.


iii. Design Specification (How to solve)
 Description of architecture, module description and relationships.

 High level /architectural design: Deals with overall module structure and organization,

rather than the details of the modules.

 Detailed design: High level design is refined by designing each module in detail.
 The design process translates requirements into a representation of the software that can be
assessed for quality before coding begins. Like requirements, the design is documented
and becomes part of the software configuration.
iv. Code generation
 The design must be translated into a machine-readable form. The code generation step

performs this task. If design is performed in a detailed manner, code generation can be
accomplished mechanistically.
v. Testing
 Once code has been generated, program testing begins. The testing process focuses on the

logical internals of the software, ensuring that all statements have been tested, and on the
functional externals; that is, conducting tests to uncover errors and ensure that defined
input will produce actual results that agree with required results.
 Individual modules are tested before being delivered to the customers.

o Alpha Testing: Carried out by the test team inside the organization.
o Beta Testing: Performed by a selected group of friendly customers.
o Acceptance Testing: Performed by the customer to determine whether to accept the product.

vi. Support
 Software will undoubtedly undergo change after it is delivered to the customer. Change

will occur because errors have been encountered, because the software must be adapted to
accommodate changes in its external environment (e.g., a change required because of a
new operating system or peripheral device), or because the customer requires functional or
performance enhancements.
 In other words, maintenance work involves performing any one/more of the following

activities:
o Corrective Maintenance : Correcting errors – not discovered during the product
development phase.
o Perfective Maintenance : Improving the implementation and enhancing the
functionalities according to customer’s requirements.
o Adaptive Maintenance : Porting the SW to a new environment.
Strengths of the Waterfall model
 The model is well known to non-software parties, making it easier to communicate with them.

 It offers a disciplined approach to software development because all the steps of the project

can be clear from the beginning.


 It is document driven.

 It is a good model to use when requirements are well understood and are quite stable.

 Requirements are in the hands of the testers early.

 It allows for tight planning and control by project managers.

 Late changes to requirements or design are limited.

Weaknesses of the Waterfall model
 The Model is inflexible to change, as each phase's results are frozen.

 Length of time before anything usable by the customer is produced and the implied

requirement to complete every process correctly the first time.


 The Waterfall Model forces the software engineer to completely fulfill the scope of each

phase before moving on. This is not always a realistic situation. It requires the engineer to
“freeze” the activities of each phase. Requirements are frozen while design is taking place.
 To resolve this disadvantage, the Waterfall Model may sometimes include a "feedback

loop" where the engineer can return to a previous phase. This requires very strict
configuration management and change control.
 Major decisions must be made early, when knowledge is at its least depth.

 Changing a specification affects everyone: engineers, users, developers, trainers.

 There is a tremendous pressure on testers to prove whether a product is ready or not for

release.
 Requirements must be well-reviewed early. It is not always possible for the user to define

the requirements in full. This cannot be accommodated in this model.


 Although the test plan can be written early, when testing does start, the project is on the
critical path.

 There is no early deliverable. This results in long waits before users can acquire the

systems. Not having early acceptance often results in negative expectations.


When to use Waterfall model
 Because of the various weaknesses of this Model, it is best limited to situations in which

the requirements and the implementations of those requirements are clearly understood
at the beginning.
 It works well for products that are well understood and whose technical methodologies are

also clear.
 It also works well for projects that are repeats of earlier work such as migration, updates,

and new releases.


 It is quite suitable for developing products with which the development team has a lot of
experience, such as when analytic reports are being developed for a Human
Resources Information System by a team that has had experience with similar reporting in
a Payroll System.
1.3.2. Prototyping Model
Software application development efforts often yield systems that are not totally usable at first; the
later releases of such products improve their use. This led to the development of the prototyping
model, which is highly iterative in nature. It is mostly used for developing “proof of concept”
applications that are not completely usable. As the requirements become known, the system gets
refined.
In the case where requirements are not clear, Prototyping may be the most suitable approach. Such
situations as the following require prototyping:
 Users do not have clearly defined procedures and processes

 Systems are complex and are not amenable to clear analysis
 Requirements are changing due to various reasons: markets, regulations, uncommitted
management, etc.
1.3.2.1. What is a Prototype?
It is a quickly developed, easily modifiable and extendible working model of the required
application. A prototype does not necessarily represent the complete system. It is not meant to. It is
only meant to start proving the validity of converting the user requirements into working designs.
In general terms, the Prototype Model follows these steps:
Fig 1.4. Prototyping Model (exploratory): Requirements gathering → Quick design → Build
prototype → Customer evaluation of the prototype → Refine requirements (incorporating customer
suggestions, repeating the cycle) → Acceptance by user → Design → Implement and Test → Maintain


Components in this figure can be interpreted as:
 Analyze requirements in a broad manner. This stage is completed when general lines are
agreed upon between analysts and users.
 A quick design will then take place that is not rigorous but that provides proof in general
terms of the above requirements.
 The above prototype is evaluated by the user and changes and revisions are noted.
 The developer will now revise the system and return to the first step for deeper and more
elaborate analysis.
The above cycle is repeated until the system reaches the final acceptance stage.
Strengths of the Prototyping Model
 The requirements are analyzed in a top down fashion ensuring that no major blocks are

missed.
 Technological solutions are identified early.

 There is a much lower risk of confusion in terms of miscommunication, misunderstanding

user requirements or system features. Communications problems are therefore minimized.


 New or unexpected user requirements can easily be incorporated.

 The prototype can be used as a formal specification because it is tangible and well tried.
 It reassures the user because of the steady and visible progress.
 Prototyping development costs are offset by avoidance of the usual high cost of rework
and repair.
 There will be many opportunities to reappraise requirements or rethink the design.
 It is relatively simple to eliminate useless features.
Weaknesses of the Prototyping Model
 The user may be led to believe that developing software is easy and would hence become

undisciplined while expressing his requirements.


 The resulting system may have been built without consideration of such issues as security,

performance, look and feel, etc., all because the developer is concentrating on delivering
the required functions of the systems. Furthermore, inefficient algorithms may be used,
unsuitable development environments or testing may take place under unrealistic platforms
(Operating systems, database systems, etc). Addressing such issues at a later stage may be
cumbersome and inefficient.
 Lack of early planning may cause project management problems: unknown schedules,

budgets and deliverables.


 Seeing functions in working mode early in the life cycle stimulates users into requesting

additional functions. This may take the project outside the scope of the feasible
requirements. If such extensions are crucial, this weakness becomes a strength.
 There is a tendency to postpone complex requirements till a later stage to achieve early

proof of concept. This may result in systems which may be difficult to modify.
 Developers may be tempted to deliver a reasonably working prototype rather than a tightly

controlled and robust product.


When to use the Prototyping Model?
 When requirements are not known at the beginning of the project, or are unstable and
constantly changing
 When users, for various reasons, do not have the capability of expressing their

requirements clearly or when they are reluctant to commit to a set of clear requirements.
 The model is ideally suited for developing look and feel or user interfaces because such

features cannot be easily documented and are best seen when tested.
 When user requires proof of concept

 When quick demonstrations are required

 When technological features are to be tested

1.3.3. The Spiral Model


The spiral method was originally proposed by Barry Boehm. The model focuses on identifying and
eliminating high risk problems by careful process design (Design Phase).
Consequently, only minimal and manageable risks percolate into the development phase.
Generally the spiral model is divided into four main task regions/stages
 Determine goals, alternatives, and constraints

 Evaluate alternatives and risks
 Develop and test
 Plan next phase

Fig 1.5 Spiral Model

i. Determine goals, alternatives, and constraints


The basic activities that should be performed in this stage are:
 Identify objectives of the portion of the project under consideration in terms of
qualities to achieve.
 Identify alternatives, such as whether to buy, design, or reuse any of the
software.
 Identify constraints in the application of the software.

ii. Evaluate Alternatives and Risks


The basic activities that should be performed in this stage are:
 Evaluation of the identified alternatives
 Potential risk areas are identified and dealt with.
 Risk assessment may require different kinds of activities to be planned such as
prototyping or simulation.
iii. Develop and test
The basic activities that should be performed in this stage are:
 Consists of developing and verifying the next level product.
 The strategy followed by the process is dictated by risk analysis
iv. Plan Next Phase
The basic activities that should be performed in this stage are:
 Consists of reviewing the results of the stages traversed so far and planning for
the next iteration of the spiral if any.

It remains the responsibility of the Project Manager to define and control the number of iterations
in the whole process.
Strengths of the Spiral Model
 It is a realistic approach to the problems of medium to large scale software development

 It can use prototyping during any phase in the evolution of product.

 The main benefit of the spiral model is incorporation of project management methods such

as Risk Management, Quality Management and Configuration Management.


 Because the software development process is iterative and is interspersed with risk

analysis, both users and developers learn to respond to and manage risks at each stage of the
project.
Weaknesses of the Spiral Model
 It requires excellent project management skills. In addition, the project manager must be

prepared to manage from a risk-avoidance viewpoint.


 Developers must be prepared to identify, analyze and mitigate risks which is a skill not

always identified and cultivated.


 There are not as many methods, techniques, and tools available to support it.

 Use of the spiral model may prevent inadequate or unusable software from being built,

resulting in the avoidance of rework and increasing trust between the customers and the
developers.
 It demands expert knowledge of risk analysis and management which may not always be

available. Major risks uncovered at later stages may have drastic impacts on the final
result.
When to use the Spiral Model?
 Projects that would benefit the most are those that would be built on untested assumptions.

 It is suitable in situations where business modeling is clearly understood and in use.

The Win-Win spiral process model is a process model based on Theory W, which is a management
theory and approach “based on making winners of all of the system's key stakeholders as a
necessary and sufficient condition for project success”.
1.3.4. RAD (Rapid Application Development) Model
It is an incremental software development process model that emphasizes an extremely
short development cycle.
→ It is a high-speed adaptation of the linear sequential model in which rapid development is
achieved by using component-based construction.
→ If requirements are well understood, RAD process enables development team to create
a fully functional system within short periods of time.
RAD approach encompasses the following phases:

Fig 1.6. Rapid Application Development Model

 Business modeling – the information flow among business functions is modeled.


 Data modeling–modeling of attributes of each object and relationship among objects
 Process modeling–transformation of data objects to achieve the information flow
necessary to implement a business function
 Application generation– use of fourth generation techniques. Use of existing
reusable components
 Testing and turnover–shorter testing as it uses already tested components
Weaknesses of RAD approach
- Requires sufficient human resources to create the right number of RAD teams.
- Requires developers and customers who are committed to the rapid-fire activities
necessary to get a system complete in the stipulated time.
- Not all application types are appropriate for RAD; if a system cannot be properly modularized,
building the components necessary for RAD will be problematic.
- Not suitable when technical risks are high. This occurs when a new application makes heavy
use of new technology or when the new software requires a high degree of interoperability
with existing programs.
Review Questions
1. Explain the term software and its role.
2. How is software product different from a hardware product?
3. What is software crisis?
4. Define the term “software engineering” and distinguish it from programming.
5. What do you understand by software life cycle?
6. What is software life cycle development model?

