SPM Unit 1

1.What are the 3 stages to be executed in waterfall model in theory?


● In 1970, Winston Royce presented a paper titled “Managing the Development of
Large Software Systems” at IEEE WESCON.
● The paper had three primary points:
1. There are two essential steps common to the development of computer
programs: analysis and coding.
2. In order to manage and control all of the intellectual freedom associated with
software development, one must introduce several other "overhead" steps,
including system requirements definition, software requirements definition,
program design, and testing. These steps supplement the analysis and coding
steps.
3. The basic framework described in the waterfall model is risky and invites failure.
The testing phase that occurs at the end of the development cycle is the first
event for which timing, storage, input/output transfers, etc., are experienced as
distinguished from analyzed.
● Diagram of 3 Models.

2.What are the improvements we can perform on waterfall model to eliminate most of the
development risks?
● Five necessary improvements to the waterfall model are:
1. Program design comes first
a. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase
b. By this technique, the program designer assures that the software will not
fail because of storage, timing, and data flux (continuous change).
c. If the total resources to be applied are insufficient or if the embryonic (in
an early stage of development) operational design is wrong, it will be
recognized at this early stage and the iteration with requirements and
preliminary design can be redone before final design, coding, and test
commences.
2. Document the design
a. The amount of documentation required on most software programs is
quite a lot, certainly much more than most programmers, analysts, or
program designers are willing to do if left to their own devices. Why do we
need so much documentation?
b. Each designer must communicate with interfacing designers, managers,
and possibly customers.
c. During early phases, the documentation is the design
d. The real monetary value of documentation is to support later
modifications by a separate test team, a separate maintenance team, and
operations personnel who are not software literate.
3. Do it twice
a. If a computer program is being developed for the first time, arrange
matters so that the version finally delivered to the customer for
operational deployment is actually the second version insofar as critical
design/operations are concerned.
b. Note that this is simply the entire process done in miniature, to a time
scale that is relatively small with respect to the overall effort.
4. Plan, control, and monitor testing
a. Without question, the biggest user of project resources (manpower,
computer time, and/or management judgment) is the test phase.
b. This is the phase of greatest risk in terms of cost and schedule.
c. The previous three recommendations were all aimed at uncovering and
solving problems before entering the test phase. However, even after
doing these things, there is still a test phase and there are still important
things to be done, including:
d. Employ a team of test specialists who were not responsible for the
original design
e. Employ visual inspections to spot the obvious errors. Example: jumps to
wrong addresses.
f. Test every logic path
g. Employ the final checkout on the target computer.
5. Involve the customer
a. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery
b. There are three points following requirements definition where the insight,
judgment, and commitment of the customer can bolster the development
effort.
c. These include a "preliminary software review" following the preliminary
program design step, a sequence of "critical software design reviews"
during program design, and a "final software acceptance review".

3.Protracted Integration and Late Design Breakage


● For a typical development project that used a waterfall model management process the
following sequence was common:
1. Early success via paper designs and thorough (often TOO thorough) briefings.
2. Commitment to code late in the life cycle.
3. Integration nightmares (unpleasant experience) due to unforeseen
implementation issues and interface ambiguities.
4. Heavy budget and schedule pressure to get the system working.
5. Late shoehorning of suboptimal fixes, with no time for redesign.
6. A very fragile, unmaintainable product delivered late.
● Diagram of development progress versus time.
● Diagram of Expenditure
● In the conventional model, the entire system was designed on paper, then implemented
all at once, then integrated.
● Only at the end of this process was it possible to perform system testing to verify that the
fundamental architecture (interfaces and structure) was sound.

4.Late risk resolution


● A serious issue associated with the waterfall lifecycle was the lack of early risk
resolution
● The waterfall model includes four distinct periods of risk exposure, where risk is defined
as the probability of missing a cost, schedule, feature, or quality goal
● Diagram of risk profile

5.Requirements-Driven Functional Decomposition


● This approach depends on specifying requirements completely and unambiguously
before other development activities begin. It naively treats all requirements as equally
important, throughout the software development life cycle.
● Another property of the conventional approach is that the requirements were typically
specified in a functional manner
● The software itself was decomposed into functions; requirements were then allocated to
the resulting components. This decomposition was often very different from a
decomposition based on object-oriented design and the use of existing components.
● Diagram
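The contrast above can be sketched in code. This is an illustrative sketch only (all names are hypothetical, not from the source): a functional decomposition splits the system into functions and allocates requirements to them, while an object-oriented decomposition groups data together with the behavior that owns it.

```python
# Functional decomposition: the system is split into functions,
# and each requirement is allocated to one of them (illustrative names).
def read_sensor_data(raw): ...
def validate_data(data): ...
def generate_report(data): ...

requirement_allocation = {
    "REQ-1 acquire telemetry": read_sensor_data,
    "REQ-2 reject bad frames": validate_data,
    "REQ-3 produce daily report": generate_report,
}

# Object-oriented decomposition: data and the behavior that owns it
# are grouped into one component; requirements map onto the component
# rather than onto isolated functions.
class TelemetryChannel:
    def __init__(self):
        self.frames = []

    def acquire(self, raw):
        self.frames.append(raw)

    def valid_frames(self):
        return [f for f in self.frames if f.get("ok")]

channel = TelemetryChannel()
channel.acquire({"ok": True, "value": 7})
channel.acquire({"ok": False})
print(len(channel.valid_frames()))  # 1
```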

6.Adversarial Stakeholder Relationships


● The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an
intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to
30 days) a final version for approval.
● The overhead of such a paper exchange review process was intolerable.

7.Focus on Documents and Review Meetings


● The conventional process focused on producing various documents that attempted to
describe the software product
● Contractors were driven to produce literally tons of paper to meet milestones and
demonstrate progress to stakeholders, rather than spend their energy on tasks that
would reduce risk and produce quality software.

8.What are Barry Boehm's Industrial Software Metrics


1. Finding and fixing a software problem after delivery costs 100 times more than finding
and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of
source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in
1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual
software programs. Software-system products (i.e., system of systems) cost 9 times as
much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.
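Several of these ratios combine into a quick back-of-the-envelope calculation. The sketch below applies metrics 1, 3, and 8; only the ratios come from Boehm, and all dollar figures are invented for illustration.

```python
# Illustrative application of some of Boehm's ratios.
# All input figures are hypothetical; only the ratios come from the metrics above.

dev_cost = 100_000            # assumed development cost in dollars

# Metric 3: $2 of maintenance for every $1 of development
maintenance_cost = 2 * dev_cost
total_lifecycle_cost = dev_cost + maintenance_cost

# Metric 1: a defect fixed after delivery costs ~100x an early-design fix
early_fix_cost = 50           # assumed cost to fix during early design
late_fix_cost = 100 * early_fix_cost

# Metric 8: cost per SLOC scales 1x / 3x / 9x from individual program
# to software product to software-system product
program_cost_per_sloc = 10    # assumed cost for an individual program
product_cost_per_sloc = 3 * program_cost_per_sloc
system_of_systems_cost_per_sloc = 9 * program_cost_per_sloc

print(total_lifecycle_cost)              # 300000
print(late_fix_cost)                     # 5000
print(system_of_systems_cost_per_sloc)   # 90
```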

9.Explain software economics


● Most software cost models can be abstracted into a function of five basic parameters:
size, process, personnel, environment, and required quality
1. The size of the end product (in human-generated components), which is typically
quantified in terms of the number of source instructions or the number of function
points required to develop the required functionality
2. The process used to produce the end product, in particular the ability of the
process to avoid non-value-adding activities (rework, bureaucratic delays,
communications overhead)
3. The capabilities of software engineering personnel, and particularly their
experience with the computer science issues and the applications domain issues
of the project
4. The environment, which is made up of the tools and techniques available to
support efficient software development and to automate the process
5. The required quality of the product, including its features, performance,
reliability, and adaptability
● The relationships among these parameters and the estimated cost can be written as
follows: Effort = (Personnel)(Environment)(Quality)(Size^Process)
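The relationship above can be sketched as a simple function. The parameter values below are invented for illustration; the abstract model only says that effort is multiplicative in the personnel, environment, and quality factors, and exponential in size via the process exponent.

```python
def estimate_effort(size, process_exponent, personnel, environment, quality):
    """Abstract cost-model form: Effort = Personnel * Environment * Quality * Size^Process.

    size             -- end-product size (e.g., KSLOC or function points)
    process_exponent -- >1 means a diseconomy of scale; a better process
                        pushes the exponent toward 1
    personnel, environment, quality -- multiplicative adjustment factors
    """
    return personnel * environment * quality * (size ** process_exponent)

# Hypothetical project: 100 KSLOC with a modest diseconomy of scale
baseline = estimate_effort(100, 1.2, personnel=1.0, environment=1.0, quality=1.0)
# The same project with a more mature process (exponent closer to 1) costs less:
improved = estimate_effort(100, 1.1, personnel=1.0, environment=1.0, quality=1.0)
print(improved < baseline)  # True
```

This makes the model's main lever visible: improving the process shrinks the exponent, which matters far more on large projects than on small ones.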

10.The three generations of software development


● The three generations of software development are defined as follows:
1. Conventional: 1960s and 1970s, craftsmanship. Organizations used custom
tools, custom processes, and virtually all custom components built in primitive
languages. Project performance was highly predictable in that cost, schedule,
and quality objectives were almost always underachieved.
2. Transition: 1980s and 1990s, software engineering. Organizations used more
repeatable processes and off-the-shelf tools, and mostly (>70%) custom
components built in higher-level languages. Some of the components (<30%)
were available as commercial products, including the operating system, database
management system, networking, and graphical user interface.
3. Modern practices: 2000 and later, software production. Organizations used
integrated automation environments and mostly (70%) off-the-shelf components.
Perhaps as few as 30% of the components needed to be custom built.
● Diagram

11.PRAGMATIC SOFTWARE COST ESTIMATION


● One critical problem in software cost estimation is a lack of well-documented case
studies of projects that used an iterative development approach
● There have been many debates among developers and vendors of software cost
estimation models and tools. Three topics of these debates are of particular interest
here:
1. Which cost estimation model to use?
2. Whether to measure software size in source lines of code or function points.
3. What constitutes a good estimate?
● There are several popular cost estimation models (such as COCOMO, CHECKPOINT,
ESTIMACS, KnowledgePlan, Price-S, ProQMS, SEER, SLIM, SOFTCOST, and
SPQR/20)
● COCOMO is also one of the most open and well-documented cost estimation models.
● Most real-world use of cost models is bottom-up (substantiating a target cost) rather than
top-down (estimating the "should" cost).
● A good software cost estimate has the following attributes:
1. It is conceived and supported by the project manager, architecture team,
development team, and test team accountable for performing the work.
2. It is accepted by all stakeholders as ambitious but realizable.
3. It is based on a well-defined software cost model with a credible basis.
4. It is based on a database of relevant project experience that includes similar
processes, similar technologies, similar environments, similar quality
requirements, and similar people.
5. It is defined in enough detail so that its key risk areas are understood and the
probability of success is objectively assessed.
● Diagram
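As an illustration of what the core equation of such a model looks like, here is the basic COCOMO effort equation, Effort = a * (KLOC ^ b) person-months, with Boehm's published coefficients for the three project modes. The 50 KLOC sample project is hypothetical.

```python
# Basic COCOMO: Effort (person-months) = a * (KLOC ** b)
# Coefficients for Boehm's three project modes.
COCOMO_MODES = {
    "organic":       (2.4, 1.05),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12),  # intermediate size and experience
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for a project of the given size and mode."""
    a, b = COCOMO_MODES[mode]
    return a * (kloc ** b)

# Hypothetical 50 KLOC project estimated under each mode:
for mode in COCOMO_MODES:
    print(mode, round(basic_cocomo_effort(50, mode), 1))
```

Note how the exponent b > 1 encodes a diseconomy of scale, consistent with the abstract Effort = (Personnel)(Environment)(Quality)(Size^Process) form described in question 9; full COCOMO then multiplies in cost-driver adjustment factors.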
