
Module-2: Project Life Cycle and Effort Estimation

Topics
Software process and Process Models –
Choice of Process models –
Rapid Application development –
Agile methods – Dynamic System Development Method –
Extreme Programming – Managing iterative processes –
Basics of Software estimation –
Effort and Cost estimation techniques –
COSMIC Full function points –
COCOMO II - a Parametric Productivity Model.

RNSIT MCA

2.1 Software Processes and Process Models


2.1.1 Software Process
A software product development process usually starts when a request for the product is received
from the customer.
For a generic product, the marketing department of the company is usually considered as the
customer.
This expression of need for the product is called product inception.
The product then passes through the following stages:
Development stage: starting from inception, the product undergoes a series of transformations through a few identifiable stages until it is fully developed and released to the customer.
Maintenance stage: After release, the product is used by the customer and during this time the
product needs to be maintained for fixing bugs and enhancing functionalities. This stage is called
the maintenance stage.
Retirement: When the product is no longer useful to the customer, it is retired.
Software Development Life Cycle: The set of identifiable stages through which a product transits from inception to retirement forms the life cycle of the product. The software life cycle is also commonly referred to as the Software Development Life Cycle (SDLC) or the software process.
Process model: A life cycle model (also called a process model) of a software product is a graphical or textual representation of its life cycle.
Additionally, a process model may describe the details of the various types of activities carried out during the different phases, such as the inception, maintenance and retirement stages.

2.1.2 Choice of Process Models


The word ‘process’ emphasizes the idea of a system in action.
In order to achieve an outcome, the system will have to execute one or more activities: this is its
process.
This applies to the development of computer-based applications.
A number of interrelated activities have to be undertaken to create a final product.
These activities can be organized in different ways, and we can call these process models.
The planner selects methods and specifies how they are to be applied.

Structure versus Speed of Delivery


Although some ‘object-oriented’ specialists might object(!), we include the OO approach as a structured method – after all, we hope it is not unstructured.
Structured methods consist of sets of steps and rules which, when applied, generate system
products such as use case diagrams.
Each of these products is carefully defined.
Such methods are more time-consuming and expensive than more intuitive approaches.
The pay-off, it is hoped, is a less error-prone and more maintainable final system.
This balance of costs and benefits is more likely to be justified on larger projects.

2.1.3 The Waterfall Model


This is the ‘classical’ model of system development, also known as the one-shot or once-through model.
As can be seen from the example in Figure 4.2, there is a sequence of activities working from top
to bottom.
The diagram shows some arrows pointing upwards and backwards.
This indicates that a later stage may reveal the need for some extra work at an earlier stage, but
this should definitely be the exception rather than the rule.
After all, the flow of a waterfall should be downwards, with the possibility of just a little splashing back.


 Even though writers often advocate alternative models, there is nothing intrinsically wrong
with the waterfall approach in the right place.
 It is the ideal for which the project manager strives.
 Where the requirements are well defined and the development methods are well
understood, the waterfall approach allows project completion times to be forecast with
some confidence, allowing the effective control of the project.
 However, where there is uncertainty about how a system is to be implemented, and
unfortunately there very often is, a more flexible, iterative, approach is required.

2.1.4 The Spiral Model


The distinguishing characteristic features of the spiral model are the incremental style of
development and the ability to handle various types of risks.
Each loop of the spiral is called a phase of this software process.
In each phase, one or more features of the product are implemented after resolving any
associated risks through prototyping.
The exact number of loops of the spiral is not fixed and varies from one project to another.
Note that the number of loops shown in Figure 4.4 is just an example illustrating how the spiral
model can subsume SSADM.
Each loop of the spiral is divided into four quadrants, indicating four stages in each phase.
In the first stage of a phase, one or more features of the product are analysed. In the second stage, the risks in implementing those features are identified and resolved through prototyping.
In the third stage, the identified features are implemented using the waterfall model. In the fourth and final stage, the developed increment is reviewed by the customer along with the development team, and the features to be implemented next are identified.
Note that the spiral model provides much more flexibility compared to the other models, in the
sense that the exact number of phases through which the product is to be developed can be
tailored by the project manager during execution of the project.


2.1.5 Software Prototyping


This is one way in which we can buy knowledge and reduce uncertainty. A prototype is a working
model of one or more aspects of the projected system. It is constructed and tested quickly and
inexpensively in order to test out assumptions.
Prototypes can be classified as throw-away or evolutionary.
● Throw-away prototypes
The prototype tests out some ideas and is then discarded when the true development of the
operational system is commenced. The prototype could be developed using a different software
or hardware environment. For example, a desktop application builder could be used to evolve an
acceptable user interface. A procedural programming language is then used for the final system
where machine-efficiency is important.
● Evolutionary prototypes
The prototype is developed and modified until it is finally in a state where it can become the
operational system. In this case the standards that are used to develop the software have to be
carefully considered.

2.1.6 Incremental Delivery


This approach breaks the application down into small components which are then implemented
and delivered in sequence.
Each component delivered must give some benefit to the user.
Figure 4.4 (‘Intentional incremental delivery’) gives a general idea of the approach.

Advantages of this approach


These are some of the justifications given for the approach.


● The feedback from early increments improves the later stages.
● The possibility of changes in requirements is reduced because of the shorter time span
between the design of a component and its delivery.
● Users get benefits earlier than with a conventional approach.
● Early delivery of some useful components improves cash flow, because you get some return on
investment early on.
● Smaller sub-projects are easier to control and manage.
● Gold-plating, that is, the requesting of features that are unnecessary and not in fact used, is
less as users know that if a feature is not in the current increment then it can be included in the
next.
● The project can be temporarily abandoned if more urgent work emerges.
● Job satisfaction is increased for developers who see their labours bearing fruit at regular, short,
intervals.
Disadvantages
On the other hand, these disadvantages have been put forward.
● Later increments might require modifications to earlier increments. This is known as software
breakage.
● Software developers may be more productive working on one large system than on a series of
smaller ones.

The incremental delivery plan


The nature and order of each increment to be delivered to the users have to be planned at the
outset. This process is similar to strategic planning but at a more detailed level.
Attention is given to increments of a user application rather than whole applications.
The elements of incremental delivery planning are the system objectives, the open technology plan and the incremental plan itself.

Rapid Application Development


Rapid Application Development (RAD) model is also sometimes referred to as the rapid
prototyping model.
This model has the features of both the prototyping and the incremental delivery models.
The major aims of the RAD model are as follows:
● to decrease the time taken and the cost incurred to develop software systems; and
● to limit the costs of accommodating change requests by incorporating them as early as possible
before large investments have been made on development and testing.


One of the major problems that has been identified with the waterfall model is the following.
Clients often do not know what they exactly want until they see a working system.
However, they do not see the working system until the development is complete in all respects
and delivered to them.
As a result, the exact requirements are brought out only through the process of customers
commenting on the installed application.
The required changes are incorporated through subsequent maintenance efforts.
This makes the cost of accommodating any change extremely high.
As a result, it takes a long time and enormous cost to have a good solution in place.
Clearly, this model of developing software would displease even the best customer.
The RAD model tries to overcome this problem by inviting and incorporating customer feedback
on successively developed prototypes.
In the RAD model, absence of long-term and detailed planning gives the flexibility to
accommodate requirements change requests solicited from the customer during project
execution.

Agile Methods
Agile methods are designed to overcome the disadvantages we have noted in our discussions on
the heavyweight implementation methodologies.
One of the main disadvantages of the traditional heavyweight methodologies is the difficulty of efficiently accommodating change requests from customers during execution of the project.
Note that the agile model is an umbrella term that refers to a group of development processes,
and not any single model of software development.
There are various agile approaches such as the following:
● Crystal Technologies
● Atern (formerly DSDM)
● Feature-driven Development
● Scrum
● Extreme Programming (XP)

Extreme Programming (XP)


The primary source of information on XP is Kent Beck’s Extreme Programming Explained: Embrace Change, first published in 1999 and updated in 2004.
The description here is based on the first edition, with some comments where the ideas have been developed further in the second.
The ideas were largely developed on the C3 payroll development project at Chrysler.
The approach is called ‘extreme programming’ because, according to Beck, ‘XP takes
commonsense principles to extreme levels’.
Four core values are presented as the foundations of XP.
1. Communication and feedback. It is argued that the best method of communication is face-to-
face communication. Also, the best way of communicating to the users the nature of the software
under production is to provide them with frequent working increments. Formal documentation is
avoided.
2. Simplicity. The simplest design that implements the users’ requirements should always be
adopted. Effort should not be spent trying to cater for future possible needs – which in any case
might never actually materialize.
3. Responsibility. The developers are the ones who are ultimately responsible for the quality of
the software – not, for example, some system testing or quality control group.
4. Courage. This is the courage to throw away work in which you have already invested a lot of
effort, and to start with a fresh design if that is what is called for. It is also the courage to try out
new ideas – after all, if they do not work out, they can always be scrapped. Beck argues that this
attitude is more likely to lead to better solutions

The planning exercise


Previously, when we talked about ‘increments’ we meant components of the system that users
could actually use.
XP refers to these as releases.
Within these releases code is developed in iterations, periods of one to four weeks’ duration during
which specific features of the software are created.
Note that these are not usually ‘iterations’ in the sense that they are new, improved, versions of
the same feature – although this is a possibility.
The planning game is the process whereby the features to be incorporated in the next release
are negotiated.
Each of the features is documented in a short textual description called a story that is written on
a card.

Small releases


The time between releases of functionality to users should be as short as possible. Beck suggests
that releases should ideally take a month or two. This is compatible with Tom Gilb’s
recommendation of a month as the ideal time for an increment, with a maximum of three months.

Metaphor
The system to be built will be software code that reflects things that exist and happen in the real world. A payroll application will calculate and record payments to employees. The terms used to describe the corresponding software elements should, as far as possible, reflect real-world terminology – at a very basic level this would mean using meaningful names for variables and procedures, such as ‘hourly_rate’ and ‘calculate_gross_pay’.

Simple design
This is the practical implementation of the value of simplicity that was described above.

Testing
Testing is done at the same time as coding. The test inputs and expected results should be scripted so that the testing can be done using automated testing tools. These test cases can then be accumulated so that they can be used for regression testing to ensure that later developments do not insert errors into existing working code. This idea can be extended so that the tests and expected results are actually created before the code is created. Working out what tests are needed to check that a function is correct can itself help to clarify the requirements.

Refactoring
A threat to the target of always having the simplest design is that over time, as modifications are made to code, the structure tends to become more spaghetti-like. Refactoring counters this by periodically rewriting code to improve its structure without changing its external behaviour.

Pair programming
All software code is written by pairs of developers, one actually doing the typing and the other
observing, discussing and making comments and suggestions about what the other is doing. At
intervals, the developers can swap roles.

Collective ownership
This is really the corollary of pair programming. The team as a whole takes collective responsibility
for the code in the system. A unit of code does not ‘belong’ to just one programmer who is the
only one who can modify it.

Continuous integration
This is another aspect of testing practice. As changes are made to software units, integrated tests
are run regularly – at least once a day – to ensure that all the components work together correctly.

Forty-hour weeks
Developers should normally work no more than 40 hours a week; overtime may occasionally be needed, but in that case it should not be worked for two weeks in a row. Interestingly, in some case studies of the application of XP, the 40-hour rule was the only one not adhered to.

On-site customers
Fast and effective communication with the users is achieved by having a user domain expert on-site with the developers.


Coding standards
If code is genuinely to be shared, then there must be common, accepted, coding standards to
support the understanding and ease of modification of the code.

Limitations of XP
The successful use of XP is based on certain conditions. If these do not exist, then its practice
could be difficult.
These conditions include the following.
● There must be easy access to users, or at least a customer representative who is a domain
expert. This may be difficult where developers and users belong to different organizations.
● Development staff need to be physically located in the same office.
● As users find out about how the system will work only by being presented with working versions of the code, there may be communication problems if the application does not have a visual interface.
● For work to be sequenced into small iterations, it must be possible to break the system functionality into relatively small and self-contained components.
● Large, complex systems may initially need significant architectural effort. This might preclude
the use of XP.

Scrum
In the Scrum model, projects are divided into small parts of work that can be incrementally
developed and delivered over time boxes that are called sprints.
The product therefore gets developed over a series of manageable chunks.
Each sprint typically takes only a couple of weeks.
At the end of each sprint, stakeholders and team members meet to assess the progress and the
stakeholders suggest to the development team any changes and improvements they feel
necessary.
In the Scrum model, the team members assume three fundamental roles: product owner, scrum master, and team member.
The product owner is responsible for communicating the customer’s vision of the product to the
development team.
The scrum master acts as a liaison between the product owner and the team, and facilitates the
development work.

Managing Iterative Processes


This discussion of agile methods might be confusing as it seems to turn many of our previous
planning concepts on their head.


Approaches like XP correctly emphasize the importance of communication and of removing artificial barriers to development productivity.
To many, XP might seem to be simply a ‘licence to hack’.
However, a more detailed examination of the techniques of XP shows that many (such as pair programming and coding standards) are conscious techniques to counter the excesses of hacking and to ensure that good maintainable code is written.


Chapter 2 Software Effort Estimation

Introduction
A successful project is one delivered ‘on time, within budget and with the required quality’. This
implies that targets are set which the project manager then tries to meet.
This assumes that the targets are reasonable – no account is taken of the possibility of project
managers achieving record levels of productivity from their teams, but still not meeting a deadline
because of incorrect initial estimates.
Realistic estimates are therefore crucial.
A project manager like Amanda has to produce estimates of effort, which affect costs, and of
activity durations, which affect the delivery time.
These could be different, as in the case where two testers work on the same task for the same five days: the effort is ten person-days, but the elapsed duration is only five days.
Some of the difficulties of estimating arise from the complexity and invisibility of software.
Also, the intensely human activities which make up system development cannot be treated in a
purely mechanistic way. Other difficulties include:


Subjective nature of estimating


For example, some research shows that people tend to underestimate the difficulty of small tasks
and over-estimate that of large ones.
● Political implications Different groups within an organization have different objectives. The IOE
information systems development managers may, for example, want to generate work and will
press estimators to reduce cost estimates to encourage higher management to approve projects.
As Amanda is responsible for the development of the annual maintenance contracts subsystem,
she will want to ensure that the project is within budget and timescale, otherwise this will reflect
badly on herself.
● Changing technology Where technologies change rapidly, it is difficult to use the experience of
previous projects on new ones.
● Lack of homogeneity of project experience Even where technologies have not changed,
knowledge about typical task durations may not be easily transferred from one project to another
because of other differences between projects.
Where are Estimates Done?


Estimates are carried out at various stages of a software project for a variety of reasons.
● Strategic planning :
Project portfolio management involves estimating the costs and benefits of new applications in
order to allocate priorities. Such estimates may also influence the scale of development staff
recruitment.
● Feasibility study:
This confirms that the benefits of the potential system will justify the costs.
● System specification:
Most system development methodologies usefully distinguish between the definition of the users’
requirements and the design which shows how those requirements are to be fulfilled.
● Evaluation of suppliers’ proposals:
In the case of the IOE annual maintenance contracts subsystem, for example, IOE might consider putting development out to tender. Potential contractors would scrutinize the system specification and produce estimates as the basis of their bids.
● Project planning
As the planning and implementation of the project becomes more detailed, more estimates of
smaller work components will be made. These will confirm earlier broad-brush estimates, and will
support more detailed planning, especially staff allocations.

Problems with Over- and Under-Estimates


A project leader such as Amanda will need to be aware that an over-estimate may cause the
project to take longer than it would otherwise. This can be explained by the application of two
‘laws’.
● Parkinson’s Law ‘Work expands to fill the time available’, that is, given an easy target, staff will work less hard.
● Brooks’ Law The effort of implementing a project will go up disproportionately with the number
of staff assigned to the project. As the project team grows in size, so will the effort that has to go
into management, coordination and communication. This has given rise, in extreme cases, to the
notion
of Brooks’ Law: ‘putting more people on a late job makes it later’. If there is an overestimate of
the effort required, this could lead to more staff being allocated than needed and managerial
overheads being increased.
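The coordination overhead behind Brooks’ Law is often illustrated with the count of pairwise communication channels, n(n − 1)/2 for a team of n people; the team sizes below are arbitrary illustrations:

```python
# Why effort grows disproportionately with team size (Brooks' Law):
# the number of pairwise communication channels grows quadratically.

def communication_channels(team_size: int) -> int:
    """Number of distinct pairs of people who may need to coordinate:
    n * (n - 1) / 2 for a team of n."""
    return team_size * (team_size - 1) // 2

for n in (3, 6, 12):
    print(n, communication_channels(n))
# 3 people -> 3 channels; 6 -> 15; 12 -> 66.
# Doubling the team roughly quadruples the coordination load.
```

This is why adding staff to a late project adds management, coordination and communication effort faster than it adds productive capacity.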

The Basis for Software Estimating


The need for historical data
Most estimating methods need information about past projects.
However, care is needed when applying past performance to new projects because of possible


differences in factors such as programming languages and the experience of staff. If past project
data is lacking, externally maintained datasets of project performance data can be accessed.
One well-known international database is that maintained by the International Software
Benchmarking Standards Group (ISBSG), which currently contains data from 4800 projects.

Parameters to be estimated
 The project manager needs to estimate two project parameters for carrying out project
planning. These two parameters are effort and duration.
 Duration is usually measured in months. Work-month (wm) is a popular unit for effort
measurement. The term person-month (pm) is also frequently used to mean the same as
work-month.
 One person-month is the effort an individual can typically put in a month.
 The person-month estimate implicitly takes into account the productivity losses that
normally occur due to time lost in holidays, weekly offs, coffee breaks, etc.
 Person-month (pm) is considered to be an appropriate unit for measuring effort compared
to person-days or person-years because developers are typically assigned to a project for
a certain number of months.

Measure of work
 Measure of work involved in completing a project is also called the size of the project.
 Work itself can be characterized by cost in accomplishing the project and the time over
which it is to be completed.
 Direct calculation of cost or time is difficult at the early stages of planning.
 The time taken to write the software may vary according to the competence or experience of the software developers – who might not even have been identified at this early stage.
 Implementation time may also vary depending on the extent to which CASE (Computer-Aided Software Engineering) tools are used during development.

Software Effort Estimation Techniques


Barry Boehm, in his classic work on software effort models, identified the main ways of deriving estimates of software development effort as:
● algorithmic models, which use ‘effort drivers’ representing characteristics of the target
system and the implementation environment to predict effort;
● expert judgement, based on the advice of knowledgeable staff;
● analogy, where a similar, completed, project is identified and its actual effort is used as the basis of the estimate;
● Parkinson, where the staff effort available to do a project becomes the ‘estimate’;
● price to win, where the ‘estimate’ is a figure that seems sufficiently low to win a contract;
● top-down, where an overall estimate for the whole project is broken down into the effort
required for component tasks;
● bottom-up, where component tasks are identified and sized and these individual estimates are aggregated.

Bottom-up Estimating


 With the bottom-up approach the estimator breaks the project into its component
tasks. With a large project, the process of breaking it down into tasks is iterative:
 each task is decomposed into its component subtasks and these in turn could be
further analysed.
 It is suggested that this is repeated until you get tasks an individual could do in a
week or two.
 Why is this not a ‘top-down approach’?
 After all, you start from the top and work down.
 Although this top-down analysis is an essential precursor to bottom-up estimating,
it is really a separate process – that of producing a work breakdown schedule
(WBS).
 The bottom-up part comes in adding up the calculated effort for each activity to get
an overall estimate.

The Top-down Approach and Parametric Models


 The top-down approach is normally associated with parametric (or algorithmic) models.
 These may be explained using the analogy of estimating the cost of rebuilding a house.
 This is of practical concern to houseowners who need insurance cover to rebuild their
property if destroyed.
 Unless the house-owner is in the building trade he or she is unlikely to be able to calculate
the numbers of bricklayer-hours, carpenter-hours, electrician-hours, and so on, required.
 Insurance companies, however, produce convenient tables where the house-owner can
find estimates of rebuilding costs based on such parameters as the number of storeys and
the floor space of a house.
 This is a simple parametric model.
 Project effort relates mainly to variables associated with characteristics of the final system.
 A parametric model will normally have one or more formulae of the form:
effort = (system size) × (productivity rate)

COSMIC Full Function Points


 While approaches like that of IFPUG are suitable for information systems, they are not
helpful when it comes to sizing real-time or embedded applications.
 This has resulted in the development of another version of function points – the COSMIC
FFP method.
 COSMIC deals with this by decomposing the system architecture into a hierarchy of
software layers. The software component to be sized can receive requests for services
from layers above and can request services from those below it.
 At the same time there could be separate software components at the same level that
engage in peer-to-peer communication.


 This identifies the boundary of the software component to be assessed and thus the points at which it receives inputs and transmits outputs.
 Inputs and outputs are aggregated into data groups, where each group brings together
data items that relate to the same object of interest.

Data groups can be moved about in four ways:


● entries (E), which are effected by subprocesses that move the data group into the software component in question from a ‘user’ outside its boundary – this could be from another layer or from a separate software component in the same layer via peer-to-peer communication;
● exits (X), which are effected by subprocesses that move the data group from the software component to a ‘user’ outside its boundary;
● reads (R), which are data movements that move data groups from persistent storage
(such as a database) into the software component;
● writes (W), which are data movements that transfer data groups from the software
component into persistent storage.
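A minimal counting sketch, assuming a hypothetical ‘record sale’ process; in the COSMIC method each data movement contributes one COSMIC function point (CFP):

```python
# COSMIC sizing sketch: each data movement (Entry, eXit, Read, Write)
# counts as one COSMIC function point (CFP). The movements listed for
# this hypothetical 'record sale' process are illustrative assumptions.

from collections import Counter

record_sale_movements = [
    ("sale details", "E"),    # Entry: sale data arrives from the user
    ("product record", "R"),  # Read: product price fetched from storage
    ("sale record", "W"),     # Write: the sale saved to persistent storage
    ("receipt", "X"),         # eXit: receipt sent back to the user
]

counts = Counter(kind for _, kind in record_sale_movements)
cfp = sum(counts.values())  # one CFP per data movement
print(f"size = {cfp} CFP")  # size = 4 CFP
```

Sizing a whole component means repeating this count for every functional process and summing the results.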

COCOMO II: A Parametric Productivity Model


Boehm’s COCOMO (Constructive COst MOdel) is often referred to in the literature on
software project management, particularly in connection with software estimating.
The term COCOMO really refers to a group of models.
Boehm originally based his models in the late 1970s on a study of 63 projects.
Of these only seven were business systems and so the models could be used with
applications other than information systems.
The basic model was built around the equation:

effort = c × (size)^k

where effort was measured in pm, the number of ‘person-months’ consisting of units of 152 working hours, size was measured in kdsi, thousands of delivered source code instructions, and c and k were constants.
The first step was to derive an estimate of the system size in terms of kdsi.
The constants, c and k (see Table 5.4), depended on whether the system could be
classified, in Boehm’s terms, as ‘organic’, ‘semi-detached’ or ‘embedded’.
These related to the technical nature of the system and the development environment.
● Organic mode This would typically be the case when relatively small teams developed
software in a highly familiar in-house environment and when the system being developed
was small and the interface requirements were flexible.
● Embedded mode This meant that the product being developed had to operate within
very tight constraints and changes to the system were very costly.
● Semi-detached mode This combined elements of the organic and the embedded modes
or had characteristics that came between the two.
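Using the widely quoted COCOMO 81 constants for the three modes (compare Table 5.4 in the text; treat the exact values as assumptions to be checked against that table), the basic model can be sketched as:

```python
# Basic COCOMO sketch: effort = c * size**k, with size in kdsi.
# The (c, k) pairs below are the widely quoted COCOMO 81 effort
# constants for Boehm's three development modes.

MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(size_kdsi: float, mode: str) -> float:
    """Estimated effort in person-months (pm of 152 working hours)."""
    c, k = MODES[mode]
    return c * size_kdsi ** k

# For the same 20 kdsi system, the tighter the constraints, the higher
# the estimate: embedded > semi-detached > organic.
for mode in MODES:
    print(mode, round(cocomo_effort(20, mode), 1))
```

Because k > 1 in every mode, effort grows more than proportionally with size, reflecting the diseconomies of scale in software development.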


COCOMO II has been designed to accommodate this by having models for three different
stages.
● Application composition Here the external features of the system that the users will
experience are designed. Prototyping will typically be employed to do this. With small
applications that can be built using high-productivity application-building tools,
development can stop at this point.
● Early design Here the fundamental software structures are designed. With larger, more
demanding systems, where, for example, there will be large volumes of transactions and
performance is important, careful attention will need to be paid to the architecture to be
adopted.
● Post architecture Here the software structures undergo final construction, modification
and tuning to create a system that will perform as required.

Staffing Pattern
 After the effort required to complete a software project has been estimated, the
staffing requirement for the project can be determined.
 Putnam was the first to study the problem of what should be a proper staffing
pattern for software projects.
 He extended the classical work of Norden who had earlier investigated the staffing
pattern of general research and development (R&D) type of projects.
 In order to appreciate the staffing pattern desirable for software projects, we must
understand both Norden’s and Putnam’s results.

Norden’s work
Norden studied the staffing patterns of several R&D projects. He found the staffing patterns of R&D projects to be very different from those of manufacturing or sales types of work. In a sales outlet, the number of sales staff does not usually vary with time. For example, in a supermarket the number of sales personnel would depend on the number of sales counters alone, and therefore remains fixed for years together.
However, the staffing pattern of R&D-type projects changes dynamically over time for efficient manpower utilization.
At the start of an R&D project, the activities of the project are planned and initial
investigations are made.
During this time, the manpower requirements are low. As the project progresses, the
manpower requirement increases until it reaches a peak.
Thereafter the manpower requirement gradually diminishes.
Norden concluded that the staffing pattern for any R&D project can be approximated by
the Rayleigh distribution curve shown in Figure 5.3
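The Rayleigh staffing curve can be sketched directly; the total-effort and peak-time figures below are illustrative assumptions, with the peak placed at time td (TD in Figure 5.3):

```python
import math

# Rayleigh-Norden staffing curve sketch:
#   m(t) = (K / td**2) * t * exp(-t**2 / (2 * td**2))
# where K is the total project effort (area under the curve) and td is
# the time at which staffing peaks. Figures below are illustrative.

def rayleigh_staffing(t: float, K: float, td: float) -> float:
    """Staff level at time t for total effort K, peaking at time td."""
    return (K / td**2) * t * math.exp(-t**2 / (2 * td**2))

K, td = 100.0, 12.0  # assume 100 person-months of effort, peak at month 12
for month in (3, 6, 12, 18, 24):
    print(month, round(rayleigh_staffing(month, K, td), 2))
# Staffing rises to its maximum at t = td, then tails off thereafter.
```

The curve captures Norden’s observation: low staffing during early investigation, a build-up to a peak, and a gradual decline afterwards.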


Putnam’s work
 Norden’s work was carried out in the context of general R&D projects.
 Putnam studied the problem of staffing of software projects and found that the
staffing pattern for software development projects has characteristics very similar
to R&D projects. Putnam adapted the Rayleigh–Norden curve to relate the number
of delivered lines of code to the effort and the time required to develop the product.
 Only a small number of developers are needed at the beginning of a project to
carry out the planning and specification tasks.
 As the project progresses and more detailed work is performed, the number of developers increases and reaches a peak at the time of product delivery, which occurs at time TD in Figure 5.3.
 After product delivery, the number of project staff falls consistently during product
maintenance.
 Putnam suggested that starting from a small number of developers, there should
be a staff build-up and after a peak size has been achieved, staff reduction is
required. However, the staff build-up should not be carried out in large instalments.
Experience shows that a very rapid build-up of project staff any time during the
project development correlates with schedule slippage.
