NPTEL Course on Software Project Management - Complete Notes
S. No Topic Page No
Week 1
1 Introduction - I 1
2 Introduction - II 14
3 Introduction - III 27
4 Project Management Standards 40
5 Life Cycle Models - I 57
Week 2
6 Life Cycle Models - II 74
7 Life Cycle Models - III 94
8 Life Cycle Models - IV 112
9 Life Cycle Models - V 130
10 Life Cycle Models - VI 149
Week 3
11 Project Evaluation and Programme Management 167
12 Project Evaluation and Programme Management (Contd.) 185
13 Project Evaluation and Programme Management (Contd.) 206
14 Project Evaluation and Programme Management (Contd.) 233
15 Project Evaluation and Programme Management (Contd.) 259
Week 4
16 Project Estimation Techniques 276
17 Project Estimation Techniques (Contd.) 300
18 Project Estimation Techniques (Contd.) 323
19 Project Estimation Techniques (Contd.) 345
20 Project Estimation Techniques (Contd.) 376
Week 5
21 Project Estimation Techniques (Contd.) 396
22 Project Estimation Techniques (Contd.) 418
23 Project Estimation Techniques (Contd.) 442
24 Project Estimation Techniques (Contd.) 470
25 Project Estimation Techniques (Contd.) 490
Week 6
26 Project Scheduling 516
27 Project Scheduling Using PERT/CPM 530
28 Project Scheduling Using PERT/CPM (Contd.) 541
29 Computation of Project Characteristics Using PERT/CPM 554
30 Computation of Project Characteristics Using PERT/CPM: Illustration 565
Week 7
31 PERT, Project Crashing 576
32 Team Management 589
33 Organization and Team Structure 605
34 Team Structure (Contd.) and Risk Management 619
35 Risk Management (Contd.) and Introduction to Software Quality 632
Week 8
36 Resource Allocation 644
37 Resource Allocation (Contd.) 669
38 Resource Allocation (Contd.) 692
39 Project Monitoring and Control 713
40 Project Monitoring and Control (Contd.) 735
Week 9
41 Project Monitoring and Control (Contd.) 761
42 Project Monitoring and Control (Contd.) 783
43 Project Monitoring and Control (Contd.) 808
44 Project Monitoring and Control (Contd.) 828
45 Project Monitoring and Control (Contd.) 850
Week 10
46 Project Monitoring and Control (Contd.) 874
47 Project Monitoring and Control (Contd.) 896
48 Contract Management 919
49 Contract Management (Contd.) 945
50 Project Close Out 969
Week 11
51 Software Quality Management 995
52 ISO 9000 1014
53 ISO 9001, SEI CMM 1028
54 SEI CMM (Contd.) 1041
55 SEI CMM (Contd.) 1056
Week 12
56 Personal Software Process (PSP) 1071
57 Software Reliability - I 1089
58 Software Reliability - II 1105
59 Software Reliability - III 1121
60 Software Testing 1137
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 01
Introduction - I
Welcome to the Software Project Management course. As displayed earlier, the course will be handled by myself and Professor Durga Prasad Mohapatra. My name is Rajib Mall, as you can see on the display. I belong to the Computer Science and Engineering Department of IIT Kharagpur. Today, we will have some introductory discussion.
First, a little about myself. My name is Rajib Mall; I did all my education, Bachelors, Masters, and PhD, at the Indian Institute of Science, Bangalore. After that I worked for Motorola India for about 3 years, then shifted to IIT Kharagpur in 1994, and I am currently a professor in the CSE department. This is the Indian Institute of Technology Kharagpur building.
(Refer Slide Time: 01:31)
Let us look at the topics that we will discuss in this introductory lecture. We will first address what software project management is. Is software project management any different from traditional project management? What is the difference between jobs, projects, and exploration work?
What makes software project planning difficult? And what do we mean by project scope, why is it important, and how does one define it? These are the basic topics we will discuss in this introductory lecture.
The IT spending worldwide is huge: based on a Gartner report, it was roughly 3.5 trillion dollars in 2014 and has gradually increased to about 4 trillion dollars. Compare this with the gross domestic product of India, which is about 2.5 trillion dollars. So, worldwide IT spending is even more than the entire domestic production of India; it is huge.
Therefore, software has become a very prominent part of our lives: we spend huge amounts of money on software, and we encounter software in many places. Especially in India, which is known as a software powerhouse, a huge amount of software gets developed; a large number of companies undertake a large number of projects to develop software. So, software project management is an important topic all over the world, and especially for India, where a large part of software development takes place. But then everybody says that there is a software crisis. What is this crisis? Let us first try to understand it.
What are the symptoms of the crisis? The software that is developed and delivered to the customer very often fails to meet the customer's requirements, is extremely expensive, is difficult to alter, debug, and enhance, and is also delivered late. This is unlike hardware, where the product is developed and delivered quickly. So, let us see what this software crisis is, what causes it, and how to overcome it.
(Refer Slide Time: 05:16)
If we look at the Standish Group report, of all the projects taken up all over the world, roughly about a third, about 30 percent, are successful. About 20 percent are outright failures; nothing comes out of those projects. And roughly about half are challenged projects, which suffer delays, cost escalation, poor quality and so on. Let us see what the reason is behind such a dismal figure, that only one third of projects are successful and 20 percent are failures.
If we look at the scenario now, we can see that the ratio of the amount companies spend on hardware to the amount they spend on software is drastically reducing. What it means is that companies are spending less on hardware and more on software; in other words, software costs are increasing very rapidly compared to hardware costs. Companies spend a large part of their IT budget on software. Just to give an example of how software cost is much more than hardware cost, look at a desktop or a laptop that you buy from the market.
The hardware, along with its operating system and so on, costs about 45,000; so the raw hardware will be much less than that. But then you get the laptop or desktop, and let us say you want to do some software development work on it and you buy a CASE tool. If you buy a Rational Suite node-locked license, it costs about 3,00,000. Just look at this: the raw hardware for the laptop or desktop is much less than 45,000, because the 45,000 also includes the operating system software and so on.
And on that you want to run a software development tool costing about 3,00,000, and that is for a node-locked license; a floating license will cost much more. So, just look at the irony: you buy the hardware at such a low price, and the software that you run on it, this is just one example, there are many software packages that you might have to buy. It looks like software is becoming so expensive that the hardware cost will become almost negligible. Soon the situation is going to come where you buy the software and it comes preloaded on hardware, and the hardware is basically free. But then, what is the problem that is causing this?
(Refer Slide Time: 09:12)
But before that, let us see: if hardware is such a nice thing, costing very little, delivered promptly, and, as we said, almost negligible in cost compared to software, then why not have systems built entirely of hardware? After all, anything that we can do with software, we can also do with hardware; we can design hardware to do whatever we were doing in software. For example, take a word processing application: we could even have a small component, entirely in hardware, which can also do word processing.
Let us answer this very basic question: why not have an entirely hardware system and get rid of software, which is problematic, expensive, full of bugs, delivered late and so on? There are some virtues of software, and because of those everybody is using software; otherwise there would be entirely hardware systems. Let us look at the virtues of software because of which software is preferred by customers rather than an entirely hardware system. The first virtue is that it is relatively easy to change: you want to add additional functionality or modify a functionality, and the next version can appear quickly; you give your request, and the developer might send you a patch and so on.
So, software is very easy to change: just make a code change, recompile, and it is done. The other big advantage is that software consumes no space, has no weight, and intrinsically consumes no power. Just imagine that, instead of running word processing software, you developed hardware for word processing; then you wanted a spreadsheet and you had to get another piece of hardware for it.
But software, even for very complex functionality where the equivalent hardware would be extremely large, occupies no space and has no weight or intrinsic power requirement beyond the hardware it runs on. That is the reason why software is becoming more and more important in our lives; more and more functionality is getting implemented in software rather than hardware. So, there is a basic hardware platform on which all the new functionality that is required is developed in software.
Now, let us look at some characteristics of software that are very important to our subsequent discussions. There are many characteristics, but we have singled out only two. One is that the more complex a software is, the harder it is to change. Why is that? The reason is not hard to see. To be able to change the software we must first understand it: we must understand where the change has to be made, which variables are to be changed, what code is to be written for that, and so on.
The more complex a software is, the more time and effort is necessary to understand it and find out exactly where and what change needs to occur. The second characteristic is that each time we make a change to a software, the greater its complexity becomes. A change may be required due to various reasons: maybe to fix a bug, maybe to enhance the functionality, maybe to improve its performance and so on. There are hundreds of reasons why a specific software would need to change, and with each change, the software becomes more complex.
Why is that? The reason is that for the small changes that occur, for example a bug fix or adding small functionalities, we just make ad hoc changes to the software. The initial software is developed with a good design, but each small change deteriorates the design because these are made as small fixes. Therefore, they make the software more complex, more difficult to understand, more difficult to change and so on.
Now, let us look at the basic question that we have been trying to address in this lecture: what is the reason behind the software crisis? We said the software crisis has symptoms which show up as poor quality software, delayed projects, project failures, high costs and so on. There are many reasons which contribute to the software crisis; the first one is larger problem sizes. Software now has a large number of functionalities, hundreds, thousands, or even tens of thousands of functionalities for even simple software packages.
Other reasons are increasing skill shortage and manpower turnover: the project is halfway through and many of the key persons leave, which is quite common and delays the project. Another reason is low productivity improvements. Software development has largely remained manual work even with all the automated tools and so on; a lot of code still needs to be written manually, a lot of testing has to be done manually and so on.
But then let us understand another very basic question: in what ways does software project management differ from the management of other engineering projects? Let me repeat that: what is the main difference between software project management and the management of other, non-software, projects? The first problem is that software is intangible. You do not see the software until it is complete and running; only then do you see that screens are getting displayed and it does something. Until that happens, it is just a set of documents or some pieces of code, and you cannot make out how much work is remaining just by looking at the documents or the code. Maybe a huge amount of code has been written, but the software is still far from complete, because many bugs need to be fixed, it needs to be tested and so on.
What we are unable to see is also difficult to manage. For example, in the construction of a large building, the project manager can estimate how much effort and time it is going to take to construct the building, maybe 6 months for a huge building, and then he can see that the building is coming up: the external part is complete, the internal part needs to be done and so on. At any time he can accurately tell how much work is remaining, but in software that is not the case.
The second problem is change impact. In a building project you make small alterations; you know exactly what the impact of that will be, you can estimate it and make the small change that may be required. But software, on the other hand, is extremely complex: you try to change something and you see that many other things stop working or work in a wrong way. So, the impact of a change is very easy to identify in other projects, but a software project is so complex that you cannot estimate what the impact of a change will be.
But unfortunately, software is required to change rapidly; that is one of the advantages of software we mentioned, that you can change it quickly. Therefore, change requests are many times more frequent for software as compared to hardware. When you are developing hardware, you think twice before giving a change request, because it will be very hard to incorporate; the hardware has to be totally redone. Whereas everybody knows that software can be changed, and therefore a lot of change requests come.
The third problem is that software projects are intellectual work. Compare this with building construction, where the bricks have to be laid, the roof has to be constructed, the boundary wall has to be constructed and painted, and so on. All these are largely manual work, and manual work is easy to estimate: you can easily find out how much a painter can paint per day or how many bricks a labourer can lay per day.
Software, in contrast, is intellectual work. It is very difficult to estimate the complexity of the work that somebody is doing and how much he will be able to accomplish. So, these are the three main differences between other projects and software projects. The first is that you have to manage something which is invisible, and anything that is invisible is difficult to manage. The second is that changes are very frequent in software projects, but the change impact is usually very large and is also very difficult to estimate.
The third is that in software project management we have to manage intellectual work, which is a much bigger problem than managing manual work. For manual work we can easily estimate how much can get done in a day, what the complexity of the problem is and so on. For intellectual work we can estimate neither the complexity of the work nor how much effort will be required for it.
That brings us to the observation that software project management is hard, as we might have guessed from the previous discussion; it is much harder than managing other types of projects. Software is invisible, and it is hard to manage anything that you cannot see; it is management of intellectual and complex work; changing customer requirements are the rule rather than the exception; and there is also manpower turnover. You may be the project manager on some project, but then you find that some of your key developers leave the project.
They have the domain knowledge they have developed, and once they leave it becomes very difficult to get another person who will understand their work and continue developing from that point. These are a few of the major problems a software project manager has to face; there are many other smaller problems which we have not listed.
Before proceeding further, we would like to discuss: what are projects? How are they different from tasks and exploration? Tasks are routine jobs, for example buying a movie ticket or watching a video lecture. These are tasks or jobs: repetitions of very well defined, well understood activities with very little uncertainty. Watching a lecture, you know that you have to sit there for half an hour and it will be done. Exploration, on the other hand, is at the other end, where the outcome is very uncertain.
For example, finding a cure for cancer or finding a solution to global warming: these are exploration, the outcome is uncertain, and you never know whether your work will be a success or not. Projects are in between: a project has challenges as well as some routine tasks involved. That is the reason why project management is important here, because for tasks or for exploration you do not need project management; exploration is purely innovative work, whereas projects are the ones where you need project management. That is the important thing here. With this introductory discussion, we will stop here and continue from this point in the next lecture.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 02
Introduction – II
Welcome to this lecture. Following the preliminary introduction that we had in the last lecture, we will discuss a few more topics which are also introductory in nature, but we will build on what we discussed last time.
In this lecture, we will discuss what exactly software projects are and what the types of software projects are. We will identify two major types of software projects: one is product development and the other is services projects. We will also look at the major activities of the project manager, and then we will discuss traditional versus modern projects.
(Refer Slide Time: 01:10)
First, let us look at what a project is. If we look it up in the dictionary, we will find many definitions in different dictionaries; for example, a planned undertaking, or a large undertaking such as a public works scheme and so on. But if we look at these different dictionary definitions, the key points are that it is planned work and also that it is large work. There are two key attributes of a project: the first is that it is a planned activity, and the second is that it is non-trivial; it is a large work.
But then let us look at what a project is more technically. In the PMBOK (Project Management Body of Knowledge), fifth edition, a project is defined as a temporary endeavour undertaken to create a unique product, service or result. Implicit in this definition is that it is a planned activity and it is a large activity.
We can also say that a project is a set of activities undertaken within a defined time period in order to meet specific goals within a budget. So, in addition to being a planned activity and a large activity, a project has a few other attributes: there is a time period by which we need to complete the work, and there is also a budget associated with it.
For example, let us say in a shopping mall, sales occur over a number of years; there are salespeople who are selling every day at the counter. Will that be a project? No, because it is just ongoing; we cannot associate a duration by which it will complete. So, all projects are of finite duration, large, non-trivial, non-repetitive, and they require resources.
Each project is large, consisting of multiple tasks; also, we can define a precedence relationship between tasks, meaning that until some task completes, other tasks cannot be started. We can define a time period for the completion of the tasks. We will see many software examples of projects as we proceed in this course.
But then let us look at some non-software examples of projects. A wedding is a project: you need to plan, it is a large activity, there are many subtasks, there is a budget associated with it, and there is a duration by which it needs to complete. A B.Tech degree is a project: there is a duration by which you will complete the degree, there is a budget associated with it, and you need to plan how you will complete it. A house construction is a project. A political election campaign is also a project, because there is a duration by which the campaign will end, you need to plan the activities, there are many activities in the campaign, and it is a complex activity.
But then let us be clear about what a task is. So far we have only said that a task is much simpler, often repetitive, and that no project management is necessary for a simple task. A task, in contrast to a project, is a small piece of work meant to accomplish a straightforward, rather trivial goal. The effort required is only a few person-hours, and it involves one or two people at most. A task may or may not be part of some project, and it is typically a repetition of some other work that you have already done.
Let us give some examples of a task: attending a lecture class, buying a chocolate from the market, booking a railway ticket online; these are straightforward tasks. You might have done these already many times: you have attended a lecture many times, purchased a chocolate from the market many times, and booked a railway ticket many times, and you want to repeat it again. So, these are somewhat straightforward; they can be done in less than a few hours, one or two persons can do them, they do not need a large team, and they are typically a repetition of something already done. A project, in contrast, is much larger and is not a repetition of a previously accomplished task.
Now I hope that we are clear about the difference between a task, a project, and an exploration. Project management is especially important for projects: tasks are too trivial and straightforward to need any project management, and exploration is too challenging, so project management techniques are not really suitable there.
Now, let us look at another important concept, which is project stakeholders. Project stakeholders are individuals and organizations that are actively involved in the project, whose interests may be positively or negatively affected as a result of project execution or project completion, and who may also exert influence over the project and its results.
The main thing that we are saying here is that the stakeholders include the project manager who is managing the project, the customers who will use the software that will be developed, the team members who are working every day to develop the project, and the sponsor who is financing the project; these are the key stakeholders.
There may be other stakeholders, for example a vendor. The vendor may develop some outsourced software that is part of the project, so he will be a stakeholder, because he has interests in the project and will be affected by its success or failure.
We can roughly categorize the stakeholders into two categories. One is the internal project stakeholders, who are internal to the organization developing the software: the project manager, the project team and the top management.
The external project stakeholders are the customers, if the customers are external. Sometimes the customer may be inside the developing organization, for example a different department, or the customer may even be the same department which is developing a software for its own use. But typically the customers are external, along with the vendors etcetera, so these are the external project stakeholders.
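Purely as an illustrative sketch, not part of the lecture, the internal versus external grouping just described can be noted down as a small mapping; the role names are simply the ones mentioned above.

```python
# Grouping of the stakeholder roles named in the lecture (the sponsor was also
# named as a key stakeholder but not explicitly placed in either group).
stakeholders = {
    "internal": ["project manager", "project team", "top management"],
    "external": ["customer", "vendor"],
}

def is_internal(role: str) -> bool:
    """Return True if the given role is internal to the developing organization."""
    return role in stakeholders["internal"]

print(is_internal("vendor"))  # False: the vendor is an external stakeholder
```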
(Refer Slide Time: 13:29)
The other important concept that we will discuss in this lecture is the project scope. The project scope essentially describes what will be achieved in the project and what must be done to complete the project and deliver the results. So, what we want to do in the project, how it will be done, and what the deliverables will be: this is the project scope.
If we write it in the form of a list, it is the list of specific goals (what needs to be done), the deliverables, the tasks that need to be undertaken to complete the project, the deadlines and the costs. Before starting a project, the project manager has to be clear about the project scope: the goals of the project, the deliverables, the tasks that need to be done to complete the project, the deadlines and the costs.
(Refer Slide Time: 15:03)
Unless the project scope is properly defined and the project manager is clear about it, the project typically ends up in failure. There are many factors which may contribute to project failure; some of them are listed here. Two major reasons why projects fail are that the development team does not understand the customer's requirements because the requirements specification has not been done properly, and that the project scope is poorly defined, that is, what needs to be done, what the tasks are, what the budget is, what the deadline is and so on.
Changes are poorly managed: changes occur frequently as the development takes place, and even small changes are handled poorly. The chosen technology changes: you might have developed the software assuming wired communication, but then suddenly the technology changes and you have to use wireless; that may involve a lot of changes to the software.
The business needs change: the customer who wanted the software for certain business needs suddenly finds that those needs have changed. For example, earlier the customer wanted a software to manage the customers who visit the store, but suddenly the business became part of an online store where customers do not visit anymore; in that case the software is not needed. So, halfway through, the customer might say that for the software we had asked you to develop, our business needs have changed and we do not need it.
Unrealistic deadlines: when the deadline is set very aggressively, the developers compromise on the design, requirements, testing and so on, and the project ends up as a failure. An inexperienced development team, poor project management etcetera are some of the other reasons for failure.
But then how is the project scope defined? What does the project manager do to define it? Barry Boehm gave his W5HH principle, which consists of 5 Ws and 2 Hs. The 5 Ws require the project manager to be clear about: why is the software being built and what will be its use; what will be done, that is, what activities need to be undertaken to develop the software; when are the deadlines by which it needs to be done; who is responsible for each function, that is, of the activities that need to be done, who will do what, which parts will be developed internally and which parts by a vendor; and where are they organizationally located, that is, where are the developers located, in which offices across the world, where will the vendor be located, and are they properly identified?
The 2 Hs are: how will the job be done technically and managerially, and how much of each resource is needed? The latter defines the cost; the resource can be manpower, computation, hardware and so on. The W5HH principle is very important for the project manager to be able to properly define the scope of the project.
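As an illustrative sketch only, not from the lecture, the W5HH questions could be recorded as a simple checklist structure that a project manager fills in while defining the scope; the field names and example values below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectScope:
    """Hypothetical record of the W5HH answers for one project."""
    why: str                                       # why is the software being built?
    what: List[str]                                # what activities will be done?
    when: str                                      # deadlines / milestones
    who: dict = field(default_factory=dict)        # function -> responsible party
    where: dict = field(default_factory=dict)      # party -> organizational location
    how: str = ""                                  # how the job will be done technically/managerially
    how_much: dict = field(default_factory=dict)   # resource -> estimated quantity

# Example usage with made-up values:
scope = ProjectScope(
    why="Automate student admission and grading for an institute",
    what=["requirements specification", "design", "coding", "testing"],
    when="deliver in 6 months",
    who={"billing module": "in-house team", "reporting module": "vendor"},
    where={"in-house team": "Kharagpur office", "vendor": "external"},
    how="iterative development with weekly reviews",
    how_much={"developers": 5, "servers": 2},
)
print(scope.why)
```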
(Refer Slide Time: 20:29)
Now, let us look at another preliminary concept: the types of software projects that are being done. We have so far looked at what projects are, how software projects differ from other projects, and why software project management is difficult compared to other project management.
Now, let us look at the types of software projects. There are basically two types: one is called products, also known as generic software, and the second is services, or custom software. Products are the ones which you can buy off the shelf: you can go to the market and get them or order online; these are generic.
For example, a business house wants a software to manage its inventory. For this, you cannot just get software online, install it, and have it done; you need to identify what items are to be managed, where the storehouses are, and so on. So this needs to be developed specifically for that customer; such projects are services, or custom software.
We have seen that the total business is approximately 4 trillion dollars, and right now the total software business is roughly half in products and half in services. But the services segment is growing fast: services projects are becoming much more frequent, while product development is becoming less common.
Why are services projects growing very fast while product projects are becoming fewer? We will identify the reason for that, but before that, note that the generic software or products, also called packaged software, are prewritten and available for purchase; you can buy them online or just walk into a store and get a database management package and so on.
So broadly there are two types of projects: product development projects and services projects. In services projects we have custom software development, where the software is developed entirely for a customer, and tailoring, where an existing software is customized. Product development can be further divided into two types: one is for the horizontal market, where the software is used across many types of customers, for example a word processing software; a vertical market, on the other hand, is one where the product is intended for a specific industry.
For example, a banking software is only for banking; the banks will be the customers, and that is a vertical market. Or let us say you want to develop a software for billing mobile customers; that is for the mobile phone vendors, and that is also a vertical market.
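Purely as an illustrative sketch, not part of the lecture, the classification just described can be written down as a small taxonomy; the labels and examples below simply echo the categories mentioned above.

```python
from enum import Enum

class ProjectKind(Enum):
    PRODUCT = "product (generic software)"
    SERVICE = "service (custom software)"

class ProductMarket(Enum):
    HORIZONTAL = "horizontal market (used across many types of customers)"
    VERTICAL = "vertical market (intended for a specific industry)"

class ServiceKind(Enum):
    CUSTOM_DEVELOPMENT = "developed entirely for one customer"
    TAILORING = "customization of an existing solution"

# Hypothetical examples classified using the taxonomy above:
examples = {
    "word processor": (ProjectKind.PRODUCT, ProductMarket.HORIZONTAL),
    "banking package": (ProjectKind.PRODUCT, ProductMarket.VERTICAL),
    "institute automation tailored per institute": (ProjectKind.SERVICE, ServiceKind.TAILORING),
    "inventory system built for one business house": (ProjectKind.SERVICE, ServiceKind.CUSTOM_DEVELOPMENT),
}
for name, labels in examples.items():
    print(name, "->", [label.name for label in labels])
```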
We have seen, then, that there are two types of software projects: products and services.
(Refer Slide Time: 26:35)
But you can also classify software as either information systems, which store data, process the stored data, and present the data and statistics; typical examples are management information systems, stock control and inventory management, patient management etcetera, and these can be web based or standalone software.
Or as embedded software, which controls hardware; for example automobile control software, nuclear power plant control software, robots, toys, TV remotes etcetera. So, this is another classification; the one we just saw was between product development and services.
So another way of classifying projects is into information system projects and embedded software development projects. With these basic concepts, we have come to the end of this lecture. In the next hour we will cover a few more basic concepts, building on what we have already discussed. We will stop at this point and continue in the next lecture.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 03
Introduction – III
Welcome to this lecture. In the last two lectures we had a very brief introduction to software project management, the motivation for it, how software project management is different from traditional project management, the scope of the project, stakeholders and so on.
In this lecture, we will continue our discussion on software products and services, because these are the two major types of projects that are undertaken, and the software project manager has to manage both of them. We will also look at the major activities of the project manager and at traditional versus modern projects.
(Refer Slide Time: 01:22)
There are two main types of software being developed across the world. One is software product development: software products are the ones that we can buy just like any other product, from a shop or online. Let us say you need an antivirus package; you can order it online, or if you want a CASE tool, you can order it online. The other type of project is software services, where the software is customized.
The packaged software, which is generic in nature, is called a software product; it is available to buy off the shelf, maybe from an online source. It is prewritten software available to everybody; anybody can order it and get it delivered.
The other type of project is development of custom software; this software is developed based on a specific user request. There may be one or a few users who have their own particular requirements for the software. The developing organization has only these few users in mind; the users initiate the project, and the developing organization develops the software for them and delivers it to them. This is not generic software available for anybody to purchase.
But then there is another kind of custom software, where the developer already has a solution and each customer requires small modifications. For example, let us say an academic institute needs to automate its activities, for example student admission, grading and so on. A company might have developed a software for academic institute automation, but every academic institute has its own grading system, admission process and so on, which may be different from other institutes.
Here the developing organization has a software which was developed with some customer or a few customers in mind. Another customer requires small changes, and the developing organization initiates a project where the software is already there; only small changes need to be made in this project.
So, the first one is a product development project, where the developing organization addresses a generic customer requirement and develops the software. The other two are services projects; these are custom software: one is developed for a specific user, and the second just tailors an existing solution. The product type of software can also be categorized into two types: one is for the horizontal market and the other is for the vertical market. The vertical market may be, let us say, banking, telecom billing, medical inventory management, hospital management and so on; these are specific verticals like banking, telecom, hospitals and so on.
The horizontal market, on the other hand, is across all types of customers, for example a database management or an antivirus software. These are sold to generic customers; there is no specific type of customer in mind.
(Refer Slide Time: 06:29)
Here, for example, a management information system may keep track of what the customers require, source these items in bulk, store and distribute them, and also manage the accounts and report various types of statistics: the inventory level, profit and loss and so on.
Another information system may be hotel management software, where guests to a hotel register and pay their bills; hotel room occupancy, profit and loss and so on are handled by the hotel management software. We might have a patient management software for a hospital, where patients register, the doctors visit the hospital at certain times, the patients pay their bills, the hospital pays the doctors, and finally a profit and loss statement is produced.
All these information systems may be either web based or standalone software. A web based software can be operated from anywhere using a web browser, whereas a standalone software needs to be operated centrally. In contrast to information systems, embedded software is mostly used to control some hardware; the software here works closely with the hardware it controls. Examples are automobile control software, nuclear plant control software, the software that controls robots and toys, TV remotes and so on.
Software services, as we are discussing, are a different type of project with different characteristics than software product projects. In a software product project, development is done to create a generic product, whereas software services may involve customization of an existing software or developing certain items on customer request. This has of late become the dominant type of project; many software development projects are software services projects.
But the term software service is an umbrella term; it includes many things: for example, customizing an existing software on a specific customer request, or software maintenance, where the software exists but the customer wants some changes, maybe enhancements, maybe performance improvements and so on. A project to maintain a software is also a type of services project.
Software testing: a software has been developed, but before deployment it needs to be tested, and maybe an organization does just software testing; they are providing a software service. What about an organization which supplies contract programmers to another organization that wants to automate certain activities? The contract programmers do coding or other assigned activities like testing and so on; this is also a type of software service.
But what is the reason that more and more projects are becoming services projects? About 50 years back, the only software projects in existence were software product development projects. Now more and more projects are becoming services projects; let us investigate what is causing this migration from pure product development to services.
One reason is that over the years a lot of code has been written by the company, and whenever there is a new request, the company can most cost-effectively develop the software by modifying some of the existing code. It can find out which code is closest to the customer requirement, whether from one project or from several completed projects, make small modifications to it, and then deliver the software to the customer very quickly and at less cost.
This is also a services project. The main reason why companies do this is that they have completed many projects in the past, they have a lot of code, and they can reuse it to develop the next software that has been requested.
Another reason is that the business velocity has increased tremendously; by this we mean that there is so much competition among different industries and companies that they want things done fast and have become impatient about software development.
Multiyear projects are gone; nobody wants them, and it is expected that a software development work completes in 2 weeks, a month and so on. The way such development requests can be met is by having a services type of project, where an existing software is customized.
The third reason is that program sizes have become large, and therefore it is imperative that most of the code is reused and only a small amount of development, a small customization, is done to complete the work.
Still, worldwide, nearly half of the projects are product projects and the other half are services, although services projects are increasing very rapidly. But if we look at the Indian companies, we find that most of the projects are in the services segment. Why is it the case that we have very few product development projects?
A product development project has the advantage that once the software is complete and successful in the market, the company that developed the product has a steady revenue. For example, once the Oracle software was developed, it gave steady revenue to the company. Whereas in a services project the benefit to the company is less in that respect, because the software is developed and sold for one-time revenue.
But then why have the Indian companies not focused on product projects and instead largely on the services segment? If we look into the reason, the answer is that product projects have an inherent risk: product development involves investing a lot of money in developing something which may or may not be successful.
Whereas for a services project, once it is complete, there is a one-time revenue which is assured. Possibly for this reason, to avoid the risk of investing millions of dollars or rupees to develop something which may be a failure, the Indian companies have largely focused on the services segment.
Now, let us look at the activities of the project manager; what does the project manager do? A large part of the project manager's activity is planning: estimating the duration, the cost and the effort, allocating resources and scheduling.
Then there is staffing: recruiting personnel and motivating them. Once the project has started, the project manager directs the personnel and ensures that the team works as a whole to complete the work; that happens every day. The project manager checks that the work is progressing as per the plan, monitors for deviations from the plan, and takes corrective actions so that any changes to the plan are contained.
It may also be necessary to change the plan continuously, because when we plan initially we cannot foresee many things, and therefore the plan needs to be revised as the project progresses. You can summarize the project management activity as planning the work: to start with you must plan the cost, the effort and the schedule, and then do the staffing; then the project starts, and the work is largely directing, monitoring and controlling, which we call working the plan. So, project management essentially consists of planning the work and working the plan.
The major responsibilities of a project manager are summarized in this picture. To start with, it is the project manager's responsibility to carry out the feasibility study, in which the project manager determines whether it is worth doing the project.
The project manager understands the problem, proposes alternative solutions, estimates the cost, and checks the technical feasibility. The project manager may decide at this point of the feasibility study that the project is infeasible on account of cost considerations, that is, the cost involved in developing the software far exceeds the income from the software, and then the project will be abandoned.
The project manager may also find that technically the project cannot be done by the organization, that it does not have the competency to undertake the project; then there is no further progress and the project is abandoned.
But if it meets the cost requirements and is also technically feasible, then the project starts off. The first activity is planning: the initial plan is made for how the work will be done, and it involves estimating the duration, estimating the effort, and then resource allocation and scheduling; that is what we call the plan. Once the planning is over, the project starts off and we have project execution, where the project manager concentrates on whether the project is progressing as per the plan; here the work of the project manager is largely directing, monitoring and control.
In the feasibility study, the project manager is concerned with two main types of feasibility: is it technically feasible, and is it worthwhile from a business point of view, that is, is it financially feasible? Only when it is found feasible is the planning undertaken; after the planning, the project starts off, and then the work of the project manager is directing the team, followed by execution monitoring and control.
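The two feasibility checks just described can be written out as a toy decision rule. This is only an illustrative sketch, not from the lecture, and the figures used are hypothetical.

```python
def feasibility_decision(estimated_cost, expected_income, technically_feasible):
    """Toy sketch of the two checks described above (illustrative only).

    The project proceeds to planning only if the organization has the technical
    competency AND the expected income from the software exceeds the estimated
    development cost; otherwise it is abandoned.
    """
    financially_feasible = expected_income > estimated_cost
    if technically_feasible and financially_feasible:
        return "proceed to planning"
    return "abandon the project"

# Hypothetical figures: cost 30 units, expected income 45 units, competency available.
print(feasibility_decision(30, 45, technically_feasible=True))  # proceed to planning
```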
(Refer Slide Time: 22:42)
Project planning starts immediately after the feasibility study; if the project is found to be feasible and worth doing, then the planning is done. In the planning, the important activity is estimation.
As we proceed in this course we will look at various estimation techniques; that is a major component of this course. Once the estimation is done, scheduling determines who will do what, at what time, and when it is likely to complete; that is project scheduling. Then there are staffing, risk management, and many other plans that we have called miscellaneous plans here; as we proceed in this course we will look at these other plans.
(Refer Slide Time: 23:40)
Monitoring and control is the major activity once the project starts off; that is, once the feasibility study and the planning are complete and the project starts off, the project manager's main activity is monitoring and control.
Monitoring and control lasts for the entire project duration, from the time the project starts to project completion. Here the project manager needs to check the progress every day, and if there is a deviation between the plan and the progress that has been achieved, the project manager may take some corrective actions. The project manager may also find it necessary to revise the plan.
So the project manager sits down and revises the plan; if there is a problem in the project which is hampering the progress, the project manager takes controlling action. For example, if the design is taking a long time, he might change the roles or get additional designers and so on. Whenever problems emerge, for example a delay in the delivery of the hardware, the project manager may decide to lease hardware to overcome the problem.
So, the project manager comes up with solutions and innovations to problems that emerge as the project progresses. Not only that, the project manager is the main person who liaises with the clients, the top management, the users, the developers and other stakeholders.
(Refer Slide Time: 26:02)
If we summarize our discussion in this diagram, the project management activities start much before the actual product development starts: the project planning, the feasibility study, the project initiation and so on.
The project manager carries out many types of processes, for example project initiation, planning, recruiting, monitoring and so on; those we call project management processes. As the development work starts, the development team carries out its own processes, which we call product oriented processes; for example, they might carry out requirements specification, design, coding, testing and so on.
At the same time, the project manager carries out the project management processes, which span the project life cycle, whereas the product oriented processes are carried out during the development life cycle; the project life cycle is longer than the development life cycle, as shown in this diagram. With this, we will take a break and continue in the next lecture.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 04
Project Management Standards
Welcome to this lecture. In the last lecture we were discussing the project management life cycle and the software development life cycle, and we had said that the project management activities start much before the software development activities. With that, let us discuss further.
In this lecture we will discuss two main concepts: project management standards and life cycle models.
In a typical project life cycle, almost every software starts with a need: somebody in an organization feels the need for a software to automate certain things, and based on that the project proposal is written and approved by the top management. The project manager is appointed, and that forms the initiation of the project. After the project manager is appointed, the project life cycle starts with planning the project, in which various types of plans are prepared: the schedule, the configuration management plan, the risk management plan and so on.
Once the plans have been prepared, the development life cycle starts; that forms executing the plan, during which the project manager directs the development team to proceed according to the plan.
As the project progresses there can be several deviations from the plan, and for this the project manager needs to perform some control activities; maybe there is some bottleneck due to which the project is not progressing as per the plan. The project manager works to remove the bottleneck; for example, there may be a shortfall in technical personnel or in hardware equipment, and the project manager proactively removes those hurdles so that the project proceeds as per the plan.
The project manager might also have to rework the plan, because in spite of all this there will be delays, or some part of the project may get completed more quickly than anticipated, and so on. Once the project completes, the project manager carries out the project close-out activities. As we proceed, we will look at all these activities in more detail.
(Refer Slide Time: 04:05)
During the project life cycle, the project manager carries out the project management processes. Let me repeat: during the project life cycle, the project manager carries out the project management processes, which are concerned with planning and controlling the work of the project. Here the project manager performs the planning of the project and executes the plan, that is, directs the team to carry out the development work; whenever there is a deviation, the project manager takes controlling action and removes the bottleneck so that the project proceeds as per the plan.
In contrast, the development team carries out the product oriented processes, which are concerned with specifying and creating the project products: the requirements specification process, design process, coding process, testing process and so on. Those we call the product oriented processes; they are carried out by the development team, whereas the project manager carries out the project management processes.
Let us see what the project management processes are. They consist of initiating the project, in which the project manager writes the business case and the project charter; as we proceed we will look at the business case and the project charter.
The project manager also carries out the planning processes, that is, completes the work breakdown structure and performs the project scheduling, cost estimation and so on. Then the project manager carries out the executing processes, which basically means performing the actions necessary to complete the work as outlined in the plan; but during execution there may be deviations from the plan.
So the project manager needs to carry out the monitoring and control processes, in which the project manager checks whether the project is proceeding as per the plan or there are deviations, and when there are deviations takes corrective actions to match the progress of the project with the plan. Finally, the project manager carries out the closing processes, where the closing documents are created and the customer gives the formal acceptance of the project.
Several guidelines have come up for the project manager to carry out the project management processes. There are two main types of project management standards. One is off-the-shelf project management standards, which are described in books and so on; the popular ones are the PMBOK, that is the Project Management Body of Knowledge, PRINCE2, ISO 21500 and so on. There are many others; we have just given some important examples here.
On the other hand, an organization may have a slightly different project management standard as compared to the off-the-shelf standards; it might tailor one, or take different aspects from different standards, and have a tailored in-house project management standard which that specific organization follows.
It is a good idea to know what is involved in the PMBOK, PRINCE2 and ISO 21500. In this course we will not restrict ourselves to any one standard; our discussion will be more on the concepts that are used across all these standards.
PRINCE2 is a popular project management standard. PRINCE stands for PRojects IN Controlled Environments. It is not restricted to software development alone; the original PRINCE was more targeted at IT or software project management, whereas PRINCE2 has become a more generic project management standard. It was developed by the United Kingdom's Office of Government Commerce, is very popular in the United Kingdom, is a process-driven approach, and has been used for both small and large projects.
There are essentially seven processes in PRINCE2: starting up a project, initiating a project, directing a project, controlling a stage, managing product delivery, managing a stage boundary, and closing a project.
(Refer Slide Time: 11:03)
If we represent that pictorially, there is a start-up process, and then two main processes which exist throughout the project: directing a project, which is basically execution of the project, and planning, which continues throughout, because once the initial plan is made it is continuously revised.
Once the initial plan is complete, the project is initiated and the project goes through various stages. In each stage, the project manager directs the project activities, the project team carries them out, and the project manager monitors and controls the stage. At the end of a stage, a new stage needs to start; this is called managing the stage boundary. The stages are taken up one by one, and once all the stages are complete, the closing processes are undertaken by the project manager.
In the ISO 21500 life cycle also we have the initiating processes, the planning processes, the implementing (directing or executing) processes, and finally the closing processes; throughout, the controlling processes are carried out by the project manager.
47
The PMBOK guide is published by the Project Management Institute (PMI), and this guide is
used as a support to prepare for the certification offered by the PMI.
In the PMBOK there are various process groups, and each process group consists of several
processes. There is the initiating process group, in which the project initiation activities are
carried out by the project manager. As the project takes off, the planning processes are
carried out by the project manager: the initial plan is made and the plan is continually
refined. In the executing processes the project manager directs the project team to carry out
various activities, and at the same time the controlling processes are carried out: the project
manager monitors whether there are deviations from the plan and then takes controlling actions.
Once the project completes, the closing processes are carried out.
So, these are the 5 main process groups in the PMBOK: the initiating processes, the planning
processes, the controlling processes, the executing processes and the closing processes. As you
can see, across all these different project management standards many things are
common: initiation, planning, executing or directing, monitoring and
controlling, and finally the closing processes. As I said earlier, we will not restrict ourselves to
any one project management standard, but will look at the
project management processes across all these different project management
standards.
48
The initiating processes are the ones carried out first by the project manager in
the project lifecycle. The main goal of these processes is to start off the project. The
outputs of these initiating processes are that the project manager is selected, the key
stakeholders are identified and the project charter is completed. For the project planning
processes, the outputs include the work breakdown structure, the project schedule in the
form of a Gantt chart, the identification of all dependencies and resources, and a list of
prioritized risks.
If we compare two popular project management standards, PRINCE2 and the PMBOK,
we can see that the origin of PRINCE2 is the United Kingdom, whereas the origin of the
PMBOK is the United States. PRINCE2 is administered by the APMG, the
Association for Project Management Group, while the PMBOK is administered by the Project
Management Institute. Both are very popular and used worldwide, but the PMBOK is used more
extensively than PRINCE2.
49
Now let us look at the life cycle models, because the project manager executes the
project management processes to direct the project
team to carry out the development work. At different stages the project team carries out
different activities and therefore, as project managers, we must be clear about the various
life cycle models: the stages through which the development proceeds, the stage
boundaries and so on. Here we will emphasize only the basic concepts.
Intuitively, a software life cycle consists of the following: first somebody thinks of the
project, the need for the software; then the project is specified, designed, coded, tested and
finally delivered; and once it is delivered and used it needs maintenance and it undergoes
50
maintenance for a number of years. As the software gets used over many years, maintenance
activities proceed, and finally, once the software is not used anymore, it is retired.
So, this is the intuitive model of software development: conceptualize, specify, design,
code, test, deliver, maintain and retire. Based on this conceptual model of the software
life cycle, various life cycle models have been proposed in which the sequencing of these
activities varies.
The life cycle model is also called a process model or software development lifecycle
model. It is a diagrammatic representation of the software life cycle and it is also
accompanied by a description. The software life cycle model essentially identifies all the
activities that are undertaken during product development. It also establishes a
precedence ordering among the activities and it divides the life cycle into phases.
51
And each phase consists of several sub-activities; for example, the design phase may
consist of the sub-activities structured analysis, structured design,
design review and so on.
For very small projects, let us say a student assignment, the student unconsciously follows
a build and fix model. It is not a formal life cycle model, but it comes intuitively to the
student. For very small projects like a student assignment, where the
problem is within the grasp of an individual, the build and fix model works. As the name
says, first do the initial building, that is, write the code for the project, and then test; as
long as the code does not pass the tests, carry out fixes, test again, carry out fixes; again
52
test, until the developed code passes all the tests. This is also called the exploratory
model; here the problem is very small and the developer has the flexibility and freedom to
undertake whatever activities he feels like.
So, there is a lot of flexibility built into the build and fix model. For example, the
developer may first code, then test, do a little bit of design, then test, fix and so on; or may
code, then do some design, test, change the code and so on; or may specify, then code,
design, test, etcetera. So, the build and fix model is a very informal model
suitable only for projects that are rather trivial, like a student assignment.
But when the development is to be carried out by a team, a formal process model has to
be followed; there must be a precise understanding among the team members about what
the activities are, which activity needs to be done after which, who will do what and so on,
otherwise it will lead to chaos and project failure.
53
If the build and fix model is used for developing a non-trivial project and the team
starts using it, then maybe one of the developers will start writing
the code, another will try to design, a third one will write the test document,
another may define the I/O for his portion first, and so on; this way the
project will not succeed. There will be too much delay, there will be a lot of idling and
cost escalation, and very poor quality code will come
out. So, an informal model like the build and fix model is not suitable for non-trivial
project work.
54
Before we look at the different project lifecycle models, you must know what the phase
entry and exit criteria are. A life cycle model, as we said, consists of various phases, and
each phase starts when certain criteria are met; we call these the phase entry
criteria. Only when those criteria are satisfied does the phase start, and when some other
criteria are satisfied the phase ends. So, for every phase it is necessary to define the entry
and exit criteria for that phase.
For example, what is the phase exit criterion for the software requirements
specification phase? The exit criterion is that the requirements specification document
must have been completed, reviewed and approved by the customer.
Similarly, a phase can start only when its phase entry criteria have been satisfied.
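To make the idea concrete, here is a small illustrative sketch, not part of the lecture slides, of how a phase and its entry and exit criteria might be recorded and checked; the phase name and criteria strings are only examples drawn from the discussion above.

from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    entry_criteria: list[str] = field(default_factory=list)
    exit_criteria: list[str] = field(default_factory=list)

    def can_start(self, satisfied: set[str]) -> bool:
        # The phase may start only when every entry criterion is satisfied.
        return all(c in satisfied for c in self.entry_criteria)

    def can_end(self, satisfied: set[str]) -> bool:
        # Completing every exit criterion marks the end-of-phase milestone.
        return all(c in satisfied for c in self.exit_criteria)

srs = Phase(
    name="requirements specification",
    entry_criteria=["feasibility study approved"],
    exit_criteria=["SRS completed", "SRS reviewed", "SRS approved by customer"],
)
# The review is done but customer approval is pending, so the phase has not ended yet.
print(srs.can_end({"SRS completed", "SRS reviewed"}))   # prints False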
55
Another important concept is milestones. A milestone is very important for the project
manager because, using milestones, the project manager keeps track of the progress of
the project. There are various types of milestones, but one important category is the
phase entry and exit: when a phase starts after the phase entry
criteria have been met, an important milestone is reached.
Similarly, when the phase completes and the exit criteria are satisfied, the project
has definitely made some progress and that forms another important milestone. With this
discussion we will conclude this lecture, and in the next lecture we will discuss a few project
lifecycle models.
Thank you.
56
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 05
Life Cycle Models - I
Welcome to this lecture. In the last lecture we discussed the software development
lifecycle. In the software development life cycle, the project, starting from the concept, is
developed through various activities. The software development lifecycle essentially defines
all the activities that are carried out and also the ordering among those activities. There is
an intuitive concept of the activities carried out during software development,
but, as we will see, various life cycle models structure these
activities in different ways. Let us look at some popular life cycle models.
In this lecture we will discuss the waterfall model, the V model, the evolutionary model
and the prototyping model.
57
(Refer Slide Time: 01:28)
The project lifecycle is very important for the project manager because,
based on the lifecycle, the project manager can track the progress of the project.
There are various milestones in the lifecycle and, as the milestones are met, the project
manager can tell how much progress has been achieved or at which stage the project is.
Without a lifecycle model the project manager is handicapped. It becomes very difficult
for the project manager to monitor and control the project; he does not know how much
work is remaining, and therefore the project gets delayed and the cost rises. Therefore,
58
for every non-trivial project the project manager uses a suitable lifecycle model and the
development team follows it. Without a lifecycle model, a problem known as
the 99 percent complete syndrome usually occurs. Here the project manager has no way of
finding out the progress of the project other than asking the team members how
they are doing and how much they have completed.
Typically the development team is very enthusiastic and they optimistically answer that
the work is almost done, only a small thing remains, and each time you ask them they say
they are nearly done. So, the project manager thinks that the project is about to
complete, but that is far from true; it may take many more months, or even years, after the
project manager hears that the work is almost done. That is known as the 99
percent complete syndrome. It occurs when the project manager has no other way to
track the progress of the project than asking for the opinion of the team members
on how far they have progressed.
As we were discussing, the project lifecycle gives the project manager
various milestones and stage completions by which the project manager can track
the progress of the project more meaningfully.
Another very basic concept that we will discuss before looking at the lifecycle models is that,
as the project progresses, various deliverables are produced. It is a myth to say that the
working program is the only deliverable of a project; not really. There are a large number of
59
deliverables that are produced as the project progresses, and these include the documentation
of all aspects of development, for example, the specification, the design, etcetera.
We will discuss the waterfall, V, evolutionary, prototyping and spiral models.
These we will call the traditional models. They have been used for traditional projects,
where the project is a product development project starting from scratch, but for services
type of projects typically the agile models are used.
We will first discuss the traditional models: the waterfall, V, evolutionary,
prototyping and spiral models. We will find out what the shortcomings of these models are
and why they are not so suitable for services type projects, and then we will discuss
the agile models and see how they overcome those problems.
60
(Refer Slide Time: 06:15)
The software life cycle, as we have already said, is a series of stages during the
development of the software. Intuitively, every software has a feasibility study, during
which it is decided whether to take up the project or not, followed by requirements analysis
and specification, design, coding, testing and maintenance.
So, these are the stages that intuitively every software undergoes: feasibility study,
requirements analysis and specification, design, coding, testing and maintenance.
61
The classical waterfall model fits very closely the intuitive model of software
development. Here the activities are carried out one after another. First the feasibility study
is undertaken, then the requirements analysis and specification work is undertaken. Once
that is complete the design is taken up, then coding and unit testing are undertaken, next
integration and system testing are undertaken, and finally the maintenance phase is
undertaken.
This is a very close fit to the intuitive model of software development, and it is called
the classical waterfall model because all other lifecycle models
have been developed based on it. This is the classical model and all other models are
derivatives of it. So, this is according to the conceptual model of software development we
were discussing.
If we represent the classical waterfall model in diagrammatic form, we can see the
different phases: feasibility study, requirements analysis and specification,
design, coding, testing, and maintenance, with a transition from one phase to the
other. If we look at it, it looks like a series of waterfalls, and from this the name
waterfall model comes. This is the simplest and most intuitive model.
62
(Refer Slide Time: 09:00)
Out of all the phases in the classical waterfall model, which phase takes the most effort?
The phases between the feasibility study and testing are called
the development phases, because here the product is developed and delivered to the customer,
and after that the maintenance phase starts.
If we consider all the phases, the development phases and the
maintenance phase, then the maintenance phase consumes the maximum effort. But
among the development phases, the testing phase consumes the maximum effort. You
may wonder whether it is really the testing phase that consumes the maximum effort; the
answer is that we are discussing commercial software, not a student
project or a student assignment.
The student just checks that it is working for a few inputs and then submits it, whereas we
are discussing commercial software, which has to be extensively tested; the
quality needs to be maintained because the organization's name is at stake. You cannot
just give buggy software to the customers, and therefore the testing phase
consumes the maximum effort for every non-trivial commercial software.
If we represent the effort in the form of a diagram, the maintenance phase takes the
maximum effort, but among the development phases testing is
63
the one which takes the most effort, followed by coding, design and requirements
specification.
Most organizations document the software lifecycle model they follow. They
not only identify all the phases and the activities that take place in each phase, they
also identify the outputs or deliverables that are produced at the end of a
phase, the entry and exit criteria for the phase, and the methodologies
that are to be used by the developers in the various phases. This forms a process
model document and is an important reference for all the team members.
64
(Refer Slide Time: 12:04)
As we were saying, the classical waterfall model is a very intuitive model and other
models are derivatives of it. The phases, and the development methodology that the
organization uses during the various phases, are all documented in the form of a
process model book given to the developers, and they must understand this well before
they undertake the development activities.
The first activity in the waterfall model is the feasibility study. The project manager
is responsible for carrying out the feasibility study and needs to identify
65
whether the project is feasible, so that it can be undertaken by the development team.
There are 3 dimensions of feasibility, or 3 types of feasibility, that the project manager is
concerned with.
The first one is technical feasibility: whether the development team has the competence
and capability to undertake the project. For this the project manager needs to understand
various aspects of the development to be undertaken and then assess whether the team
he has can technically complete the project. Another important
dimension of feasibility is economic feasibility: whether, with the budget that
would be available, it would be profitable to carry out the project. This is also called
cost-benefit feasibility.
The third dimension is schedule feasibility: whether the customer agrees to the time the
project manager estimates for the project to complete. If the
customer wants the project to be completed in 2 weeks and the project manager estimates
that it needs at least 3 months, then the project manager would say that it is not
feasible schedule-wise. Only when all 3 feasibilities are satisfied does the project manager
agree to undertake the project.
66
During the feasibility study the project manager needs to understand what the software is
required to do, what the customer wants out of the software, the data which would be input to the
system, the processing that is needed on the data and the outputs that are to be generated.
Based on that, the project manager identifies how many developers need to
work and for what duration.
Based on that, the project manager identifies the cost of developing the project and
compares it with the budget that the sponsor of the project or the customer provides;
from this the project manager finds out whether it will be financially
worthwhile. Based on the activities required as part of the project, the project manager
identifies whether it is technically feasible, roughly identifies the
time it will take, and based on that, whether it would be feasible schedule-wise.
Here is a small case study, just to illustrate what the project manager needs to do to
carry out the feasibility study, because carrying out the feasibility study for a project is an
important activity for the project manager.
This is a Special Provident Fund Scheme for Coalfield Limited. Coalfield Limited is a
coal mining company, and mining is a risky profession. In addition to the ordinary
provident fund available to the miners, the company proposes to have a Special
Provident Fund Scheme, and for this it has invited bids. The project managers
visit the company and understand what the company needs. They find that Coalfield
Limited has a number of employees exceeding 50,000; the majority of these employees are
67
casual labourers, and mining being a risky profession, casualties are very high. Though
there is a Provident Fund Scheme, the settlement time is very high, and there is a need
for a Special Provident Fund Scheme where the fund can be disbursed immediately.
Here the manager first visits the main office of the company, finds out what
functionalities are required from this software, and finds that there are various mine sites where
data will be input about the miners who are working on a specific day and the
contribution they make for that day towards the Special Provident Fund Scheme.
So, the project manager visits some mine sites, finds out what data is to be input, how
much data, in what format and so on, and then suggests alternative solutions. For example,
there may be local databases at the mine sites which are periodically uploaded to
the central database at the company's head office, and based on that, settlements would be
made when there are claims for the Special Provident Fund Scheme; but with
this the project manager finds that there will be a delay, because claims cannot
be settled instantly.
But another solution may be that the local mine sites only upload the data and these are
immediately updated at the main office. There are no local databases at the mines;
these are just data entry stations where the data is immediately updated on the server at
the main office of the company. But the problem here is that an extremely reliable
communication link is required, because the data needs to be instantly updated on the main
68
computer, whereas in the other option, where there is a local database, the communication
link may not need to be so reliable, because the data at the main office is
updated only once in a while. The cost of these different solutions is worked out by the project
manager, who then discusses with the Coalfield officials which option would be suitable for them,
what the cost of each one is and what budget is available; based on that
the project manager makes a go/no-go decision.
So, this is the typical sequence of activities that occurs: the project manager
understands what the main requirements for the software are, what
alternative solutions can be proposed to provide the required functionalities,
what the cost of each of these alternatives is, and what the best solution
among them is; based on that, whether it is financially feasible,
technically feasible and schedule-wise feasible is determined by the project manager.
That is what we have just written down here: the project manager first develops an
overall understanding of the problem, formulates the different solution strategies and
examines the alternative solutions in terms of the resources required, the cost of
development and the development time.
69
(Refer Slide Time: 22:16)
He then performs a cost-benefit analysis, and based on this the project manager finds either
that it is feasible to carry out the project or that it is infeasible due to high cost, resource
constraints, technical reasons or schedule reasons.
The cost-benefit analysis is an important activity during the feasibility study. Here the
project manager needs to identify all costs, which can be development costs, setup
costs and operational costs, and also identify all the benefits that would accrue. For example,
the benefit may not only be the payment that the company gets, but
70
also the experience it builds up, the reusable software that it creates and so on. At the end
of the cost-benefit analysis, the project manager needs to check whether the benefits are
greater than the costs for the project to proceed.
So, this is what I have just shown pictorially here: if the benefits are more than the
costs, then the project is undertaken. The costs include development, operation, setup
etcetera, and among the benefits some are quantifiable, like how much the customer is
willing to pay, but there are also non-quantifiable benefits such as the experience gained,
the reusable software and so on.
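As a small illustrative sketch, and not something from the lecture slides, the go/no-go check of a cost-benefit analysis could be written as follows; all the figures used here are made up purely for illustration.

# Hypothetical cost and benefit figures (say, in lakhs of rupees); they are not
# taken from the lecture and serve only to illustrate the check.
development_cost = 40.0
setup_cost = 5.0
operational_cost = 10.0
quantifiable_benefit = 70.0   # e.g., what the customer is willing to pay

total_cost = development_cost + setup_cost + operational_cost
# Non-quantifiable benefits (experience gained, reusable software) are recorded
# in the business case but are not added into this simple numeric check.
if quantifiable_benefit > total_cost:
    print("Benefits exceed costs: the project can be undertaken")
else:
    print("Costs exceed benefits: the project is not economically feasible")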
71
(Refer Slide Time: 24:11)
One of the important activities of the project manager is to write the business case.
We had said that writing the business case is one of the initiating processes. The
feasibility study, once it is undertaken, helps the manager to write the business case. Here
the project manager provides a justification for starting the project, shows that the
benefits of the project exceed the costs, and also identifies the business risks.
The business case is a very important document, and writing a business case is an
important initiating process; almost anybody aspiring to
72
become a project manager needs to write a business case at some time or other. For
writing a business case, one needs an executive summary of what is required, the
project background, the business opportunity, that is, the benefits that the
project will bring and what we will lose if we do not do the project, the various costs
that will be incurred as the project progresses, the benefits, and the risks and
how these risks will be managed.
The business case is submitted to the top management. This gives them an overview of
what is involved in the project, how it will help the company, what the costs and
benefits are, what the risks are and what the plans for risk management are. This
forms an important project initiation process. With this discussion we will stop here
and continue in the next lecture.
Thank you.
73
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 06
Life Cycle Models – II
Welcome to this lecture. In the last lecture we discussed the life cycle model,
especially the classical waterfall model, and as part of the life cycle model we
looked at the various activities that are undertaken during the development process. We
spent some time on the feasibility study, which is an important activity undertaken by the
project manager; with the help of a case study we identified the activities
that are taken up by the project manager during the feasibility study, and at the end of the
feasibility study the business case is written. Writing the business case is one of the
initiating processes, and we saw how an effective business case can be
written.
Now let us proceed further. In this lecture, based on the discussions that we had in the last
lecture, we will discuss the iterative waterfall model, and we will look at the V model, the
evolutionary model and the prototyping model; that is the plan for this lecture.
74
(Refer Slide Time: 01:42)
The classical waterfall model is an idealistic model; the main problem is that it assumes
no defect is introduced during any phase: each phase is completed, the next phase
starts, and so on until the project completes. But in practice a lot of defects are
introduced in almost every phase of the lifecycle.
That is the reason why it may be necessary to go back to a previous phase once a
defect is identified. For example, it may be identified in the design phase that a
functionality is missing from the requirements specification; then we need to take up the
75
requirements activity, rework the requirements specification document and then come back
to the design phase again; that is the reason why we need the feedback paths. Once a
defect occurs in a phase, that is, a mistake is made by a developer, it often remains unnoticed
until a later phase, and once it is noticed in a later phase, it is then removed.
But the question here is: why is it that the later the phase, the more expensive it is to correct
the defect? The answer is not far to seek. If a defect occurs and it is
removed immediately, then that is the best thing; the cost will be minimal. But let us say
the defect is removed after one phase, that is, in the next phase; then only two phases are
affected and only two phase documents need to be changed. But if a defect
occurs in the requirements specification phase and it is discovered during testing, then not
only does the requirements specification document have to change, the design needs to be
changed and the code needs to be changed; these are a lot of activities and a lot of
documents need to change. That is the reason why the later the phase in which the
defect gets detected, the more expensive is the removal of the defect. This is a very
important concept and it is known as the phase containment of errors.
So, that is what is written here: once the defect is detected, the work in the
previous phases has to be redone. Therefore, if the defect has crossed
many intermediate phases before it is detected, then a lot of rework is needed.
76
(Refer Slide Time: 05:24)
The iterative waterfall model is very similar to the classical waterfall model, so you
can see the waterfall here, but there are also feedback paths; these feedback paths make
the model realistic, unlike the classical model where one phase completes, then the
next phase completes and so on. Here we assume that there can be a defect which gets
noticed during the testing phase, and we need to go back to the design phase or maybe
the requirements analysis phase, fix that defect, and not only that, fix the work in
the later phases as well.
77
As we were saying, it is an important concept that when mistakes
are made by the developers they must be fixed in the same phase; the ideal is that the
defects are detected and fixed in the same phase, which incurs the lowest cost. The more
delay there is in detecting a defect, the more expensive it will be.
If a design defect is detected in the design phase itself, then we just need to change the
design document, but if it is detected during the testing phase, not only does the design
document need to be changed, the code needs to be reworked as well, and that will be much
more expensive. So, this is called phase containment of errors; it is an important
principle that, once a mistake or error occurs, it must be contained to
the same phase.
The principle of detecting errors as close as possible to their point of introduction is called
phase containment of errors; it is a very important principle and very intuitive also. So, this is
the iterative waterfall model: the phases are identical to the classical model except
that feedback paths are available.
78
(Refer Slide Time: 07:57)
The main strengths of the waterfall model are that it is very intuitive and easy to understand;
all the developers can very easily understand and use the model. The milestones are clearly
understood by the team, the project manager can easily monitor the progress, and it
provides the required stability. The project manager can have strong
control, put in scheduled milestones and monitor them, but the model has a lot of deficiencies.
One of the major problems with the waterfall model is that it is not very suitable for handling
changes to the functionalities, but in almost every software project the requirements
79
change; there are many reasons why the requirements may change. For example, the
customer may not have thought that some functionalities would be required,
because one cannot grasp everything that is required at the
beginning of the project, but as he looks at the developed functionalities he can make out that
certain other functions will be required. The other reason is that, as the development takes
place, the business itself might undergo some change and therefore changes will be required.
So, there are many reasons why changes will be required during the development of the
software, but in the waterfall model the requirements are fixed upfront, then the
design, coding and testing are undertaken, and no way is provided by the waterfall
model to make any changes; this is possibly the biggest shortcoming of the waterfall
model. The other shortcoming is that a lot of documents are produced as phases are
completed: the requirements are complete, the design is complete, documents are produced,
the code is complete, but during testing it is found that most of the requirements are not met.
So, the project manager gets a false impression of progress, that the
project is proceeding nicely according to the plan, but during the testing phase finds out
that very little has actually been accomplished. The main reason for this is that the integration
is a big bang at the end: till then everything seemed fine, but during integration it is found that
most of the functionalities are not working.
Another problem with this model is that, once the customers give the requirements, they
have no chance to see the system till the end, when it is delivered to them. It would have
been really good if the customers were involved; they could see the
functionalities that are being developed and could have suggested modifications
and so on. These are the major difficulties with the waterfall model: the biggest one is
that it does not have the flexibility to handle requirement changes; there is a false impression of
progress because phases are getting completed, but integration is one big bang at the end,
where it is noticed that the progress is very little and most of the functionalities are not
working; and the customer involvement is very minimal, so the customer cannot give
suggestions.
80
(Refer Slide Time: 12:28)
But the waterfall model was popular and is also successfully used in some types
of projects. The projects for which the waterfall model is suitable are those where the
requirements are very well understood and known to the customer: they have used similar
software, they know what all is required; it is not that they are thinking of a new type of
software, it is a regular software. The requirements are well known and stable, the technology is
understood, and the development team has experience with similar projects.
So, as you can see, the project is a rather routine project where the development
team has done similar work and is just trying to develop one more of the same; you can
say that the challenges here are less because the requirements are known and what the
software will finally look like, what its functionalities will be, is very well known. The
technology is understood, the development team has experience with similar projects, and for
such simple projects the waterfall model can be used; it happens to be the most efficient way to
complete these projects.
81
(Refer Slide Time: 14:13)
The iterative waterfall model has its use; we saw that it can be used for simple projects.
But what about the classical waterfall model? This was the idealistic model
where it was assumed that there is no defect in any phase and all phases proceed
idealistically. The use of this model is that the final documentation should appear as if the
software was developed using a classical waterfall model, with no backtracking, no
mistakes, nothing; if the documents are written that way, it facilitates their comprehension.
82
Just to give an analogy, you may be trying to prove a mathematical theorem or maybe
trying to solve a mathematical problem, and you might many times do
rework, make mistakes, go back to the previous step and so on. But if your final
document for the problem records all these things, where you made a mistake, where you
went back and corrected it and so on, then it becomes extremely complicated for somebody
trying to understand it.
The best way for somebody to understand is if you give him the correct steps as if you had
not committed any mistakes, and that is what the classical waterfall
model basically is: all phases proceed without any mistakes. Now, let us look at the V model;
this is a variant of the waterfall model, a small derivative of it with only small changes;
essentially all the activities of the waterfall model are there.
The V model emphasizes verification and validation activities
throughout the lifecycle. We had seen that in the waterfall model the
testing activity comes only at the end: initially there is requirements specification, then
design and coding, but the testing activities are only at the end, and that is the reason why
the testing activities take a long time. The other problem is: what do the testers do
till the testing phase starts?
The V model gives an answer to that; it emphasizes verification and validation, and normally
this model is used where the software is supposed to be more reliable. For example,
83
for software being used in critical applications the V model is used. Here, in every
phase of development there are some test activities, for example, planning the test cases,
which may get executed later; in every phase some test activities occur.
This is the diagrammatic representation of the V model; the representation
looks like a V, and that is the reason why it is called the V model. During the
requirements specification, the system test cases are developed, which are executed during
system testing; during high-level design the integration test cases are developed; and during
the detailed design the unit test cases are developed, which are used during unit
testing after the coding is complete. As you can see, there is an emphasis on
verification and validation activities, and in every phase there are some test activities, unlike
the waterfall model where the test activities occurred only during the integration
and system testing.
84
(Refer Slide Time: 19:14)
This slide just summarizes which test activities occur in which phase: during the
requirements analysis and specification the system test case design activities occur, during
high-level design the integration test design activities occur, and during the detailed design the
unit test design activities occur.
The strength of this model is that it emphasizes verification and validation; it is very
similar to the waterfall model except that every phase has some verification and
validation activities.
85
(Refer Slide Time: 19:58)
But it has many weaknesses, just like the waterfall model. It does not support
overlapping of phases, that is, one phase has to complete before the next phase starts.
What if a few of the developers complete first: should they wait for
the other developers to complete and just idle, or can they get started with the next phase?
This model does not allow that; they wait for the whole phase to complete and only then
can they start the next phase. It also does not easily accommodate later changes to requirements
and does not provide support for effective risk handling.
86
So, these are some of the problems of the V model, but it is the natural choice for
systems requiring high reliability, and it is used when all requirements are known upfront
and the technology is known. Now, let us look at the prototyping model, which is also a
small variation of the basic waterfall model.
If we look at the diagrammatic representation of the prototyping model, the initial phase
here is prototype construction, and after that the design, coding etcetera are carried out.
The activities are organized just like in the waterfall model, but before starting the
actual development a working prototype must be built. What is a prototype? A prototype
is a toy implementation of the system which has limited functionality, low reliability
and so on. But the question is: why do we need to develop a prototype before we start
to develop the system; what are the advantages of developing the prototype?
87
(Refer Slide Time: 22:24)
The first thing is that once we develop the prototype we can know what all
requirements should be there and whether any requirements are missing. If the
customer can look at the prototype, then we can find out what the missing
requirements are. There is improved communication with the customer and there is user
involvement, because unlike the classical or the iterative waterfall model, where after
the requirements are given the only thing the customer sees is the delivered software,
in this model the prototype is constructed and given to the customer for
his comments. Also, since a prototype exists, it reduces the documentation and reduces the
maintenance costs. And if there are technical risks that the development team
anticipates, then they can address those risks through the prototype construction.
88
(Refer Slide Time: 23:51)
The reason for developing the prototype is to illustrate to the customer what the
software is going to look like: what the input data formats, messages, reports and
interactive dialogues will be. Regarding the technical issues associated with the product
development, the developers can experiment and find out the best way to handle them;
sometimes the major design decisions depend on issues like the response time of the hardware
controller.
By developing the prototype, the designers can base their solution on the
response time of the hardware controller upfront, rather than
proceeding through all the development activities only to find out that the hardware
controller response time is slow and they need to change everything. So, developing a
prototype helps in getting customer feedback, many technical issues are
clarified and the development can proceed correctly.
89
(Refer Slide Time: 25:31)
Also, when a prototype is developed, once it has been used for illustrating the system to the
customer it is thrown away and the actual system is developed, but the experience that the
development team gets in developing the prototype goes a long way in developing
good software.
In the prototyping model, first the approximate requirements are obtained, a quick
design is made and then the prototype is constructed using various shortcuts.
90
The shortcuts can be inefficient, inaccurate or dummy functions, for example a table lookup
rather than writing the code for the actual computation.
We can represent that in this diagram: initially a quick design is done based on the
requirements that have been gathered, and the prototype is built. It is submitted to the customer
for evaluation; based on the customer feedback, the requirements are refined, again a quick
design is done, the prototype is changed and so on, until the customer is satisfied with the
prototype; then a waterfall model is used for developing the software.
91
So, it is a small variation of the waterfall model; only the initial prototype
construction is added here.
92
But does the prototype construction add to the cost and make the development more
costly? No, not really, because for systems with unclear user requirements and
unresolved technical issues, prototyping is actually a cost-effective way of developing
the software: without a prototype we might have to change the software many
times, and that would make it much more expensive, incurring massive redesign
costs.
Even though there is a small upfront cost in developing the prototype, that will
be more cost effective than not using a prototype and finally finding out that the
customer has a lot of dissatisfaction and suggests many changes, and also that the technical
issues are unresolved, as a result of which many changes occur. With this discussion we will
stop here and continue discussing a few more lifecycle models in the next lecture.
Thank you.
93
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 07
Life Cycle Models – III
Welcome to this lecture. Last time we were discussing life cycle models, and we
had said that the waterfall model is an intuitive and important model and many models
have been derived from the waterfall model.
We had looked at some of the derivatives of the waterfall model, and in today's
lecture we will first see how the waterfall models have fared, what the good things
about them are and what their shortcomings are. Then we will discuss
other models which overcome specific shortcomings of the waterfall model. There are
some severe shortcomings of the waterfall model because of which it is not as
popular as it was about 30-40 years back.
One of the main problems being faced is that the types of projects being
handled now are different from those handled 30-40 years back. Now the projects
are short and the code is not written from scratch, because a lot of code is available and a lot of
code is being reused; only small modifications and tailoring are done, which we call
service-oriented software.
94
So, today we will first look at some reflections on the waterfall-based models, their
advantages, disadvantages etcetera, and we will see the models that have
been proposed to overcome the specific shortcomings of the waterfall-based models. We
will look at the incremental, the RAD and the evolutionary models.
First let us look at the major difficulties of the waterfall-based models. Possibly the most
important difficulty is that there is no way change requests can be handled in the waterfall
model. In all waterfall-based models, that is, the iterative waterfall model, the classical
waterfall model, the prototyping model, the V model and so on, the requirements are
gathered up front and documented, it is assumed that these will not change,
and then the plan is made based on these documented requirements, the design is done on these
requirements and then coding and testing are done with respect to these requirements.
But in reality, in present-day projects the requirements keep changing as
development proceeds. Typically, about 40 to 50 percent of the requirements change after
the initial requirements specification, and this is possibly one of the major shortcomings
of the waterfall-based models. The second problem is that many of the projects now are for
customizing applications, as we were discussing about service-oriented software, where the
organisation has some software and tailors it for the specific needs of another
customer. If we use the waterfall model for these custom applications there will be huge
95
costs, because the entire thing has to be documented: the specification laid out, the design and so
on.
The third problem is that the waterfall model is called a heavyweight process. It is
called a heavyweight process because a lot of documentation is produced as part of the
process; everything is supposed to be documented, starting from the requirements
specification, the reviews and review results, the design, various types of plans; everything is
to be documented, and that is the reason these are called heavyweight processes.
It is estimated that typically 50 percent of the effort of a development team goes towards
creating documents, and naturally the project costs increase and there are project delays.
Especially now that project durations are very short, such heavyweight
processes are not finding favor, and we need processes which do away with these
problems.
Let us elaborate a bit: in the waterfall model, requirements are determined
and documented at the start and are fixed from that point on; no changes are permitted
and long-term plans are made based on these documented requirements. The design is
made based on these documented requirements; even the test cases are written based on
these documented requirements.
96
Therefore, any change to these requirements will require changes to all the documents, and
those who use the waterfall model heavily discourage the customer
from making even small changes.
But let us look at the observation of Frederick Brooks, who is one of the gurus in this
area: the assumption that one can specify a satisfactory system in advance, get bids for
its construction, have it built and install it. This assumption is fundamentally wrong, and
many software acquisition problems spring from it.
Just to explain Frederick Brooks's thought: he felt that many of the project failures, project
delays etcetera are the result of having the requirements worked out at the beginning of
the development and then having all the development proceed based on the documented
requirements; this is fundamentally wrong and many of the problems arise from it. In
response to this shortcoming of the waterfall model, one model that was proposed is the
incremental model. To get the main idea behind the incremental model, let us see what
Victor Basili, one of the founders of this area, has to say.
97
(Refer Slide Time: 08:24)
The basic idea is to take advantage of what was learnt during the development of earlier,
incremental, deliverable versions of the system. Learning comes from both the
development and use of the system. Start with the simple implementation of a subset of
the software requirements and iteratively enhance the evolving sequence of versions.
At each version, design modifications are made along with adding new functional
capabilities. Thus, to explain what Victor Basili says about the incremental model: the
software is developed in increments. In the first increment some of the core
functionalities are implemented, and then these are given to the customer, who gives
feedback; this forms the learning from the use of the system, and even the developers
learn from the implementation of the core functionalities. Based on the
feedback, the software is refined, and this is called iteration.
In iteration, the same functionality is refined each time the customer gives new feedback;
the same functionality gets refined iteratively, and at the same time new
functionalities are also implemented; these are called increments. So, this model is
called the incremental model with iteration.
98
(Refer Slide Time: 10:47)
In incremental and iterative development, each time new increments of functionality
are implemented, and at the same time the functionalities that were already implemented
are refined; that is called iteration on those functionalities.
The key characteristic of incremental and iterative development is that the
system is built incrementally; small parts of the system are built. There are a number of
iterations for each functionality, and each time a deliverable is given to the customer in the
form of a working program which the customer can use and give feedback on.
Each deliverable to the customer would not only have incorporated the feedback that he
had given on the previous increment, but would also have added new functionalities. As can
easily be seen, this is a good way to manage changes: the customer is encouraged to give
feedback and this is incorporated. The incremental model has become popular in industry;
the Rational Unified Process is an incremental model, and extreme programming, the agile
models etcetera are all incremental.
99
(Refer Slide Time: 12:29)
From the customer's perspective, he gets some part of the functionality that he can use and
give his feedback on; based on this, the developing organisation incorporates the
feedback on the core functionality and also implements additional functionality and gives
it to the customer. The customer again gives feedback on this, the feedback is
incorporated, new functionality is added, and this way the system gets
completed. Hopefully, at the end, the delivered system will meet the customer's
requirements, because all his feedback from using the system has been incorporated.
100
If we depict the model diagrammatically: initially requirements are collected and these
requirements are split into incremental features; there is an overall design, and then each
time one increment is taken up, developed, validated and integrated with the previous increments;
the system is validated and delivered to the customer, feedback is taken, the next increment is
developed and so on until the final system is completed.
As you can see, in the incremental model the requirements are collected up front and
split into small increments or slices. The similarity with the waterfall model is that
the requirements are collected up front, but the difference is that for each of these increments
customer feedback is taken and the requirements change.
101
(Refer Slide Time: 14:58)
In the waterfall model there is a single release: initially all the functionality requirements
are collected, designed, coded, tested and given to the customer as a single release.
In the incremental model there are many releases, and after each release customer
feedback is taken and the functionality that was already delivered is changed again to
meet the requirements of the customer.
The first increment is typically the core functionality; then the successive increments
not only refine the existing functionality but also add new functionality, and the final
increment is the complete product. This is called the incremental model with iteration,
but one can also think of a purely incremental model where there is no iteration,
in which the system is just developed in small increments and given to the customer, but
the customer feedback is not really taken to change the existing functionality; there is no
iteration.
102
(Refer Slide Time: 17:24)
103
Then design the increment, build and implement the increment, give the increment to the
customer for evaluation, use that as feedback, take up the
next increment, and so on.
Because there are several increments out of which the designer needs to choose one
for delivery in the next iteration, which increment should be selected first?
This is an important problem.
Some parts will be prerequisites of other parts, without which the system will not work.
For example, until the customer registration is done, customer billing cannot be done; first
we must have the customer registration software developed, and only then can the customer
billing software be implemented. But many of the increments are without any
such ordering.
For these increments where there is no ordering, the value-to-cost ratio may be used; this we
call the V by C ratio, where V is the value to the customer, graded on a scale
of 1 to 10, and C is a score from 0 to 10 representing the cost to the developers.
104
(Refer Slide Time: 20:24)
Let us take an example to see how it works. Let us say that, for the next
increment, these are the modules that the development team has the
flexibility to implement: generate the profit reports, implement the online database,
implement ad hoc enquiry, implement the purchasing plans, and implement the profit-based
pay for the managers. These are the five functionalities among which the developers have a
choice, and from the customer feedback about the value of these different functionalities
the following values have been assigned.
The online database has the lowest value to them, 1; the profit reports are 9, ad hoc enquiry 5,
purchasing plans 5 and profit-based pay for managers 9. When the development team is asked to
rate the cost on a scale of 1 to 10, the least costly to develop is the profit-based pay
for managers at 1, the profit reports are 2, the purchasing plans are 4, ad hoc enquiry is 5 and
the most difficult to implement is the online database at 10. On computing the V by C
ratios, we find that they range from 9 down to 0.1.
So, for the next increment the profit-based pay for managers should be chosen, as it has the
highest V by C ratio; next the profit reports should be chosen, third the purchasing plans,
fourth the ad hoc enquiry, and last the online database.
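The same ranking can be written as a small sketch, shown below; this is only an illustration of the V by C computation using the values quoted in the example, not code from the lecture.

# Each candidate increment is given the value V (to the customer) and cost C
# (to the developers) mentioned in the example above, both on a 1-to-10 scale.
increments = {
    "profit-based pay for managers": (9, 1),
    "profit reports": (9, 2),
    "purchasing plans": (5, 4),
    "ad hoc enquiry": (5, 5),
    "online database": (1, 10),
}

# A higher V/C ratio means more customer value per unit of development cost,
# so such an increment is taken up earlier.
ranked = sorted(increments.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (v, c) in ranked:
    print(f"{name}: V/C = {v / c:.2f}")
# Prints the ordering discussed above: profit-based pay (9.0), profit reports (4.5),
# purchasing plans (1.25), ad hoc enquiry (1.0), online database (0.1).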
The incremental model has become extremely popular; it is part of almost every model that is
being used, as these are derivatives of the incremental model. One important derivative is the
RAD model; RAD stands for Rapid Application Development.
105
(Refer Slide Time: 23:04)
This is sometimes called the rapid prototyping model; here the main aims of the model
are to decrease the time taken and the cost incurred to develop the software, and to
facilitate accommodating change requests as early as possible, before any large investments have
been made.
This, as we said, is an incremental model; here increments are decided, only short-term
plans are made, and there is heavy use of existing code. In contrast to the waterfall model,
this model is very efficient; it has the flexibility to incorporate changes because
106
only short-term plans are made, so any change does not require changing all the plans, that is,
the design, the project management plan, the test plan and so on.
The plans are made for one increment at a time, and this increment is called a time box;
after each increment is delivered, the system or application becomes more
complete.
During each iteration a prototype of the software is first developed and given to the
customer for evaluation, and then this prototype is refined based on the customer
107
feedback. One point to note here is that in the RAD model the prototype itself is refined
to become the actual software, whereas in the prototyping model that we had discussed,
which is a derivative of the waterfall model,
the prototype is used to get customer feedback at the start of the project and then
the prototype is thrown away and the software is developed afresh. But in the RAD
model the prototype is refined based on the customer feedback.
The RAD model achieves faster development by encouraging the use of specialized tools; the
tools should support a visual style of development such as drag and drop. The use of
reusable components and of standard APIs is also supposed to
help implement the software faster.
108
(Refer Slide Time: 26:20)
Now, let us look at the software projects for which the RAD model is suitable: projects
for which performance and reliability are not critical, but which are required
to be developed very fast, and where the system can be split into several independent modules. If
the system is very small and we cannot have several increments, then obviously the RAD
model is unsuitable.
The RAD model is also unsuitable when few reusable plug-in components are available, or when
high performance or reliability is required. The reason why the RAD model does not
109
give high performance or reliability is that the prototype itself is refined into the actual
software. Unlike the waterfall model, where fresh software is designed, coded and tested,
the RAD model tends to give lower reliability and performance.
If there is no precedent for similar products, then also the RAD model is unsuitable; that is,
for a challenging new type of software there would obviously be no components
available to use as plug-ins. For systems that cannot be modularized, RAD is also unsuitable.
The RAD model each time constructs a prototype and gives it to the customer for
evaluation; based on the customer evaluation the prototype is changed, and as the
customer gets satisfied the refined prototype becomes part of the software. In the
prototyping model, which is a derivative of the waterfall model, the developed prototype
is primarily used to gain customer feedback and also to get insight into the solution, that
is, into the technical issues and so on.
Therefore, using the prototyping model you can choose between different design
alternatives and elicit customer feedback, and the developed prototype is usually
thrown away; but not in the RAD model. In the RAD model the prototype itself, after
enhancement and refinement, becomes the final software.
110
(Refer Slide Time: 29:18)
Another important development model is the evolutionary model with iteration. We had
seen the incremental model with iteration, which is of course important and has become
part of many development models being used across industry, but another idea
that has come up is the evolutionary model with iteration. We will examine this model in
the next class, and we will see that the models that have become extremely popular at
present, the agile models, have the features of both the incremental and the evolutionary
model with iterations.
Thank you.
111
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 08
Life Cycle Models – IV
Welcome to this lecture. Over the last few lectures, we had looked at a few important
software development life cycle models. We had started with the classical waterfall model,
which is very intuitive, and the derivatives of this model,
namely the iterative waterfall model, the V model, the prototyping model and so
on.
Over the years a few more models came up as the shortcomings of the waterfall model
were noticed. These were the RAD model, the incremental model, the incremental model with
iteration and the evolutionary model. In the last lecture, we had looked at the incremental
model. Today, we will first look at the evolutionary model and then, we will look at the
agile development model.
This is, in fact, an evolutionary model with iteration; remember that in the last class we had
said that the term iteration is used when existing functionality, which was developed
and delivered, is iterated over and refined based on customer feedback. Now let us
look at the evolutionary model with iteration.
112
(Refer Slide Time: 02:01)
One of the main problems that we had noted about the waterfall-based models is that the
requirements are defined and frozen at the start of the project. But look at the
practical situation. Capers Jones studied about 8000 projects and found that
40 percent of the requirements arrived after development had already begun. If
we had frozen the requirements at the start, then these 40 percent of changes would be very
difficult to incorporate. And this is the average figure; 40 percent, it can
be even more for some projects.
Therefore, there is a need for a model which can effectively handle requirement
changes. Requirements change for various reasons: maybe the business changes very
rapidly, so that a business procedure defined today changes after a month; maybe the
technology changes; maybe the customers had not understood what was required and
asked for something else; maybe they had forgotten to mention something; and so on.
So, requirements change for various reasons, and we need a model which can
effectively handle such changes. One approach that has been considered very
promising is to take up only a small part of the system each time, starting with the parts
that are riskier, more likely to change, or that the customer is not sure about. Plan a little
for this small part of the work, design a little, code a little, and once the small part is
completed, give it to the customer for his
113
evaluation and feedback. Here the customer is encouraged to participate, and the end user,
the tester, the integrator and the technical writer are all involved in the development.
Let us see what a very distinguished person in this area, Tom Gilb, has to say on
the evolutionary model. He says that "A complex system will be most successful if
implemented in small steps... 'retreat' to a previous successful step on failure...
opportunity to receive feedback from the real world before throwing in all resources...
and you can correct possible errors." In other words, if some functionality does not work
out, you can at least go back to the previous version, which was acceptable, and develop from
there.
So, according to Tom Gilb, the evolutionary model, which he captures in these few
lines, is a very promising model for handling changes to the requirements that arise for various
reasons.
114
(Refer Slide Time: 06:04)
Let us see what Craig Larman has to say about evolutionary model with iteration.
“Evolutionary iterative development implies that the requirements, plan, estimates, and
solutions evolve or are refined over the course of iteration, rather than fully defined and
“frozen” in a major up-front specification before the development iterations begin.
Evolutionary methods are consistent with a pattern of unpredictable discovery and
change in new product development.”
So, Craig Larman says that when a project starts, it is very difficult to
visualize everything that is required. Only when you start doing the work can you know what is
wrong and what is really required. So, he is very critical of the waterfall model,
where the requirements are fully defined and frozen before development starts.
Therefore, the evolutionary model is the way to go, and true to his words, most of the
development models that are popular and being used now use the
evolutionary model in some form.
115
(Refer Slide Time: 07:34)
Let us look at the evolutionary model. Here the core modules of the software are first
developed, and these are refined through increasing levels of capability in what are called
iterations. In an iteration, new functionalities may be added and existing
functionalities may change based on user feedback. Each
iteration is carried out through a mini waterfall model.
At the end of an iteration, some code is invariably delivered to the customer. At the end of
an iteration the existing functionalities might have changed. It is possible that no
116
new functionalities have been added, but only the existing functionalities have been
refined; or it may happen that the existing functionalities have changed a little and
new functionalities have been added. Each such delivery is an increment, and therefore the model
also has an element of incremental development.
But then, the question is: how do incremental development and evolutionary
development differ? As you can see, in both models the system is developed over
small increments, and these
increments are deployed at the customer site and feedback is obtained.
Let me mention here that in the purely incremental model, as you may
remember from our last discussion, all the requirements are identified up front and then
planned into small increments. Each increment is developed and deployed, feedback is obtained,
and changes can happen. So in the incremental model all the requirements are gathered
to start with, whereas in the evolutionary model that is not the case: after gaining an overall
understanding of the system, we do not try to capture all the requirements, but start
with some of the core or riskier modules.
At the end of each iteration, a tested, integrated, executable system is deployed
at the customer site. The length of each iteration is typically fixed, something like 2
weeks to 1 month or maybe 1.5 months, that is, roughly 2 to 6 weeks, and the development proceeds over
many iterations. For example, a system may get developed over 10 to 15 iterations.
117
As you can see, in the evolutionary model the requirements are not frozen. The
customer is encouraged to participate and to give feedback, based on which further
development occurs; the customer feedback is taken into account and functionalities
get modified. So, here the requirements can be modified in an iteration, and some existing
functionalities as well as new functionalities are delivered.
Here, successive versions are developed and deployed at the customer site. Each
version is capable of performing some useful work which the customer can use and give
feedback on, and a new release can include new functionalities; existing
functionalities may also get modified and refined.
118
(Refer Slide Time: 12:05)
We can represent it diagrammatically like this: initially there are only rough requirements, just
an understanding of what the system needs to do, and then there are iterations, shown here.
Each iteration is a mini waterfall, where we specify a small part,
develop it, validate it and deploy it at the customer site.
To start with, the initial version is deployed and feedback is obtained. There are several intermediate
versions, and then the final version is deployed at the customer site. The evolutionary model
has the advantage that the customer can use the system and give feedback. And
119
therefore, the evolutionary model is best suited to discovering the exact
user requirements, so that what is delivered meets the
customer's needs. Also, each increment is tested
and deployed at the customer site, so multiple testing and integration activities
occur, and the software is less likely to contain bugs because it is tested thoroughly.
And since the developers each time focus on a small part, it helps in managing
complexity, in better managing changing requirements, and customer feedback is obtained and
incorporated.
120
(Refer Slide Time: 17:05)
The customer gets versions quickly and gets trained on using the software. If there are
bugs, these are fixed quickly in the next iteration.
But there are also some problems with the evolutionary model, and we should be aware
of them. You have seen that it has a lot of advantages, but the process is unpredictable:
each time, we develop an increment, deploy it and get the customer feedback.
121
Now, what if the customer just keeps changing the requirements, keeps asking for
modifications, and so on? The process then becomes unpredictable; we do not know when it
will finish. Also, with a long-term plan it is easier to deploy manpower, recruit people,
schedule work, monitor progress and so on; here all of these become problematic.
The system can become poorly structured because no overall design is made; only small
parts are designed and integrated. Since the code is changed continually, the code
structure degrades, and the system may not even converge to a final version if the
customer keeps asking for changes.
Now, let us look at the spiral model. The main feature of the spiral model is risk
handling. As the diagram shows, the model follows a spiral. It
was proposed by Boehm. Here each loop of the spiral is called a phase, and the system
gets developed over many phases. There is no fixed number of phases; as the system
develops, more and more phases are created, and there can be many loops of the spiral until
the final delivery takes place. Over each phase, that is, each loop of the spiral, one
feature is identified and developed.
122
(Refer Slide Time: 17:11)
As you can see, there are four quadrants on the spiral model. In the first quadrant, what
needs to be done in that phase is identified. Then, if there are any uncertainties,
such as technical uncertainties or user requirements that are not clear, a prototype is developed
in the second quadrant to resolve the risk. The iteration is then developed and given to the customer for
evaluation, and another feature is taken up.
So, over each iteration one or more features are taken up, the risks are resolved, then
the development occurs and customer feedback is obtained. As can be seen, this model is
ideally suited for very risky projects, where the technology is not well known, where it is possibly
the first time that this kind of software is being developed, where the technology is uncertain,
and so on. Each time the riskiest features are identified, the risks are
resolved, the features are developed and given for customer feedback, and then the next set of features is
taken up.
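Purely as a mnemonic for the four quadrants just described, here is a tiny illustrative sketch in Python; the feature names are hypothetical placeholders and this is not part of the lecture material.

# Illustrative sketch: each loop of the spiral visits the four quadrants in order.
features_by_risk = ["riskiest feature", "next riskiest feature", "remaining features"]

for feature in features_by_risk:  # one loop of the spiral per phase
    print(f"Quadrant 1: identify objectives for '{feature}'")
    print(f"Quadrant 2: resolve risks, e.g. build a prototype for '{feature}'")
    print(f"Quadrant 3: develop and validate '{feature}'")
    print("Quadrant 4: get customer feedback and plan the next loop")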
123
(Refer Slide Time: 18:49)
Here, a risk is any adverse circumstance that might hamper the successful
completion of the project. It can be that the technical issues are not clear, or that the
customer is not able to clearly specify what is required; in such cases a prototype is developed
to overcome the risk.
If the risk is that the requirements are inappropriate, a prototype is developed and given to the
customer so that he can say more clearly what is required.
124
(Refer Slide Time: 19:34)
In the third quadrant the solution is developed and validated, and in the fourth quadrant the
developed solution is given to the customer for feedback and the next iteration is planned. We can
see that with each iteration around the spiral, more and more complete versions of the
software get built.
The spiral model is also called a meta model because a single loop of the spiral is a
waterfall model and there are iterations; it therefore incorporates the evolutionary and
incremental models, and it uses prototyping. The stepwise approach of the waterfall model is
125
retained, and therefore this model can be degenerated into, or used as, any of the other models.
But we need to remember that the spiral model is most suitable for very risky projects and
large projects.
Now, let us look at the agile models. If we look up the meaning of agile in a dictionary,
agile means easily moved, very fast, nimble, active, and so on; these are
some of the terms given as an explanation of agile. In short,
agile means very fast. So, development in an agile model should occur very fast, in much
less time than it takes in other development models.
But then, how is the software developed very fast; how is agility achieved? Agility is achieved
because the model gives a lot of flexibility to change the process: if some project
requires a specific activity, it will be done; if it is not required
for another project, it can easily be skipped, because the model is very flexible,
unlike the waterfall model, where there are predefined activities and steps.
Here we can fit the process to the project, but more importantly, we can avoid things that
waste time. To get the work done very fast, we need to avoid whatever wastes
time. What wastes time? Preparing very elaborate
documents, for example; we mentioned that in the iterative waterfall model about 50 percent of the
development effort and time goes into preparing documents. We will see that the agile model does
away with preparing these elaborate documents and saves a lot of time.
126
(Refer Slide Time: 22:47)
The agile approach was proposed in the mid 90s, mainly to overcome the shortcomings of the waterfall
model. One important characteristic is the ability to handle change requests. We had seen that the
evolutionary model is a good way to handle change requests, and the agile models do
incorporate evolutionary and incremental development. Here
the requirements are decomposed into small incremental parts and development proceeds
incrementally.
127
There are a few things which are important here. Instead of producing elaborate
documents and following a process rigorously, here the individuals interact with each
other rather than passing documents to each other; they explain things to each other
through interaction. In the waterfall model, the progress of development is measured in
terms of the documents produced: is the requirements document complete, is the design
document complete, is the detailed design complete, is the code documentation
complete, is the test documentation complete. But here, progress is measured in terms of
the working software that has been developed: how many iterations have been completed,
since in each iteration some working software is deployed at the customer site.
Agile methodologies is an umbrella term. Many development processes have come up,
like Extreme Programming (XP), Scrum, Crystal,
DSDM, Lean and so on, and each of them has the features of the agile model. But they
differ in some ways; each one has its own focus, and each introduces new terms, new
activities and so on, as you will see.
128
But they all have the features of the agile methodologies which we just
discussed in the last slide: development over increments, interaction with the customer,
less documentation, interaction among the developers, integration and testing in
each iteration, deploying and getting customer feedback, and so on.
With this we come to the end of this lecture; in the next lecture, we will discuss
the agile development model in more detail.
Thank you.
129
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 09
Life Cycle Models - V
Welcome to this lecture. In the last lecture, we had discussed the evolutionary
model and the spiral model and had just introduced agile development. The agile models
have become very popular; we will see that they have a lot of advantages, especially for
the projects that are being undertaken now. At the beginning, we had said that the types of
projects undertaken have undergone a drastic change over the years. Now the
projects are of very short duration; 1-, 2- or 3-month projects are very
common, whereas earlier there were multi-year projects of 3, 4 or 5 years,
and now a lot of reuse is being made, with mainly customization work.
So, these are service-type projects. Earlier we had product development from scratch;
now it is customization. Slowly most projects are becoming like this: short
development, deployment at the customer side, getting feedback and so on. In that
respect the agile model is very advantageous, and we had just started discussing it.
130
Let us look at more details of the agile model. Last time we had seen that the agile model
incorporates some features like customer participation, incremental development,
integration and testing over each iteration, deployment at the customer site, less
documentation and so on.
Now, let us look at the main techniques of the agile model. Here no formal requirements
document is developed; development is based on user stories, which, as the name says, are more informal
than a requirements specification. These are like stories and are simpler than use
cases. To give an overall design perspective, the agile model proposes metaphors, where there
is a common vision of what is required, and based on that the development starts.
Wherever required, a spike is done; a spike is a simple program written to explore a
potential solution. We can see that it is similar to a prototype: wherever there is
uncertainty, a spike is developed to check out an alternative, to see whether it performs well,
and so on. Another technique is refactoring.
Once an increment is deployed at the customer site and the customer accepts it, the code is restructured
without affecting its behaviour, so that it becomes more structured; some
design is put into the code. As you can see, here the design after each iteration is
somewhat of an afterthought: initially the code is made to work and then it is
restructured. Of course, we also have the metaphor, where there is some common vision of the overall
design.
131
It is incremental: each time one increment is planned, developed and deployed at the
customer site, and no long-term plans are made. An iteration may not add significant
functionality; it may only enhance the existing functionality.
But at the end of each iteration, code is invariably deployed at the customer site and
the length of the iteration is usually fixed at something like 2 to 4 weeks. And remember
that after each iteration, the customer puts the code into regular use;
it is not that they just evaluate it, they start using it regularly.
As we were saying, one important thing in the agile model is face-to-face
communication. This requires that the developers share the same building or the same
room, meet regularly and discuss issues; rather than passing documents to each other, they
go and explain to each other what is required. To facilitate face-to-face
communication, they share a single office space, and typically the team size is small, 5 to
9 people, so that they can interact well.
This makes the agile model very well suited to small projects, though of course there have
been efforts to scale it to larger projects; it started with a focus on
small projects. One thing to mention here is that when you want to convey something technical to
somebody, it is not a good idea just to give them a document to read. Explaining it
over a whiteboard is the best way; passing a document or
explaining over the phone or by email is not as effective.
132
(Refer Slide Time: 06:16)
In fact, experiments have been done to find the effectiveness of various communication
modes. From his experiments, Alistair Cockburn determined that
paper is a cold, not very effective form of communication: giving somebody a paper to read,
especially for technical communication, is not a good way to communicate.
Audio tape is slightly better than paper; email conversation may be
slightly better; videotape slightly better still; video conversation better again; and face-to-face
conversation at a whiteboard is the most effective way to communicate technical
matters. That was the finding. So, a small team meeting at a whiteboard, with one
person explaining his ideas on it, is the most effective form of
communication, and that is what is incorporated in this model; face-to-face communication is
given importance.
133
(Refer Slide Time: 07:040)
Here progress is measured by how many increments have been deployed at the
customer site. There is frequent delivery of versions, once every few weeks. The
customer uses the delivered versions and gives requirement changes, which are
accommodated. There is close cooperation between the customer and the developers, and among
the team members there is face-to-face communication. These are some of the
fundamental principles of the agile model.
134
But what about documentation; is no documentation done in the agile model? Not really. The
idea here is to travel light, that is, you do not document unless it is needed; you need far
less documentation than you think. In the waterfall model, everything needs to be
documented: the review results, the plans, the requirements review, the design, the high-level
design, the high-level design review, everything. But here the main idea
is that documents need to be prepared only when somebody will need
to refer to them over a long time. Even then they are concise, and a document is
prepared only for information that is not likely to change, that is, towards the end of the
development, when things no longer change.
So, documents are prepared at the end of development, and only those things are
documented from which somebody can learn something, things that are informative
and good to know. Documentation is not created just for the sake of documentation.
The documents are sufficiently accurate, consistent and
detailed, because they are prepared when there are no more changes. Some valid
reasons to document are that the project stakeholders need it; for example, the customer
needs it, maybe there is a contract to be signed for which it is needed, maybe there
is an external group who would like to review it or give suggestions, or maybe the document
will be referred to after the project is
over, and so on.
Only when there are valid reasons to document are the documents prepared; it is not that
documents are prepared by default, as is the case with the waterfall model.
135
(Refer Slide Time: 10:54)
Here the requirements change over the development time, and new requirements, as
they arise, are kept in a requirements stack. The stack is prioritized: the items at the top
are the high-priority ones that will be taken up first. This diagram is from Scott Ambler's
book. The requirements keep arising; they are inserted into the stack, removed whenever
necessary, changed and reprioritized. A priority is assigned to each
requirement, and these priorities are changed dynamically.
136
The agile model has many advantages, but to make it practically useful, you must be
aware of some pitfalls. The first is that there are few documents; the model is based on
explanations at a whiteboard and so on, and therefore somebody can misunderstand.
High-quality people are required, who when in doubt will consult each other and do
what is correct. No long-term design is made, and the short iterations degrade the
design structure. There can also be feature creep: feature creep is when the
customers or the developers become over-ambitious and keep thinking up new
features to incorporate, without considering how much a feature
will be used, what its value to the customer will be, or what the cost of developing
it will be, and feature after feature gets added. There is a higher risk of feature creep in the agile
model because freedom is given to give feedback and request new features.
It therefore becomes difficult for the project manager to manage the feature creep.
Also, since the development is dynamic, with features changing and new features getting added,
it becomes very difficult to give an upfront cost for the development, difficult to give
an upfront timeline by which development will be done, and much more difficult to
assure quality.
Documents are largely done away with, which is a good thing since it saves the time and effort of
preparing them, but we must be aware of the consequences of the lack of documents. There can be chances of
137
misinterpretation, getting reviews on a document becomes difficult, and when the project
is complete, unless the documents are consciously prepared at the end of the project
before the team disperses, maintenance may become difficult.
With this overall understanding of the agile model, let us look at some specific agile
processes. One agile process which is very popular is Extreme Programming,
also known as the XP model.
138
(Refer Slide Time: 14:54)
One of the main practices introduced here is pair programming. In pair programming,
each desktop is manned by two programmers. The main idea is that
reviewing is a good thing, so why not continually review each other's work? When one
programmer writes the program, the other programmer goes through the code, reviews it
and gives suggestions on how to make the code better and more efficient and how to avoid
mistakes. They take turns: one programmer programs for an hour, maybe
writing a few functions, and then the other programmer takes over and writes a few more
functions while the first programmer reviews them.
Here testing takes place every day, unlike the waterfall model where testing comes at the
end. Test cases are written continually and are executed before a
feature is passed. Before a feature is implemented, the test cases are written for that
feature, and the feature is considered complete when it passes those test cases. This is called
test-driven development.
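As a small illustration of this test-first style (not part of the lecture; the user story, function name and figures below are hypothetical), the test is written before the code, and the simplest implementation that makes it pass is then written, in Python.

import unittest

# Hypothetical user story: "The cashier can compute a bill total with a discount."
# In test-driven development this test is written before the implementation exists.
class TestBilling(unittest.TestCase):
    def test_total_with_discount(self):
        self.assertAlmostEqual(bill_total([100.0, 50.0], discount_percent=10), 135.0)

# The simplest implementation that makes the test pass is written afterwards.
def bill_total(amounts, discount_percent=0):
    subtotal = sum(amounts)
    return subtotal * (1 - discount_percent / 100)

if __name__ == "__main__":
    unittest.main()

The feature is considered done only when this test (and any others written for the story) passes.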
Incremental development is practiced here: every few days increments are delivered. The
name Extreme Programming comes from the fact that good practices are taken to the
extreme. Code review is good, so it is implemented in the form
of pair programming; testing is good, hence test-driven
development; incremental development is good, therefore every few days new
increments are developed and delivered; simplicity is good, and therefore the simplest
139
design is developed. Do not try to think of extensions that may be required
decades later; some enhancement may be required a long time afterwards, but do not think
of it now.
For now, make the simplest design and make it work. Designing is good, and
therefore, after the software is deployed and accepted, refactor the code, put some design in and
make the code better. Architecture is important, and therefore a metaphor has to be
defined. Integration testing is important, therefore build and
integrate several times a day, that is, continuous integration as development proceeds. Remember that this
was one of the major problems with the waterfall model, where integration took place only
at the end.
140
(Refer Slide Time: 18:28)
There are four values here. Communication: face-to-face communication. Simplicity: implement
the simplest design, do not worry about tomorrow, just make it work. Feedback: get and encourage
customer feedback; if you do not get feedback from the customer, trouble will follow,
because at the end they may reject the software or be
unhappy. And the other value is courage: if some code is not good, discard it and rewrite
it; if a design is not good, discard the design and redo it.
141
Coding is a best practice, so utmost attention is given to coding. Testing is a best practice and the primary
means of developing fault-free software, and therefore importance is given to testing. Listening
is a best practice: listen to the customer and find out what is really required by the
customer.
Designing: without proper design the system becomes complex, therefore put design
into the system. And feedback is an important thing: get customer feedback.
142
XP emphasizes test-driven development: based on the user story, first develop the test cases
before writing the code; that is test-driven development. First
write the test cases based on the user story and then implement the feature. Once implemented, run
the test cases; development is not complete until the code passes all the test cases.
Develop over increments, with increments delivered every few days; get customer feedback and
alter the code if necessary; and finally, once it is accepted by the customer, refactor, make the
code better, put some design in, and then take up the next feature.
Now, let us look at a few practice questions on the points discussed so far. What
are the stages of the iterative waterfall model? Hope you remember; if not, please look it up.
What are the major disadvantages of the iterative waterfall model?
The later models were proposed to overcome the disadvantages of the iterative waterfall
model, which we have already discussed; there are several problems with the waterfall model,
the most important being the difficulty in handling changing requirements. Why has the agile model
become so popular; what problems of the waterfall model does it overcome; how does it fit
the current type of projects? And what difficulties might be faced if no life cycle model is
followed in a large project?
143
(Refer Slide Time: 22:09)
Now, let us look at another agile development process, which is called Scrum. The main
characteristic of Scrum is self-organizing teams, that is, the team members decide among
themselves who will do what: who will do testing, who will develop which function, and
so on. Here the product progresses in a series of month-long sprints; the
sprints are basically increments.
So the increments are called sprints here, typically one month long. Here the requirements
are captured in something like the requirements stack we just looked at, which is called
a product backlog.
144
(Refer Slide Time: 23:02)
Scrum is one of the agile processes. Here the requirements that have been gathered, which
keep arising as development proceeds and of which some may get deleted or changed, are kept in
what is called a product backlog. A sprint is a month-long activity. For each sprint, some of
the top-priority feature requirements are taken out; these form the sprint backlog, and once the
sprint backlog is obtained, it is not changed anymore; it is immune to change. It
is developed over a month-long iteration, and every day the developers meet for a
daily scrum meeting to review what has happened, what the next thing to do is, and so on,
until they complete the sprint backlog.
For the sprint backlog, based on the identified requirements, the activities that need
to be done to meet those requirements are identified by the development team and put in the sprint
backlog. These activities get completed over the daily scrums, and finally there is the
sprint review: the sprint is reviewed and the product increment is
deployed at the customer site.
145
(Refer Slide Time: 24:32)
So, that is the main idea here. Progress is in the form of sprints, whose duration is typically
one month. In one month, some of the features from the product backlog are taken
up, designed, coded and tested. Once the sprint starts, the requirements that have been
taken up are not allowed to change, otherwise the sprint will not converge. One of the
principles here is that once features have been taken out of the product backlog, they
are not allowed to change.
146
Here are some of the terminologies: the roles, ceremonies and artifacts. The roles are the product
owner, one of the team members who acts on behalf of the customer and has the customer
perspective; the Scrum master, who is like a project manager; and the other team members.
There are various ceremonies conducted during development: the sprint planning meeting, the sprint
review at the end of a sprint, the sprint retrospective and the daily scrum meeting. There are
various artifacts produced. One is the product backlog, which keeps track of the requirements
identified so far; more requirements can be identified as development proceeds, and these are
kept in prioritized order. The sprint backlog is the set of activities to be done during the
sprint. How much progress has been made is depicted in the form
of burn down charts.
The product owner has the customer perspective; the development team has five to nine
people with cross-functional skills; and the Scrum master, also called the project manager,
facilitates the Scrum, is the buffer between the team and outside interference,
and resolves any difficulties that team members might be facing.
147
(Refer Slide Time: 26:57)
The product owner is one of the team members, or maybe a customer
representative who is part of the team. He acts as the representative of the customer.
He is the one who defines the features that will be required and decides on the
release date, prioritizes the features from the customer perspective according to their
market value, adjusts the features and priorities in the product backlog in every
iteration, and finally accepts or rejects the results.
We have seen the main idea behind Scrum. There are many terminologies in
Scrum; we have looked at some of them, and we are just coming to the end of this
lecture. A few more terminologies and concepts in the agile Scrum model could not be
discussed in this lecture; we will take them up in the next lecture and complete our
discussion on development life cycles there.
Thank you.
148
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 10
Life Cycle Models - VI
Welcome to this lecture. In this lecture we will first discuss Scrum, which is an agile
and very popular development model; many industries and projects now use Scrum.
Then we will review all the life cycle models we have discussed, look at
how to select a life cycle model for a given project, and then, if we still
have some time, have an introductory discussion on project scheduling. Now let
us look at Scrum.
149
(Refer Slide Time: 01:00)
In the last lecture we had looked at Scrum, which is one of the agile models, and one
important thing here is the Sprint. The Sprint is the fundamental process flow of Scrum.
A Sprint is actually an iteration or a time box during which an increment is developed
and delivered to the customer. It is usually a month-long iteration in which
some existing features might get improved or some new functionality may be
implemented. All the story points, that is, the features to be developed, are maintained
in the product backlog, and during Sprint
planning the objectives for a Sprint are identified; this is called the Sprint backlog. These
are the objectives, the subset of the features of the product backlog, that will be developed
during one Sprint.
As a Sprint continues, every day the team meets and discusses their plan for the
day and any problems they are facing, and after the month-long iteration the
Sprint is completed; the Sprint is reviewed for what has been accomplished and the increment is
delivered to the customer. One thing the Scrum methodology requires
is that once the team members identify the subset of the product backlog to be
developed during a Sprint, which forms the Sprint backlog, there is no outside interference
with the team. That means that once they take up some items for implementation, no
changes to these items are allowed; otherwise it would create chaos and uncertainty, and the Sprint
would not converge.
150
That is the reason why no outside interference with the Scrum team is allowed during
the Sprint.
These are some of the important concepts in the Scrum framework. First are the various
roles. In a Scrum team there are the product owner, who is one of the team
members, the Scrum master, who is another team member, and the other team members;
the product owner and the Scrum master are distinguished team members. In the Scrum framework
some ceremonies or meetings are required to be conducted: the Sprint planning
meeting, the Sprint review meeting, the Sprint retrospective meeting and the daily Scrum
meeting. We will look at these meetings.
The Sprint planning meeting is conducted just before a Sprint starts; here a
subset of the product backlog is identified and the Sprint backlog is formed. The Sprint
review meeting is conducted at the end of the Sprint; here the work that has been
completed during the Sprint is reviewed. The Sprint retrospective meeting is also
conducted at the end of the Sprint, after the Sprint review meeting and before the
next Sprint planning meeting; here reflections on what
could have been done, what needs to be done, etcetera are discussed. The daily
Scrum meeting happens daily. Here the team members discuss among each other, in a
151
short meeting, what their plans are and whether they are facing any obstacles; this is also a way
for the Scrum master to keep track of the progress of the project.
There are some artifacts which are mandated in the Scrum framework. One is the product
backlog: the items or features that are not yet complete are maintained here, as a
prioritized list of features still to be implemented.
The Sprint backlog is the subset of the product backlog which is to be completed during the
Sprint. The burn down charts are basically concise representations of the progress
during a Sprint, of the amount of work remaining for project completion, and so on. We
will look at the burn down charts in the subsequent slides.
The key roles are the product owner, the development team and the Scrum master. The
product owner is a team member who represents the customer's interest and thinks on behalf
of the customer. The development team typically has 5 to 9 people with cross-functional
skills; that means each team member typically has multiple skills, maybe testing, GUI
development, coding, databases and so on.
The Scrum master, also known as the project manager, is the team member who
interfaces with the top management; any difficulties that arise he discusses with the
customers and the top management in order to remove them. That is how he insulates
the team members from having to talk to the top management, the customers and so on,
and if there is any outside interference during a Sprint, he is the one who takes care of it.
152
(Refer Slide Time: 08:47)
The product owner is the one who represents the customer. He liaises closely with the
customer to get their perspective. The product owner defines the features that are
required; new features may crop up and some of the identified features may get
deleted or modified. He decides on the release date of the features, prioritizes the
features according to their market value or value to the customer, adjusts the feature
priorities on every iteration if needed, and finally, at the end of a Sprint, accepts or rejects
the work results.
153
The Scrum master is essentially the project manager. He removes any obstacles that the team
faces and ensures that the team is cohesive, fully functional and productive and that there is
close cooperation. It is his responsibility to ensure close cooperation across all roles and
functions; he also shields the team from external interference and interfaces with the
customer and top management, so that the team members are relatively protected from
external interference.
The team members, typically 5 to 10 people, have cross-functional expertise. For
example, a team member may have quality assurance, programming, testing or UI
design skills, typically several of them. The
teams are self-organizing, that is, they decide who will do which part of the work and what
role each will assume, and the membership of the team is not allowed to change during a
Sprint. New members may be inducted and some members may be replaced between Sprints, but for
the process to be stable no changes are allowed during a Sprint.
154
(Refer Slide Time: 11:39)
The ceremonies are the meetings; there are 4 important meetings mandated
here. One is the Sprint planning meeting, which occurs at the start of a Sprint; here the
Sprint backlog, the features to be developed during the Sprint, is formed from the product
backlog. The daily Scrum meeting is a daily meeting during the Sprint: every day
the team members meet for a short time, typically about 15 minutes, in a
stand-up meeting (otherwise the meeting may get prolonged; it is intended to be a very short
meeting), and they just discuss what items they are working on, whether they are facing any
obstacles, and so on.
The Sprint retrospective meeting is conducted after the Sprint review meeting, just to
reflect on the things that have been done, what could be improved and taken up in the
next Sprint, and so on. The Sprint review meeting is undertaken at the end of the Sprint,
and here the work achieved during the Sprint is reviewed along with the
customer and the top management.
155
(Refer Slide Time: 13:10)
The Sprint planning meeting is undertaken just before a Sprint starts. Its main
objective is to produce the Sprint backlog. Here the
product backlog, which is typically organized in prioritized order, is examined, the
top-priority features are picked, and the product owner along
with the team negotiates which backlog items to take up so that they will
meet the requirements of the customer and also the release goals.
The Scrum master, the project manager, also participates during Sprint planning
and ensures that the team agrees to realistic goals, because it is a one-month iteration and
the work should be doable in roughly one month.
156
(Refer Slide Time: 14:20)
The daily Scrum meeting is held every day, typically at the start of the day, and is a very
short meeting of about 15 minutes. It is a stand-up meeting, because if the team
members sit down the meeting may get prolonged. It is not a
problem-solving meeting; the objective is to identify what the team
members are working on, just to update each other, and whether they are facing any obstacles.
Basically each team member answers 3 questions: What did the member
achieve yesterday? What is the plan for today? Are there any obstacles? These are
the only 3 things that each team member discusses.
157
It is important to remember that the daily Scrum is not a problem-solving meeting; the
team members do not really ask for solutions. It is an information-sharing meeting,
and it is also not a blame-fixing meeting about who is behind schedule and why he is behind
schedule. It is only information sharing: each team member says
what was achieved yesterday, what the plan for today is, and whether there are any obstacles.
The Scrum master, or project manager, participates, and as the discussion proceeds he
picks up the progress that has been achieved so far; this is a good way for
the project manager to track the progress of the project.
The Sprint review meeting is undertaken at the end of the Sprint. Here the
work accomplished during the Sprint is reviewed in the presence of the
management, the product owner, the team members and also a customer representative.
It is intended to be a very short meeting, and typically it is a demonstration of the features that
have just been completed.
Typically, on the computer, they show what can be achieved after the Sprint,
which features have been implemented, how they work, and so on. This is an informal
meeting; it is not that elaborate documents are prepared. It needs
just a couple of hours of preparation for the team members, who think about how to present the
features that have been completed and then demonstrate those features.
158
(Refer Slide Time: 17:46)
One of the most important artifacts in Scrum is the product backlog. The product
backlog is a list of all the desired work for the project. There are 2
types of items in it. One type is the user stories, like letting the user search and replace, storing an
item in the database, providing registration of members, and so on; these are the story-based
items. The second category is the task-based items.
A task-based item might be, for example, to improve exception handling: what was
implemented so far works, but not that well, and so an item is introduced in
the product backlog to improve the exception handling.
So, some of the items here are user stories or user requirements, and the
others are actions to take on items already completed. The list is prioritized
by the product owner based on the customer perspective.
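To make the structure of such a backlog concrete, here is a small sketch in Python; the priority convention and the mix of items are illustrative assumptions, not taken from the slide.

from dataclasses import dataclass

@dataclass
class BacklogItem:
    description: str
    kind: str        # "story" for user stories, "task" for task-based items
    priority: int    # smaller number = higher priority (illustrative convention)

# Hypothetical product backlog mixing story-based and task-based items.
product_backlog = [
    BacklogItem("Let the user search and replace", "story", 2),
    BacklogItem("Provide registration of members", "story", 1),
    BacklogItem("Improve exception handling", "task", 3),
]

# The product owner keeps the list in priority order; the highest-priority
# items are the candidates for the next Sprint backlog.
for item in sorted(product_backlog, key=lambda it: it.priority):
    print(f"[{item.kind}] priority {item.priority}: {item.description}")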
159
(Refer Slide Time: 19:29)
The product backlog is managed by the product owner. It is typically maintained in
the form of a spreadsheet and usually created during the Sprint planning meeting.
So, this is a typical product backlog, a spreadsheet: the priorities are very high, high,
medium and so on, these are the item numbers, and then there is a description; some are
user stories and the others are activities to be undertaken.
160
(Refer Slide Time: 20:24)
The Sprint backlog starts as a subset of the product backlog items. It is created during the Sprint
planning meeting: the product owner along with the team members decides
which items to take up for the next Sprint. But it is updated daily, because
some additional activities may be noticed and added to the Sprint backlog.
The Sprint backlog changes during a Sprint: the team members can add new tasks in
order to meet the Sprint goal, and they can also remove unnecessary tasks, but the point
is that the backlog is updated only by the team. Once the Sprint starts, the Sprint backlog
161
is updated only by the team. The product backlog is updated by the product owner, and
the estimates for the work on the different items in the Sprint backlog are updated
whenever necessary.
Now, let us look at the burn down charts. These represent the progress achieved so far.
There are 3 important types of burn down charts. One is the Sprint burn down chart,
which shows how much progress has been achieved within the Sprint up to a given date. The release
burn down chart covers a release, which may consist of multiple Sprints, and shows how much of
the work for the release has been achieved. The product burn down
chart shows the overall progress towards the completion of the
project.
162
(Refer Slide Time: 22:41)
First let us look at the Sprint burn down chart. This represents the progress achieved
during the Sprint. It typically plots the hours of work remaining as the days progress: when the
Sprint starts, some estimate of the number of hours of work is given, and as the days
progress the number of hours remaining reduces.
But it can also increase, because the complexity of the work may have been estimated
wrongly. It shows the time to release; typically the Sprint is about 30 days, and at the end of
the Sprint the remaining work should become 0. It is not a straight line, since constant progress is not
achieved every day; there are various obstacles, and the estimates are revised as the Sprint
progresses.
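A minimal sketch of the numbers behind such a chart is given below (Python; the daily figures are invented for illustration). The remaining-hours estimate is re-recorded each day, and it can go up as well as down when estimates are revised.

# Hypothetical sprint burn down data: estimated hours of work remaining,
# re-recorded at the end of each day of a short illustrative sprint.
remaining_hours = [80, 74, 70, 72, 60, 55, 48, 30, 12, 0]  # note day 3 goes up after re-estimation

ideal_per_day = remaining_hours[0] / (len(remaining_hours) - 1)
for day, remaining in enumerate(remaining_hours):
    ideal = remaining_hours[0] - ideal_per_day * day  # straight-line reference only
    print(f"Day {day:2d}: remaining {remaining:3d} h (ideal {ideal:5.1f} h)")

Plotting the actual figures against the straight-line reference gives the familiar burn down picture.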
163
(Refer Slide Time: 23:46)
This is a larger view of a Sprint burn down chart. For each day the total
hours remaining are updated, starting from the initial estimate, and as the work progresses
this is revised; finally, at the end of the Sprint it should become 0.
This is the release burn down chart. Each release might consist of several Sprints, and
during a release several story points are implemented. Basically it represents how many
more Sprints are required before all the story points needed for the release are
implemented.
164
(Refer Slide Time: 24:36)
The product burn down chart gives the big overall picture: towards completion of the
project, how many Sprints have been completed and how many more are likely to be required.
Here 3 items are represented: the real burn down, which gives the actual progress; the
estimated progress; and the velocity, that is, the story points completed per Sprint.
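As a back-of-the-envelope illustration of how velocity feeds the product burn down (the figures below are invented), remaining story points divided by velocity gives the expected number of Sprints still needed.

import math

# Illustrative figures only.
remaining_story_points = 120   # story points not yet completed
velocity = 25                  # story points completed per Sprint so far

sprints_needed = math.ceil(remaining_story_points / velocity)
print(f"Approximately {sprints_needed} more Sprints are needed at the current velocity.")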
Scrum is typically for small projects of 6 to 10 people, but it has also been tried
for larger projects involving hundreds of people; this is called the Scrum of Scrums
or meta Scrum, which we will not be discussing.
165
(Refer Slide Time: 25:46)
If the uncertainty in the project is very high, then typically the evolutionary approach is
favoured: the features to be completed are identified as the project progresses. For very
well understood applications, the waterfall-based models are desirable because they are
the most efficient way to do the work; a long-term plan is made, there is strong management
control, and planning for the entire project is done. But if the team members are novices and do
not have much experience on projects, even on similar types of projects, then
typically an incremental model is preferable, where small increments are planned
and completed and the project builds up that way. With this we will be completing this
lecture, and then we will take up estimation of the project and then scheduling of the
project.
Thank you.
166
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 11
Project Evaluation and Programme Management
Good afternoon. Today, we will see some important concepts of Software Project
Management. I am Dr. Durga Prasad Mohapatra, Professor in the Computer Science
Department, NIT Rourkela. Today's topic is Project Evaluation and Programme
Management.
The concepts we will cover today are the business case for a project, project portfolio
management and project evaluation.
167
(Refer Slide Time: 00:45)
So, let us first see what a business case is. A business case normally refers to a feasibility
study. The main focus of a feasibility study is to determine whether it would be
financially and technically feasible to develop the software. It provides a justification
for starting the project. It should also show that the benefits of the project will
exceed, will outweigh, the cost of development, the cost of implementation and the
operational cost. It also needs to take account of the business risks; that means we
should take into account the various business risks associated with the project,
and that will help us decide whether the project is feasible or not.
168
Now, let us see the different types of feasibility study. There are three kinds:
technical feasibility, financial feasibility and operational feasibility. In technical
feasibility, we have to decide whether we can develop the software with the technology
currently available to us, or, if any special hardware or software is required, whether that
is available in the market. If you require special software or hardware to develop your
project and it is not currently available, and not likely to be available in the market in the
near future, then we say that the project is technically infeasible.
Then, after the technical feasibility study is over, we should consider
the financial feasibility. Financial feasibility means we should ensure that the profit or
benefit that will be obtained by implementing the proposed project outweighs
the cost; the benefits must be much larger than the cost that will be spent in
developing the project. Only then will we say that the project is financially feasible; otherwise
we say that the project is not financially feasible, and hence we should not
undertake it.
Then, next feasibility is operational feasibility. So, if you say that the project is technical
feasible as well as financial feasible, then we should go for the operational feasibility.
Operational feasibility means that if a project will be developed, then how far it will be
accepted by the customers, the end users. If the end users, they accept it, they are happy;
in they will be happy to use it; then, we say that it is operational feasible, else we say that
it is not operational feasible.
Because you see when a new project may be an automation system, we will develop and
previously the project was running manually people were very happy and if they will
make it automat, if you will make the their system automat, they may have fear that their
job may go. So, they may resist it and in that case we say that the project will be
operational feasible. So, we should take up a project for development, only if it is
technical feasible, financial feasible and operational feasible. Otherwise if any of the
feasibility it is not satisfied, then we should not take up that project.
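As a quick illustration (this is not part of the lecture slides), the rule above can be written as a single check. Here is a minimal Python sketch; the function and parameter names are hypothetical:

```python
# Minimal sketch: a project is taken up only when all three feasibility
# checks pass. Names are illustrative assumptions, not from the lecture.

def should_undertake(technically_feasible: bool,
                     financially_feasible: bool,
                     operationally_feasible: bool) -> bool:
    """Accept the project only if every feasibility study is satisfied."""
    return technically_feasible and financially_feasible and operationally_feasible

# Example: technically and financially sound, but end users are likely to resist it.
print(should_undertake(True, True, False))   # False -> do not take up the project
```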
(Refer Slide Time: 04:20)
Now, let us see the contents of a business case or feasibility study. The following
contents should be there in a business case. First, a little introduction or background of
the proposed project should be written; then the details of the proposed project should
be written. Then, the market: which market is this project aimed at? A little description
of that. Then, the organizational and operational infrastructure that will be required to
develop the project must be described.
Then, as I have already told you, two important concepts in a feasibility study are the
benefits and the costs; we must ensure that the benefits outweigh the costs. So, the next
content is the benefits. Then comes an outline of the implementation plan: if you
develop the project, what is the outline of how to implement it; for that we must prepare
a plan. Then, what cost will be required to develop the project, the details of the cost,
and then the financial case.
The next content is the risks: what types of risks will be there if we go ahead and
develop the project, and how they can be handled. And finally, there is an overall plan
regarding management of the proposed project. So, these contents should appear in a
business case or in a feasibility study report.
(Refer Slide Time: 05:49)
So, now let us look at all these contents in brief. As I have already told you, first a little
introduction or background should be there in the business case. This introduction
describes the problem to be solved or an opportunity to be exploited: what kind of
problem are we going to solve, or what opportunity are we going to exploit? That has to
be described first as a bit of background or introduction. Then, for the proposed project,
a brief outline of the project scope must be given. Next is the market: here the project
could be to develop a new product. Suppose you are planning a project for developing a
new product, for example a new computer game. Then the likely demand for the product
would need to be assessed: if nobody will purchase your software, then what is the point
in developing the product? So, you must look at what the market will be and how far
your product will be in demand in the market. That is why a little bit about the demand
for this product in the market should be discussed.
(Refer Slide Time: 07:13)
The next content is the organizational and operational infrastructure. Here we describe
how the organization would need to change: if you develop this particular project, what
changes may occur in the organization, and whether there will be a lot of change in the
organizational structure or in the operational structure. If that happens, some problems
might arise. This is especially important where a new information system application is
being introduced. Suppose the existing system is a manual one and you are just going to
automate it; there the organizational or operational structure might not be affected a lot.
But if you are going to develop a completely new information system where everything
was fully manual before, then there might be some changes in the organizational
structure or the operational infrastructure. So, we have to note down what kind of
changes will occur in the organizational and operational infrastructure if the new project
is developed; that we must discuss.
Then, as I have already told you, the two important contents of a business case are
benefits and costs. Normally these should be expressed in financial terms: any benefit,
as far as possible, we should try to express in financial terms, that is, in monetary terms.
We know that some of the benefits cannot be expressed in financial terms; we will see
the different types of benefits and costs, which can be expressed in monetary or
financial terms and which cannot, maybe in the next class. In the end it is up to the client
to assess this, because first we have to see what benefits we will get out of the proposed
project.
At the end the client will assess this: after implementing the project, how much have
they spent, what cost have they incurred, and how much benefit are they getting. If the
benefits do not outweigh the cost, then definitely it will be a loss to them and they will
not go for developing the proposed project. That is why we must identify the benefits
and try to express them in financial terms, in monetary terms, in the business case or
feasibility study report.
Next is the outline implementation plan. Here we describe how the project is going to be
implemented, in which way you will implement the project. This should also consider
the disruption to the organization that the project might cause: if you implement the new
project, will there be any disruption to the normal operation of the organization? That
should also be written in this outline implementation plan. The other important content
is the costs; here the implementation plan will supply the information needed to
establish them. What are the different types of cost? We will discuss that in the next
class. How much will the client have to pay for developing this project? The
implementation plan must supply the information to establish this cost.
The next component is the financial analysis. The financial analysis combines the cost
data and the benefit data to establish the value of the project: if the project is developed,
what will the value of the project be? So, this section combines the various cost and
benefit data to establish the real value of the project, and by comparing the costs and
benefits we can decide whether the project should be accepted or not.
The next important component of a business case is the risks. Risk, as you know, is
something undesirable. We will find two major types of risks in a project, and these two
types of risks must be covered in the feasibility study report or business case.
One is business risks and the other is project risks. Let us see what project risks are.
Project risks are the risks related to threats to successful project execution: while
completing the project successfully, what threats might arise? These are known as
project risks. Business risks, on the other hand, are related to factors threatening the
benefits of the delivered project: once the new project is delivered, which factors
threaten the benefits you expect to get from it? Those factors are termed business risks.
In the business case or feasibility study our focus will be on the different business risks,
and we will discuss business risk towards the end of the session.
The last component is the management plan. Here a detailed plan for the smooth
management of the project within the organization should be prepared: what are the
different activities to consider, what are the different milestones, and so on. The detailed
plan for smooth management while developing the project should be written in this
section.
So, these are the various contents of a business case; a business case or feasibility study
report should contain these sections.
Now, we will see another important aspect of software project management, that is,
project portfolio management. Let us first see what project portfolio management
provides. Project portfolio management provides an overview of all the projects that an
organization is undertaking. An organization is not developing just one project; it may
take up and develop multiple projects. So, project portfolio management provides an
overview of all the projects that the organization is developing.
Some of the projects might be academic projects, some might be healthcare-related
projects, some might be business-oriented projects and some might be industry-oriented
projects. Project portfolio management provides an overview of all of them, and it also
prioritizes the allocation of resources to projects.
Since the organization is developing so many projects and the resources are limited,
how do we allocate the resources? Which project is more important, so that more
resources should be given to it first? How do we prioritize which project should be
allocated which resources and when? This prioritization is one of the objectives of
project portfolio management: it prioritizes the allocation of resources to the different
projects.
It also decides which new projects should be accepted and which existing ones should
be dropped. If after the feasibility study we have found that a new project, if undertaken,
will give us much more profit, then certainly we must accept it and develop it. On the
other hand, we may see that some of the projects which are running are not giving much
benefit to the organization; rather, they are incurring losses. Then this is the time we
must take action to drop those projects: whatever loss we have already suffered we
accept, and we should not allow them to bring more loss to the organization.
So, those existing projects which are running at a loss must be dropped. How to decide
which new projects should be accepted and which existing ones should be dropped? For
this, we again use project portfolio management: it decides which project to accept and
which existing project to drop.
Let us see what the important concerns of project portfolio management are. The first
one is evaluating proposals for projects. Project proposals may come from different
sources; how do we evaluate them, and on which parameters? This is the first important
concern: we have to evaluate the proposals for the projects.
Then, we have to assess the risks involved with the projects. I have already told you that
there are two important kinds of risks: business risks and project risks. We have to
assess what the impact of these risks will be if they occur and how the project will be
affected. So, the second concern is assessing the risks involved with the projects.
Then comes deciding how to share resources between projects. Since multiple projects
are running in an organization but the resources are limited, deciding when which
resource will be allocated to which project is an important factor. Project portfolio
management aims at deciding which resource will be given when and to which project.
Next is taking account of dependencies between projects. Some projects are dependent
on others: suppose project A depends on project B; that means if project B is not
completed, maybe project A cannot start. In order to handle this, we must take into
account the dependencies between the projects: which project depends on which other
project. Accordingly, we can assign the resources so that the project on which another
project depends is finished as early as possible, so that the dependent project can be
started.
Then, removing duplication between projects: if similar projects are running and some
components are redundant or duplicated, it is an unnecessary use of effort and resources
and also an unnecessary waste of time. We have to identify the duplication between the
projects and remove it; this is also one of the important concerns of project portfolio
management.
Finally, ensuring that necessary developments have not been inadvertently missed.
Another important concern of project portfolio management is to ensure that no
necessary development, no important item, has been missed inadvertently. This can also
be ensured by using project portfolio management.
Now, let us see the different elements of project portfolio management. There are three
important elements: project portfolio definition, project portfolio management and
project portfolio optimization.
Let us first see project portfolio definition. What is the objective here? The objective is
to create a central record of all the projects within an organization. Within an
organization there are many projects, so how do we create a central record of all of
them? That is the objective of project portfolio definition.
We must also decide whether to have all the projects in the repository or only a special
category of projects, such as ICT projects. That is, we have to decide whether all the
projects will be placed in the repository or only projects of some special category, like
ICT projects, will be kept.
Then, we have to note the difference between new product development projects and
renewal projects, because there is a significant difference between developing a product
newly and just renewing or updating an existing one. We must distinguish between
them because the resources required by the two kinds of projects, new product
development and renewal, will be different. The time and effort required will also be
different. So, we have to note down the difference between new product development
projects and renewal projects. These are the objectives of project portfolio definition.
Then we have project portfolio management. Here the actual costing and performance
of projects can be recorded and assessed: what is the actual cost the project takes, and
what kind of performance is the project giving? These can be recorded and assessed
using portfolio management. The main objective is to record and assess the actual
costing and the actual performance of the projects being developed.
In project portfolio optimization, as its name suggests, the information gathered above
is used. You will see that in project portfolio definition we are creating the record, we
are gathering the information. In project portfolio optimization, the information
gathered earlier can be used to achieve a better balance of projects. How can the
different projects be balanced? For example, some of the projects are very risky but
potentially very valuable; you know that the projects which are more risky are likely to
give more benefits. On the other hand, there are some projects which are much less
risky, but their profits are also much smaller; they are less valuable projects. So, we
have to balance these different kinds of projects to achieve better performance.
How can we balance them? By using the information that we have gathered in the above
steps; the information gathered above can be used to achieve a better balance of
projects. As we have already seen, some projects might be very risky, but then the
return, the profit, will also be very high; and some projects are less risky, but the return
is very small, they will give very little profit.
So, we have to strike a balance between these different kinds of projects. That means we
have to go for some optimization: how can we make more profit even if the projects are
risky, how can we spend less time and less cost while the objective is
to maximize the profit? These things can be achieved by project portfolio
optimization.
Now, let us quickly see the pros and cons of project portfolio management. The pro is
that it allows some small ad hoc tasks to be done outside the portfolio; for example,
quick fixes to existing systems can be carried out easily. The disadvantage is that you
may have allotted some full-time staff to a project, but effectively they will be only
part-time, because they still have some routine work to do; they cannot give their full
time to the assigned project. That is one of the limitations. Another is that the official
project portfolio may not accurately reflect the organizational activity if some projects
are excluded: if some of the projects are excluded, then the official project portfolio
report you have prepared may not accurately reflect the organizational activities. So,
these are some of the pros and cons of project portfolio management.
(Refer Slide Time: 25:47)
Now, let us see how to evaluate individual projects. I have already told you that three
kinds of feasibility study are required to evaluate individual projects: technical
feasibility, financial feasibility and operational feasibility. Let us look at technical
assessment first. Technical assessment consists of evaluating whether the required
functionality can be achieved with current affordable technologies: with the
technologies currently existing in the market, can you develop the project?
If so, we say that it is technically feasible and the technical assessment has been met.
Then, the cost of the technology: if you will use a new technology, new hardware or new
software, the cost of the technology adopted must be taken into account in the financial
assessment. When you prepare the cost-benefit analysis, you have to consider the cost of
the new hardware, new software or new technology.
(Refer Slide Time: 26:47)
Then comes the financial assessment. In the financial assessment we have to analyze the
costs and the benefits, which is why it is known as cost-benefit analysis. We have to
analyze the various costs that will be spent and the various benefits that will be
obtained, and then compare whether the benefits sufficiently outweigh the costs. If they
do, we take up the project for development; otherwise we must not take up that project
for development. This financial assessment, popularly known as cost-benefit analysis,
relates to an individual project.
For individual projects, you have to perform cost-benefit analysis. For cost-benefit
analysis, you first need to identify all the costs: what are the different kinds of cost? For
example, some of the costs will be development costs; some will be setup costs, for
example the hardware cost, the recruitment cost and the staff training cost incurred in
setting up for the project, that is, setting up the hardware, recruiting the different
personnel required and providing training to them. These are setup costs.
Similarly, another kind of cost is operational cost: after installation, we may have to
keep spending some amount to run the system, to operate it. These are operational costs. In this
way, for performing cost-benefit analysis, we have to identify the different types of
cost. Then, we similarly have to identify the different types of benefits; the
classification of costs and the classification of benefits we will discuss in the next
class.
Then, we check that the benefits are greater than the costs. As I have already told you,
while performing cost-benefit analysis we must ensure that the benefits that will be
obtained out of the project are greater than the costs that will be incurred. Only then can
we take up the project for development; otherwise we should not take up the project for
development. That is one of the important objectives of cost-benefit analysis: we have
to check that the benefits are larger than the costs.
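As a rough illustration of this check (not from the lecture, and with made-up figures), here is a minimal Python sketch that totals the cost categories just mentioned and compares them with the estimated benefits:

```python
# Minimal sketch, hypothetical figures: total the cost categories named in the
# lecture (development, setup, operational) and check that the estimated
# benefits outweigh them before taking up the project.

costs = {
    "development": 60_000,   # building the software (assumed figure)
    "setup": 25_000,         # hardware, recruitment, staff training (assumed)
    "operational": 15_000,   # running the system after installation (assumed)
}
estimated_benefits = 130_000  # assumed total benefit over the same horizon

total_cost = sum(costs.values())
print(f"total cost = {total_cost}, benefits = {estimated_benefits}")
print("take up the project" if estimated_benefits > total_cost
      else "do not take up the project")
```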
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 12
Project Evaluation and Programme Management (Contd.)
Good afternoon to all of you. Last class we discussed cost-benefit analysis. We have
seen the different ways of evaluating proposals: one is the technical assessment, then
the cost-benefit analysis or financial assessment. Now we will take up the next item for
individual project assessment. Today we will first see the other item that is left for
individual project assessment, that is, cash flow forecasting. Then, as I have already told
you, C-B analysis is the most important analysis among all the assessments, among all
the feasibility studies; the financial assessment or C-B analysis is the most important.
So, we will see the detailed steps of C-B analysis. In one of the steps we have to identify
and classify the costs and benefits, so we will see the details of the different types of
costs and benefits associated with a project.
Like cost-benefit analysis, it is also very important to produce a cash flow forecast. A
cash flow forecast indicates when expenditures and income will take place during the
development of the project.
You know that development of the project will incur some cost. We need to spend some
money, such as on purchasing equipment, staff payments and so on, during project
development; but these expenses cannot wait until income, until revenue, is generated
from the project. We have to spend some money initially. Only when the system or
product is released will it generate an income that gradually pays off the cost. Till that
point we cannot wait; we have to spend some money, and that money may have to come
from our own resources, the company's own resources, or by taking a loan from a bank
and so on.
We also have to take into account the timing of the costs and income: when will the
costs be incurred, and when will the first income, the first revenue, come in? The timing
of costs and income for the product also needs to be estimated.
We also have to think about where the initial funds required to start the project will
come from. We need to know whether the initial fund requirement, the initial expenses,
can be met from the organization's own resources or by borrowing, by taking a loan
from a bank and so on.
So, a forecast is needed; a typical forecast is shown here. As I have already told you,
during the development of a project we initially have to spend some money; this is the
initial expenditure. So, what happens here?
No revenue is there yet because the project has just started, so we have to make some
initial expenditure, such as purchasing hardware and software, making the site ready,
staff recruitment and so on.
After some time, you see, the revenue gradually starts coming in, and the income
gradually increases. At some point of time this income will outweigh the expenditure; it
will meet the expenses, that is, the income will pay off the expenditure we made
initially. Then the income gradually increases further, and after some years the income,
the revenue, will again come down towards zero as the product asks for maintenance, so
again you have to put in some more expenditure. This is what a typical product life
cycle cash flow looks like.
So, basically the important thing is that we have to forecast the cash flow, and the cash
flow forecast indicates when expenditure will take place and when the income or
revenue will take place.
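To make the idea of the forecast concrete, here is a minimal Python sketch with purely illustrative numbers (they are not the figures from the lecture's slide): a year-wise cash flow and the cumulative cash position it implies.

```python
# Minimal sketch, illustrative figures only: a year-wise cash flow forecast and
# the running cash position. The negative year-0 figure is the initial
# expenditure; later positive figures are income once the product starts earning.

yearly_cash_flow = [-100_000, 10_000, 30_000, 60_000, 60_000, 40_000]

position = 0
for year, flow in enumerate(yearly_cash_flow):
    position += flow
    print(f"year {year}: cash flow {flow:+}, cumulative position {position:+}")

# The cumulative position shows how long the project stays "in debt" before
# the income pays off the initial expenditure.
```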
With this, we now go to the detailed steps of cost-benefit analysis, which is very
important for a successful development project. These are the steps of cost-benefit
analysis that we have to carry out: first, we have to identify the costs and benefits
pertaining to the project; then we have to categorize the various costs and benefits; then
we have to select a cost-benefit evaluation technique (there are various cost-benefit
evaluation techniques, and we will see them); then we have to interpret the results of the
cost-benefit evaluation technique for analysis; and finally, after analyzing the results,
the organization has to take appropriate action. So, now let us see the first step.
first step.
The first step is identifying the cost and the benefits pertaining to the project. You know
that the certain cost and benefits that easily identifiable. For example, direct cost such as
purchasing a computer and payment to the what a staff members etcetera they can be
easily identified.
These direct and the direct benefits very often they relate one to one to the direct cost,
the direct benefits will directly relate to the direct cost especially the savings that may be
obtained from reducing the cost in the activities in the questions they can be mapped to
the what benefits. Some direct cost and benefits may not be well defined they cannot be
clearly identified since they represent some estimated cost they are not the actual cost
they are estimated cost or they are estimated benefits that have some uncertainty.
So, they cannot be clearly defined, they cannot be clearly measured examples are like the
cost reserved for bad debt etcetera. So, this types of direct cost or this types of some of
188
the benefits they cannot be what they cannot be well define they cannot be clearly
defined and they cannot be clearly identifiable they cannot be clearly measured. A
category of cost or benefits that is not easily identifiable is opportunity cost. We have
taken here example that or some category of cost or benefit that cannot be easily
identifiable is opportunity cost and opportunity benefits. So, these opportunity cost or the
what do you mean by these opportunity cost or benefits?
Opportunity cost or benefits are the cost or benefits which are forgone by selecting one
alternative over another. If you could have taken another alternative you might have you
might have got better benefit for somehow you have overlook that somehow you have
forgotten that one. So, these are the what. So, this types of cost or benefits those are
forgone by selecting one alternative over another alternative. So, the cost or benefit
associated with the these types of things are known as opportunity cost. So, these
opportunity cost or opportunity benefits they are very much difficult to identify.
Now, the second step: as I have already told you, the first step is to identify the costs
and benefits, and the next step is to categorize the various costs and benefits. So, what
are the various categories of costs and benefits? Costs or benefits can be tangible or
intangible, they can be direct or indirect, and they can be fixed or variable. These are the
important categories of costs and benefits; let us discuss them one by one.
(Refer Slide Time: 07:54)
First is tangible cost. What do we mean by tangibility? Tangibility refers to the ease
with which costs or benefits can be measured: how easily you can measure the cost or
the benefit. An outlay of cash for a specific item or activity is referred to as a tangible
cost. So, a tangible cost is defined as an outlay of cash for a specific item or for a
specific activity. What are examples of tangible costs? The purchase of machinery or
devices such as hardware and software, the cost required for training the personnel, and
the cost required for paying the salaries of the employees; these are examples of
tangible costs.
(Refer Slide Time: 08:47)
Similarly, what do we mean by intangible cost? Tangible costs are costs which can be
easily measured, whereas intangible costs are costs that are known to exist but whose
financial value cannot be accurately measured. You cannot easily measure these costs.
Let us see some examples: employee morale problems caused by a new system.
Suppose some existing system was there earlier, and now you have installed a new
system; the morale problems this creates for the employees will incur some cost that
you cannot easily measure, so it is an intangible cost. Similarly, a lowered company
image: if the company's image goes down in the market, among the customers, it will
also incur some cost that is difficult to measure. These are some examples of intangible
costs. In some cases these intangible costs are easy to identify but difficult to measure;
you can identify them, but you cannot measure them.
For example, the cost of a breakdown of an online system. An online system such as a
railway reservation system or a banking system was running, but suddenly it has a
breakdown; some cost is associated with this, because the customers may be diverted to
other service providers, and so on. Take the banking example: if an online banking
system breaks down several times, there are so many banks, so why will people stick to
only your bank? They may migrate to another bank.
So, you lose some amount; this will incur some cost. The cost of a breakdown of an
online system during banking hours will cause the bank to lose deposits as well as waste
human resources, but it is difficult to measure by how much: how much money you will
lose, or how much human resource you will lose, is very difficult to measure. In some
cases these costs may be difficult even to identify. In the example we have taken, the
cost of a breakdown of an online system can be identified easily but is difficult to
measure.
But there are other cases where the cost is difficult even to identify; and if you cannot
identify it, measurement is out of the question. For example, consider the improvement
in customer satisfaction due to an online system: previously the system was manual, and
now you have implemented an online system; definitely the customers will be more
satisfied, but how much more satisfied, how to quantify it, is very difficult to quantify or
measure.
So, these are intangible costs. Similarly, just as we have seen tangible and intangible
costs, there are corresponding categories of benefits: tangible benefits and intangible
benefits. Tangible benefits are benefits which can be easily measured or quantified. For
example, the benefit of completing jobs in fewer hours: previously in one hour you were
completing, say, 10 jobs, and due to developing a new automation system you are now
completing the same 10 jobs in half an hour.
So, you are certainly getting some benefit, and this benefit you can measure: previously
in one hour you were doing 10 jobs, and now you complete the same 10 jobs in half an
hour, or 20 jobs in one hour. This can be easily measured and quantified, so it is an
example of a tangible benefit. Similarly, producing reports with no or very few errors:
previously you were preparing the reports manually, so there were many errors, maybe
10 or 20 per page; after automating, almost only one or two errors appear in the reports,
because your system is online or automated. This can be measured and quantified by
counting the number of errors, so it is also an example of a tangible benefit.
Similarly, what do we mean by intangible benefits? These are benefits which cannot be
easily measured or quantified. Let us take some examples: more satisfied customers, or
an improved corporate image. If you have more satisfied customers or your company
has an improved image, definitely your benefit will be more, but it is difficult to
measure or quantify.
So, why are we discussing the different types of costs and benefits? Because all the
types, tangible or intangible costs, tangible or intangible benefits, direct or indirect costs
and benefits, should be considered in the evaluation process. In cost-benefit analysis, all
the different types of costs and benefits, including tangible and intangible, direct and
indirect, should be considered in the evaluation process. It has been observed that
management often tends to deal irrationally with intangible costs or benefits by ignoring
them, which we should not do. Very often, since these intangible costs and benefits
cannot be easily identified or quantified, management simply ignores them and only
takes into account the tangible benefits and costs.
In that way the assessment, the financial assessment, the cost-benefit analysis, will not
be proper. So, management has to consider all the different costs and benefits equally in
the C-B analysis.
(Refer Slide Time: 14:51)
The next category is direct cost. These are costs with which a dollar figure can be
directly associated in a project; that means we can express the cost in monetary terms,
either in rupees or in dollars. It is easy to quantify these costs as monetary items. Let us
take a small example.
Suppose you want to purchase a box of diskettes, a number of computers or a set of
software packages. These are examples of direct costs. Why? Because here, in the case
of the purchase of a box of diskettes for, say, thirty-five dollars, you can directly
associate the diskettes with the dollars spent. Since you can assign monetary values to
these costs, they are known as direct costs. Other examples of direct costs are
development cost, establishment of the laboratory, setup cost and operating cost, that is,
the cost of operating and running the system after the installation is over. These are
examples of direct costs because they can be directly measured and directly quantified.
(Refer Slide Time: 16:04)
What about direct benefits? These are benefits which can be specifically attributed to
the given project; that is why they are known as direct benefits. Let us take a small
example. Suppose previously your system was running manually; now you have
developed a new online system, and it can handle 25 percent more transactions per day
than the manual system did.
So, here you are directly getting the benefit, and the benefit is specifically attributable to
the implementation of the new online system. This is an example of a direct benefit.
(Refer Slide Time: 16:48)
Now we will look at indirect costs. These costs are the result of operations that are not
directly associated with a given system or activity. Such costs, which are not directly
associated with a given system or a given activity, are often referred to as overhead or
overhead costs.
For example, insurance, maintenance and so on are not specific to a particular system or
a particular activity; they are required for running the overall organization. Similarly,
protection of the computer centre from heat and fire, lighting the organization, providing
air conditioning and so on are not directly associated with a given system, project or
activity; they are required for the overall running of the organization. These are
examples of indirect costs, and it is very difficult to determine the proportion of each of
them that is attributable to a specific activity.
(Refer Slide Time: 17:56)
Now, what about indirect benefits? These are benefits which are realized as a by-product
of another activity or system. They are not direct benefits of the activity itself; they are
realized as a by-product, a side product, of another activity or system. Let us take a
small example. The main objective of a steel plant is to produce steel, but while the
steel is being produced, in some cases fertilizers are also produced. The benefit due to
selling those fertilizers can be considered an indirect benefit.
Similarly, you know that shopkeepers sell different oils, but after selling the oils they
also sell the empty containers, the tin boxes, in the market. The amount, the benefit,
they get out of selling the empty oil containers such as tin boxes or paper cartons may
be treated as an indirect benefit.
(Refer Slide Time: 18:53)
Now let us come to the last category, fixed cost versus variable cost. Some costs are
constant regardless of how much a system is used: whether you use the system
frequently, less frequently or not at all, these costs remain constant. They are called
fixed costs.
They are normally encountered only once and will not recur again and again. Examples
are straight-line depreciation of hardware, exempt employee salaries and insurance;
these one-time costs are known as fixed costs.
(Refer Slide Time: 19:29)
Similarly, variable costs: these costs are incurred on a regular basis, in contrast to fixed
costs, which are encountered only once. Variable costs are incurred on a regular basis,
maybe weekly, monthly or yearly; they are usually proportional to the work volume and
continue as long as the system is in operation.
Let us take some examples, like the cost of computer forms and computer printouts,
which varies in proportion to the amount of processing. If a large amount of work has to
be done, a large number of activities have to be processed, obviously you have to print
more computer printouts and more computer forms are required; similarly, if less
processing has to be done, obviously fewer computer forms will be required. Similarly
with the length of the reports: the more activities performed, the longer the reports, and
when only a few activities are performed the reports will be shorter.
So, here the cost associated with these items will obviously vary according to the
volume of work performed.
(Refer Slide Time: 20:36)
Similarly, let us look at fixed benefits versus variable benefits. Fixed benefits, like fixed
costs, are constant and do not change. An example of a fixed benefit is the benefit due to
a decrease in the number of personnel by 20 percent resulting from the use of a new
computer system.
You are automating your system: previously it was running manually, and now a new
computer system is implemented. Due to the use of the computer, suppose the number
of personnel you were using previously is reduced by 20 percent. This is exactly a fixed
benefit, and the benefit of these personnel savings may recur every month.
(Refer Slide Time: 21:34)
Variable benefits, on the other hand, are realized on a regular basis and they change. For
example, suppose there is a safe deposit tracking system that saves 20 minutes in
preparing customer notices compared with the manual system.
When it was a manual system, it was taking 20 minutes more to prepare each customer
notice; after making it automated, computerized, 20 minutes are saved per notice. The
amount of time saved varies with the number of notices produced: depending on how
many notices are produced, the total time saved will vary. That is why this is an
example of a variable benefit.
(Refer Slide Time: 22:19)
So, in summary, in this class we have discussed the different steps for carrying out C-B
analysis. We have learnt the different types of costs and benefits with suitable examples:
direct cost versus indirect cost, direct benefits versus indirect benefits, fixed cost versus
variable cost, fixed benefits versus variable benefits, and similarly tangible cost versus
intangible cost and tangible benefits versus intangible benefits.
These costs and benefits have to be considered properly in C-B analysis; that is why this
classification is required. We have to consider each and every possible type of cost and
each and every type of benefit during the C-B analysis, because, as you have seen, very
often management does not take into account the intangible benefits and intangible
costs, since they are difficult to measure, difficult to identify and difficult to quantify.
So, management tends to ignore them, and as a result the outcome of the C-B analysis
will not be proper; the financial assessment will not be proper. That is why these costs
and benefits have to be considered properly while carrying out C-B analysis, while
carrying out the financial assessment.
(Refer Slide Time: 23:43)
In the last class as well as this class we have taken these materials mainly from the book
at serial number 1, by Hughes, Cotterell and Mall; that is the primary book we are
following here. Besides that, the different costs and benefits, the feasibility study and
the cost-benefit analysis are well explained in the System Analysis and Design book by
E. M. Awad, and some fundamental concepts of feasibility study, cost-benefit analysis
and software project management are given in Fundamentals of Software Engineering
by Rajib Mall. You may refer to these books. From time to time, as and when I use
different materials, different books or different resources, I will list all of them in due
course.
So, this is all about the different types of costs and benefits. In the next class we will see
the other steps of cost-benefit analysis. Today we have seen two steps: step 1, identify
the costs and benefits, and step 2, classify or categorize the costs and benefits. The next
step, after identifying and classifying the different costs and benefits, is to select a
proper cost-benefit evaluation technique. There are many cost-benefit evaluation
techniques, such as net profit, present value, net present value, payback period and
internal rate of return. How to select among these methods, and on what basis, we will
discuss in the next class. So, we will take up the selection of a cost-benefit evaluation
technique in the next class.
Thank you very much.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 13
Project Evaluation and Programme Management (Contd.)
Good afternoon to all of you. Today we will discuss the different cost-benefit evaluation
techniques.
In the last class we saw the first two steps of cost-benefit analysis, that is, identifying the
costs and benefits and then categorizing the various costs and benefits. Today we will
see the third step, that is, selecting the cost-benefit evaluation technique: how to select a
proper, suitable cost-benefit evaluation technique.
(Refer Slide Time: 00:48)
So, let us see the different cost-benefit evaluation techniques available. There are many,
such as net benefit or net profit analysis, payback period, Return on Investment (ROI),
present value analysis, Net Present Value (NPV), Internal Rate of Return (IRR) and
break-even analysis; a few others also exist. Let us discuss these cost-benefit evaluation
techniques one by one.
First we will discuss net profit. What do we mean by net profit? The net profit of a
project is the difference between the total costs and the total income over the life of the
project. So, we have to find the difference between the total cost spent and the total
income obtained over the life of the project.
We will take a small example to illustrate this. In this example the cash flows are given
year-wise; in the last class we have already seen what a cash flow is. In this example,
year 0 represents the period before the system is in operation, and the year 0 cash flow
represents the cost incurred before the system comes into operation.
Cash flow, as we discussed in the last class, is the value of income less outgoing, that is,
the difference between the income and the expenditure; income minus outgoing gives
the cash flow. Net profit is defined as the value of all the cash flows over the lifetime of
the application: over the whole life of the application, how much cash flow occurs in
total?
So, we have to find the total value of the cash flows for the whole lifetime of the
application. In this example, the application or project is expected to continue for 5
years. In year 0 we have spent 1,00,000; this is spent for establishing the project, and
here we have not got any revenue. Then, from year 1 onwards, it gives revenue: in the
first year it gives 10,000 in total, and so on, until at the end of the 5th year it gives
1,00,000 as income.
So, for the net profit, all these cash flows for years 0 to 5 have to be added up. Here you
can easily see that the net profit for this example project over 5 years comes to 50,000
dollars. So, for this example the net profit is 50,000.
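A small sketch of this calculation in Python, using the cash flows quoted in the lecture for this example (year 0 is the initial outlay, years 1 to 5 are the income figures):

```python
# Net profit = sum of all cash flows over the life of the project.
# Cash flows as described in the lecture for this example project.
cash_flows = [-100_000, 10_000, 10_000, 10_000, 20_000, 100_000]

net_profit = sum(cash_flows)
print(net_profit)   # 50000 -> the 50,000 net profit mentioned above
```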
Now, let us see the drawback of this technique. The drawback is that it does not take
into account the timing of the cash flows, that is, the time value of money. In later
classes we will see how this drawback can be overcome, and by which technique.
We will quickly take another example. Here you can see that four projects have been
shown, and the year-wise cash flows for each project are given. In the last row the net
profits for each of the projects are shown. You can see that for project 1, which I have
already shown in the previous slide, the net profit comes to 50,000; for project 2, if you
add up all the cash flows, you get a total net profit of 1,00,000; similarly, the net profit
for project 3 is 50,000 and the net profit for project 4 comes to 75,000.
You can easily see that project 2 gives the maximum net profit, so it appears to be the
most beneficial one here. But even though it gives the maximum net profit, it does so at
the expense of a huge initial investment: see how much is invested initially in each
project; projects 1 and 3 spend 1,00,000 and project 4 spends 1,20,000, but project 2
spends about 10,00,000. So, its initial investment is 10,00,000. That is why, even though
it gives the maximum net profit, it is at the expense of a huge investment, and that has to
be considered.
The next technique is the payback period. The payback period is the time taken to break
even or pay back the initial investment: how much you have spent initially, and in which
year you get that amount back.
It is the time it takes to start generating a surplus of income over expenditure: how much
you have spent, in which year you recover that amount, and from when you start getting
more than it, more benefit. That year gives the payback period.
Which project is desirable? The project with the shortest payback period is preferred,
because the owner wishes to minimise the time that the project is in debt. You try to
minimise that period, because the initial funding might have been brought in as a loan,
say from a bank.
In this example, you can see that initially 1,00,000 is invested, and then in the 1st year
the project gets 10,000, in the 2nd year 10,000, in the 3rd year 10,000 and in the 4th
year 20,000. That means up to the 4th year it has got back 50,000. Then in the 5th year
it gets 1,00,000; that means the initial 1,00,000 invested is recovered only during the
5th year. So, for this example the payback period is year 5.
Similarly, for our earlier example with four projects, you can compute the payback
periods. For project 1, as I have already shown you, the payback period is year 5.
Similarly, for project 2, the initial 10,00,000 spent is recovered only in year 5. For
project 3, initially 1,00,000 is spent, and the returns are 30, then another 30 making 60,
then 30 more making 90, and in the 4th year the initial amount of 1,00,000 is recovered.
So, for project 3, year 4 is the payback period. In the case of project 4, the organisation
has initially spent 1,20,000; it gets back 30,000 in the 1st year, then another 30,000,
then another 30,000, and only when the 4th year is completed does the organisation get
back the initial amount of 1,20,000. So, at the end of year 4 the project gets back the
initial expenditure, the initial expenses.
So, here you can see that for project 3 the organisation receives the original, the initial,
funding back during year 4. Hence project 3 may be treated as the most beneficial
project if we consider the payback period, as the sketch below also shows.
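Here is a minimal Python sketch of the payback calculation: the payback period is the first year in which the cumulative cash flow stops being negative. The project 1 figures are the ones quoted in the lecture; the project 4 figures are reconstructed from its description (30,000 per year against an initial 1,20,000, with the final year inferred from its 75,000 net profit) and should be treated as assumptions.

```python
# Payback period: first year whose cumulative cash flow is non-negative.

def payback_period(cash_flows):
    """Return the first year in which the running total of cash flows is >= 0, or None."""
    total = 0
    for year, flow in enumerate(cash_flows):
        total += flow
        if total >= 0:
            return year
    return None

project_1 = [-100_000, 10_000, 10_000, 10_000, 20_000, 100_000]   # from the lecture
project_4 = [-120_000, 30_000, 30_000, 30_000, 30_000, 75_000]    # reconstructed / assumed

print(payback_period(project_1))   # 5 -> pays back during year 5
print(payback_period(project_4))   # 4 -> pays back at the end of year 4
```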
The advantages of the payback period technique are that it is simple to calculate and not particularly sensitive to small forecasting errors. But it has some drawbacks: it ignores the overall profitability of the project, and it ignores any income or expenditure once the project has broken even; whatever cash flows occur after the break-even point are not taken into account.
You can see that projects 2 and 4 are overall more profitable than project 3, but this is ignored.
So, according to the payback period, project 3 is the most beneficial one, because it recovers its initial amount during year 4; yet the net profit is highest for project 2, followed by project 4 with 75,000.
So, if we take net profit into account, project 2 should actually be the best one; but since the payback period does not consider the overall profitability of the project, projects 2 and 4, even though they are overall more profitable, are ignored. This is one of the drawbacks of the payback period.
212
So, now let us see the next measure, ROI, which stands for Return on Investment. This measure provides a way of comparing the net profitability to the investment required. It gives a simple and easy-to-calculate measure of the return on the capital that has been invested.
213
(Refer Slide Time: 10:16)
So, ROI is expressed as a percentage. If we take our previous project, project 1, the net profit is 50,000 over 5 years.
So, the average annual profit is 50,000 divided by 5, which is 10,000. The ROI is then computed as 10,000 divided by the total investment; the firm initially spent 1,00,000, so ROI = 10,000 / 1,00,000 x 100 = 10 percent.
214
(Refer Slide Time: 11:01)
So, for project 1 the ROI comes to 10 percent. Similarly you can compute the ROI for the other projects: for project 2 it is 2 percent, for project 3 it is 10 percent and for project 4 it is 12.5 percent. The project with the highest ROI is preferred; project 4 has the highest ROI here, so project 4 is the most beneficial project in this example.
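A minimal sketch of this ROI calculation, using the figures quoted above (the function and variable names are my own):

def roi_percent(total_investment, net_profit, years):
    # ROI = average annual profit / total investment, as a percentage
    average_annual_profit = net_profit / years
    return average_annual_profit / total_investment * 100

print(round(roi_percent(100000, 50000, 5), 2))    # project 1 -> 10.0
print(round(roi_percent(1000000, 100000, 5), 2))  # project 2 -> 2.0
print(round(roi_percent(120000, 75000, 5), 2))    # project 4 -> 12.5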
215
Next let us see the drawbacks of ROI. Like net profitability, it does not take into account the timing of the cash flows. We have already seen that the problem with the net profit (net benefit) method is that it ignores when the cash flows occur; ROI has the same problem. It also bears no relationship to the interest rates charged by banks, and different banks may charge different interest rates, precisely because it ignores the timing of the cash flows. For these reasons this method can potentially be very misleading.
Now we will see one of the important methods, known as present value analysis. What do we mean by present value analysis? In long-term projects it is very difficult to compare today's cost with the full value of tomorrow's benefits; that is, it is difficult to compare the cost we invest today with the value of the benefit we will get tomorrow.
Also, the time value of money allows for interest rates, inflation and other factors that might alter the value of the investment. That is why we must carry out present value analysis for the investment that we are making and the benefit that we expect to get. So, present value analysis is very important.
216
So, present value analysis addresses the above problems: how to compare today's cost with the full value of tomorrow's benefits, and how to take into account interest rates, inflation and other factors that might alter the value of the investment. It does this by calculating the costs and benefits of the system in terms of today's money value, and then comparing across the different alternatives.
A critical factor to consider in computing present value is the discount rate. While computing the present value we also have to know a factor called the discount factor, and this discount factor is based on the discount rate.
The discount rate is equivalent to the forgone amount that the money could earn if it were invested in a different project: if that money had been invested elsewhere, what return would it have earned? In other words, the discount rate is the annual rate by which we discount future earnings. Mathematically the discount factor is calculated as follows,
217
DF = 1 / (1 + r)^t
where r is the discount rate and t is the number of years into the future that the cash flow occurs. This is the simple formula by which we can compute the discount factor.
So, now let us find the discount factor for a few discount rates and years. If we take the discount rate (interest rate) as 10 percent and a 1-year period, the discount factor is 1 / (1 + 0.10)^1 = 0.9091. Similarly, for 10 percent and 2 years, the discount factor is 1 / (1 + 0.10)^2 = 0.8264. In the same way the discount factor can be computed for any given year and discount rate. We have also given you a table of discount factors, which will help you in computing present values; there is a table here.
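As a small check (not from the lecture), the same factors can be computed directly from the formula above; the helper name is my own:

def discount_factor(r, t):
    # DF = 1 / (1 + r)^t
    return 1 / (1 + r) ** t

print(round(discount_factor(0.10, 1), 4))  # 0.9091
print(round(discount_factor(0.10, 2), 4))  # 0.8264
print(round(discount_factor(0.10, 4), 4))  # 0.683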
218
(Refer Slide Time: 16:15)
So, you can see the discount factors for different discount rates such as 5 percent, 6 percent, 8 percent, 10 percent, 12 percent and 15 percent, and for different numbers of years, from 1 to 10 and then 15, 20 and 25. This table will help you in finding present values and net present values.
219
So, let us quickly take a small example. Suppose that $3,000 has to be invested in a project and the average annual benefit is expected to be $1,500 over the four-year life of the project. The investment has to be made today, whereas the benefits are expected to come in the future. So, we compare present values with future values by considering the time value of the money that we have invested.
This time value of money is not considered in the net profit and ROI methods; that is their problem, and it is overcome by present value analysis. The amount that we are willing to invest today is determined by the value of the benefits at the end of the given period; note how the present value is defined.
The amount we are willing to invest today, in the expectation that some benefit will occur after n years, is called the present value of that benefit.
So, how can we compute the present value? We can compute it from the formula for the future value. The future value of a benefit can be expressed as
220
F = P (1 + r)^t
where F is the future value, P is the present value of the money (the present value of the investment), r is the discount rate and t is the number of years into the future that the cash flow occurs.
On solving the above equation we get the value of P:
P = F / (1 + r)^t
221
(Refer Slide Time: 18:49)
Let us apply this to the previous example, where we invest $3,000 and the average annual benefit is expected to be $1,500 over the four-year life of the project; we now want to compute its present value. For instance, the present value of $1,500 received at the end of the 4th year, at a 10 percent discount rate, is calculated as P = 1500 / (1 + 0.10)^4, which comes to about 1024.52.
What does this mean literally? It means that if we invest about $1,024.52 today at 10 percent interest, then we can expect to get $1,500 in four years; that is the meaning of present value. This calculation can be repeated for each year in which a benefit is expected, which you can work out yourself.
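The present-value formula above can be sketched in a couple of lines; the names in the sketch are my own, and with exact arithmetic the example works out to roughly 1024.52:

def present_value(future_value, r, t):
    # P = F / (1 + r)^t
    return future_value / (1 + r) ** t

print(round(present_value(1500, 0.10, 4), 2))  # approx. 1024.52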
222
(Refer Slide Time: 20:17)
Now, based on the present value, another important concept is the Net Present Value, NPV for short.
How do we define NPV, and what does it represent? NPV is a project evaluation technique that takes into account both the profitability of a project and the timing of the cash flows that are produced. We have already seen that the earlier techniques, net profit and ROI, do not take the timing of cash flows into account, but NPV considers the profitability of the project as well as the timing of the cash flows it produces.
How is the net present value obtained? It is obtained by discounting each cash flow, whether negative or positive (recall that the initial expenditure is represented as a negative value and the income as positive values), and summing the discounted values.
223
(Refer Slide Time: 21:32)
We will take a small example here. Again, for project 1, I am calculating the NPV. The year-wise cash flow values are given in dollars, and the discount factors are obtained from the table shown earlier: for a 10 percent rate the discount factor for year 1 is 0.9091 and for year 2 it is 0.8264. These values can be used directly to compute the NPV.
The discount factor for year 0 is taken as 1; then, using the formula, the discount factor for the 1st year is 0.9091, for the 2nd year 0.8264, and so on. We multiply each year's discount factor by that year's cash flow to get the discounted cash flow, and finally we add the discounted cash flows for all the periods; that sum is the NPV.
After adding the discounted cash flows for all the years, we get an NPV of $618 for this example, with a discount rate of 10 percent over a period of 5 years.
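A rough sketch of this NPV computation for project 1 (the helper name and cash-flow list are my own rendering of the example); note that exact discount factors give about 621, whereas the four-decimal table factors used in the lecture give 618:

def npv(rate, cash_flows):
    # Year 0 holds the (negative) initial investment and is not discounted.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project_1 = [-100000, 10000, 10000, 10000, 20000, 100000]
print(round(npv(0.10, project_1)))  # about 621 (table-rounded factors give 618)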
224
(Refer Slide Time: 23:15)
Similarly, you can calculate the NPV values for the 2nd, 3rd and 4th projects and draw the inference: which project is the most preferred, the most beneficial one? Please find this out; it is an assignment for you.
So, let us see the limitations of net present value analysis. The main difficulty with NPV for deciding between projects is selecting an appropriate discount rate: should you take 5 percent, 8 percent, 10 percent or 12 percent? Choosing an appropriate discount rate is one of the main difficulties. Also, an NPV figure might not be directly comparable with the earnings from other investments or with the cost of borrowing capital. These are some of the limitations of the NPV method.
225
(Refer Slide Time: 24:15)
Next we will see another measure, IRR, which stands for Internal Rate of Return. IRR provides a profitability measure expressed as a percentage return that is directly comparable with interest rates.
Suppose a project shows an IRR of 10 percent; it would be worthwhile if the capital could be borrowed for less than 10 percent, or if the capital could not be invested elsewhere for a return greater than 10 percent.
IRR may be defined as the discount rate that would produce an NPV of 0 for the project, and it can be used to compare different investment opportunities.
It can be calculated using a spreadsheet; Microsoft Excel, for example, provides an IRR function which takes a range of cash flow values and an initial guess as input and returns the IRR as output. So, this is the internal rate of return, which is another very good measure.
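Since IRR is the rate at which the NPV becomes zero, it can be approximated numerically; the bisection sketch below is my own illustration (it assumes the NPV decreases as the rate rises and that the IRR lies between 0 and 100 percent), not the exact method a spreadsheet uses:

def irr(cash_flows, low=0.0, high=1.0, tol=1e-6):
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    # Bisection: narrow the rate interval until NPV is effectively zero.
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid) > 0:
            low = mid   # NPV still positive, so the rate can go higher
        else:
            high = mid
    return (low + high) / 2

# Project 1's IRR as a percentage (a little over 10 for these cash flows).
print(round(irr([-100000, 10000, 10000, 10000, 20000, 100000]) * 100, 2))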
226
(Refer Slide Time: 25:42)
The limitations of IRR are as follows. It does not indicate the absolute size of the return. For example, a project with an NPV of 1,00,000 and an IRR of 15 percent may be more attractive than a project with an NPV of only 10,000 and an IRR of 18 percent, but IRR alone cannot reveal this.
Also, under certain conditions it is possible to find more than one rate that produces a zero NPV. In such cases we take the lowest value and ignore the other rates.
227
(Refer Slide Time: 26:35)
Now we see the last technique, break-even analysis. Break-even analysis is based on a point called the break-even point. Break-even is defined as the point where the cost of the candidate (proposed) system and that of the current system are equal.
Suppose a current system, say a manual one, is running and the proposed system is an automated one; the break-even point is then the point at which the cost of the existing system and the cost of the proposed system are equal.
Unlike the payback method, which compares the costs and benefits of the candidate (proposed) system itself, break-even analysis compares the cost of the proposed system with the cost of the current system.
228
(Refer Slide Time: 27:22)
When a candidate system is developed, its initial costs usually exceed those of the current system; obviously, when a proposed system is being developed you have to spend more money, so initially its cost exceeds that of the current system. This period is called the investment period. The point at which the cost of the proposed system and the cost of the existing system become equal is the break-even point, and after the break-even point the proposed (candidate) system gives a better return than the existing one; this period is known as the return period. I will show it through a graph.
229
(Refer Slide Time: 28:01)
You can see that on the x-axis we have taken the processing volume and on the y-axis the processing cost. One line in the graph represents the total cost of the current system and the other line the total cost of the proposed (candidate) system. As I have already told you, initially the proposed system costs more, while the existing system, since it is already running, costs less. But at some point, here at point B', the cost of the existing system and the cost of the proposed system become equal. So B' is the break-even point, at roughly 65,000 processing volume, where both costs are equal.
After this break-even point the proposed system costs less and the current system costs more; that means the proposed system starts giving a return, starts giving profit. Since before the break-even point the proposed system costs more, that area is known as the investment range; after the break-even point, where the current system costs more and the proposed system costs less and starts giving a return, the area is known as the return range. In the chart, the intercept represents the fixed cost and the sloped portion represents the variable cost. This is what is meant by the break-even chart, as I have already explained.
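The break-even volume can be computed from the fixed and variable costs of the two systems; the figures below are my own assumptions, chosen only so that the result lands near the 65,000 processing volume mentioned above:

def break_even_volume(fixed_old, var_old, fixed_new, var_new):
    # Solve fixed_new + var_new * v = fixed_old + var_old * v for v.
    return (fixed_new - fixed_old) / (var_old - var_new)

# A proposed system with a higher fixed cost but a lower cost per transaction.
print(break_even_volume(fixed_old=10000, var_old=2.0,
                        fixed_new=75000, var_new=1.0))  # 65000.0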
230
(Refer Slide Time: 29:42)
Now, the shaded area is the investment period or investment range, the other area is the return range, and B' is the break-even point. The break-even point is also very helpful in deciding which project will be beneficial for us.
So, in this lecture we have discussed the different cost-benefit evaluation techniques with suitable examples, described the advantages and disadvantages of each technique, and seen which technique is best suited to which circumstances.
231
In the next class we will see the remaining two steps of cost-benefit analysis, namely interpreting the results and taking the appropriate action. These are the references, and in the next class we will see the remaining two steps.
232
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 14
Project Evaluation and Programme Management (Contd.)
Good afternoon to all. Now let us see the remaining parts of cost-benefit analysis. We have already seen three steps of C-B analysis; we will first see the remaining two steps and then the risk evaluation.
The last two steps of C-B analysis are: interpret the results of the analysis, and take appropriate action.
233
(Refer Slide Time: 00:42)
So, we have already discussed the cost-benefit evaluation techniques; depending upon the scenario, the organization should select a proper and appropriate cost-benefit evaluation technique. After that cost-benefit evaluation is over, the results should be interpreted.
Interpreting the results requires comparing the actual results against a standard, or against the result of some other alternative investment. The interpretation and decision phases are highly subjective in nature; they require the judgement and intuition of the project manager. Based on the level of uncertainty, the project manager may be confronted with two possibilities: either a single known value, or a range of values, for the different measures such as NPV or net profit.
234
(Refer Slide Time: 02:01)
In either case, whether it is a single value or a range of values for measures such as NPV, net profit or ROI, simpler techniques such as net profit analysis may be used; they are easier to implement than the more complex techniques.
We know that the drawback of net profit analysis is that it does not take into account the timing of the cash flows; if it could be modified to include the time value of money, the net profit method would probably be comparable with net present value, but the more complex techniques such as NPV are harder to apply.
235
(Refer Slide Time: 02:55)
Now, after interpreting and analysing the results, the project manager must take the appropriate action; after interpreting the results, the appropriate decision or action has to be taken.
Taking this decision is highly subjective in nature; it depends on the project manager's (or the end user's) confidence in the estimated costs and benefits, and on the magnitude of the investment. The final decision or action is to select the most cost-effective and beneficial system for the user.
So, the final action of the cost-benefit analysis is to select the most appropriate, most cost-effective and beneficial system for the user. The project manager then has to prepare the feasibility study report, which contains the major findings of the analysis process and the suggested recommendations. This is how the final action is taken after interpreting the results. So, this is about C-B analysis.
236
(Refer Slide Time: 04:02)
But what are the limitations of C-B analysis? There are several. The first is valuation problems: I have already told you in the last class that there are different types of costs and benefits, and particular categories, namely the intangible costs and benefits, are very difficult to identify, measure and quantify. That is one of the most difficult challenges in cost-benefit analysis.
Another problem is distortion problems. The results of C-B analysis can be distorted in two ways: one, there may be intentional favouritism of an alternative for political reasons; two, when data are incomplete or missing from the analysis, it is difficult to take an appropriate action.
The next problem is the completeness problem. The costs used in the C-B analysis may be on the high side: the estimated costs may not represent the actual costs. On the other hand, not enough costs might be considered; the project manager might not have included enough cost items to conduct a complete analysis. These are the possible limitations of C-B analysis.
237
(Refer Slide Time: 05:33)
Now, as I have already told you, one of the contents of the business case is risks. We know that every project has some kind of risk, and we discussed earlier that there are two types: project risks and business risks. Project risks are threats to the completion of a successful project execution, whereas business risks relate to factors which threaten the benefits of the delivered project.
In the business case the main focus is on business risks. So, right now we will discuss how to evaluate the business risks; the project risks will be discussed later on.
238
(Refer Slide Time: 06:15)
So, let us see the risk evaluation methods, particularly for business risks. We will look at risk identification and ranking, risk and net present value, a cost-benefit analysis method for handling risks, risk profile analysis, and the use of decision trees.
Let us quickly discuss risk identification and ranking. In any project evaluation we must first identify the risks, and we should quantify their effects on the business.
239
One approach to identifying the risks and quantifying their effects is to construct a project risk matrix, a special kind of matrix which utilises a checklist of possible risks.
The possible risks appear in the matrix, and we then classify them according to their relative importance and likelihood: what is the probability that a risk will occur, and how important is it, that is, how far will it affect or hamper the business?
We can give values such as H for high, M for medium and L for low; that is, the importance can be high, medium or low, and similarly the likelihood can be high, medium or low. As I have already said, the matrix mainly consists of the possible types of risks, which we then classify by their relative importance and likelihood.
The point is that the importance of a risk and its likelihood need to be assessed separately; they should not be considered together. We might, for example, be concerned with a risk that, although serious, is very unlikely to occur: some risks are very serious, but the chance of them occurring is very small, so we may not worry much about them. On the other hand, there may be risks whose effect, and hence importance, is small, but which are almost certain to occur, say with 90 percent probability; we must be very concerned about these.
240
(Refer Slide Time: 08:52)
For example, say there is a project A which appears to give a better return but could be very risky; we should treat it differently. Similarly, another project B might give a smaller return but carry less risk; whether to go for it has to be debated. As I have already told you, in order to classify the risks by importance we can draw up a project risk matrix for each project for assessing the risks. Here we consider two important factors: the importance of each risk, and the likelihood (the probability) that the risk will occur.
241
We have taken an example of a project risk matrix: as I have already told you, we first write down the possible risks, and then their importance and their likelihood. The example here is for an e-commerce application. One possible risk is that the client rejects the proposed look and feel of the site because the GUI is not good enough; if that happens the importance is obviously very high, but the dashed entry for likelihood means it is very unlikely to occur. 'Competitors undercut the prices' might also happen; if it does, the consequences are serious, so its importance is high and its likelihood is medium. 'Warehouse is unable to deal with the increased demand' has medium importance and low likelihood. 'Online payment has security problems' has medium importance and medium likelihood. 'Maintenance costs may be higher than estimated' has low importance, since its effect on the business is small, and low likelihood. 'Response time deters purchasers' has medium importance and medium likelihood, as determined from some study.
In this way, for any given project, in order to classify and evaluate the risks we construct a project risk matrix by considering the importance of each risk and the possibility (probability or likelihood) that it will occur.
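A project risk matrix can be represented very simply in code; the sketch below merely records the risks described above with H/M/L ratings (the data structure is my own choice, and the dash for 'unlikely' is rendered here as L):

risk_matrix = [
    ("Client rejects proposed look and feel of the site", "H", "L"),
    ("Competitors undercut prices",                        "H", "M"),
    ("Warehouse unable to deal with increased demand",     "M", "L"),
    ("Online payment has security problems",               "M", "M"),
    ("Maintenance costs higher than estimated",            "L", "L"),
    ("Response times deter purchasers",                    "M", "M"),
]
for risk, importance, likelihood in risk_matrix:
    print(f"{risk}: importance={importance}, likelihood={likelihood}")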
242
(Refer Slide Time: 11:11)
Next we will see the next method: risk and net present value. Where a project is relatively risky, it is common practice to use a higher discount rate to calculate the net present value. In the last class, while calculating the net present value, we considered discount rates such as 5 percent, 8 percent, 10 percent, 12 percent or 15 percent. When we observe that a project is relatively risky, what should we do? It is common practice to use a higher discount rate for computing the net present value. This risk premium might, for example, be an additional 2 percent for a reasonably safe project and an additional 5 percent for a fairly risky one. For instance, if the normal discount rate is 10 percent, for a reasonably safe project we may take 12 percent, and for a fairly risky project 15 percent.
Just as in the previous approach with the risk matrix, projects may be categorized as high, medium or low risk using a scoring method, with a risk premium designated for each category.
243
So, we can rank projects as high-, medium- or low-risk using some scoring method, for example by assigning weights, with a risk premium designated for each category: for a high-risk project we might add 5 percent and use a 15 percent discount rate, for a medium-risk project we might take about 13 percent, and for a low-risk one 11 or 12 percent. Even if the premiums are chosen somewhat arbitrarily, they still provide a consistent method of taking risk into account.
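A tiny sketch (my own, not from the lecture) of adding a risk premium to the base discount rate, using the 2 and 5 percentage-point figures quoted above:

def risk_adjusted_rate(base_rate_percent, risk_category):
    # Premiums in percentage points, as quoted in the lecture.
    premiums = {"reasonably safe": 2, "fairly risky": 5}
    return base_rate_percent + premiums[risk_category]

print(risk_adjusted_rate(10, "reasonably safe"))  # 12 percent
print(risk_adjusted_rate(10, "fairly risky"))     # 15 percent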
244
(Refer Slide Time: 13:56)
Now we will see how cost-benefit analysis can be used for evaluating risks; we have already used cost-benefit analysis for evaluating projects, and it can similarly be used to evaluate the impact of risks. This is a more sophisticated approach to the evaluation of risk. Here we consider each possible outcome, estimate the probability of its occurring, and estimate the corresponding value of the outcome, that is, how much benefit we would get.
So, rather than a single cash flow forecast for a project, we have a set of cash flow forecasts, each with an associated probability of occurring.
245
(Refer Slide Time: 15:06)
The value of the project is then obtained by summing the cost or benefit for each possible outcome weighted by its corresponding probability. Here we take two things: the cost or benefit of each possible outcome, and the probability assigned to it; we multiply them and sum over the outcomes. Let us take an example, and then I will explain.
246
Suppose there is a company which is planning to develop a payroll application and is currently engaged in C-B analysis. A study of the market shows that if the company can target it efficiently and no competing products become available, it will obtain a high level of sales, generating an annual income of $800,000; it estimates that there is a 10 percent chance of this happening.
These are the two extreme cases: the best case is an annual income of about $800,000, and the worst case is $100,000 per annum. The best case has a 10 percent chance, whereas the worst case has a 30 percent chance. The most likely outcome lies somewhere between these two extremes: according to the survey, the company will gain a market lead by launching before any competing product becomes available and achieve an annual income of $650,000, which is between the $800,000 and the $100,000. The expected sales income as per the market study is shown in the next table.
247
(Refer Slide Time: 17:38)
As I have already told you, in the best case the annual income will be $800,000 with probability 10 percent, and in the worst case, if a competitor appears, the annual income will be $100,000 with probability 30 percent. As I said, in this cost-benefit analysis the value of the project is obtained by summing each possible income weighted by its corresponding probability.
So, we multiply each annual income by its probability and add the products, and the expected income for this problem comes out to be $500,000.
248
(Refer Slide Time: 18:31)
It is also estimated that the development costs will be $750,000, that sales levels will remain constant for at least 4 years, and that the annual costs, such as maintenance, will be about $200,000. Now the question is: would you advise the company to go ahead with the project? Let us analyse what the advice should be.
249
We have seen that the expected annual sales income is $500,000, that sales levels are expected to be constant for 4 years, and that annual costs such as maintenance are $200,000.
So, what is the total expected benefit? Expected sales of $500,000 over 4 years would generate $2,000,000; the annual maintenance cost of $200,000 over the same 4 years is $800,000; and $2,000,000 minus $800,000 comes to $1,200,000. The expected net income is therefore $1,200,000. Now compare this with the investment: the development cost is $750,000. So the expected benefit of $1,200,000 stands against an investment of $750,000.
Of course, that looks like a good return, so the company might go for this project; but please look at the other side and consider the worst case. If sales are low because another competitor appears, the company will lose money, and the chance of this is 30 percent, which is significant. In that case the annual sales income will be only $100,000 (contributing just $30,000 to the expected value, since the probability is 30 percent), against annual costs of $200,000, so the company would certainly lose money. There is every possibility that this may occur, because the probability is 30 percent, so it is not advisable to take up this risky project. In this way cost-benefit analysis can be carried out for evaluating risks.
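The expected-value arithmetic used in this example can be summarised in a few lines (variable names are my own; the figures are those quoted above):

# (annual income, probability) for the best, most likely and worst cases
outcomes = [(800000, 0.1), (650000, 0.6), (100000, 0.3)]
expected_annual_income = sum(income * p for income, p in outcomes)
print(round(expected_annual_income))  # 500000

years, annual_cost, development_cost = 4, 200000, 750000
expected_net = expected_annual_income * years - annual_cost * years
print(round(expected_net))                     # 1200000
print(round(expected_net) - development_cost)  # 450000 surplus over the development cost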
250
(Refer Slide Time: 21:05)
Next, the limitations of this approach. How do we assign the probabilities, the 10 percent, 20 percent or 30 percent? Assigning probabilities of occurrence is a challenge. Also, it does not take full account of the worst-case scenario; it effectively considers only the average case.
The next method is risk profile analysis. This is an approach which attempts to overcome some of the problems of the C-B analysis method just described, by constructing a risk profile using sensitivity analysis to evaluate the risks.
251
By studying the results of the sensitivity analysis, we can identify the factors that are most important to the success of the project.
Then we need to decide whether we can exercise better control over those factors or otherwise mitigate their effects. If neither is possible, then we must either live with the risks or abandon the project. This is how the risk profile analysis method works.
Next, quickly, the use of decision trees to evaluate risks. I hope you have studied decision trees during your B.Tech.; they can also be used to evaluate risks. The previous approaches assumed that project managers are passive bystanders who allow nature to take its own course; that is, the project manager can only reject over-risky projects or choose those with an acceptable risk profile.
252
So, the manager simply takes the less risky projects and rejects the more risky ones. But in some cases it is necessary to evaluate whether a risk really matters and to decide a suitable course of action; we cannot always just abandon projects that are risky, we have to handle them. How to handle them, how to deal with them, sometimes requires taking appropriate action depending upon the importance of the risks.
Such decisions will limit or affect future options, and it is important to assess how a decision taken now will affect the future profitability of the project: if a risk is present in a project now, how will it affect the future? Decision trees come in handy for dealing with these scenarios; let us quickly see how a decision tree can be constructed.
253
(Refer Slide Time: 24:00)
Again I have taken a simple example. A company is considering when to replace its sales order processing system. The company currently runs it manually and wants to replace it with an automated system.
The decision depends on the rate at which its business expands. If its market share increases, the existing system might need to be replaced within two years; but if the company does not replace the existing system in time, that could be an expensive option, as it could lead to lost revenue if the system cannot cope with the increased sales. On the other hand, replacing the system immediately will be very expensive.
That is the replacement option; now consider the option of extending the existing system. Extending the system will have an NPV of $75,000, but if the market expands this will turn into a loss with an NPV of -$100,000; the sign is negative because of the lost revenue.
If the market expands, replacing the system will have an NPV of $250,000, due to the benefits of handling increased sales and other benefits such as an enhanced MIS. If sales do not increase, the benefits will be severely reduced and the project will suffer a loss with an NPV of -$50,000.
254
The company estimates the likelihood of the market increasing significantly at 20 percent; hence the probability that the market will not expand is 80 percent. Now we have to construct the decision tree and take the appropriate action.
From the problem I have described, you can easily construct a decision tree like this. These are the values, and these are the probabilities I already mentioned: the probability that the market will expand is 20 percent, and the probability that it will not expand is 80 percent. One branch is 'extend', because the existing system can be extended, and the other is 'replace', because the existing system can be replaced with an automated system; that is what the company is considering.
255
(Refer Slide Time: 26:32)
Now we have to analyse the decision tree. The analysis consists of evaluating the expected benefit of taking each path from the decision point D. From decision point D there are two paths, extend and replace, and we have to evaluate each of them. The expected value of each path is found by taking the sum of the values of each possible outcome multiplied by its probability.
Let us take the extend path first. If market expansion happens, the NPV is -$100,000 with probability 20 percent; if the market does not expand, the NPV is $75,000 with probability 80 percent. So the expected value of extending the system is 75,000 x 0.8 minus 100,000 x 0.2, which comes to $40,000; that is the value assigned to this path. Similarly, let us compute the expected value of replacing the system, which corresponds to the other path.
256
(Refer Slide Time: 27:57)
For replacing, the value is $250,000 if the market expands, with probability 20 percent, and a loss of $50,000 if the market does not expand, with probability 80 percent. So the expected value is 250,000 x 0.2 minus 50,000 x 0.8, which comes to $10,000; that is the value assigned to this path. Comparing the two values, the expected benefit is clearly maximum for extending the system, not replacing it.
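The two path evaluations above can be checked with a couple of lines (the names are my own; the NPVs and probabilities are those from the example):

# Expected value of each option = probability-weighted sum of its outcome NPVs.
p_expand = 0.2
extend  = p_expand * (-100000) + (1 - p_expand) * 75000
replace = p_expand * 250000    + (1 - p_expand) * (-50000)
print(round(extend), round(replace))  # 40000 10000 -> extending is preferred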
So, the company should choose the option of extending the existing system rather than replacing it, because the expected benefit is higher for extending than for replacing. We have now seen the last two phases of C-B analysis, the different types of risks, and how risks can be evaluated using the different risk evaluation techniques discussed.
257
(Refer Slide Time: 29:20)
258
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 15
Project Evaluation and Programme Management (Contd.)
Good afternoon to all of you. Now let us start the remaining part of Project Evaluation and Programme Management. We have already seen the evaluation of projects; now let us look at the programme management part.
In this lecture we will first discuss the various concepts of programme management, and then what a benefit is and how benefits can be managed.
Before going to programme management, let us see what we mean by a programme; we have already seen what a project is. There are many definitions; one, given by D. C. Ferns, says that a programme is a group of projects that are managed in a coordinated way to gain benefits that would not be possible were the projects to be managed independently.
So, if the projects were managed independently, the benefits obtained would be much smaller.
259
In other words, if a group of projects can be managed in a coordinated, collaborative way, then the benefits obtained can be much greater than if the projects were managed independently; that is the difference between projects and programmes. To recall: a programme is a group of projects managed in a coordinated way to gain benefits that would not be possible if the projects were managed independently.
Now let us see the different types of programmes. The first is strategic programmes: a group of projects which together implement a single strategy, which is why they are known as strategic programmes. The second is business cycle programmes: the group of projects that an organisation undertakes within a particular business planning cycle.
Then there are infrastructure programmes: groups of projects concerned with identifying some common infrastructure and carrying out its implementation and maintenance; since the projects identify, implement and maintain a common infrastructure, the programme is known as an infrastructure programme.
260
Then there are research and development programmes, popularly known as R&D programmes; these are easy to understand: groups of projects involved in developing new, innovative products based on current research work. Finally there are innovative partnerships: nowadays many companies undertake collaborative projects, tackling some projects jointly. Groups of projects based on collaboration between different organisations are known as innovative partnerships.
Now, we have already seen what a project is and what a programme is. Let us see the difference between a programme manager and a project manager, since resource allocation to programmes depends on it; the resources could be systems analysts, database administrators, programmers, coders and so on, and we have to decide how such resources can be allocated to the different projects or programmes.
261
Before discussing resource allocation we must know the differences between programme managers and project managers. A programme manager is responsible for handling many simultaneous projects, whereas a project manager handles only one project at a time. The programme manager has personal relationships with the skilled resources working under him, whereas the project manager does not. The objective of the programme manager is the optimal use of resources, to utilize them as fully as possible, whereas the project manager's objective is to minimise the demand for resources.
Also, for the programme manager the projects tend to be seen as similar, whereas for the project manager each project tends to be seen as unique. These are the differences between programme managers and project managers, and resources can be allocated to the different projects accordingly.
Now let us consider strategic programmes and strategic programme management. I have already told you that strategic programmes are groups of projects which together implement a single strategy.
262
Strategic programme management is a particular form of programme management in which a portfolio of projects all contribute to a common objective; they all work towards a single strategy. It is based on the OGC approach, where OGC stands for the Office of Government Commerce, formerly known as the CCTA.
263
(Refer Slide Time: 07:34)
The initial planning document is the Programme Mandate. Let us see how to create a programme and what documents are required. The first document required to create a programme is the programme mandate, the initial planning document. The programme mandate describes the following things.
It describes the new services or capabilities that the programme should deliver; whatever new services or capabilities the programme is to deliver should be mentioned in this document. It also describes how the organisation will be improved; suggestions for improving the organisation further should also be put in the programme mandate. Finally, it describes the fit with existing organisational goals: the services and capabilities to be delivered, and the suggestions for improving the organisation, must fit with the existing organisational goals and objectives. A programme director is appointed as a champion for the scheme.
264
(Refer Slide Time: 09:10)
As I have already told you, the first document for creating a programme is the programme mandate. Now let us see the other documents used while creating a programme. The first of these is the programme brief: as the name suggests, the programme is described here briefly. It is roughly equivalent to a feasibility study; we have already studied the three types of feasibility, technical, financial and operational, of which the most important is the financial feasibility, the cost-benefit analysis. So the programme brief is equivalent to a feasibility study in which the emphasis is on the financial assessment, the cost-benefit analysis.
Then there must be a document containing the vision statement. Every organisation should have a vision statement; this document explains the new capability that the organisation will have.
Then there must be a blueprint. This document explains the changes to be made in order to obtain the new capability: the new capability the organisation should have is contained in the vision statement, whereas the changes that need to be made to obtain that capability are described in the blueprint.
265
Just like a master plan, a blueprint has to be prepared; it explains what changes have to be made in order to obtain the new capability.
Then, let us see what the aids to programme management are: what documents, diagrams or tools can be used to aid programme management. There may be physical and technical dependencies between projects. In some cases the projects are independent, but in other cases they are dependent; that means, before completing one project you cannot start another project.
For example, suppose the computer sections of two organisations have to be merged. Treat this merging of the two organisations' computer sections, or computer infrastructures, as one project. Previously the two organisations were running in two different buildings.
Since they are being merged, this merging can be treated as a project, and suppose that after merging they will be shifted to a new building. Then you can see that this project, the merging of the computer sections of the two organisations, is completely dependent on the completion of the new building, because unless the new building is
266
completely developed and furnished, the computer sections of both organisations cannot be shifted.
So, here the merging of the two computer sections of the two organisations is one project, and finishing the new building is another project. The merging project is completely dependent on the second project, that is, finishing or completing the new building.
So, how can you represent these dependencies, whether physical or technical, between projects? Several diagrams are available for this; one particular diagram is called the dependency diagram. A dependency diagram can be used to represent the physical and technical dependencies between the different projects.
Dependency diagrams are just like the activity networks that you already know from PERT or CPM. You must have used activity networks, and dependency diagrams are very similar to them.
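To make the idea concrete, here is a minimal sketch, not taken from the lecture, of how such project dependencies could be recorded and checked in Python; the project names are hypothetical and mirror the merging example above:

# A minimal sketch of a dependency diagram between projects.
# Each project maps to the list of projects that must finish first.
dependencies = {
    "finish_new_building": [],
    "merge_computer_sections": ["finish_new_building"],
}

def start_order(deps):
    """Return an order in which the projects can be taken up
    (a simple topological sort; assumes no circular dependencies)."""
    order, done = [], set()
    while len(done) < len(deps):
        for project, preds in deps.items():
            if project not in done and all(p in done for p in preds):
                order.append(project)
                done.add(project)
    return order

print(start_order(dependencies))
# ['finish_new_building', 'merge_computer_sections']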
Now, before going to benefits management, let us see what a benefit is. Every organisation makes some investment with the hope that after some years they will get back the money spent on the investment as well as some profit; that is what we call the benefit. Benefits are of different types, as we have already seen earlier:
267
they can be tangible or intangible, they can be direct or indirect, and they can be fixed or variable.
These are the different types of benefits. Some benefits can be easily identified, easily measured and easily quantified, whereas some benefits cannot be identified, some cannot be measured, and some cannot be quantified. If we know the different types of benefits, and which of them can be identified, quantified and measured, then that will help in managing the benefits.
Now, benefits management: providing an organisation with a new capability does not by itself guarantee that the capability will actually deliver the expected benefits, and this is precisely the need for benefits management. Simply acquiring a capability does not mean that you will always get the benefits from it; the benefits need to be properly managed, and that is why benefits management is required. This has to take place outside the project, because the project will have been completed by then; a project finishes after a few months or a few years.
So, benefits management is normally handled outside the project, and therefore it is done at the programme level, since we want the benefit for the organisation as a whole. Some projects may give a benefit and some may even lead to a loss, but at the programme level our objective should be to get more benefits overall; that is why benefits management has to be conducted outside the project, at the programme level.
268
(Refer Slide Time: 16:50)
So, what should we do in order to carry out benefits management? First, just as we saw for programme management, we have to define the expected benefits: we have to identify the expected benefits and define them.
But, as I have already told you, some benefits can be easily defined while others cannot. Next, we have to analyse the balance between costs and benefits: how much investment we have made, which is the cost, and how much benefit we are getting. Here you have to consider all sorts of benefits, including tangible and intangible, fixed and variable, and direct and indirect benefits.
Some managers normally do not take intangible benefits into account, because they are unable to measure or quantify them, and so they leave them out of the cost-benefit analysis. But it is required that all possible types of cost, including tangible and intangible, direct and indirect, and fixed and variable costs, are considered. Similarly, all possible benefits, such as direct and indirect benefits, tangible and intangible benefits, and fixed and
269
variable benefits, must be taken into account during the cost-benefit analysis; then you have to analyse the balance between these costs and benefits.
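As a rough illustration, the figures below being hypothetical and not from the lecture, the balance between costs and benefits can be checked by summing every category on each side before comparing them:

# Hypothetical yearly figures for one programme (thousands of dollars).
costs = {"tangible_direct": 400, "intangible_indirect": 50, "fixed": 120, "variable": 80}
benefits = {"tangible_direct": 500, "intangible_indirect": 90, "fixed": 100, "variable": 150}

total_cost = sum(costs.values())        # 650
total_benefit = sum(benefits.values())  # 840
net_benefit = total_benefit - total_cost
print("Net benefit:", net_benefit)      # 190 -> benefits outweigh costs here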
Then the next step is to plan how the benefits will be achieved. Suppose the organisation has obtained a certain amount of benefit this year; it must then plan how to maximise the benefit next year. The organisation has to develop specific plans for how more benefits will be achieved in the future. Then, responsibilities for their achievement must be allocated: the different persons involved in the project and in the programme will be assigned responsibilities for achieving the benefits, and this should be part of the plan.
So, we have to allocate responsibilities for their achievement, and similarly we have to monitor the achievement of the benefits. Making the plan alone is not sufficient; we have to monitor it, and regularly monitor the achievement of the benefits.
We have to see what target we have set for this year and, at periodic intervals, check what the achievement is, whether we are close to the target or lagging behind it. If we are lagging far behind the target, we must decide what precautions or remedial actions to take so that we achieve our target benefit. So, we have to monitor and control the achievement of the benefits.
270
How can we do this? As I have already told you, we have to monitor the achievement of the benefits. How can we monitor it? For example, suppose we are not able to get the benefits because the service or the product we are providing is not being delivered in the designated time.
Then we have to see why we are lagging and why there is a delay, so that we can take precautionary measures and the delay can be avoided. If the delay is avoided, we can provide more services and sell more products, and that will bring more profit. That is why we have to regularly monitor the achievement of the benefits; if we are lagging behind somewhere, we must take remedial actions so that we get closer to the targeted benefits.
So, now let us see what the possible benefits for an organisation or for a project could be. These benefits might include mandatory requirements: the kinds of requirements that are a must if a benefit is to be obtained at all. Then, improved quality of service, also called QoS: the quality of service must be improved, and then benefits will come automatically. In the example I gave you, if there is a long delay in providing the services or in producing the products, then obviously the customer, the end user, will be dissatisfied.
And we know that dissatisfaction of the customers is not a benefit; rather, satisfied customers, even if that is an intangible benefit, will bring reputation to the organisation and we can get more profit. So, we have to improve the quality of service. Then, increased productivity: how can we produce more products in less time? That is what is meant by increased productivity.
We may use different approaches, maybe automated tools or other means, so that productivity is increased and more products are produced in a given period of time. Then, a more motivated workforce: how can we motivate people to work better? A more motivated workforce will also bring profit to the organisation. Similarly, internal management benefits: within the
271
organisation, how can we create awareness and lead to better management, so that we can get more benefits?
For example, if you create awareness among the employees, say by giving some awards, providing some schemes for them and so on, then the benefits can be managed internally: the employees can be motivated to work better, and the different kinds of benefits that are internal to the management should be handled in a proper way.
For example, suppose we are doing one thing to increase the number of sales, and suppose it is working and the sales are increasing. Now, in order to get even more sales, suppose you put some employees on overtime work.
A time may come when the benefit from the extra sales is less than the amount that you pay to the employees for the overtime. That will not be considered a benefit; rather, it will be a dis-benefit. We should try to avoid this; these are the kinds of things we must take care of in order to improve our benefits.
Similarly, another benefit may come from risk reduction. We have already seen the different types of risks, such as business risks and project risks. Our
272
objective is to reduce the number of risks. Of course, where there is more risk there is more potential benefit, more profit, but handling the risks properly is much more important.
So, we will try to reduce the number of risks and see how they can be reduced so that ultimately the benefits may improve. Similarly, we should look at the economics: what is the economic status of the organisation and how can it be improved? If the economic status is improved, obviously benefit will come to the organisation. Then, revenue enhancement and acceleration: revenue enhancement is also treated as a benefit, so what steps must we take to enhance and accelerate the revenue?
If we follow those steps strictly, the revenue will automatically be enhanced and accelerated. Similarly, strategic fit: we must improve the strategies in order to maximise the profit, in order to maximise the benefit.
So, these are the strategic needs: at the organisation level, at the programme level and at the project level, the strategic techniques or strategic actions have to be defined clearly, so that the benefits are maximised for the project, for the programme and for the organisation.
Next we will see how we can quantify the benefits. We know that there are some benefits which can be both quantified and valued. For
273
example, a reduction of x staff saving y dollars: say, due to automation of the existing system, which was previously manual, we have reduced 10 staff members and thereby saved, say, 200 dollars.
Such a benefit can be easily quantified and valued. Some benefits can be quantified but cannot be valued. For example, a decrease in customer complaints: previously the system was manual and now we have automated it, so the number of customer complaints we are getting has been successfully reduced.
So, we can easily quantify it, but the value that we gain from it as a benefit cannot be assigned. Similarly, some benefits can be identified but cannot be easily quantified. For example, public approval for an organisation in the locality where it is based: suppose you take the mobile industry.
A mobile company wants to put up a tower in your area, and for that it has to take the approval of the local authority. If the company can put up the tower, then obviously its mobile SIMs will be sold in larger numbers and it will get more benefit.
So, this kind of benefit, like public approval for a mobile company to set up a tower in the locality, matters:
if they get the approval, they can set up the tower, sell more of their mobile SIMs, and hence their benefit will be more. So, these types of benefits can be identified, but they cannot be easily quantified.
274
(Refer Slide Time: 28:45)
So, in this class we have discussed what a programme is and some details of the different types of programme management. We have also seen what a benefit is, what the different types of benefits are, and how benefits can be managed.
These are the references: most of the material has been taken from reference number 1, and only the different types of costs and benefits come from reference number 2.
275
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 16
Project Estimation Techniques
So, good afternoon. Today we will discuss a little bit of project planning and the basics of project estimation.
276
First, let us see what makes a successful project. If we can deliver the project on time, with the agreed functionality that has been discussed with the end user, at the agreed cost that was mutually decided with the user, and with the required quality, that is, the quality the user wants, then the user will be satisfied and we say that the project is successful.
For this we require two stages: we have to set the targets, and then we have to attempt to achieve those targets. But sometimes the targets are not achievable; the project manager tries to achieve all of them, but it is simply not possible. Then what should he do? Actually, projects normally fail due to lack of proper planning and proper estimation, so he has to do proper planning and estimation so that the project can succeed.
So, as I have already told you, the project manager has to plan properly regarding the activities of the project and do proper estimation in order to make the project succeed. Let us now say a little bit by way of introduction to project planning.
We have already seen that before a project starts we have to carry out the project feasibility study activities. If a proposal is found to be feasible in every respect, that is, technically feasible, financially feasible and
277
operationally feasible, then the project manager will take it up for the further steps. A project manager's activities are varied, and they can be broadly classified into two categories: one is project planning, and the other is project monitoring and control.
As I have already told you, once a project is found to be feasible the project manager has to undertake project planning. First he has to make an initial plan before the development starts, and then this plan is updated very frequently.
278
You can say that first a project study is conducted in order to know whether the project is worth doing or not. Then the project manager has to develop the plan: here he has to decide how the project will be done and how it can be carried out. Finally comes the project execution step, where whatever plan has been made is actually executed.
Now, let us see briefly what the various project planning activities are. Number one is estimation: what needs to be estimated during the estimation phase? Effort, cost, resources and project duration. Then the project manager has to do proper scheduling: he has to prepare a schedule and follow some scheduling methodology. Staff organisation also has to be done properly: how many staff will be required for this project has to be planned. Then, as we know, every project contains some risk, and specific methodologies have to be followed for handling the risk, such as identifying the risk, analysing the risk and abatement of the risk. Finally, the project manager may have to prepare some miscellaneous plans, such as the quality assurance plan, the configuration management plan and so on.
279
(Refer Slide Time: 04:24)
So, let us see in detail how the project planning activities are carried out. First, the manager has to identify the steps required to accomplish the set project objectives or targets. Then he has to identify the tasks which need to be performed at each of the steps; for this he may use a work breakdown structure. Then he has to estimate how much effort each task requires, and also how many resources are required for performing each task.
If items 3 and 4 are known, the project manager can calculate how long each task or step will take; that means he can calculate the project duration. Similarly, he has to estimate the task, step and project cost: how much will be spent on each task, on each step and on the whole project. He also has to determine the interdependencies among the tasks, if any, and finally prepare a schedule for each task and for the whole project; here he may use milestones, deliverables, costs and payments. A small data-structure sketch for such a plan is given below.
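As a rough sketch, with task names, efforts and dependencies that are hypothetical rather than from the lecture, each task identified in the work breakdown structure can be recorded together with its estimated effort and its predecessors, from which a simple earliest-finish schedule follows:

# Hypothetical work breakdown: task -> (effort in person-days, predecessor tasks).
tasks = {
    "requirements": (10, []),
    "design":       (15, ["requirements"]),
    "coding":       (30, ["design"]),
    "testing":      (12, ["coding"]),
}

def earliest_finish(name, finish_times):
    """Earliest finish time of a task, assuming one person per task and that a
    task starts only when all its predecessors are finished."""
    if name not in finish_times:
        effort, preds = tasks[name]
        start = max((earliest_finish(p, finish_times) for p in preds), default=0)
        finish_times[name] = start + effort
    return finish_times[name]

print(earliest_finish("testing", {}))   # 67 days along this chain of tasks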
280
(Refer Slide Time: 05:38)
Now, let us see why project planning requires the utmost care and attention. The reason is that commitments to unrealistic time and resource estimates result in the following problems. If the project manager commits to some unrealistic times, which cannot be achieved at all, and to resource estimates which are not feasible, then the following problems might arise.
What are these problems? The delivery of the project may be unnecessarily delayed, so the customer will be irritated and dissatisfied. There will be an adverse effect on team morale, and poor quality work will result; it may even be the case that, due to improper estimation and so on, the project fails altogether. That is why it is very important to give the utmost care and attention to project planning.
281
(Refer Slide Time: 06:32)
Sliding window planning is a special type of planning which involves planning the project over several stages. The plan is not made in a single step; rather, the project planning is carried out over several stages. This normally protects managers from making big commitments too early, because what managers often do is, at the very early requirements phase, commit to delivering the project by a certain date, which they may not actually be able to achieve.
That is why, in sliding window planning, the managers are prevented from making big commitments too early. The principle is just like the sliding window protocol that you have studied in networking. As the project progresses, more information becomes available: initially only limited information is available, and as the project progresses, more and more updated information keeps coming in.
This updated information, which is received later on, facilitates more accurate planning. That is why, instead of planning in one window or one phase, the managers should perform the planning in several stages, using this concept of a window over several stages.
282
(Refer Slide Time: 08:04)
So, finally, as you know, every stage of the classical waterfall model has some output. Similarly, what is the output of software project management? The output of software project management is the SPMP document, which is known as the software project management plan document.
After planning is complete, the project manager has to document the plans in a document called the software project management plan document. What should its organisation be, and what should its contents be?
283
The following contents should be there in the SPMP document. First, there should be a brief introduction to the project, where the objectives, the major functions and so on are described. Then the various project estimates have to be mentioned, such as the historical data used, the estimation techniques used, and the estimates for cost, effort and project duration.
These are placed in the project estimates section. Then comes the project resources plan, which contains the resources that will be required by the project: the people, hardware, software and any special resources. In the schedule section, items such as the work breakdown structure, the task network, the Gantt chart and the PERT chart may be used.
Another content is the risk management plan: how you will handle the risk. For that you have to describe the different types of risk, what risk analysis you have performed, how to identify the risks, how to estimate the impact of the risks, and how to mitigate or abate the risks. These things have to be explained in the risk management plan.
Similarly, the project tracking and control plan: how you are tracking the project and how you are controlling and monitoring it; the plan regarding that should be discussed here. Finally, there should be a miscellaneous section in the SPMP document, which describes things like process tailoring, quality assurance aspects and configuration management aspects, under the heading of miscellaneous plans.
284
(Refer Slide Time: 10:13)
Now, let us see what the fundamental estimation questions are. What will be the size of the software that we are going to develop? We have to estimate the size. Then, how much effort is required to complete an activity? We have to estimate the effort. Similarly, how much calendar time, that is, how much duration, is needed to complete an activity? That has to be estimated. And finally, what will be the total cost of an activity and what will be the total cost of the project? That also has to be estimated. These are the fundamental estimation questions.
285
Now let us see the sequence of estimations that we have to follow. First we have to determine the size of the product; once the size is known, from the size estimate we can determine the effort needed, and from the estimated effort we can determine the project duration and the cost.
This sequence can be represented as a simple graph: first we estimate the size; from the size we estimate the effort and the duration; from the effort we estimate the cost; then from the estimated effort and the estimated duration the project manager can estimate the staff; and finally, from the duration estimate and the staff estimate, the project manager can prepare a schedule.
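A minimal sketch of this chain of estimates is given below; the multipliers and rates used here are purely illustrative placeholders, not values from the lecture or from any particular estimation model:

# Hypothetical, illustrative estimation chain: size -> effort -> duration/cost -> staff.
size_kloc = 20.0                       # estimated size in thousands of lines of code
effort_pm = 2.5 * size_kloc            # effort in person-months (made-up multiplier)
duration_months = effort_pm ** 0.5     # made-up relationship, for illustration only
cost = effort_pm * 8000                # assumed cost per person-month in dollars
staff = effort_pm / duration_months    # average number of people needed

print(f"effort={effort_pm:.0f} PM, duration={duration_months:.1f} months, "
      f"cost=${cost:,.0f}, staff={staff:.1f}")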
286
(Refer Slide Time: 11:32)
Now let us see what the software cost components are. The following are the software cost components. First is the hardware and software cost: the cost of the software required to develop the system and of the hardware on which it will run and which will be used to develop it. Then the travel and training cost, that is, the cost of providing training to the employees and the travel associated with the project; these are also important components. Another component is the effort cost, which is very important, and finally we have to know the overheads that may be incurred for this project.
Of all these, the effort cost is the dominant factor in most projects: it mainly consists of the salaries of the engineers involved in the project, and that is why it usually makes up a large percentage of the total cost. Then the overheads: overheads are not incurred for a single project but are shared across different projects. Here we have to take into account the cost of the building, heating and lighting, the cost of networking and communications, and the cost of shared facilities such as the library, staff restaurant and so on. These are treated as the overheads. So, these are the different software cost components for developing a project.
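As a tiny illustration, with all figures hypothetical, the total project cost is simply the sum of these components, with the effort cost typically dominating:

# Hypothetical software cost components (in dollars).
cost_components = {
    "hardware_and_software": 40_000,
    "travel_and_training":   10_000,
    "effort":               300_000,   # engineers' salaries - usually the dominant part
    "overheads":             60_000,   # buildings, networking, shared facilities
}
total_cost = sum(cost_components.values())
print(total_cost)                               # 410000
print(cost_components["effort"] / total_cost)   # ~0.73 of the total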
287
(Refer Slide Time: 13:04)
Now, as I have told you, one of the most important cost components is effort. So, how do we measure effort? In order to measure effort we need one piece of terminology, the person-month, which is very closely associated with effort. Suppose a project is estimated to take 300 person-months to develop. What do we mean by this? Is one person working for 300 months the same as 300 persons working for 1 month? No, certainly not. Then why?
In order to answer this question, let us see how many hours constitute a person-month. The default value is 152 hours per month; that means a person has to work 19 days at 8 hours per day.
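To put numbers on this, here is a straightforward calculation based on the default value just mentioned:

# One person-month is taken as 152 hours (19 working days * 8 hours/day).
hours_per_person_month = 19 * 8        # 152
project_effort_pm = 300                # the 300 person-month example above
total_hours = project_effort_pm * hours_per_person_month
print(total_hours)                     # 45600 hours of work in total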
288
(Refer Slide Time: 14:05)
So, why person-months and not person-days or person-years? Modern projects typically take a few months to complete. Some projects may take more than a year, but typical modern projects take a few months, so the person-year is clearly unsuitable for them.
Similarly, the person-day would be too small a unit: using person-days would make the monitoring and estimation overhead very large and tedious. Hence we go for the middle option, the month, which is a suitable unit for measuring effort.
289
(Refer Slide Time: 14:40)
Now, let us look at costing and pricing. Estimates are made to discover the cost of producing a software system; we have to prepare estimates to discover what the cost of producing the software system will be. However, you will observe that there is no simple relationship between the development cost and the price that will be charged to the customer.
Organisational, economic, political and business considerations influence the price that is charged to the customer. That is why there is no direct, simple relationship between the development cost of the project and the price charged to the customer for obtaining it.
290
(Refer Slide Time: 15:42)
The cost estimation process can be represented like this: it takes two inputs, the requirements and the other cost drivers, and after the estimation process we get estimates for the effort, the development time, the number of personnel required and the cost of the project.
These are some of the factors that affect software pricing: market opportunity, cost estimate uncertainty, the
291
contractual terms, requirements volatility and the financial health of the developer. These are some of the factors which may affect the software pricing.
Some words of wisdom are given here: unless a software project has clear definitions of its key milestones and realistic estimates of the time and the money, the project manager cannot tell whether the project is under control or beyond control.
292
Next: when are the estimates required? During different phases of the project, different estimates are required. In the initiation phase, estimates of time, cost and benefits are required. During the planning phase, time estimates feed into the project schedule, cost estimates into the project budget, and cost and benefit estimates into the business case. Similarly, at the start of each project stage, the time and cost estimates are reconfirmed for that stage.
There are some problems with estimating. Estimation for a project is highly subjective in nature, and this subjective nature puts barriers in the way of estimation. Political pressures also create problems during estimation. Current technology is changing rapidly, and this change of technology also puts a barrier in the way of project estimation. Finally, the projects that clients give nowadays are largely different in nature from one another.
So, the experience gained on one project may not be applicable to another, since their natures are different; that also creates special difficulties in estimating the various factors and parameters.
293
(Refer Slide Time: 18:29)
Now let us see two other important problems associated with estimation: one is over-estimating and the other is under-estimating. Over- and under-estimating are described by two laws. One is Parkinson's Law, which says that work expands to fill the time available; that means, if you have over-estimated and, in place of say 20 days, you have assigned 30 days, then the work will expand to fill the time available. The people will deliberately not work at full pace; they will feel that some time is still left, and even when only two days remain they will do the pending work in those days.
So, the work expands to fill the time available; this is what happens if you over-estimate the duration. If you under-estimate, the advantage is that there will be no overspend, but the disadvantage is that the system will usually be unfinished or substandard.
294
(Refer Slide Time: 19:29).
Another law that relates to the over- and under-estimating problem is Weinberg's Zeroth Law of reliability. It says that a software project that does not have to meet a reliability requirement can meet any other requirement: if it does not have to meet a reliability requirement, it can meet any other requirement, which is not desirable.
The effect of an under-estimate is that motivation and morale are lowered by the highly aggressive target, and under-estimation can even lead to abandonment of the
295
project, because the developers respond to a highly pressing deadline with substandard work. Since there is pressure and the time is short, they compromise on quality and produce substandard work, and the project may get abandoned.
Now let us see the basis for successful estimating. Two important things are required: one is information about similar past projects, and the other is that we should know how to measure the amount of work involved. The project manager needs to be able to measure the amount of work involved in the project; this forms the basis for successful estimation.
For measuring the amount of work involved we may use traditional size measures such as LOC, lines of code, but LOC has some special problems that we will discuss in the next class.
296
(Refer Slide Time: 21:09)
Then, refining estimates: how do we refine estimates? I have already told you that estimation is not done in one step. The project manager has to first do an initial estimation, and then gradually it has to be updated and refined. What are the reasons for adjusting, refining or updating the estimates? One reason is that interaction costs are hidden in the estimates: they do not show up and you may not be able to identify them, yet they have an impact.
Also, normal conditions may not apply: things may go wrong on different projects, and there might be changes in the project scope and plans. In order to incorporate all these things, the project manager should refine and adjust the estimates.
Adjusting estimates means that the time and cost estimates of some activities are adjusted and refined as the risks, resources and particulars of the situation become more clearly defined in later phases. So, there is a need for the project manager to adjust or refine the time and cost estimates of specific activities.
297
(Refer Slide Time: 22:29)
Finally, to summarise: we first discussed the various activities which are carried out during project planning, that is, estimation, scheduling, staff organisation and some miscellaneous plans. We have also seen that the outcome of project planning is a document called the SPMP, the software project management plan document, and we discussed its contents, such as the introduction, the various estimates, a section on scheduling, the miscellaneous plans and so on. We also discussed some basic concepts of project estimation, such as the basis for project estimation and why estimates often do not turn out to be correct, and we have seen sliding window estimation.
The project manager should not do the estimation in one step, in one phase; rather, he should perform it in several stages,
so that the more accurate information obtained in the later phases can be taken into account while computing the various estimates. That is why you should use sliding window estimation. As I have already told you, it should not be done in a single phase; rather, first the initial plan is made, and then gradually, as more updated information comes in, the project manager can update or adjust the various estimates, and that will give more accurate estimates.
298
So, these are the things we have seen just now. In the next class we will discuss the various estimation techniques, their categories and taxonomy, and we will also discuss size measures, because, as I have already told you, one of the initial measures is size. If you can estimate the size properly, then you can estimate the other parameters such as effort, duration and cost. So, we will also discuss size estimation in the next class. Thank you very much.
299
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 17
Project Estimation Techniques (Contd.)
Good afternoon. Now, let us continue with the remaining parts of the estimation techniques. Here we will see a taxonomy of the various estimating methods, and then we will look at size estimation.
300
(Refer Slide Time: 00:31)
Let us look at one taxonomy of the existing estimation methods. Estimation can be top-down or bottom-up; bottom-up methods are normally activity based and analytical. Then, estimation techniques can be parametric or algorithmic models; one example is function points, which we will see in the next class. Then there is the expert judgement technique, another category of method: here we start the estimation procedure with an initial guess of the various estimates. Then analogy: as the name suggests, we see how a project is analogous, that is, similar, to another existing project, and then we estimate the required parameters; these methods are normally case based and rest on a comparative principle. Another methodology for estimation is price to win.
301
(Refer Slide Time: 01:32)
Let us start with the price-to-win method. What does it say? The project costs whatever the customer can spend on it: according to this method, the estimated project cost is simply whatever the customer is able to spend on the project. The advantage is that you are very likely to get the contract; the disadvantage is that the costs do not accurately reflect the work required.
So, either the customer will not get the desired system, or the customer will overpay, because the work is less and he may pay more than is reasonable for the work or the service he is getting.
302
(Refer Slide Time: 02:37)
This approach may seem very unethical and unbusinesslike, because, as you have seen, the costs do not accurately reflect the volume of work required. However, when detailed information regarding the volume of work and so on is lacking, this method may be the only appropriate strategy.
So, what is the most ethical approach? The project cost is agreed on the basis of an outline proposal, and the development is constrained by that cost. The project cost is normally agreed between the client and the developer on the basis of an outline proposal, and the development is constrained by that cost; that is what would be ethical. A detailed specification may then be negotiated, or an evolutionary approach may be used for the system development.
303
(Refer Slide Time: 03:45)
So, what parameters are to be estimated for project planning? We need to estimate the effort, or indirectly the cost, and the duration; but it is very difficult to estimate the effort, cost or duration directly from a problem description. So, what do we do? The effort and duration can be measured in terms of the project size, which is an indirect metric, and the project size can be measured using either LOC, that is, Lines of Code, or FP, that is, Function Points.
304
So, let us see: we know that effort, cost and duration cannot be estimated directly. We have to use the project size, and from the size we then estimate effort and duration. Size is a fundamental measure of work; based on it, we can estimate the other parameters such as effort and duration.
Based on the estimated size, two parameters can be estimated: the effort and the duration. Effort is normally measured in person-months; I have mentioned the person-month earlier, but let us now define it. How is a person-month defined? One person-month is defined as the effort that an individual can typically put in within a month.
So, in this way we define the person-month, and using person-months we can measure effort. As I have already told you, first we compute the size, and then we can compute the effort and the duration.
So, what is size? It is a measure of the work being done. Project size can be considered a measure of the problem complexity, in terms of the effort and the time required to develop the product. Two metrics are popularly used to measure project size. As I have
305
already told you, one is LOC, or in finer terms Source Lines of Code or SLOC, and the other is Function Points or FP.
SLOC is conceptually simple: we just have to count the source lines of code. But FP is nowadays favoured over SLOC; we will see that SLOC has several drawbacks, and that is why people nowadays prefer FP over SLOC, because of the many shortcomings of SLOC, which we will discuss in a few minutes.
So, what is a line of code? Basically, the lines-of-code measure was proposed when programs were typed on punch cards, with one line per card. Previously people used punch cards, and it was during that time that lines of code was proposed. But what happens when statements in Java span several lines, or when there can be several statements in one line? This is a drawback of lines of code.
The metric was developed keeping in mind that only one line could be entered per card, and at that time it was meaningful. But when high-level programming languages came, in languages like Java a statement may span several lines, or several statements may be written in one line, and then lines
306
of code may not be a suitable metric to measure the program size. Similarly, which programs should be counted as part of the system? For example, the GUI:
it is not a functional requirement, but should the statements written for designing the GUI be counted as part of the system? Similarly, the built-in classes which are provided in the programming language: will they be counted as part of the LOC of the system that you are going to develop? These are some of the limitations, and that is why lines of code may not be a suitable metric for all types of modern applications. Initially, software development consisted only of writing the code, but now several other activities are performed while developing any system or project.
Initially, software development consist of only writing the code, but now, several things
are possible while developing or several things are performed while developing any
system or project.
So, in Lines of Code we will use this terms like LOC, which represents lines of code,
KLOC represents what kilo lines of code that mean thousands of lines of code. KSLOC
means thousands of source lines of code and NCKLKSLOC means the new or changes
KSLOC. So, how many? What lines of code? They are completely new or they are
changed. So, these are the terminologies, they are used in this metric lines of code.
307
(Refer Slide Time: 09:32)
Now let us look at a few things that are counter-intuitive in the case of LOC. The first is that, by this measure, the lower the level of the language, the more productive the programmer appears to be. Why? Because the same functionality takes more code to implement in a low-level language, such as assembly language, than in a high-level language: in a high-level language a programmer may achieve several things with a single statement, so fewer lines are written for the same job.
So, the same functionality takes more code in a low-level language than in a high-level language, and hence a lines-of-code productivity figure makes the low-level programmer look more productive. The second counter-intuitive point is that the more verbose the program, the higher the apparent productivity. Measures of productivity based on lines of code suggest that programmers who write verbose code are more productive than programmers who write compact code, simply because the verbose programmer produces more lines for the same functionality. This is another counter-intuitive aspect of lines of code.
308
(Refer Slide Time: 11:15)
Now, quickly, let us see some of the major shortcomings of SLOC. First, the size can vary with coding style: different programmers follow different coding styles, and if the coding style is different, the size will be different.
Second, LOC focuses on the coding activity alone and on no other activity. While developing software you might have to carry out requirements analysis, design, testing and so on, and you put effort into each of these activities, but LOC does not take that into account; it focuses only on the coding activity. Third, it correlates very poorly with the quality and efficiency of the code: whatever the quality and efficiency of the code may be, LOC correlates very poorly with them. Fourth, it penalises higher-level programming languages, code reuse and so on: in a high-level language the same functionality is implemented in far fewer statements than in assembly language programming, so measuring size in lines of code penalises the use of high-level programming languages.
Similarly, if you are following code reuse, that is, you have developed something once and are using it again and again, then LOC penalises that too, and the size may not be
309
accurately reflected if you are using code reuse. These are some of the shortcomings of SLOC.
Other shortcomings are that it is difficult to estimate SLOC at the start of the project from the problem description. If you are just given a problem description, such as an SRS document, it is very difficult to estimate the size at the very beginning of the project. Since you cannot get an accurate estimate at the beginning, the only way is to guess, and so SLOC is not useful for project planning.
That is why LOC is not very useful for project planning. As I have already told you, it is used only as a code measure; it cannot be used for measuring the effort that you put in during requirements analysis, design or testing.
310
(Refer Slide Time: 14:08)
There are still further difficulties with SLOC. SLOC can become ambiguous due to rapid changes in programming methodologies, languages and tools. Language advancements keep coming, different languages keep appearing, and this can lead to ambiguity while estimating SLOC. Similarly, many tools exist that can automatically generate source code.
For example, with RSA, Rational Software Architect, the skeletal code can be generated automatically from the class diagram, and then SLOC may not be a suitable metric to measure the size. The same holds for custom software and reuse:
if you are developing customised software and you are using code reuse, then SLOC may be ambiguous. Similarly, with newer programming methodologies such as object-oriented programs, aspect-oriented programs and feature-oriented programs, the size cannot be accurately estimated using the SLOC measure.
311
(Refer Slide Time: 15:34)
Now let us see some of the techniques for effort estimation. We have already seen the pricing technique; now we will look at the other techniques, starting with the expert judgement based techniques. The techniques which come under expert judgement are basic expert judgement, weighted average estimating, consensus estimating and Delphi. Let us first talk about basic expert judgement.
312
If you are using the basic expert judgement technique for estimation, what happens is that one or more experts predict the software cost, and this process is iterated until some consensus is reached among the experts. The advantage of this method is that it is a relatively simple estimation method, and it can be accurate if the experts have direct experience of similar systems.
But if the experts do not have experience of the kind of system you are developing, then the estimate may be wrong. The disadvantage is that the estimate may be very inaccurate if no experts are available for the kind of system you are developing: suppose you are developing an avionics system and there is no expert who has knowledge of avionics, then obviously the estimates will be wrong.
Now let us see the steps of the basic expert judgement method. In this method there is, as I have already told you, a group of experts, and one person acts as the coordinator. The coordinator presents each expert with a specification and an estimation form. He coordinates the activities: he presents each expert with the specification and an estimation form, and calls a group meeting in which the experts discuss estimation issues with the coordinator and with each other.
313
So, the coordinator convenes a meeting where the experts discuss the estimation issues with the coordinator and with the other experts; then the experts fill out the forms anonymously. The coordinator then prepares and distributes a summary of the estimates on an iteration form. Next, the coordinator calls a group meeting, specifically focusing on having the experts discuss the points where their estimates varied widely.
Suppose there are five experts and their estimates vary widely; then the coordinator will call a meeting and they will discuss why there is a variation, because the project is the same and the parameters are the same, so why do the estimates differ? Then the experts fill out the forms again anonymously, and steps 4 to 6 are iterated for as many rounds as appropriate, until they reach a consensus decision. A small sketch of this iteration loop is given below.
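The sketch below illustrates the rounds of this procedure; the stopping rule (stop when the spread of the estimates becomes small) and all the figures are assumptions made for illustration, not part of the lecture:

# Illustrative rounds of anonymous expert estimates (person-months).
rounds = [
    [30, 60, 45, 80, 50],   # round 1: estimates vary widely -> discuss, re-estimate
    [42, 55, 48, 58, 50],   # round 2: closer after discussion
    [48, 52, 50, 51, 49],   # round 3: close to consensus
]

def spread(estimates):
    return max(estimates) - min(estimates)

for i, estimates in enumerate(rounds, start=1):
    if spread(estimates) <= 5:          # assumed consensus threshold
        consensus = sum(estimates) / len(estimates)
        print(f"consensus reached in round {i}: about {consensus:.0f} PM")
        break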
Now, what is an expert? An expert is someone who is familiar with the system and knowledgeable about the application area you are developing for and the technologies you are using. This technique is particularly appropriate where existing code is to be modified: the code already exists and you are trying to modify it, and in that case the basic
314
expert judgement technique is very useful. However, research shows that expert judgement in practice tends to be based on another methodology, called analogy.
So, let us see the stages that the basic expert judgement method passes through when, as just mentioned, it is based on analogy. The stages are these: first, identify the significant features of the current project. Then identify, or look at, previous projects with similar features.
Then find out whether there are any differences between the current and the previous projects, and find out the possible reasons for errors or risks arising from those differences. Finally, adopt measures to reduce this uncertainty or risk.
315
(Refer Slide Time: 20:15)
As I have already told you, the basic expert judgement technique is based on estimation by analogy. So, what do we mean by estimation by analogy? The cost of a project is computed by comparing the project to a similar project in the same application domain. Suppose you have developed a project for a private academic institution, and next, as the project manager, you are developing a similar academic system for a government institute; these are similar types of applications.
Here you may compare the cost spent while developing the academic automation system for the private organisation or private university with that of the proposed one, that is, the one for the government university, and from that you can see what the estimate could be. It may vary a little, but it will give an approximately correct result.
So, the cost of a project is computed by comparing the project to a similar project in the same application domain. One example I have already given: if you have already developed a project for a private institution and are then developing a similar project for a government academic institution, you can compare the projects and estimate accordingly for the new project that you are undertaking for automating the government university or institution.
316
Now, the advantage of estimation by analogy is that it may be accurate if project data are available and the people and the tools are the same: it can be accurate if the project data from the earlier project are available to you and the people and tools are the same. The disadvantage is that it is impossible to use if no comparable project has been tackled:
if similar kinds of projects have not been done earlier, then you cannot estimate for the current project in this way. It also needs a systematically maintained cost database: whatever previous projects there are, you must maintain their various costs in a database, so that you can use them in the future. A small sketch of such a lookup is given below.
Now, let us go back to the basic expert judgement technique and look at its disadvantages. The various factors used by the expert or the expert group are very hard to quantify and similarly hard to document. Experts may be biased, because after all they are human beings; they may be optimistic or pessimistic, even though these biases are reduced by group consensus. The expert judgement method always complements the other cost estimation methods, such as algorithmic methods.
317
(Refer Slide Time: 23:18)
So, now we will see the second one, that is, weighted average estimates. Weighted average estimation is also known as sensitivity analysis estimation, and here three estimates are obtained rather than one, because a single estimate may not be fully accurate.
One is the best case or optimistic estimate O, the second is the worst case or pessimistic estimate P, and the third is the most likely case, the median value M. This provides a more accurate estimate than considering only one value. These three are then used in a formula to produce the estimate:
Estimated effort = (O + 4M + P) / 6
So, first find the values for the best case, worst case and most likely case, then use this formula and you will get the estimated effort.
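As a quick illustration, here is a minimal Python sketch of this formula; the O, M and P values below are made-up figures, not from the lecture.

```python
def weighted_average_estimate(optimistic, most_likely, pessimistic):
    """Three-point (weighted average) estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Illustrative values in person-days:
print(weighted_average_estimate(20, 30, 50))  # about 31.7 person-days
```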
318
(Refer Slide Time: 24:22)
This is what is happening, and that is why the name is weighted average estimates. The other method is consensus estimating. The steps in conducting a consensus estimating session are as follows. First, a briefing on the project is provided to the estimating team; then each person is given a list of work components to estimate. Each person then independently estimates the optimistic value, the most likely value and the pessimistic value for each work component.
Then the estimates are written up on a whiteboard, and each person discusses the basis and the assumptions on which they have prepared their estimates. Finally, after suggestions are made, a revised set of estimates is produced. Since we are getting a revised set of estimates, you have to take the average of the O, M and P values: the averages of the optimistic, most likely and pessimistic values are calculated, and these averaged values are used in the formula shown above. So, in this way consensus estimating is followed.
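A minimal sketch of this averaging step, assuming made-up figures from three estimators; the helper name consensus_estimate is only for illustration.

```python
def consensus_estimate(team_estimates):
    """team_estimates: one (O, M, P) tuple per estimator for a work component.
    Average the O, M and P values across the team, then apply (O + 4M + P) / 6."""
    n = len(team_estimates)
    avg_o = sum(o for o, _, _ in team_estimates) / n
    avg_m = sum(m for _, m, _ in team_estimates) / n
    avg_p = sum(p for _, _, p in team_estimates) / n
    return (avg_o + 4 * avg_m + avg_p) / 6

# Three estimators give (O, M, P) in person-days for one work component:
print(consensus_estimate([(10, 15, 25), (12, 16, 22), (8, 14, 30)]))
```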
319
(Refer Slide Time: 25:42)
Here the experts carry out the estimation independently. Normally each expert carries out the estimation independently, mentions the rationale behind the estimate and the assumptions taken, and writes them down. The coordinator then notes down any extraordinary rationale an expert has used and circulates the estimates and rationales among the other experts.
After receiving the estimates from the other experts, each expert re-estimates their values. Note that in the basic expert judgement technique the experts meet each other, but here in Delphi estimation the coordinator takes the most important role: he circulates the estimates among the other experts, and the experts never meet each other to discuss their viewpoints.
320
(Refer Slide Time: 26:58)
So, Delphi is an expert survey conducted in two or more rounds. Starting from the second round, that is, after the first round is over, feedback is given about the results of the previous rounds. Then each expert assesses the same matters once more, influenced by the opinions of the other experts: he takes into account their suggestions and opinions and re-estimates the same parameters. The important point here is anonymity.
321
So, the steps are as follows. The coordinator presents each expert with the specification and an estimation form. The coordinator then calls a group meeting in which the experts discuss the estimation issues with the coordinator and with each other. The experts fill out the forms anonymously. The coordinator prepares and distributes a summary of the estimates on an iteration form, and calls another group meeting, focusing specifically on the points, noted in the rationales, where the estimates varied widely. The experts then fill out the forms again, anonymously, and these steps are iterated for as many rounds as appropriate.
So, finally, to summarize: we first discussed the price-to-win estimation method, then we presented how to estimate size using LOC or SLOC. We also discussed the expert judgement based estimation techniques: basic expert judgement, weighted average estimation, consensus estimation and Delphi. We have also explained analogy based estimation. These are the references we have used.
322
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 18
Project Estimation Techniques (Contd.)
Good afternoon to all of you. We will now take up some other categories of estimation techniques.
323
(Refer Slide Time: 00:31)
There is another way of classifying estimation techniques. Though there are many techniques of estimation, they can be broadly classified into top-down and bottom-up; this is one classification.
Another classification is: algorithmic models, expert opinion, analogy and price to win. In the last class we discussed the last three of these, that is, expert opinion or expert judgement, the analogy method and price to win. So, these four also form a classification of project estimation techniques, and the last three we have already seen. Now let us look at the top-down and bottom-up estimation techniques, and then we will see the algorithmic models for project estimation.
324
(Refer Slide Time: 01:18)
So, bottom-up estimation is an estimation technique which identifies all the tasks that have to be done. Since it identifies every task that has to be performed, it is very time consuming. Use it when you have no data about similar past projects. I have already told you about the analogy method: it is suitable when you have done similar kinds of projects in the past. When you do not have any data about similar past projects, then you use this bottom-up approach.
Then what about the top-down approach? It produces overall estimates based on project cost drivers and on past project data.
So, the top-down method is used when past project data are available. This method produces overall estimates based on two things: the various project cost drivers and the past project data. Here we have to divide the overall estimate among the jobs that have to be performed.
325
(Refer Slide Time: 02:39)
Now, let us first look at bottom-up estimation. What are the steps? The steps for bottom-up estimation are as follows: break the project activities into smaller and smaller components, working from the bottom up as the name suggests, just as in a bottom-up methodology in the object-oriented approach.
So, you break the project activities into smaller and smaller components. Where should this breaking process stop? It should stop when you get to pieces of work that can be done by one person in one or two weeks. Then you estimate the cost for these lowest level activities.
So, after breaking the project into smaller activities, you estimate the cost for the lowest level activities. At each higher level you then calculate the estimates by adding up the estimates for the lower levels: at each higher level, the estimate is the sum of the estimates of its lower level components.
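A minimal sketch of this roll-up, assuming a made-up work breakdown with effort in person-days; the component names and figures are illustrative, not from the lecture.

```python
# Leaves are lowest-level effort estimates in person-days;
# higher-level nodes are dicts of their sub-components.
wbs = {
    "design":  {"ui_design": 8, "db_design": 6},
    "coding":  {"ui_code": 15, "db_code": 10, "business_logic": 20},
    "testing": {"unit_test": 7, "integration_test": 9},
}

def roll_up(node):
    """Bottom-up estimate: sum the leaf estimates up through every level."""
    if isinstance(node, dict):
        return sum(roll_up(child) for child in node.values())
    return node  # a leaf, i.e. an individual lowest-level estimate

print(roll_up(wbs))  # 75 person-days for the whole project
```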
326
(Refer Slide Time: 03:55)
Now let us see top-down estimation; we will take an example. The bottom-up estimation procedure we have already seen, and the difference between the two I have already told you.
The top-down estimate works like this: produce an overall estimate using the effort drivers or cost drivers, and then distribute proportions of that overall estimate to the various components.
Here you can see that for the overall project you have estimated that it might take 100 days. The overall project can be divided into different sub-activities such as design, coding and testing, and you have estimated that 30 percent of the total can be given to design, 30 percent to coding and 40 percent to testing.
Then you can easily calculate the proportions of the total estimate: 30 percent of 100 days, that is 30 days, for design; again 30 percent of the 100 days, that is 30 days, for coding; and 40 percent of the total estimate, that is 40 days, for testing.
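A minimal sketch of this proportional split; the helper name top_down_split is only for illustration.

```python
def top_down_split(overall_estimate, percentages):
    """Distribute an overall estimate across components by percentage."""
    return {name: overall_estimate * pct / 100 for name, pct in percentages.items()}

# The lecture's 100-day example, split 30/30/40:
print(top_down_split(100, {"design": 30, "coding": 30, "testing": 40}))
# {'design': 30.0, 'coding': 30.0, 'testing': 40.0}
```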
So, this is how the top-down estimate works: first calculate the overall estimate, then distribute proportions of the overall estimate to the individual components.
327
This method of estimating the effort or the duration is known as top-down estimation.
Another example is given here, where the total project cost is 500000. The overall project has different components such as design, coding, testing, documentation, producing the CDs, etcetera.
So, what you have to do is estimate what percentage will be taken up by each component; suppose design will take 20 percent, programming 30 percent, testing 40 percent, documentation 5 percent and producing the CD 5 percent. Accordingly you compute: 20 percent of 500000, that is 100000, goes to design. Similarly, 30 percent of 500000, that is 1.5 lakhs, goes to programming. In the same way you calculate the proportions and the amounts assigned to the different components.
Similarly, design itself can be of two types, say D1 and D2, for example the UI design and the database design, at 10 percent each, so you give 50000 to each.
328
Similarly, programming can be broken down, say into three programming components: programming for the GUI, programming for the database and programming for the normal code, at 20 percent, 5 percent and 5 percent respectively. So, the proportions are calculated. Documentation may, suppose, have two types, and CD preparation only one.
In this way you can divide up the proportions. Similarly, testing may be divided into unit testing T1, integration testing T2 and system testing T3. Accordingly you compute the percentages and you get the estimates for each of the phases. So, this is how top-down estimation works.
Let us see the advantages of the top-down approach. It accounts for system level activities such as integration, documentation, configuration management, etcetera.
So, top-down estimation considers the various system level activities such as integration, documentation and configuration management. Many of these may be ignored in bottom-up estimation methods; we will see that one of the drawbacks of bottom-up estimation is that it does not take these system level activities into account.
329
It requires minimal project details: top-down estimation requires only a very small amount of project detail. It is usually faster and easier to implement. These are the advantages of top-down estimation.
But there are also some disadvantages of top-down estimation. It does not take into consideration the difficult low level problems: it accounts for the system level activities, but it does not take the low level problems into account. It tends to underestimate and overlook the complexity of the low level components. It also provides no detailed basis for justifying the decisions or estimates.
So, why have you taken this decision? Why have you made this estimate? Top-down estimation does not provide any detailed basis for justifying the decisions or estimates that the project manager has made.
330
(Refer Slide Time: 09:18)
Now, let us see some of the advantages of bottom-up estimation. It permits the software group to estimate in an almost traditional fashion, and each group estimates the components for which it has adequate experience.
So, the different groups involved in the software development estimate the different components for which they have adequate experience. It is also more stable, because the estimation errors in the various components more or less balance out.
331
(Refer Slide Time: 10:04)
But there are several drawbacks of bottom-up estimation. It may overlook many of the system level costs; this I have already told you earlier. The advantage of the top-down approach is that it takes the system level activities into account, whereas bottom-up estimation may overlook many of the system level costs such as integration, configuration management, quality assurance, etcetera.
So, the costs involved at the system level may be overlooked by the bottom-up estimation process. It may also be inaccurate, because the necessary information may not be available in the early phases, such as the requirements analysis or design phase; the estimate may therefore be inaccurate if you are using bottom-up estimation there. Finally, it tends to be more time consuming in comparison to top-down estimation.
332
(Refer Slide Time: 11:04)
So, now let us look at another model, called the algorithmic model, for project estimation. First let us see the criteria for a good algorithmic model, and then we will take up the steps of the algorithmic model.
If you want to develop an algorithmic model for project estimation, it should be clearly defined: the procedure should be very clearly defined. It must be accurate. It should be objective, that is, you should avoid subjective factors and concentrate only on objective factors. The results should be understandable. It should be stable, that is, the results must be valid for a wide range of parameter values, not just a small range. It should be easy to use. It should be causal, that is, future data are not required; with only the existing, current data we should be able to estimate the various parameters. These are some of the criteria for a good algorithmic model.
333
(Refer Slide Time: 12:09)
Now, let us see what the algorithmic models could be. For project planning, we need to estimate the effort as well as the duration. I have already told you that directly estimating the effort or duration is very difficult: it is hard to estimate effort, cost or duration directly from a program description. That is why effort and duration are measured in terms of certain project characteristics that correlate with them, for example size. Using the size we can estimate the effort and the duration: first we estimate the size, and using the size estimate we can then estimate the effort and the duration.
334
Software size we have already discussed earlier: what exactly the size of a software project is and how to measure it. I have already told you that we may use two metrics, SLOC and function points. Any characteristic of the software that is easily measured and correlates with effort can serve as a size measure, and we can measure it using metrics such as SLOC and function points (FP).
Now, let us see the algorithmic or parametric models. Two important examples are the lines of code based COCOMO model and the function point model.
In a LOC based model such as COCOMO, we initially guess a size value, then apply the algorithm and obtain the estimate. But what we actually desire is to estimate the values not starting from a guess, but starting from the system characteristics, and then apply the algorithm and estimate. This is the problem with LOC based models such as COCOMO; in order to overcome it, we may apply other algorithmic or parametric models.
335
(Refer Slide Time: 14:23)
Now, let us see the advantages of algorithmic methods. They are able to generate repeatable estimations: once you estimate some parameter, you can reproduce the same estimate, so the method is repeatable in nature. The estimates obtained using algorithmic methods are also easy to modify: it is easy to modify the input data, refine the estimate and customize the formulas. They are efficient and able to support not only single estimates but a family of estimations or a sensitivity analysis. They can also be meaningfully calibrated to previous experience; one of the most important advantages is that you can calibrate the model against previous experience.
336
(Refer Slide Time: 15:23)
There are also some drawbacks of algorithmic methods. They lack the means to deal with exceptional conditions such as exceptional teamwork or an exceptional match between skill levels and tasks; such exceptional conditions cannot be handled by algorithmic methods. Poor sizing inputs and inaccurate cost driver ratings will result in inaccurate estimation: if the sizing inputs are poor or the cost driver ratings are inaccurate, they will ultimately produce an inaccurate estimate. Also, some factors such as experience cannot be easily quantified: if you are using algorithmic methods, the experience of a coder or a tester cannot be easily quantified.
337
(Refer Slide Time: 16:25)
So, as I have already told you, one of the advantages of algorithmic methods is that they can be meaningfully calibrated to previous experience. Let us see how we can do this calibration to previous experience when using algorithmic methods; that is, let us look at model calibration.
Many models are developed for specific situations and are by definition calibrated to that situation. There are several models that are developed only for specific situations; they are not generalized, and by definition they are calibrated to that situation. Such models are usually not useful outside of that particular environment.
As I have already told you, these models will be very suitable for the specific situation, but they cannot be used outside that particular environment. So, why is calibration required? Calibration is needed to increase the accuracy of one of these general models. Calibration is, in a sense, customizing a generic model: a generic model is there and we want to customize it according to our own needs.
So, calibration is in a sense customizing a generic model. What items can be calibrated in a model? You can calibrate the product types and the operating environments.
338
Similarly, the other items that can be calibrated are labour rates and factors, the various relationships between the functional cost items, etcetera.
Now, let us see some recommendations to follow while carrying out estimation with these models. Do not depend on a single cost or schedule estimate. We have already seen that various methodologies are available; so far we have seen bottom-up, top-down, algorithmic, judgement based, pricing and so on. Do not depend on a single cost or schedule estimate, because if you use a single estimate or a single technique, it may give a wrong result.
That is why you may use several estimating techniques or cost models: instead of using just a single estimation technique or a single model, use different ones, so that the result may be more accurate. Then, if you are using different estimation techniques or models, after applying them to estimate the parameters, compare the results and determine the reasons for any large variations.
Whatever methods you use, they should give fairly close results. If there are large variations in the results, you should analyze why these
339
large variations are occurring. Then note them down, and document the assumptions made when making the estimate.
So, while using different techniques or models for estimation, you may take different assumptions; whatever assumptions you have taken, you have to document them when making the estimates. Then monitor the project to detect when assumptions that turn out to be wrong jeopardize the accuracy of the estimate. Sometimes, due to wrong assumptions, your estimate becomes less accurate or even wrong; so monitor the project to detect when the assumptions turn out to be wrong and jeopardize the accuracy of the estimate, and then improve the software process.
So, if on analyzing the estimates you observe that because of these assumptions you are getting wrong estimates, then you may improve the software process. How can you improve it? You can improve the whole process by maintaining a historical database, because sometimes we require the past data; so improve the software process by maintaining a historical database containing the details of the past projects. These are some of the recommendations that you may follow as a project manager while carrying out project estimation.
340
Now, let us take up the concept of a simplistic model. According to this very simple model, the effort is estimated as follows: estimated effort = system size / productivity; a very straightforward formula.
So, if the system size is measured in lines of code and the productivity in lines of code per day, then from the same equation you can also see that productivity = system size / effort.
This formula is based on data relating to past projects, but please think about what is wrong with this simplistic model.
In which cases will it fail? Please think about this. Now, let us quickly take a small example: we want to estimate the effort and the duration for developing a project. The example says: consider a transaction project of 38,000 lines of code; we have to find out the shortest time in which it can be developed. It is also given that the productivity is about 400 SLOC per staff month.
341
So, the productivity is given as 400 SLOC per staff month. Now we have to compute. As we have seen, effort can be calculated as system size divided by productivity, or equivalently effort = (1 / productivity) × size. Here 1 / productivity = 1 / 0.400 KSLOC per staff month = 2.5 staff months per KSLOC, and the size is 38,000 lines of code, that is, 38 KSLOC. So the effort comes to about 2.5 × 38 = 95, roughly 100 staff months for developing the project.
So, the estimated effort is about 100 staff months. Next we have to find the minimum time required. With the rule of thumb used here, the minimum duration is taken as 0.75 × 2.5 × (effort in staff months)^(1/3), that is, 1.875 × (100)^(1/3), which on simplification comes to around 9 months. So, for this transaction project of 38,000 lines, that is 38 KSLOC, with a productivity of 400 SLOC per staff month, the effort is estimated to be about 100 staff months and the minimum time is estimated to be about 9 months.
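A small Python sketch of this calculation, assuming the 0.75 × 2.5 × effort^(1/3) rule of thumb read off from the worked example above; the function names are only for illustration.

```python
def estimated_effort(size_ksloc, productivity_ksloc_per_staff_month):
    """Simplistic model: effort (staff months) = size / productivity."""
    return size_ksloc / productivity_ksloc_per_staff_month

def minimum_duration(effort_staff_months):
    """Rule of thumb from the worked example:
    minimum schedule (months) = 0.75 * 2.5 * effort ** (1/3)."""
    return 0.75 * 2.5 * effort_staff_months ** (1 / 3)

effort = estimated_effort(38, 0.400)   # 38 KSLOC at 400 SLOC per staff month
print(round(effort))                   # ~95, i.e. roughly 100 staff months
print(round(minimum_duration(100)))    # ~9 months
```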
Similar examples are available in the book and elsewhere; please try to solve them, along with some of the exercises. These types of small exercises may be asked in the assignments as well as in the examination.
342
(Refer Slide Time: 25:40)
So, finally, let us see the summary. In this class we have discussed some different types of estimation techniques, namely the top-down and bottom-up estimation techniques, and we have compared the top-down and bottom-up approaches. We have also taken some examples to illustrate the top-down and bottom-up estimation techniques. Then we discussed the algorithmic, or parametric, model for project estimation.
343
(Refer Slide Time: 16:50)
In the next class, we will see the remaining parts of these project estimation techniques.
344
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 19
Project Estimation Techniques (Contd.)
Good afternoon. Now let us start on the other project estimation techniques. We will mainly start with the parametric models, especially the Albrecht IFPUG function point model.
345
(Refer Slide Time: 00:32)
So, let us see what a parametric model is, and then we will see a parametric model for size. One important example of a parametric model for size is function points. Function points are normally used to estimate the lines of code rather than the effort: the effort cannot be estimated directly from function points; first we estimate the lines of code, and using the lines of code we can then estimate the effort. This model works like this: it takes the number of file types and the number of input and output transaction types as the input, does some processing, and produces the system size as the output.
346
We have seen that there are many drawbacks of the LOC measure, which we discussed in the last class. The function point approach tries to resolve some of the issues that appear in the case of LOC. Now, let us see the different parametric models for size. First we will see the Albrecht IFPUG function points, then we will discuss the Symons Mark II function points, then the COSMIC function points, and then COCOMO 81 and COCOMO II. Out of all these parametric models, in this class we will discuss the Albrecht IFPUG function points.
Now, let us see how it works: what the purpose is, what the benefits are, and what the extent of use is. IFPUG stands for International Function Point Users Group. The basic purpose of this group was to promote and encourage the use of function points, because we have already seen that LOC has several drawbacks. Another objective of this group is to develop consistent and accurate counting guidelines.
Several benefits can be obtained from IFPUG, such as networking with other counters. IFPUG counting practices manuals are also available, and the group supports research projects, a hotline, newsletters and certification; these are some of the other benefits of IFPUG. As for the extent of use, the member companies include almost all industry sectors, with over 1200 members from some 30 countries in the group.
347
(Refer Slide Time: 03:11)
Actually, the Albrecht IFPUG function point was coined by Albrecht. While Albrecht was working at IBM, he found that there was a pressing need to measure the relative productivity of different programming languages, since he was using several of them. He also needed some way of measuring the size of an application without counting the lines of code, because counting lines of code has several drawbacks.
So, he identified five parameters for performing the function point analysis, and he counted the occurrences of each type of functionality in order to get an indication of the size of an information system. In other words, to estimate the size of an information system, he first identified five parameters and then counted the occurrences of each type of functionality.
In the last class we discussed different project estimation approaches such as the top-down approach and the bottom-up approach; this Albrecht IFPUG function point analysis is based on a top-down approach.
348
(Refer Slide Time: 04:31)
Now, let us see why IFPUG thinks that one should not use LOC but should rather use function points. We have already seen the drawbacks of LOC in the last class; let us quickly revise them. We know that lines of code tend to reward profligate design and penalize concise design: if your design is concise, it will penalize you.
There is no industry standard, ISO or otherwise, for lines of code; that is another drawback, and we will see that function points, by contrast, do have ISO standards. Lines of code cannot be used for normalizing across different platforms, different languages or different organizations. Some 4GLs do not use lines of code at all, so lines of code may not be suitable for those languages; lines of code can also be misleading. These are some of the reasons why one should not use LOC, but should rather try a better measure such as function points.
349
(Refer Slide Time: 05:37)
How do function points overcome the LOC problems? Number one, function points are independent of the language: we have already seen that LOC is language and platform dependent, but function points are independent of the languages, tools and methodologies used for implementation. Function points can also be estimated early, during analysis and design: we have seen that the LOC approach cannot be used in the earliest stages such as requirements analysis, specification and design, whereas function points can be used to estimate the size in those early stages of software development.
Also, since function points are based on the user's external view of the system, even lay, non-technical users of the software can have a better understanding of what function points are measuring. Function points are based on the user's external view, the user's perspective; that is why the non-technical users of the software or the project can also better understand what the function points are measuring.
350
(Refer Slide Time: 06:47)
Now, let us see the objectives of function point counting. It measures the functionality that the user requests and receives. It also measures software development and maintenance independently of the technology used for implementation; I have already told you that it is independent of any technology used, and that is why it can measure the software development and maintenance effort.
It is simple enough to minimize the overhead of the measurement process. Another objective is that it acts as a consistent measure among various projects and organizations.
351
(Refer Slide Time: 07:41)
Then, when should you count the function points? As I have already told you, you can use this technique not only during coding but before coding starts, in the requirements analysis and design phases; it can be used in the early phases of software development and used often. The sooner you can quantify what a project is delivering, the sooner it comes under control and the better you can monitor it.
Under IFPUG 4.1 there are several rules and guidelines; these rules and guidelines make it possible to count the function points once the requirements have been fixed. So, once the requirements have been frozen, you can easily count the function points by using the rules and guidelines provided in IFPUG 4.1.
The estimate of size in function points can be refined continuously throughout the development cycle; please note that it is not estimated only once. First you make an initial estimate, then the estimates can be refined continuously as you get updated information throughout the development life cycle. Please remember that the function points should be recounted throughout the development process; since they can be refined throughout the process, they can measure scope creep and breakage.
352
(Refer Slide Time: 09:10)
Let us see the steps involved in the function point computation. In the first step you have to identify the counting scope and the application boundary: what the scope of the counting is and what the application boundary is. Then you have to count two things: the data functions and the transactional functions. The data functions are the logical files that the application maintains or references, and the transactional functions are the external inputs, outputs and inquiries that cross the application boundary.
Then you determine the function point count, which at this stage is still unadjusted, since you have not refined anything yet; this first figure is called the unadjusted function point count. So, using the data function counts and the transactional function counts, you determine the unadjusted function point count. Then, as I have already told you, you have to refine it.
How is it refined? By using a factor called the value adjustment factor; there is a formula for finding it, which I will show in a few minutes. So, we determine the value adjustment factor, and then the unadjusted function point count and the value adjustment factor are multiplied to give the adjusted function point count. This is how the steps of the function point computation are followed.
353
(Refer Slide Time: 10:44)
Let us see the key components of function point analysis. Five key components, or external user types, are identified in function point analysis: external inputs, external outputs, external inquiries, internal logical files and external interface files. These are the basic five components used in function point analysis; in Albrecht's terminology they are also known as external user types.
354
Now, let us quickly define the five components I have mentioned, with their usual meanings. External inputs, EI for short, represent the transactions which update internal computer files; these operations or functions are known as external input types. External output types are the transactions which extract and display data from the internal computer files; the transactions extract the information, so these are output types, and they generally involve creating reports: if you are creating a report, it will be an output type of function.
The next one is the external inquiry type. These are user-initiated actions which provide information but do not update the computer files. You can see that in the external input types the transactions update the computer files, but an inquiry does no updating; the user-initiated transaction only provides information. Normally the user inputs some data that guides the system to the information the user needs.
The next one is the logical internal file, or LIF. This equates roughly to a data store in systems analysis and design terms.
355
Those who have studied systems analysis and design must have come across data stores; the LIF is roughly equivalent to a data store, and normally these files are created and accessed by the target system. Finally, the external interface file type represents data retrieved from a data store which is actually maintained by another, different application; that is why it is known as an external interface file type.
These are the five function point parameters we have discussed; what we have discussed is summarized here, and more explanation is given which you can read yourself.
356
(Refer Slide Time: 13:51)
Then let us see some examples for the definitions we have given. As I have already told you, an external input processes data that comes from outside the application boundary. For example, some online data entry provides an external input, and the transaction updates the customer information, so the customer information file is updated. So, here the external input processes data that comes from outside the application boundary and updates the data.
357
An external output, on the other hand, generates data that is sent outside the application boundary. In an external input the data comes from outside the application boundary, but in an external output the data is sent outside the application boundary. You can see here that there is the customer information file and a process that categorizes the customer info; it sends information such as the category summary outside to the end user, which is an external output; that is why this is an example of an external output.
What about an inquiry? An external inquiry is an output that results in data retrieval, and the result contains no derived data. Here you cannot update anything; it only results in some data retrieval, and the result cannot contain any derived data.
So, here the end user asks some inquiry, say to get the details of the customer information; the process then displays the customer info, getting the data from the customer info file and passing it to the end user; that is why it is an example of an external inquiry.
358
(Refer Slide Time: 15:43)
Next is the definition of an internal logical file. An internal logical file is a user-identifiable group of logically related data that is maintained within the boundary of the application; please remember this is not external, it is maintained within the boundary of the application, as you can see here.
The end user sends some customer info, then the process "update customer info" receives that information and accordingly updates the customer info in the customer info file. So, the customer info file will be treated as an internal logical file.
359
(Refer Slide Time: 16:23)
And the last one is the external interface file. This is a user-identifiable group of data which is referenced by the application but maintained within the boundary of another application; it is not internal, it is maintained within the boundary of another application, and that is why the name is external interface file.
We have already seen that the internal logical file lies within the boundary of the application; here, on the other hand, you can see that this file is outside the boundary of the present application and is maintained within the boundary of another application; that is why the name is external interface file. For example, here the end user does something like validating a zip code, and the zip code is stored somewhere else, in a zip code table. For validation the message has to go there and come back, and then the customer info file is updated. So, here the zip code table represents an external interface file.
360
(Refer Slide Time: 17:29)
Now, let us quickly take a small example. We want to place a purchase order, and the input data items are date, supplier number, product code, quantity required, date required, etcetera. The output data item is the purchase order number, which will be generated by the system. The entities referenced are product, purchase order, supplier, purchase order item, etcetera. Now we want to estimate the size of the system; let us see how we can do it.
361
For calculating the system size you are supposed to do the following: for each function, first find the number of input data items, n i; then the number of output data items, n o; then the number of entities read or updated, n e. After finding these counts for each function, they are added up for the whole system. This gives the number of input data items N i for the whole system, the number of output data items N o for the whole system, and the number of entities read or updated N e for the whole system.
Let us take a small sample example, where these are the requirements or functionalities. For requirement A1, for example, there are 10 inputs and 2 outputs, and the number of entity accesses is 4. Like this, for all 12 requirements, find out the inputs, the outputs and the entity access counts.
Then note that the count for each individual function is represented with a small n, and the count for the whole system is represented with a capital N. The capital N i is just the summation of the input counts, which here is 120; similarly, the number of outputs for the whole system, N o, is 120;
362
and the number of entities accessed for the whole system is the sum of those counts, which is 60. In this way we can find the inputs, outputs and entities accessed, which roughly correspond to the size of the whole application.
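A minimal sketch of this summation; only A1's counts are quoted in the lecture, the rest are placeholder figures.

```python
# Per-function counts: (n_i inputs, n_o outputs, n_e entity accesses).
function_counts = {
    "A1": (10, 2, 4),
    "A2": (12, 8, 5),   # placeholder figures
    # ... one entry for each of the 12 requirements
}

N_i = sum(n_i for n_i, _, _ in function_counts.values())
N_o = sum(n_o for _, n_o, _ in function_counts.values())
N_e = sum(n_e for _, _, n_e in function_counts.values())
print(N_i, N_o, N_e)   # whole-system totals (120, 120 and 60 in the lecture's table)
```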
Now we will see that Albrecht proposed a formula for finding the function points for different applications.
He categorized the external user types into different complexity categories: some external user types may be of low complexity, some of medium complexity and some of high complexity. Accordingly, he proposed some weights, or multipliers, for these external user types depending on their complexity.
For example, for an external input of low complexity the weight is 3; for a medium complexity one, EI is equal to 4; and for a high complexity one, the value of EI is 6. In this way he proposed multipliers for the different external user types; these are known as the Albrecht complexity multipliers, and we will use them in the calculation of function points.
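A sketch of these multipliers as a small table in Python. The external input row (3, 4, 6) and the average values used later (4, 5, 4, 10, 7) are quoted in the lecture; the remaining figures are the commonly published IFPUG weights and should be checked against the course slides.

```python
# Albrecht/IFPUG complexity multipliers as (low, average, high) weights.
WEIGHTS = {
    "external_input":          (3, 4, 6),
    "external_output":         (4, 5, 7),
    "external_inquiry":        (3, 4, 6),
    "internal_logical_file":   (7, 10, 15),
    "external_interface_file": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: {user type: (number of low, average, high complexity items)}."""
    return sum(n * w for utype, ns in counts.items()
                     for n, w in zip(ns, WEIGHTS[utype]))

# e.g. 2 average external inputs and 1 high external output: 2*4 + 1*7 = 15
print(unadjusted_fp({"external_input": (0, 2, 0), "external_output": (0, 0, 1)}))
```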
363
(Refer Slide Time: 21:02)
Let us quickly take a small example of how you can find the function point count. You want to develop software for spell checking. The spell checker accepts as input a document file and an optional personal dictionary file. The spell checker then lists all the words which are not contained in either of these files. The user can query the number of words processed and the number of spelling errors found at any stage during the processing.
364
If you know data flow diagrams, you can easily draw the data flow diagram or the context diagram for this, where the user sends inputs to the spell checker process, the process does some processing and produces the outputs; now we want to find the function point count.
How can this be done? If you look at the example, there are 2 user inputs, the document file name and the personal dictionary name, and they can be rated as average or medium complexity. There are 3 user outputs, namely the misspelled word report, the word count and the misspelled error count; they will also be treated as average or medium complexity.
The user can request two things, so there are 2 user queries: the number of processed words and the number of spelling errors; they may also be treated as average. There is 1 internal logical file, the dictionary, where all the words are stored, and there are 2 external interface files, the document file and the personal dictionary, both of average complexity. These items are described in the problem and they
365
have also been represented in the diagram as the inputs and outputs; you can look at the diagram and verify this yourself.
Now, our objective is to find out the function point count. Albrecht proposed a formula for this. According to the formula, the unadjusted function point count (UFP) is computed as a summation: you take the number of elements of each parameter type, multiply it by the corresponding weight, and add up the products.
For this example, with all items of average complexity, the UFP is computed as:
UFP = (number of inputs × 4) + (number of outputs × 5) + (number of inquiries × 4) + (number of files × 10) + (number of interfaces × 7)
Here 4, 5, 4, 10 and 7 are the weights, chosen for an average project because all the parameters here are of average type. So, the average weights are taken, the counts are multiplied by them and the summation is taken: UFP = 2 × 4 + 3 × 5 + 2 × 4 + 1 × 10 + 2 × 7 = 55. This is the unadjusted function point count; it is the initial value, which can then be refined.
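A minimal sketch of this unadjusted count for the spell-checker example, using the average weights quoted above.

```python
# Average-complexity weights for the five external user types.
AVG_WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4, "files": 10, "interfaces": 7}

# Counts for the spell-checker example (all items rated average complexity).
spell_checker = {"inputs": 2, "outputs": 3, "inquiries": 2, "files": 1, "interfaces": 2}

ufp = sum(spell_checker[k] * AVG_WEIGHTS[k] for k in AVG_WEIGHTS)
print(ufp)  # 55 unadjusted function points
```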
366
(Refer Slide Time: 25:53)
Now, let us see how this can be refined. Albrecht proposed 14 general system characteristics; these are evaluated and used to compute a value adjustment factor, known as VAF, or sometimes the TCF, the Technical Complexity Factor. The general system characteristics are things like data communications, distributed data processing, etcetera. The final calculation is based on the unadjusted function point count, which we have already obtained using the formula above, and this value adjustment factor.
367
There is a formula for computing the value adjustment factor, given as follows:
VAF = (TDI × 0.01) + 0.65
Each of the 14 parameters is given a value in the range 0 to 5; these values are known as the degrees of influence (DI), where 0 represents not present or no influence, 1 represents incidental influence, and 5 represents a strong influence of the parameter.
So, the process for calculating the VAF is as written here: evaluate each of the 14 general system characteristics on a scale of 0 to 5 in order to determine its degree of influence (DI). Then add the degrees of influence of all the 14 general system characteristics; this gives the TDI, the total degree of influence.
Then insert the value of TDI in the above formula to find the value of VAF, the value adjustment factor or complexity factor, with which we refine the value of the function point count. So, VAF = TDI × 0.01 + 0.65; Albrecht established this from his research. This VAF or TCF expresses the overall impact of the corresponding parameters on the development effort.
368
(Refer Slide Time: 27:50)
So, this is the procedure. Now, let us continue with the same example, where there are 14 general system characteristics. Let us assume that all their values are average, say 3 (I have already told you that the values lie between 0 and 5). Then how do you compute? The VAF formula I have already given. TDI is computed like this: if for each characteristic the degree of influence is 3, and there are 14 parameters, then TDI = 3 × 14 = 42. So the value adjustment factor will be VAF = 42 × 0.01 + 0.65 = 1.07.
FP = UFP × VAF
You have to multiply the unadjusted function point count by the value adjustment factor; after this you get 55 × 1.07 = 58.85. So, 55 is the unadjusted function point count for the given spell-checker application, and 58.85 is the adjusted function point count. In this example I have chosen the degree of influence of every parameter to be a uniform 3.
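A minimal sketch of this adjustment step, using the figures of the spell-checker example.

```python
def value_adjustment_factor(degrees_of_influence):
    """VAF = 0.65 + 0.01 * TDI, where TDI is the sum of the 14 DIs (each 0 to 5)."""
    return 0.65 + 0.01 * sum(degrees_of_influence)

ufp = 55                # unadjusted FP count for the spell-checker example
dis = [3] * 14          # all 14 general system characteristics rated 3
vaf = value_adjustment_factor(dis)
print(round(vaf, 2), round(ufp * vaf, 2))   # 1.07 and 58.85 adjusted function points
```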
369
(Refer Slide Time: 29:10)
But if the 14 general system characteristics have different DIs, say some are 3, some 0, some 4 and some 5, all ranging between 0 and 5, then the total degree of influence is the summation of these, which here is 36.
The value adjustment factor is then obtained by putting 36 into the above formula, which gives 1.01. Once the VAF is calculated, you can easily calculate the adjusted function point count as the
370
product of the UFP (unadjusted function point count) and the value adjustment factor, which comes to 55.55. So, we have taken two cases: in one the values of the GSC (general system characteristics) are all the same, and in the other the values of the general system characteristics are different, and we have seen how to find the adjusted function points in each case.
There is another, very small example, which you can go through yourself.
371
Reading that example you can find out how many inputs there are. Here I have taken two inputs, one of medium complexity and one of high complexity. There is one output of medium complexity, one internal file of medium complexity, and one interface file which is simple. Adding all those weights, the total UFP comes to 30 FP.
So, if you have already developed similar kinds of projects, and the previous projects delivered 5 function points per day, then you can easily calculate that you will need 30 / 5 = 6 days in order to develop this project.
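A small sketch of that arithmetic, using the Albrecht multipliers for the stated complexities.

```python
# Medium input 4, high input 6, medium output 5,
# medium internal file 10, simple interface file 5.
ufp = 4 + 6 + 5 + 10 + 5    # = 30 unadjusted function points
productivity = 5            # function points delivered per day on past projects
print(ufp / productivity)   # 6.0 days estimated for this project
```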
372
(Refer Slide Time: 30:56)
For each exercise, find out the unadjusted function point count and the VAF, and multiply them together.
373
(Refer Slide Time: 31:06)
Then you get the total function point count using the Albrecht function point method. So, finally, in this lecture we have discussed Albrecht IFPUG function point analysis and taken some examples related to it; in the next class also I will take a few more examples. We have discussed the steps to compute the FP count, presented the different function point parameters, and explained IFPUG function point counting with some suitable examples.
374
We have mainly taken the concepts from these two books; you can have a look at them for details.
375
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 20
Project Estimation Techniques (Contd.)
Good afternoon. In this lecture we will take up some examples of the Albrecht function point method, so that you can understand it better. We will take some more examples on Albrecht/IFPUG function points, and then we will take up two more types of function points, the Symons Mark II function point and the COSMIC function point.
We have already discussed parametric models for size. Last class I told you we will see four types of estimation approaches: IFPUG function points, Mark II function points, COSMIC function points, and COCOMO 81 / COCOMO II. Last class we discussed IFPUG function points; now we will see the Symons Mark II function point and the COSMIC function point, but before that let us take one or two more examples on IFPUG function points.
376
(Refer Slide Time: 01:17)
So, we will take up this example, but before going to it let me again remind you of the procedure from last class: first we find the unadjusted function point, then we refine it using the value adjustment factor. The formula for the unadjusted function point, which I have already shown in the last slides, is the summation, over the element types (inputs, outputs, queries, files, interfaces), of the number of elements of the given type multiplied by its weight.
The weights I have already given you; they fall into three categories depending on whether the element is simple, medium (average) or high. That table of Albrecht's complexity multipliers we have already seen in the last class. Now, with this, let us take up the problem. Consider a project with the following functional units: 50 user inputs, 40 user outputs, 35 user enquiries, 6 user files, and 4 external interfaces.
Assume that all the complexity adjustment factors and weighting factors are average. Then the question is: compute the function points for the project, and then compute the size, given that the program needs 70 LOC per function point.
377
(Refer Slide Time: 03:29)
So now, you can see that inputs = 50, outputs = 40, queries = 35, the number of internal files is 6, and there are 4 external interfaces.
Use these values in the given formula and it will give you the value of UFP. Basically, take the number of elements from each category (input, output, inquiry, internal file, external interface) and multiply it by the weight, which varies
378
depending on whether the element is simple, average or high. In this problem it is given that everything is average: all the complexity adjustment factors are average, as well as the weighting factors, so the computation is much easier.
So, applying the stated formula, UFP = 50 × 4 + 40 × 5 + 35 × 4 + 6 × 10 + 4 × 7 = 628. Then we have to find the value adjustment factor. Again, the formula is VAF = 0.65 + 0.01 × TDI, where TDI is the Total Degree of Influence over the 14 parameters. Since it is given that everything is average, the complexity adjustment factors, the weighting factors and the GSC (General System Characteristics) are all average.
For average, on a scale from 0 to 5, the degree of influence is 3, so TDI = 14 × 3 = 42 and VAF = 1.07. So, the AFP, the adjusted function points, will be equal to the unadjusted function points multiplied by the VAF. Why are we doing this? We want to refine the estimate. The adjusted function point is obtained by multiplying the unadjusted function point by the value adjustment factor, which gives 628 × 1.07 ≈ 672. So, this is the adjusted function point count.
Now the second part of the question: find out the size of the complete project. For this you have to use the given information that the program needs 70 LOC per function point; that means each function point corresponds to 70 lines of code. The total number of function points is 672, so the total size is the function points times the LOC per function point, that is 672 × 70. In this way you get the total size of the project in terms of LOC, which comes to 47,040.
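The whole calculation above can be expressed in a few lines. Below is a minimal Python sketch; the weight table holds only the "average" IFPUG weights used in this example (input 4, output 5, inquiry 4, internal file 10, external interface 7), and the function name ifpug_fp is just an illustrative choice, not a standard API.

    # Average IFPUG weights (simple/high weights omitted for brevity)
    AVG_WEIGHT = {"input": 4, "output": 5, "inquiry": 4,
                  "internal_file": 10, "external_interface": 7}

    def ifpug_fp(counts, avg_di=3, n_gsc=14):
        """Adjusted FP assuming every GSC has the same degree of influence."""
        ufp = sum(n * AVG_WEIGHT[kind] for kind, n in counts.items())
        vaf = 0.65 + 0.01 * (n_gsc * avg_di)      # VAF = 0.65 + 0.01 * TDI
        return ufp, ufp * vaf

    counts = {"input": 50, "output": 40, "inquiry": 35,
              "internal_file": 6, "external_interface": 4}
    ufp, afp = ifpug_fp(counts)
    print(ufp, round(afp), round(afp) * 70)       # 628, 672, 47040 LOC at 70 LOC/FP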
So, in this way, for small projects you can easily use this formula to find the function points and the size. In the examination, small questions of this type might be asked and you have to find the answer. Let us quickly take another example. In this example you have to find the function points for a project with the following characteristics: the number of user inputs is 32, the number of user outputs is 60, the number of user inquiries is 24, the number of files is 8, and the number of external interfaces is 2.
379
Assume that all the complexity adjustment values are average and all the weighting factors are average.
Then you have to find the number of function points. UFP again follows the straightforward formula I have given: the summation, over each element type, of the number of elements multiplied by its weight. You can see that there are 32 user inputs and, since the adjustment factors are average, the weight for an input is 4, so that contributes 32 × 4. Similarly there are 60 outputs with an average weight of 5, and 24 inquiries with an average weight of 4. Likewise there are 8 internal files and 2 external interfaces.
So, 8 internal files at an average weight of 10, and 2 external interfaces at an average weight of 7. Adding all of these, the UFP comes to 618. Then you compute the value adjustment factor using the formula 0.65 + 0.01 × TDI; since the general system characteristics are also average, TDI = 14 × 3 = 42 and VAF = 1.07. Finally, the adjusted function point is obtained by multiplying the unadjusted function points by the value adjustment factor, which gives 618 × 1.07 = 661.26. So, the total adjusted function point count for this project is 661.26.
380
Now suppose that for each function point you have to write, say, 80 lines of code. What will the size be in terms of LOC? It is 661.26 × 80 ≈ 52,901 LOC, which gives you roughly the size of the proposed project. So, in this way you can use the Albrecht function point method to estimate the size of a given project.
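The same hypothetical ifpug_fp sketch from the previous example reproduces this result as well:

    counts = {"input": 32, "output": 60, "inquiry": 24,
              "internal_file": 8, "external_interface": 2}
    ufp, afp = ifpug_fp(counts)
    print(ufp, afp, afp * 80)    # 618, 661.26, ~52901 LOC at 80 LOC/FP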
So now, let us quickly talk about Mark II function points, another interesting approach to counting function points. It was developed by Charles R. Symons; the details of the method are available in the book he published on software sizing and estimating (Wiley and Sons, 1991), which contains the details of his function points, that is, function points Mark II.
381
(Refer Slide Time: 09:57)
So, how does it work? For each transaction you have to count the data items input, the data items output, and the entity types accessed. These are the three parameters you have to count, and then you can find the total function points for the whole project using a formula which we will see in the next slide.
To see how it works: as I have already told you, three parameters are required, namely the number of input
382
items supplied to the process, the number of entity types accessed in the data store, and the number of output items produced.
So, this is how the function point Mark II method works; it is a simpler method than the Albrecht function point and it is widely used in the UK. The final formula for finding the Mark II function point count is the following equation:
FP = Ni × 0.58 + Ne × 1.66 + No × 0.26
where Ni, Ne and No we have already defined: Ni is the number of input data items, Ne is the number of entity types accessed, and No is the number of output data items.
These constants 0.58, 1.66 and 0.26 are weights. Symons found these values of the constants by performing experiments. He then used these constants together with the parameters Ni, Ne and No to build this formula for finding the function points; this is known as the Symons Mark II function point, and it is simpler than the Albrecht function point.
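A minimal sketch of the Mark II count, using the weights quoted above (0.58 for input data items, 1.66 for entity types accessed, 0.26 for output data items); the per-transaction counts in the example call are made up purely for illustration:

    def mark2_fp(n_input, n_entities, n_output):
        # Symons Mark II: weighted sum of the three per-transaction counts
        return 0.58 * n_input + 1.66 * n_entities + 0.26 * n_output

    # Sum over all transactions of the system; these counts are hypothetical
    total = sum(mark2_fp(ni, ne, no) for ni, ne, no in [(5, 2, 3), (8, 1, 6)])
    print(round(total, 2))       # total Mark II function point count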
Now, so far we have seen the Albrecht function points and the Symons function points; let us see whether they can be extended to real-time
383
systems and embedded systems. The Mark II and IFPUG function points were mainly designed for information systems and hence are not suitable for embedded systems. Why? Because these two function point methods are heavily dominated by input and output operations; most of the emphasis is given to the number of input and output operations. In order to overcome this problem, COSMIC full function points have been developed; they attempt to extend the concept, that is, the Mark II / IFPUG concept, to embedded systems.
So, now these concepts have been extended to handle embedded systems and real-time systems. You know that in an embedded system the features are normally hidden, because the software's user is usually not a human being; the software is used by some hardware device or another software component.
To deal with this, COSMIC decomposes the system architecture into a hierarchy of software layers, and the embedded software is seen as sitting in one particular layer of the system. The layer where this software is present communicates with the layers above and below it, and also with other components at the same level. The software component
384
which has to be sized can receive requests for services from the layers above it and can request services from those below it; this is shown in the figure.
So, the software component shown here may receive messages from higher layers, it may make a request for a service from a lower layer, it may receive a service from a lower layer, and it may provide some service to a higher layer.
Besides that, this software component may communicate with persistent storage and with peer components present in the same layer. This is the layered view of the software, and embedded systems are modelled on this concept. Since the Albrecht function points and the Symons function points cannot handle embedded systems, another function point technique has been developed, which is COSMIC. Now let us see how COSMIC works.
385
(Refer Slide Time: 14:57)
So, as I have already told you, the software component that has to be sized can receive requests for services from the layers above and can request services from those below it. This identifies the boundary of the software component to be assessed, and thus the points at which it receives inputs and transmits outputs. Here, the inputs and outputs are aggregated into data groups, where each group brings together data items that relate to the same object of interest.
386
Data groups can move about in four ways: entries, exits, reads, and writes. An entry is a movement of data into the software component under consideration, from a higher layer or a peer component. An exit is a movement of data out of the software component under consideration. A read is a data movement from persistent storage, and a write is a data movement to persistent storage.
So, these are the four ways in which the data groups can move, and each such data movement counts as one COSMIC functional size unit, in short a Cfsu. In this way the COSMIC function points can be calculated, and they can be estimated for any embedded system or real-time system.
How is the overall full function point count found? It is derived by simply adding up the counts for each of the 4 types of data movement that we saw in the last slide. These COSMIC FFPs have been incorporated into an ISO standard. We have already seen that LOC and similar measures are not incorporated into any ISO standard, but the COSMIC FFPs have been.
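A minimal sketch of the COSMIC count: each data movement of any of the four kinds contributes one Cfsu, so the size is just the sum of the four counts (the numbers below are hypothetical):

    # Counts of the four kinds of data movement for the component being sized
    entries, exits, reads, writes = 12, 9, 7, 5      # hypothetical counts
    cfsu = entries + exits + reads + writes          # 1 Cfsu per data movement
    print(cfsu)                                      # COSMIC full function point count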
387
(Refer Slide Time: 17:16)
Now, let us quickly look at the disadvantages of COSMIC FPs. The metric does not take into account any processing of the data groups once they have been moved into the software component: once the data groups have been moved in, the metric does not account for any processing done on them. Consequently it is not recommended for use in systems which involve complex mathematical algorithms. These are important drawbacks of COSMIC function points.
388
Now let us summarize the pros and cons of function points. On the positive side, function points are language independent; we have already seen that LOC is language dependent, but function points are not. They are understandable by the client, because no technical details are involved; you are just analysing the functionalities and their associated items, so even non-technical clients can follow them. It is a simple modelling technique, it is hard to fudge, and feature creep becomes visible.
On the flip side, computing function points is very labour intensive, extensive training is required, and inexperience may result in inaccuracy: if the person computing the function points is inexperienced, the count may be inaccurate. The method is weighted towards file manipulation and transactions, and errors may be introduced when only a single person does the rating. So, multiple raters are advised: if you assign only one person to estimate, errors may creep in, so you should use multiple raters to rate the application and reconcile their different weights.
One of the major problems with function points is that they do not consider the algorithmic complexity of a function. Every function is handled in the same way, but you know that some functions are more complex and need more effort, while some functions are very straightforward and require less effort or less time. Since the function point metric does not consider algorithmic complexity, it rates all functions alike.
389
(Refer Slide Time: 19:53)
So, I have already told you that the function point metric suffers from a major drawback: the size of a function is considered to be independent of its complexity, which is not true, because the complexities of functions differ. In order to overcome this problem, an extension to the function point metric has been proposed, known as the feature point metric. The feature point metric takes into account an extra parameter, known as algorithmic complexity.
390
This extra parameter, algorithmic complexity, ensures that the size computed using the feature point metric reflects the following fact: the higher the complexity of a function, the greater the effort required to develop it. If the complexity of a function is high, you have to put in more effort to develop it; if the complexity is very low, say a small GUI-type function, it requires less effort. Therefore a more complex function should have a larger feature point count, and hence a larger size, compared to a simpler function.
Finally, let us see the relationship between function points and SLOC, that is, how a given function point count can be converted to SLOC. The conversion rate differs across programming languages; for example, for the C language there are 104 SLOC per function point, so one function point may correspond to 104 SLOC. Similarly, for C++ the SLOC per function point is 53, and so on. You can see it is highest for C in this table, and almost lowest for HTML and Visual Basic, where it is 42.
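A small sketch of the conversion, using only the figures quoted above (C: 104, C++: 53, HTML and Visual Basic: 42 SLOC per function point); other languages would need their own entries:

    SLOC_PER_FP = {"C": 104, "C++": 53, "HTML": 42, "Visual Basic": 42}

    def fp_to_sloc(fp, language):
        # Convert a function point count to an SLOC estimate for one language
        return fp * SLOC_PER_FP[language]

    print(fp_to_sloc(672, "C"))   # the 672-FP example project, if coded in C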
391
(Refer Slide Time: 21:54)
So, how can you make your size estimation still more accurate? One extra thing you have to take into account is the risk factors, because so far we have almost neglected them. In order to make the size estimation more accurate, you have to take the risk factors and multiply them in.
So, you take the requirements, estimate the project size, multiply by the project complexity adjustment factor, and then multiply by the risk factor. The first two quantities, the project size and the project complexity adjustment, we have already seen in function point analysis: you can treat them as the unadjusted function points and the complexity factors, and by multiplying them you get a value, which is then multiplied by the risk factor.
So, these two factors can be obtained from function point analysis, while the risk factor depends on the situation and has to be adjusted accordingly. Finally, if you use all of these and take the product, you can estimate the cost and the effort and you can perform the scheduling.
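As a rough Python sketch of this chain (all three numbers are placeholders; the risk factor in particular is whatever multiplier your own risk analysis justifies):

    ufp = 628                 # unadjusted FP from requirements (project size)
    vaf = 1.07                # project complexity adjustment from the 14 GSCs
    risk_factor = 1.10        # hypothetical uplift from the risk analysis
    adjusted_size = ufp * vaf * risk_factor
    print(round(adjusted_size, 2))   # estimate used for costing and scheduling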
392
(Refer Slide Time: 23:14)
Finally, let us quickly talk about object points. Do not confuse object points with object-oriented programming; they have nothing to do with it. Here the number of object points is estimated based on three factors: first, the number of separate screens that are displayed; second, the number of reports that are produced; and third, the number of modules present in the code. Taking these into account and using a formula, you can find the object points. Object points are usually simpler to estimate and they take graphical user interfaces into account. So, this is a little bit of the fundamentals of object points.
393
(Refer Slide Time: 23:57)
So, finally, to summarize: we have solved two more problems on Albrecht/IFPUG function points, then we have discussed the Symons Mark II function points and the COSMIC function points, and we have seen that COSMIC FFPs are already included in the ISO standards. We have also discussed the concept of the feature point, because we have seen that the function point has one important drawback: it does not take algorithmic complexity into account. If two different functions are there, it treats them equally, but in fact one might be highly complex and the other much less complex; for the highly complex one we have to put in more effort and for the less complex one less effort. Since this complexity is not taken into account by function points, the function point metric has been extended to consider it through an extra parameter called algorithmic complexity.
So, the algorithmic complexity parameter captures the fact that if the complexity of a function is higher, more effort has to be given, and if it is lower, less effort has to be given. A function with higher complexity should therefore have more function points and hence a larger size; a function with lower complexity should have fewer function points and hence a smaller size. We have also discussed a little bit about object
394
points, which are based on three factors or parameters, as we have discussed.
So, these are the things that we have discussed today. In the next class we will see the COCOMO model.
We have taken the material mainly from these books; both of them have been used for these topics. Finally, we stop here.
395
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 21
Project Estimation Techniques (Contd.)
Yes, good afternoon. Now we will take up one more parametric model, namely COCOMO: COCOMO 81 and COCOMO II.
Last class we saw the Albrecht IFPUG function points, the Symons Mark II function points, and the COSMIC function points. One model was remaining, COCOMO. So, in this class we will discuss the parametric models for size known as COCOMO 81 and COCOMO II.
396
Let us first see what COCOMO stands for. COCOMO stands for Constructive Cost Model; it was first published by Dr. Barry Boehm in 1981, and several interactive cost estimation software packages based on it are available. The COCOMO model was derived from the statistical regression of data from a base of 63 past projects, with sizes ranging from about 2,000 to 512,000 delivered source instructions. From this data, collected from those 63 past projects, Boehm derived the COCOMO model by statistical regression.
397
We will first see COCOMO 81 and then COCOMO II. COCOMO 81 is based on industry productivity standards; its database can be constantly updated, and it allows an organisation to benchmark its software development productivity. The basic model is as follows. We want to calculate the effort, and the basic model says that effort can be calculated as
effort = c × (size)^k
where c and k depend on the type of the system, that is, the type of product you are going to develop; the product type could be organic, semidetached or embedded, and the size is measured in KLOC, that is, thousands of lines of code.
Now, let us see the different COCOMO modes and models. There are 3 different environments or modes for COCOMO; as I have already told you, they are the organic mode, the semidetached mode and the embedded mode. There are also 3 increasingly complex COCOMO models: first the basic model, then the intermediate model, then the detailed model. Today we will discuss only the basic model; the other two, the intermediate and detailed models, we will discuss in the next class.
398
Let us first look at the COCOMO modes. As I have already told you, there are 3 modes or environments for COCOMO: organic, semidetached and embedded. When do we call a product organic? A product which can be developed in a familiar and stable environment is said to be in the organic mode.
So, the product you are going to develop is similar to previously developed products, and normally its size should be less than 50,000 delivered source instructions. For example, an accounting system and most information systems normally come under the organic mode.
So, for example, an accounting system, a student information system or a payroll information system all come under the organic mode. What about the semidetached mode? Before going to the semidetached mode, let us see the embedded mode. The embedded mode covers a product which is completely new, requiring a great deal of innovation and a great deal of effort, with inflexible constraints and interface requirements.
So, normally systems software kind of products, like operating systems, real-time systems, embedded systems and so on, come
399
under the embedded mode. Now let us see the semidetached mode. The products which fall in between the organic mode and the embedded mode are normally called semidetached; semidetached-mode products lie somewhere in between the organic and embedded modes.
Examples are utility applications such as compilers, linkers and so on.
In this table we have shown the different features of products under the different modes. For example, the organisational understanding of the product and its objectives is thorough in the organic mode, considerable in the semidetached mode, and general in the embedded mode. The experience in working with related software systems is extensive for organic, considerable for semidetached, and moderate for embedded.
The need for software conformance with pre-established requirements is basic for organic, considerable for semidetached, and full for embedded. Similarly, the need for software conformance with external interface specifications is basic for organic, moderate to considerable for semidetached, and full for embedded applications.
400
As I have already told you, there are different COCOMO models: the fundamental one, the basic model, then the intermediate model and the detailed model. Let us first see the basic model. Normally, the basic model is used for early, rough estimates of project cost, performance and schedule.
So, this model can be used for early, rough estimates of project cost, performance and schedule. Its accuracy is within a factor of 2 of the actuals 60 percent of the time. The intermediate model uses an Effort Adjustment Factor (EAF) derived from 15 cost drivers; it does not account for 10 to 20 percent of the cost, for example training, maintenance and quality.
Here the accuracy is within 20 percent of the actuals 68 percent of the time. The other model, called the detailed model or the complete COCOMO model, uses different effort multipliers for each phase of the project. Most project managers use the intermediate model.
401
Let us first see the basic effort equation of the basic COCOMO model, known as COCOMO 81: what is the equation for estimating effort?
According to COCOMO 81, the fundamental equation for estimating effort is
effort = c × (size)^k
where c is a constant based on the development mode: for the organic mode the value of c is 2.4, for the semidetached mode it is 3.0, and for the embedded mode it is 3.6.
402
All those values of c and k are presented in a table for better understanding (c = 2.4, 3.0, 3.6 and k = 1.05, 1.12, 1.20 for the organic, semidetached and embedded modes respectively). Here k is the exponent; since it is greater than 1, it adds disproportionately more effort to larger projects, and it also accounts for the bigger management overheads.
Now, let us find the equation for the schedule using the basic COCOMO model, COCOMO 81. The nominal development time is 2.5 times the effort raised to an exponent, where the effort has already been calculated using the fundamental formula effort = c × (size)^k.
403
So, that value of the effort is put into the equation to get the nominal development time:
Tdev = 2.5 × (effort)^exponent
where 2.5 is a constant for all the modes (organic, semidetached and embedded). The exponent again depends on the mode: for the organic mode its value is 0.38, for the semidetached mode it is 0.35, and for the embedded mode it is 0.32.
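Putting the two basic COCOMO 81 equations together as a small Python sketch; the constant tables hold the values quoted in this lecture (the semidetached effort exponent 1.12 is the standard COCOMO 81 value, filled in here since only the organic 1.05 and embedded 1.20 values are used explicitly in the examples):

    # COCOMO 81 basic-model constants, indexed by development mode
    C = {"organic": 2.4, "semidetached": 3.0, "embedded": 3.6}
    K = {"organic": 1.05, "semidetached": 1.12, "embedded": 1.20}
    T_EXP = {"organic": 0.38, "semidetached": 0.35, "embedded": 0.32}

    def basic_effort(size_kloc, mode):
        """Effort in person-months: c * size^k."""
        return C[mode] * size_kloc ** K[mode]

    def nominal_tdev(effort_pm, mode):
        """Nominal development time in months: 2.5 * effort^exponent."""
        return 2.5 * effort_pm ** T_EXP[mode]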
So, we have seen these two equations for estimating effort and duration; now let us see what the graphs look like for the different products. In this graph we have taken size along the x-axis and effort along the y-axis. You can see that effort is somewhat superlinear in problem size: as the problem size increases, the effort grows faster than in direct proportion.
404
Similarly, you can draw a graph between size and development time, and you can see that the development time is a sublinear function of the product size. What does this mean? When the product size increases, the development time does not increase in the same proportion; when the product size doubles, the development time does not double. Another observation you can make from this graph is that the time taken is almost the same for all 3 product categories: whether the product is organic, semidetached or embedded, the time taken to develop it is almost the same.
405
And, as I have already told you, the development time does not increase linearly with the product size, because for larger products more parallel activities can be identified and carried out simultaneously by a number of engineers.
Hence the development time does not increase linearly with product size, which is why the graph comes out like this.
For example, the development time is roughly the same for all 3 categories of products, as I have already told you. Let us take an example: a 60 KLOC program can be developed in approximately 18 months irrespective of whether the product is organic, semidetached or embedded. Why? Because, as I have already told you, for larger programs such as systems and application programs there is more scope for parallel activities than for utility programs.
Another thing you can note: this graph shows how the curve looks when the exponent is greater than 1 (the effort curve) and when the exponent is less than 1 (the duration curve). The duration curve, for increasing effort, uses an exponent less than 1 (0.38 for organic), so it grows slowly, whereas for effort the exponent on size is greater than 1.
406
So, for effort the graph looks like this. This is how the graphs for effort and duration look depending on the value of the exponent; whether it is greater than 1 or less than 1 changes the shape of the curve.
Now, let us take some examples to elaborate on this. We will use the equations of basic COCOMO and try to estimate the effort and the nominal development time.
The first example says that the size of an organic software product has been estimated to be 32,000 lines of source code. Assume that the average salary of a software developer is Rs. 15,000 per month. What do you have to do? Determine the effort required to develop the software product, the nominal development time, and the cost of developing the product. As we can see from the problem, it is given to be an organic software product.
Now, the formula for effort, as you know, is c × (size)^k; we have to find the values of c and k for organic software. The values of c and k have already been given in a table, as follows.
407
For an organic product, the value of c is 2.4 and the value of k is 1.05. So, I will just put the values of c and k in the equation.
For an organic product c = 2.4 and k = 1.05; the size is already given as 32,000 lines of code, that is, 32 KLOC. So, the effort can be estimated as 2.4 × (32)^1.05, and I have already told you in the last class that effort is normally expressed in terms of person-months or man-months.
So, on solving this equation, the effort comes to approximately 91 person-months. For the nominal development time we have already seen the equation: it is 2.5 × (effort)^exponent, where the value of the exponent varies with the mode; since this is an organic type of product, the exponent is 0.38.
So, I put the value of the effort and the exponent 0.38 into the equation: nominal development time = 2.5 × (91)^0.38, and on solving this you get approximately 14 months. Now, the question also asks how much it will cost to develop the product, so we have to see what
408
the staff cost will be. The problem says that the average salary of a software developer is Rs. 15,000 per month, and we require 91 person-months to develop the software. So, the staff cost required to develop this organic product will be 91 × 15,000, which comes to Rs. 13,65,000.
So, in this way, given a product, how do you estimate the effort, the nominal development time and the cost? The first step is to identify what kind of product it is, because in the examination it may not be stated clearly as organic software; it might just be described as an information system.
So, you have to apply common sense: an information system means organic software, and a real-time system means embedded software. Accordingly, remember the values of the constants and put them into the equation. After finding the effort by putting the values of c and k into the equation, find the nominal development time by using the value of the effort obtained in the previous step together with the appropriate constant, which again varies between the modes (organic, semidetached and embedded). That gives you the nominal development time. Then, to find the staff cost, the question may give the average salary of the developer; multiply it by the effort and you get the total cost required to develop the project.
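The hypothetical basic_effort and nominal_tdev helpers sketched earlier reproduce this example:

    effort = basic_effort(32, "organic")          # ~91 person-months
    tdev = nominal_tdev(effort, "organic")        # ~14 months
    cost = round(effort) * 15000                  # staff cost at Rs. 15,000 per person-month
    print(round(effort), round(tdev), cost)       # 91, 14, 1365000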
409
Now let us quickly take another example. Suppose you are developing a software product in the organic mode and you have estimated the size of the product to be about 1 lakh lines of code; we have to compute the nominal effort and the development time.
We can see that it is given as an organic product, so we can easily choose the values of c and k; the size is also given as 1 lakh lines of code, that is, 100 KLOC.
So, the nominal effort: we know that for an organic product c = 2.4 and k = 1.05. Putting these values into the equation, with the size given as 100 KLOC, the nominal effort = 2.4 × (100)^1.05; on solving, you get 302.1 man-months, which you can round off to about 302 man-months.
So, this is the effort. Then the nominal development time = 2.5 × (effort)^0.38, since for the organic mode the constant is 0.38; this gives approximately 21.9 months. So, for this project the nominal effort is found to be 302.1 man-months (person-months) and the nominal development time comes to about 21.9 months.
410
Let us take another example. Suppose that a certain software product for a business application costs Rs. 15,000 to buy off-the-shelf, and that its size is 40 KLOC. Assuming that in-house developers cost Rs. 6,000 per programmer-month (including overheads), compute the nominal effort, the development time and the total cost of the project. Notice that it is not directly stated whether the product is organic or semidetached; it is only given that it is a business application.
A business application means an information system, so indirectly you can say that it is of the organic type. You have to start from the point that the product is for a business application and hence can be classified as organic. Now we have to find the nominal effort, and we know that for organic products the constants are c = 2.4 and k = 1.05. Note that the size may be given in KLOC or in plain LOC; if it is given in LOC you should convert it into KLOC. Here it is directly given as 40 KLOC.
So, using 40 in place of size, the effort comes to about 115.5 man-months. The in-house engineer cost is given as Rs. 6,000 per programmer-month, and I have to find the total cost. With an effort of about 115.5 person-months and a per-month cost of Rs. 6,000, the total cost required to develop this product comes to approximately Rs. 6,92,669.
411
(Refer Slide Time: 22:11)
Still another example: suppose you are going to construct an organic project whose size is 7.5 KLOC.
For an organic product we know that c = 2.4 and k = 1.05, so the effort = 2.4 × (7.5)^1.05, which is nearly 20 staff-months. The development time = 2.5 × (20)^0.38 (0.38 being the constant for an organic product), which on simplification comes to approximately 8 months.
Now, how do you calculate the average staff required? It is calculated by dividing the effort by the development time: 20 / 8, so roughly 2.5 staff members will be required. If you want to find the productivity, it is the total size, 7,500 LOC, divided by the effort of 20 staff-months.
Since the effort is 20 staff-months, 375 LOC can be developed per staff-month; so the productivity is 375 LOC per staff-month.
412
Another example is like this: suppose an embedded project has 50 KLOC, and again we want to compute the nominal effort, the nominal development time, the average staff and the productivity. Here the product size is given as 50 KLOC and the type is embedded; for an embedded project we know c = 3.6 and k = 1.20. Putting these values in, we can get the effort.
So, the equation simplifies to 3.6 × (50)^1.20, and on solving you get approximately 394 person-months for this embedded project. The nominal development time = 2.5 × (effort)^exponent; here the effort is about 394 and the value of the exponent for an embedded project is 0.32, which on simplification gives about 17 months. How do you get the average staff? Divide the effort by the nominal development time, which comes to approximately 23 staff members. The productivity is obtained by dividing the total size of the project by the effort in person-months (staff-months): 50,000 LOC divided by 394 staff-months, which is approximately 127 LOC per staff-month. So, whereas in the previous example the productivity was 375 LOC per staff-month, in this example it is approximately 127 lines of code per staff-month.
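The derived figures for this embedded example follow the same pattern; a short sketch reusing the hypothetical helpers above:

    effort = basic_effort(50, "embedded")             # ~394 person-months
    tdev = nominal_tdev(effort, "embedded")           # ~17 months
    avg_staff = effort / tdev                         # ~23 engineers on average
    productivity = 50000 / effort                     # ~127 LOC per staff-month
    print(round(effort), round(tdev), round(avg_staff), round(productivity))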
413
So, in this way, in the examination simple questions may be asked where you first identify what kind of product it is and what the values of the constants are for that product; then, remembering the required equations, you can easily estimate the effort and the development time, and from those calculate the average staff, the productivity, the total cost and so on. So, please solve some of the examples and exercises from the book or from internet sources.
A small exercise we have given to you is this: a software package is required by a company to mine the existing customer data to select prospective customers for a new launch. Its size is estimated to be 30 KLOC. Assume that competent developers can be hired at Rs. 50,000 per month. However, a commercial offering supporting almost all the required features costs Rs. 1 lakh. So, here you see that developers can be hired at Rs. 50,000 per month, but if you purchase from outside, the commercial offering on the market costs Rs. 1 lakh in total.
So, the question is: should the company develop (build) this software or buy it? We have to make a decision. The hints are: first identify what type of product it is; once that is identified, calculate the effort; the developer salary is given per
414
month, so find out the total cost if it is developed in-house, then compare it with the market value (if you purchase it from the market, it costs Rs. 1 lakh); compare these values and take a decision on whether the company should go for buying it or building it.
These guidelines will help you make a viable decision; the make-or-buy decision can be based on the following conditions. Will the off-the-shelf software product be available sooner than internally developed software? If you go to the market you can certainly obtain it very quickly.
So, we have to consider whether the software product will be available sooner than internally developed software. Similarly, we have to consider whether the cost of acquisition plus the cost of customisation will be less than the cost of developing the software internally, and whether the cost of outside support (maintenance contracts and so on) will be less than the cost of internal support.
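A tiny sketch of how these three conditions might be checked mechanically (every input here is a placeholder you would fill in from your own estimates):

    def prefer_buy(buy_months, build_months, buy_cost, customisation_cost,
                   build_cost, outside_support_cost, internal_support_cost):
        # Buy looks attractive when it is available sooner, cheaper to acquire
        # and customise than to build, and cheaper to support externally.
        return (buy_months < build_months
                and buy_cost + customisation_cost < build_cost
                and outside_support_cost < internal_support_cost)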
415
So, before going for a buy-or-build decision, these are the issues we must consider; once we have looked at them, we can take a proper decision on whether to buy or build the software. Finally, to summarize: we first discussed the fundamental concepts of basic COCOMO; we also discussed the various types of projects or products, namely organic, semidetached and embedded; we presented the cost and effort estimation formulae using basic COCOMO; and finally we solved some examples on estimation of cost and effort using basic COCOMO.
416
We have taken the references from these books, the details you can find in these books.
417
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineer
National Institute of Technology, Rourkela
Lecture -22
Project Estimation Techniques (Contd.)
Good morning. Now, let us take up the next cost estimation technique. Last class we discussed the basic COCOMO model; now let us take up the other two types of COCOMO, the intermediate COCOMO and the complete COCOMO.
418
(Refer Slide Time: 00:31)
There are some disadvantages of basic COCOMO: the basic COCOMO model assumes that the effort and development time depend on the product size only. However, in modern project development several other parameters also affect effort and development time, such as reliability requirements; you might require that your product be highly reliable.
You may also be using modern CASE tools or modern programming facilities for the developers, which will definitely reduce the effort, or the size of the data to be handled may be quite large. These requirements or parameters can affect the effort and development time, so they have to be addressed in the COCOMO model. In order to consider these issues, the basic COCOMO model has been enhanced.
419
(Refer Slide Time: 01:36)
The newer COCOMO model is called the intermediate model. For more accurate estimation, and to consider the effect of all such modern relevant parameters, a slightly more advanced COCOMO model was proposed, known as intermediate COCOMO. The intermediate COCOMO model recognizes these relevant parameters and addresses them.
So, first you make the initial estimate; then intermediate COCOMO refines the initial estimate obtained by basic COCOMO by using a set of 15 cost drivers. First prepare the initial estimate using basic COCOMO, then refine it using the set of 15 cost drivers or multipliers. Now let us see what the 15 cost drivers applicable for intermediate COCOMO could be.
420
(Refer Slide Time: 02:39)
Before going to the list of 15, let us take an example of how these cost drivers or multipliers affect the effort or the cost. For example, if somebody is using modern programming practices rather than traditional programming practices, then the effort required will definitely be less, and hence the initial estimate obtained by basic COCOMO has to be scaled downwards.
Similarly, suppose you are developing a real-time application for which there are stringent reliability requirements: it must not fail, there should not be any delay, the deadlines must be met, and there may be several other reliability requirements on the product. Then you have to put in much more effort, which means that the initial estimate you obtained has to be scaled upwards. In this way you can consider how the other parameters affect the initial estimates of effort and cost.
421
(Refer Slide Time: 03:50)
So, then what do you have to do? As I have already told you, intermediate COCOMO uses 15 cost multipliers or cost drivers. You rate the different parameters on a scale of one to three, and depending on these ratings you multiply in the cost driver values.
Note that not all 15 cost drivers may be applicable for every system; maybe 1, 2 or 10 of them apply. You know which parameters, which cost drivers, are relevant for your project: find them and rate them. Then multiply the values of these cost drivers with the estimate obtained using basic COCOMO. The result of this multiplication gives a more accurate estimate; that is the objective of intermediate COCOMO.
But please remember that even though I am saying the rating is made between one and three, in some cases the value of the parameter or cost driver may also be less than 1. For example, when you are using modern programming practices instead of traditional programming practices, the effort required will be much less, which means you have to scale downward; in that case the value of the cost driver will be less than 1,
422
maybe 0.9 or so. I will give you the table with the exact values of these cost drivers.
Let us see how intermediate COCOMO proceeds. It takes basic COCOMO as the starting point: using basic COCOMO, prepare the initial estimate. Then identify the attributes that affect the cost and development time of your project, such as personnel attributes, product attributes, computer attributes and project attributes; all 15 attributes are categorized into these four categories and are expected to affect the cost and development time of your product.
Then what do we have to do? We multiply the basic cost obtained by basic COCOMO by these attribute multipliers, or cost multipliers, which may either increase or decrease the initial basic cost.
423
(Refer Slide Time: 06:25)
Now, let us see the attributes. I have told you there are 15 attributes categorized into four categories: product attributes, project attributes, personnel attributes and computer attributes. Let us see all 15 attributes under these different categories.
Under personnel attributes you will see analyst capability, virtual machine experience, programmer capability, programming language experience and application experience; these attributes relate to the personnel working on the project. Codes may be used for these attributes, for example ACAP for analyst capability, and so on. The product attributes are things like the reliability requirement, the database size and the product complexity; these are attributes related to the product we are developing.
424
(Refer Slide Time: 07:37)
Then you must consider the computer attributes: the execution time constraints, the storage constraints, the virtual machine volatility, and the computer turnaround time; these attributes may also affect the effort and cost of your product. And finally the project attributes, also known as environment attributes.
For example, which modern programming practices are you using: software re-engineering, reuse, reverse engineering, object-oriented programming, and so on? All these modern practices will make your effort and cost much less. Then, what kind of automated software tools are being used? If something that would otherwise be done manually is done with tools, the effort required will obviously be less. Similarly, the required development schedule: whether it is a very tight schedule or a flexible one will also affect the effort and cost. So, these are the 15 attributes you must consider to get a more accurate estimate of the effort and cost.
425
(Refer Slide Time: 08:48)
The COCOMO effort multipliers, as I have already told you, are also known as cost drivers. Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" in importance or value. You may think of this as the importance of the attribute: very low, low, nominal, high, very high or extra high. For each rating level a multiplier value is defined; I will give you the table with these values.
So, after identifying which attributes affect your product, categorize each one (very low, high, and so on) and find the corresponding value from the table, which I will show in the next slide. Then take the product of all such effort multipliers; this multiplication gives a term called the EAF, the Effort Adjustment Factor.
This effort adjustment factor may range from about 0.9 to 1.4. Then, having already found the initial estimate using basic COCOMO, multiply the EAF (effort adjustment factor) with that initial estimate; this gives you the refined effort, a more accurate estimate of the effort.
426
(Refer Slide Time: 10:31)
These are the 15 cost drivers, as I have already told you. The slide mentions a six-point
rating, but only five ratings are shown here: very low, low, nominal, high and very high;
so in this table it is effectively a five-point scale. For example, the multiplier for
required reliability is 0.75 when it is very low, and when the required reliability is very
high it is 1.40.
You can see that in some cases the value will be less than 1. For example, if the product
complexity is very low you may assign 0.70. Similarly, for use of modern programming
practices, if it is high the multiplier is less than 1, namely 0.91, and if it is very high it
is 0.82. The nominal values of all the cost drivers, that is the average values, are all 1,
and relative to that a high rating on an attribute such as required reliability gives a
value more than 1. On the other hand, for some parameters such as analyst capability, a
very low rating gives a value greater than 1. In this way you can use these values of the
cost drivers to get the refined estimate.
427
(Refer Slide Time: 12:06)
So, let us see the equation the intermediate model uses for estimating effort. The effort
equation can be stated like this:
Effort = EAF * c * (size)^k
This is almost the same as basic COCOMO; only the EAF factor has been added. EAF is
the Effort Adjustment Factor, which is the product of the effort multipliers selected for
your product corresponding to each cost driver rating.
Here c is a constant based on the development mode. There are three development
modes, namely organic, semi-detached and embedded, and for intermediate COCOMO
the values are: for organic c is 3.2, for semi-detached it is 3.0 and for embedded it is 2.8.
The size is expressed in thousands of delivered source instructions (KDSI), and k is an
exponent that is a constant for the given mode. In this way, using this equation, you can
estimate the effort with the intermediate COCOMO model.
428
(Refer Slide Time: 13:27)
Similarly, you can also compute the development time. The development time
calculation uses the effort obtained from the above equation, that is, the effort adjusted
using the EAF. It uses the effort in the same way as in basic COCOMO; that means the
nominal development time will be
Tdev = 2.5 * (Effort)^exponent
Here the effort is the value you obtained after multiplying by the effort adjustment
factor, that is, the refined or revised effort; putting that value here gives you the
development time.
Here 2.5 is a constant for all the modes, whether organic, semi-detached or embedded.
The exponent depends on the mode you are using: for organic products it is 0.38, for
semi-detached products it is 0.35, and for embedded products it is 0.32. So, by using
these two formulas, one for the effort multiplied by the EAF and one for the nominal
development time, you can estimate the effort and cost using the intermediate COCOMO
model.
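As an aside, the two formulas can be put together in a small Python sketch. It only
restates the equations above; the function name, the example multiplier, and the
semi-detached effort exponent of 1.12 (not quoted in this lecture) are assumptions used
purely for illustration.

MODE = {
    # mode: (c, effort exponent k, development-time exponent)
    "organic":      (3.2, 1.05, 0.38),
    "semidetached": (3.0, 1.12, 0.35),   # 1.12 is assumed; the lecture does not quote it
    "embedded":     (2.8, 1.20, 0.32),
}

def intermediate_cocomo(size_kdsi, mode, multipliers):
    """Return (effort in person-months, nominal development time in months)."""
    c, k, t_exp = MODE[mode]
    eaf = 1.0
    for m in multipliers:            # EAF = product of the cost driver multipliers
        eaf *= m
    effort = eaf * c * size_kdsi ** k
    tdev = 2.5 * effort ** t_exp     # nominal development time
    return effort, tdev

# Example: a 10 KDSI organic product with a single cost driver rated at 1.15
effort, tdev = intermediate_cocomo(10, "organic", [1.15])
print(round(effort, 1), "person-months,", round(tdev, 1), "months")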
429
(Refer Slide Time: 14:55)
Now, let us take a small example to see how to use these cost drivers or effort
multipliers. In this example we consider only one attribute, the capability of the analyst,
and we want to find the refined estimate of the effort.
Depending upon the capability of the analyst, you can rate this attribute as very low,
low, nominal, high or very high. From the table, for analyst capability the multiplier is
1.46 if it is very low, 1.19 if it is low, 1.0 if it is nominal, and it reduces to 0.86 if it is
high and to 0.71 if it is very high. Like this you can find out these values.
Find the values of the multipliers from the table we have already shown. Now, suppose
the initial estimate of the effort is 32.6 staff-months and suppose the analyst capability is
rated high, with a multiplier value of 0.80. Since only one cost driver is considered here,
the effort adjustment factor is simply 0.80. This effort adjustment factor is multiplied
with 32.6, which gives roughly 26.8 staff-months.
430
In this way we are adjusting the nominal estimate of the effort. The refined or adjusted
effort comes to 26.8 staff-months using the effort adjustment factor of 0.80. So, in this
way you can use the values of the cost multipliers to find the refined estimate of the
effort.
Now, let us take another example. Determine the effort, duration and staffing level for
the following scenario. Suppose you have estimated the size of a product to be 10,000
LOC, that is, 10 KLOC. It is a small project and the development is familiar. It is given
that the analyst capability is low, the application experience is also low, the programmer
capability is low, and the programming language experience is high.
431
(Refer Slide Time: 17:59)
We want to find the effort, development time and staffing. From the problem you can see
that a program of size 10 KLOC is to be produced. Since it is a small project with
familiar development, it can be treated as organic mode. We will first estimate the effort
using basic COCOMO. Using the standard basic COCOMO formula, with c equal to 2.4
and the exponent equal to 1.05 for organic products, the effort is 2.4 * (10)^1.05, which
is roughly 26.9 person-months. The development time, using the straightforward formula
2.5 * (Effort)^0.38 with the exponent 0.38 for organic products, comes to roughly 8.7
months. Hence the average staff, which is effort divided by development time, is roughly
3 people. Next, we have to refine these values using the cost multipliers or cost drivers.
432
(Refer Slide Time: 19:05)
Now, the attribute multipliers are as follows; these values can be obtained from the table
that I have provided. Analyst capability is low, which gives 1.19. Application experience
is also low, which gives 1.13. Programmer capability is low, which gives 1.17. But the
programming language experience is high, so its multiplier is less than 1, namely 0.95,
as given in the table. The effort adjustment factor is the multiplication of all these
values, so the EAF, the Effort Adjustment Factor, comes to 1.19 * 1.13 * 1.17 * 0.95,
which is approximately 1.49.
So, the revised effort is computed as the basic estimate of 26.9 multiplied by the EAF of
1.49, which gives about 40 person-months. The development time will now be
2.5 * (40)^0.38, not 2.5 * (26.9)^0.38, because we have to take the revised value of 40;
this gives roughly 10.2 months. These are the refined values obtained by taking the cost
multipliers into account. The average staff required is now 40 divided by 10.2, which is
3.9, or approximately 4 people. In this way we can use the values of the cost drivers or
multipliers to refine the basic estimates more accurately. We will quickly take another
example; a small sketch reproducing this calculation is given below.
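Here is a minimal Python sketch that reproduces this worked example; the four
multiplier values are the ones read from the table above.

basic_effort = 2.4 * 10 ** 1.05            # basic COCOMO, organic, 10 KLOC: about 26.9 PM
eaf = 1.19 * 1.13 * 1.17 * 0.95            # ACAP low, AEXP low, PCAP low, LEXP high
effort = basic_effort * eaf                # about 40 person-months
tdev = 2.5 * effort ** 0.38                # about 10.2 months
staff = effort / tdev                      # about 4 people
print(round(effort, 1), round(tdev, 1), round(staff, 1))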
433
(Refer Slide Time: 21:02)
Suppose we have to develop a flight control system, which is mission critical, with
319,000 DSI in embedded mode. A flight control system is of course a very risky and
very complex project, so it is treated as an embedded type of product.
The required reliability should be very high, and for very high reliability the multiplier
value is 1.40. Now we can calculate the effort. First we compute the initial estimate
using basic COCOMO and apply the reliability multiplier: 1.40 * 3.6 * (319)^1.20,
where for the embedded case the constant c is 3.6, the size is 319 KDSI, and the
exponent for embedded mode is 1.20. This comes to approximately 5093 person-months.
The duration, again by the straightforward formula, will be 2.5 * (5093)^0.32, which is
38.4 months, and the average staffing, which is effort divided by duration, comes to
roughly 133 people. Now we have to use the cost drivers. You can see that only one cost
driver, the reliability attribute, is given, so we have to directly multiply with it. In fact,
while calculating the effort above I have already taken this reliability into account,
because according to basic COCOMO alone we should get only 3.6 * (319)^1.20.
434
Applying intermediate COCOMO, I take the value of the effort adjustment factor; since
only one cost driver, reliability, is given and its value is 1.40 for very high, I have
multiplied 1.40 into 3.6 into (319)^1.20, so we get 5093 person-months, which is the
refined estimate of the effort. Similarly, for the duration you use this revised effort,
2.5 * (5093)^0.32, which gives the revised duration of 38.4 months. The average
staffing is the refined effort divided by this duration, which is approximately 133
people. In this way, you can use the cost drivers to refine the initial estimates.
Similarly, let us quickly take another example. An embedded software system on
microcomputer hardware has to be developed. Basic COCOMO predicts that 45
person-months of effort is required. Here four attributes are relevant, so you have to
consider four cost drivers: reliability, storage, time and tool. Suppose the values of these
multipliers are already given for this embedded software system.
Intermediate COCOMO can now estimate the effort as follows. Basic COCOMO gives
45 person-months, so the effort is 45 multiplied by the effort adjustment factor. How do
we find the effort adjustment factor? By multiplying all
435
the values of the four cost drivers together; that is, the values of the cost drivers are
multiplied. This gives 76 person-months.
Assume that the cost per person-month that we have to pay is rupees 50,000. Then the
total cost for developing this embedded software system will be 76 into 50,000, which is
equal to rupees 3,800,000. In this way you can use intermediate COCOMO to estimate
or to refine the basic estimates; a small sketch of this calculation is given below.
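A tiny Python sketch of this last calculation. The individual multiplier values are not
quoted in the transcript, so the product of the four multipliers is simply assumed to be
about 1.69, which is what the stated result implies.

basic_effort = 45                 # person-months from basic COCOMO
eaf = 1.69                        # product of reliability, storage, time and tool multipliers (assumed)
effort = basic_effort * eaf       # about 76 person-months
cost = effort * 50000             # rupees, at Rs. 50,000 per person-month
print(round(effort), round(cost))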
Now let us quickly look at the shortcomings of the basic and intermediate COCOMO
models. For better accuracy, COCOMO has to be calibrated to an organization's
environment. It is also very sensitive to parameter changes: even minor adjustments to
the parameters can cause a difference of over a person-year in the estimate for a 10
KLOC project. It is a broad-brush model that can generate significant errors. These are
some of the drawbacks.
436
(Refer Slide Time: 25:13)
Similarly, as you know, in today's software development there is software reuse, there
are application generation programs, object oriented approaches are used, and
re-engineering activities such as reuse, reverse engineering and application translation
are common; there is also a need for rapid development. Basic and intermediate
COCOMO do not consider these modern practices. This is another drawback of the
basic and intermediate COCOMO models, and so there is a need to go for another
model.
437
Still another important drawback is that both the models, basic and intermediate
COCOMO, consider a software product as a single homogeneous entity, as if it contained
only one type of component. However, most large systems are made up of several
smaller sub-systems, where some sub-systems may be of organic type, some may be of
embedded type, and some may be of semi-detached type.
In addition, some of the sub-systems may have high reliability requirements while others
may have very low reliability requirements, and so on. Basic and intermediate COCOMO
cannot handle all these things, because all the sub-components are treated as similar; the
software product is considered a single homogeneous entity.
That is why another, more advanced model of COCOMO has come into existence, known
as complete COCOMO or detailed COCOMO. It overcomes some of the limitations of
basic and intermediate COCOMO. How does it work? The cost of each sub-system is
first estimated separately using basic or intermediate COCOMO. Then the costs of the
sub-systems are added to obtain the total cost. As I have already told you, some of the
sub-systems might be organic, some might be semi-detached, some might be embedded,
and in some sub-systems the required reliability might be very high.
438
So, you have to estimate the cost of each sub-system separately and individually, and
then the costs of the sub-systems are added to obtain the total cost. This will obviously
reduce the margin of error in the final estimate.
Now, let us quickly take a small example. Suppose we have to develop a management
information system for an organization which has offices at several places across the
country. This will be a fairly complex project; it is not a homogeneous project, and it
consists of different types of sub-systems. One part may contain the database component,
which you may treat as a semi-detached system. Another component might involve
development of graphical user interfaces, which will obviously be treated as an organic
component. Similarly, since the organization has offices at different locations in the
country, these offices have to communicate, so you also have to develop a
communication module. This module will be treated as embedded.
So, what we have to do is compute or estimate the cost of these three components, which
are of three different types, by using either basic COCOMO or intermediate COCOMO.
First estimate the cost of each of the individual components separately. Then add up the
costs of these individual
439
components, which will give you the overall total cost of the system. This is how
complete COCOMO or detailed COCOMO works; a small sketch of this idea is given
below.
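The idea can be sketched in a few lines of Python. The sub-system sizes below are
invented for illustration, and the semi-detached constants (3.0, 1.12) are the usual basic
COCOMO values, assumed here because the lecture quotes only the organic and
embedded ones.

BASIC = {"organic": (2.4, 1.05), "semidetached": (3.0, 1.12), "embedded": (3.6, 1.20)}

subsystems = [
    ("database part",        "semidetached", 20),   # size in KLOC (hypothetical)
    ("GUI part",             "organic",      12),
    ("communication module", "embedded",      6),
]

total = 0.0
for name, mode, kloc in subsystems:
    a, b = BASIC[mode]
    effort = a * kloc ** b          # estimate each sub-system separately
    total += effort
    print(name, round(effort, 1), "PM")
print("total effort:", round(total, 1), "PM")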
Finally, we have come to the summary. We have discussed the fundamentals of
intermediate COCOMO and presented the different cost drivers or multipliers, the 15
cost drivers and their values. We have explained cost and effort estimation using
intermediate COCOMO and shown some of the limitations of basic and intermediate
COCOMO. We have solved some examples on cost and effort estimation using
intermediate COCOMO, and we have also discussed the fundamental concepts of
complete COCOMO, or detailed COCOMO. This is the summary of what we have seen.
440
(Refer Slide Time: 29:47)
441
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 23
Project Estimation Techniques (Contd.)
Good morning. Now let us see another important cost estimation technique, another
variation of COCOMO, which we call COCOMO II. In some places you will find it
written with the Roman numerals as COCOMO II, and in some books it is written with
the digit as COCOMO 2; either way there is no problem.
Since the time the COCOMO estimation model was proposed in the early 1980s, the
software development paradigm as well as the characteristics of development projects
have undergone a large change. Present day software projects are much larger in size,
and reuse of existing software components to develop new products has become
pervasive. Similarly, other new techniques such as reverse engineering and
re-engineering, and the use of automated CASE tools, have come up. These need to be
addressed while estimating the cost. So the initial version of COCOMO has been
extended, and the extension is known as COCOMO II.
442
(Refer Slide Time: 01:32)
New development paradigms are also being deployed, for web-based applications and
for component-based software, and they too need to be taken into account while
estimating the effort and cost. During the 1980s rarely any program was interactive, and
graphical user interfaces (GUIs) were almost non-existent. But now, a program without a
GUI is of little use; hardly anybody will use it.
So, how to take the development of GUIs into account while estimating the effort and
cost is also very important.
443
On the other hand, present day software products are highly interactive and support
elaborate graphical user interfaces, so these have to be considered while estimating the
effort and cost. The effort spent on developing the GUI part is often as much as the
effort spent on developing the actual functionality of the software; almost 50 percent of
the effort may go into the GUI part, so it cannot be neglected. Since this is so, we must
take the development of the GUI part into consideration while estimating the effort and
cost.
444
In order to make COCOMO suitable in this changed scenario, where modern
programming paradigms, GUIs and other techniques such as reverse engineering and
reuse are present, Boehm proposed COCOMO 2 in 1995. As I have already told you,
some books write it with the Roman numeral as COCOMO II and some with the digit as
COCOMO 2, so please do not be confused.
COCOMO 2 provides a set of models to arrive at increasingly accurate cost estimates,
taking into account all those modern techniques. They can be used to estimate the
project cost at different phases of the software product; as the project progresses, these
models can be applied at different stages of the same project. Although the slide says
three models, more recent material describes four models, with one more model added
for reuse, so you can take it as four models.
Now let us see the main objectives of COCOMO II. The main objective of COCOMO II
is to develop a software cost and schedule estimation model tuned to the life cycle
practices of the 1990s and 2000s. Another objective is to develop a software cost
database and tool support capabilities for continuous model improvement. This has been
taken from "Cost Models for Future Software Life Cycle Processes: COCOMO 2.0",
published in the Annals of Software Engineering, 1995.
445
(Refer Slide Time: 05:01)
Let us see the COCOMO II models. COCOMO II incorporates a range of sub-models
that produce increasingly accurate estimates; the estimates are more accurate than those
of the previous COCOMO models. As I have already told you, there are four sub-models
in COCOMO II: the application composition model, the early design model, the reuse
model, and the post-architecture model.
The application composition model is used when the software is composed from existing
parts: some parts already exist and you are composing them to develop a new product,
so you use the application composition model.
Next is the early design model. This sub-model is used when the requirements are
available but the design has not yet started; only the requirements are known, and that is
why the name early is used. The reuse model is used to compute the effort of integrating
reusable components: some components, code or databases already exist and can be
reused in another application, and then you use the reuse model. The last one is the
post-architecture model, which is used once the system architecture has been designed
and more information about the system is available.
446
When the system architecture has already been designed and more information about the
system is available, you can use the post-architecture model; that is why the name is
post-architecture. The architecture is already designed and more information is already
available, and with this we estimate the cost and other parameters. Since it is applied
after the architecture is prepared, it is known as the post-architecture model.
This is summarized here. The application composition model is based on the number of
application points and is used for systems developed using dynamic languages, databases
and database programming. The early design model is based on the number of function
points and is used for initial effort estimation based on the system requirements and
design options. The reuse model is based on the number of lines of code that is reused or
generated, and it is used to estimate the effort to integrate reusable components or
automatically generated code. The post-architecture model is based on the number of
lines of source code, and it is used for the development effort based on the system design
specification.
447
In COCOMO 81 we used the term DSI, which stands for Delivered Source Instructions,
but in COCOMO II we use the term SLOC, Source Lines of Code. Delivered Source
Instructions is very similar to SLOC, but there is one difference. The major difference is
that a single source line of code may span several physical lines. For example, an
if-then-else statement would be counted as only one source line of code, but it may be
counted as several delivered source instructions. This is the important difference between
DSI and SLOC.
448
Now let us see the core model, that is, how the effort can be estimated using COCOMO
II. The core model says that
pm = A * (size)^sf * (em1 * em2 * ... * emn)
where pm is the effort in person-months, A is a constant whose value is 2.94, size is the
number of thousands of lines of code, sf is the scale factor which is taken as the exponent
here, and the em i's are the effort multipliers, which we have already seen in the case of
intermediate COCOMO.
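A minimal Python sketch of this core equation; the 8 KLOC all-nominal example that
appears later in this lecture is used here only as a check.

A = 2.94

def cocomo2_core(size_kloc, scale_factor, effort_multipliers):
    pm = A * size_kloc ** scale_factor     # pm = A * (size)^sf
    for em in effort_multipliers:          # multiplied by each effort multiplier
        pm *= em
    return pm

print(round(cocomo2_core(8, 1.0997, [1.0]), 1))   # roughly 28.9 person-months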
Now let us see the first sub-model, the application composition model. As I have already
told you, this is applicable to prototyping projects (I hope you already know about the
prototyping model) and to projects where there is extensive scope for reuse. It is based
on standard estimates of developer productivity in terms of application points or object
points.
The application composition model, as the name suggests, is based on standard estimates
of developer productivity in terms of application points or object points per month. We
will see what application points are; the application points are normally computed using
449
objects such as the number of screens, the number of reports, and the number of modules
or components present in your application.
Please do not be confused: these objects have nothing to do with object oriented
programming. Here, examples of objects are screens, reports and modules. So, the
screens, reports, modules, components and so on are treated as the objects, and you have
to count those objects.
Based on these objects you can compute the number of application points. The
application composition model also takes the CASE tools used into account. Now, let us
see the formula for estimating effort using the application composition model.
The formula for estimating effort using the application composition model is given by
PM = (NAP * (1 - %reuse/100)) / PROD
where PM is the effort in person-months, NAP stands for the number of application
points, which is computed from the different objects such as the number of screens,
reports and modules, %reuse is the percentage of the application supplied by reused
components, and PROD stands for the productivity.
450
This model is suitable for software built around graphical user interfaces using modern
GUI builder tools. As I have already told you, it uses application points or object points
as the size metric; object points or application points are an extension of function points.
Here the object point or application point count is a count of the screens, reports and
modules, as I have already told you. It is based on counting the number of screens,
reports and modules, each weighted by a three-level factor: simple, medium or difficult.
So the application points or object points are a size metric based on counting the screens,
reports and modules, weighted by this three-level factor.
Now let us quickly look at application points. These are used with languages such as
database programming languages or scripting languages. The number of application
points is a weighted estimate of three things; as I have already said, it is found by
counting the number of screens, reports and modules, weighted by a three-level factor.
451
The number of application points is a weighted estimate of three things: the number of
separate screens that are displayed, the number of reports that are produced, and the
number of modules.
Let us see the first one, the number of separate screens that are displayed. Simple
screens are counted as 1 object point, moderately complex screens as 2, and very
complex screens as 3 object points.
Similarly, consider the number of reports that are produced. The screens were rated with
1, 2 or 3 object points; the reports carry somewhat higher weights. The number of reports
that are produced is rated as follows.
For simple reports count 2 object points, for moderately complex reports count 5, and for
reports which are likely to be very difficult to produce count 8 object points. Similarly,
how do we count the modules? The modules considered are those in languages such as
Java or C++ that must be developed to supplement the database programming code. Each
of these modules counts as 10 object points.
452
(Refer Slide Time: 14:44)
This is the table showing the application-point productivity. The developer's experience
and capability is rated on a 5-point scale: very low, low, nominal, high, very high.
Similarly, the ICASE maturity and capability can be very low, low, nominal, high or very
high, and against these ratings the productivity values are given. That means if the
developer's experience and capability is very low and the ICASE maturity and capability
is very low, use a productivity of 4; here productivity is counted in number of application
points per month. Similarly, if the developer's experience and capability is low and the
ICASE maturity and capability is low, the productivity is taken as 7.
These values were produced by Boehm, based on extensive research over data from many
projects.
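Putting the counting rules and the productivity table together, a rough Python sketch of
the application composition estimate could look like this. The counts themselves are
invented, and the %reuse adjustment is ignored for simplicity.

def application_points(screens, reports, modules):
    """screens and reports are lists of complexity ratings; modules is a count."""
    screen_weight = {"simple": 1, "medium": 2, "difficult": 3}
    report_weight = {"simple": 2, "medium": 5, "difficult": 8}
    nap = sum(screen_weight[s] for s in screens)
    nap += sum(report_weight[r] for r in reports)
    nap += 10 * modules                  # each supplementary Java/C++ module is 10 points
    return nap

nap = application_points(["simple", "medium", "medium"], ["simple", "difficult"], 2)
prod = 7                                 # low developer experience and low ICASE maturity
effort_months = nap / prod               # PM = NAP / PROD
print(nap, round(effort_months, 1))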
453
Now let us see the scale drivers. I have already shown you that in the core effort equation
there is a factor called the scale factor; let us now see how it is defined. The scale drivers
determine the exponent that has to be used in the effort equation, and they are an
important factor contributing to a project's duration and cost. The scale drivers have
replaced the development modes of COCOMO 81; recall that COCOMO 81 had three
development modes,
namely organic, semi-detached and embedded, but in COCOMO II scale drivers are used
and they replace those three development modes of COCOMO 81. The five scale drivers
used in COCOMO II are as follows: precedentedness, development flexibility,
architecture/risk resolution, team cohesion, and process maturity.
454
Then we come to the next models, the early design model and the post-architecture
model. Here the effort is calculated as follows:
Effort = A * (size)^B * M
where M is the product of the environment multipliers, which cover product, platform,
people and project factors; size is the estimated size, taking reuse and volatility effects
into account; and the exponent B is obtained from the process scale factors, namely the
constraints, risk/architecture, team and maturity factors.
The schedule, that is the development time, in the early design and post-architecture
models is calculated from the effort using a similar power-law formula whose exponent
again depends on the process scale factors, namely the constraints, risk/architecture,
team and maturity factors.
455
Now let us see the COCOMO II scaling exponent approach. Here the nominal
person-months is calculated as follows:
PM(nominal) = A * (size)^B, where B = 0.91 + 0.01 * (sum of the scale factor ratings)
The value of B may range from 0.91 to 1.23. There are five scale factors and six rating
levels, which we will see. The five scale factors are precedentedness, development
flexibility, architecture/risk resolution, team cohesion, and process maturity; process
maturity is derived from the SEI Capability Maturity Model. Codes are used for these
scale factors, and they will be referred to while solving problems. Now let us see the six
rating levels for these five scale factors.
456
These are the six rating levels: very low, low, nominal, high, very high, and extra high.
The codes used for the scale factors are PREC, FLEX, RESL, TEAM and PMAT. They
are rated on this 6-point scale, and PMAT is a weighted sum of the 18 KPA achievement
levels in the CMM.
Next we will see the reuse model. Reuse costs include the overhead of assessing,
selecting and assimilating components. Even small modifications can generate
disproportionately large costs, so these also have to be taken care of.
457
The reuse model takes into account two kinds of code: black-box code that is reused
without any change, and code that has to be adapted to integrate it with the new code.
Correspondingly there are two versions of the reuse model. In black-box reuse the code is
not modified, and an effort estimate is computed directly. In white-box reuse the code is
modified; a size estimate equivalent to a number of lines of new source code is computed,
and this then adjusts the size estimate for the new code. This is what happens in the reuse
model.
458
The reuse estimates are made as follows; we will see two possibilities. For automatically
generated or reused code, the effort in person-months is
PM = (ASLOC * AT/100) / ATPROD
Here ASLOC is the number of lines of reused or generated code, AT is the percentage of
code that is automatically generated, and ATPROD is the productivity of the engineers in
integrating such code. When the code has to be understood and integrated, the formula
changes; the equivalent number of lines of new source code is
ESLOC = ASLOC * (1 - AT/100) * AAM
where ASLOC and AT are as before, and AAM is a new term here.
AAM is the adaptation adjustment multiplier, which is computed from the cost of
changing the reused code, the cost of understanding how to integrate the code, and the
cost of reuse decision making. So, when the code has to be understood and integrated,
this is how you compute the ESLOC.
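The two reuse estimates can be written down directly in Python; the numbers in the
example calls are purely illustrative.

def reuse_effort_generated(asloc, at_percent, atprod):
    """Effort for automatically generated code: PM = (ASLOC * AT/100) / ATPROD."""
    return (asloc * at_percent / 100) / atprod

def equivalent_sloc(asloc, at_percent, aam):
    """Equivalent new code when reused code must be understood and integrated:
    ESLOC = ASLOC * (1 - AT/100) * AAM."""
    return asloc * (1 - at_percent / 100) * aam

print(round(reuse_effort_generated(20000, 30, 2400), 2))   # person-months
print(round(equivalent_sloc(20000, 30, 0.25)))             # equivalent new SLOC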
459
Now we will see the last model, the post-architecture model. It uses the same formula as
the early design model, but with 17 rather than 7 associated multipliers; in the early
design model 7 associated multipliers are used, whereas in the post-architecture model
17 multipliers are used. The code size is estimated as follows: first the number of lines of
new code to be developed is found; then the equivalent number of lines of new code is
estimated, computed using the reuse model we have already shown.
Finally, an estimate is made of the number of lines of code that have to be modified
according to requirements changes; whenever the requirements change you may have to
modify some of the code, so this too has to be estimated.
460
Now let us quickly look at the COCOMO II scale factors. As I have already told you,
there are five scale factors, which we have seen earlier. These five factors appear to be
particularly sensitive to system size. Precedentedness (PREC) represents the degree to
which there are past examples that can be consulted, that is, similar cases you have
already handled earlier. Architecture/risk resolution (RESL) represents the degree of
uncertainty about the requirements: whether the requirements are clear-cut or very
uncertain. Team cohesion (TEAM) reflects the cohesion among the team members, and
process maturity (PMAT) can be assessed using the Capability Maturity Model.
461
These are the COCOMO II scale factors. Each of the drivers is rated on the 6-point scale,
and the corresponding values are given in the table.
Now let us quickly take an example of using the scale factors. A software development
team is developing an application which is very similar to previous ones it has already
developed; that is, they have already developed similar types of applications and are now
developing another one.
462
A very precise software engineering document lays down very strict requirements. So the
precedentedness (PREC) is very high, and for very high the value is 1.24. Similarly, the
development flexibility is very low, and for very low the value is 5.07. The good news is
that the requirements are unlikely to change, so RESL is high with a score of 2.83. The
team is tightly knit.
So the TEAM score is high, and for high the value is 2.19. But the processes are very
informal, so PMAT is low, and for low the PMAT value is 6.24. Now you can find the
scale factor.
The formula for the scale factor sf (the exponent) is
sf = 0.91 + 0.01 * (1.24 + 5.07 + 2.83 + 2.19 + 6.24)
that is, 0.91 plus 0.01 times the sum of the five values obtained above. On simplification
you get 1.0857.
463
Suppose the system contained 10 KLOC; then what would the estimate be? It is the value
of A, 2.94, into the size, 10, raised to the power of the exponent 1.0857. On
simplification you get 35.8 person-months. Note that using exponentiation, that is the 'to
the power of', adds disproportionately more to the estimates for larger applications.
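This scale-factor example is easy to reproduce in Python:

ratings = {"PREC": 1.24, "FLEX": 5.07, "RESL": 2.83, "TEAM": 2.19, "PMAT": 6.24}
B = 0.91 + 0.01 * sum(ratings.values())    # scale factor sf = 1.0857
effort = 2.94 * 10 ** B                    # 10 KLOC system
print(round(B, 4), round(effort, 1))       # 1.0857 and about 35.8 person-months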
Besides the five scale factors that we have already seen, effort multipliers are also
assessed. The following are the effort multipliers used in COCOMO II: RCPX, product
reliability and complexity; RUSE, reuse required; PDIF, platform difficulty; PERS,
personnel capability; PREX, personnel experience; FCIL, facilities available; and SCED,
schedule pressure. These different effort multipliers also have to be assessed.
464
Their values are given on a 7-point scale, and you can use them directly.
A new project to be developed is similar in most of its characteristics to those that an
organization has been dealing with for some time; they have already developed similar
projects before. The exceptions are the following: the software to be produced is
exceptionally complex and will be used in a safety critical system.
465
The software will interface with a new operating system that is currently in beta status,
and to deal with this, the team allocated to the job is regarded as exceptionally good but
does not have a lot of experience with this type of software. We now have to estimate the
effort.
Here RCPX is very high, so 1.91; PDIF is very high, 1.81; PERS is extra high, 0.50; and
PREX is nominal, 1.0. These values you can get from the table; all other factors are
nominal. Suppose the nominal estimate is 35.8 person-months. Then, with these effort
multipliers, the revised estimate becomes 35.8 multiplied by the product of these
multipliers, which gives about 61.9 person-months. In this way you can use the effort
multipliers and the scale factors to refine the initial estimates using COCOMO II; a small
sketch of this refinement is given below.
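A short sketch of this refinement step, using the multiplier values just quoted:

nominal = 35.8
multipliers = {"RCPX": 1.91, "PDIF": 1.81, "PERS": 0.50, "PREX": 1.0}   # all others nominal
revised = nominal
for em in multipliers.values():
    revised *= em
print(round(revised, 1))    # about 61.9 person-months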
466
Let us take another example. A project with all nominal cost drivers and scale drivers
would have an EAF (Effort Adjustment Factor, which we have already discussed) of 1.0
and an exponent, E, of 1.0997. Assuming that the project is projected to consist of 8,000
source lines of code, we have to estimate the effort. Using the previous equations,
COCOMO II estimates 28.9 person-months; now let us work it out.
How do we work it out? The effort is equal to the value of the constant A, 2.94, into the
EAF (Effort Adjustment Factor), which is 1, into the size to the power of the exponent.
For this data, a project with all nominal cost drivers, EAF is 1, the exponent value is
1.0997 and the size is 8,000 source lines, that is, 8 KLOC. So the estimate is based on
the simple formula A into EAF into the size in KLOC to the power of the exponent.
467
The value of A is 2.94, EAF is 1.0, the size is 8,000 source lines, which means 8 KLOC,
and the exponent is 1.0997. Putting these values in the equation, Effort = 2.94 * 1.0 *
(8)^1.0997, and solving, we get approximately 28.9 person-months. So COCOMO II
estimates that roughly 28.9 person-months of effort will be required to develop this
project.
Finally, to summarize: we have discussed the fundamentals of COCOMO II, explained its
various sub-models, presented the COCOMO II scale factors and effort multipliers, and
solved some examples on estimating effort and cost using the COCOMO II model.
468
We have taken the references from these books.
469
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 24
Project Estimation Techniques (Contd.)
Good afternoon. Now, let us take another kind of Project Estimation Technique that is
Halstead’s Software Science which is an analytical estimation technique.
470
When we started on project estimation techniques, we noted that there are three main
categories of estimation techniques: empirical estimation, heuristic estimation and
analytical estimation. We have already discussed empirical estimation and heuristic
estimation. Empirical estimation is based on making an educated guess of the project
parameters; in a project there are different parameters such as size and so on. In
empirical estimation the project manager first makes an educated guess of the project
parameters and then gradually estimates the various other parameters such as effort, cost
and so on. So this technique is based on making an initial educated guess of the different
project parameters and then calculating and refining the other parameters. Examples of
empirical estimation techniques are the expert judgment technique and the Delphi
technique, which we have already discussed in the previous classes.
The next category is heuristic estimation. This technique assumes that the relationships
which exist among the different project parameters can be satisfactorily modelled using
suitable mathematical expressions. Later you can refine the estimates using multipliers
or cost drivers in order to get more accurate values. The examples are the COCOMO
models; we have seen different kinds of COCOMO models under heuristic estimation,
such as basic COCOMO, intermediate COCOMO, complete (detailed) COCOMO and
COCOMO 2, all in the previous classes.
471
(Refer Slide Time: 03:09)
Analytical estimation techniques, the third category, have a scientific basis: the
assumptions, equations and methods used in these techniques are not vague but are
derived from certain scientific principles. One example under this category is known as
Halstead's software science. Today we will discuss Halstead's software science and how
it can be used for the estimation of different project parameters.
472
Halstead's software science is very suitable for estimating software maintenance effort.
This technique outperforms both the empirical and heuristic techniques as far as software
maintenance effort is concerned: it estimates parameters such as effort and cost more
accurately and gives better results than the previous two categories of techniques.
I have already told you that Halstead's software science is an analytical technique for
estimating different parameters of a project, such as the size, the development effort, the
development time and the program volume; we will see the full set of parameters it can
be used to estimate.
473
(Refer Slide Time: 06:15)
Halstead used a few primitive program parameters in order to estimate effort, cost and so
on. Which primitive program parameters did he use? He used two important ones: the
number of operators and the number of operands. Using the number of operators and the
number of operands, a project manager can estimate certain parameters for each project.
What parameters can be estimated? Using Halstead's software science you can derive
expressions for estimating the overall program length, the potential minimum volume,
the actual volume of the program, the language level of the program, the effort that will
be spent to develop the program and, finally, how much development time will be
required to develop that program.
474
(Refer Slide Time: 07:44)
Now, let us see how software’s Halstead’s software science works. For a given program
let eta 1 be the number of unique operators which are used in the program, eta 2 be the
number of unique operands which are used in the program, N 1 be the total number of
operators used in the program, and N 2 be the total number of operands used in the
program.
So, basically eta 1 and eta 2 are unique operators under unique operands, whereas N 1
and N 2 these are total number of operators and operands used in the program. With this
we can derive the expressions for these parameters what we have shown in the previous
slide such as overall program length, potential minimum volume, actual volume,
language level, effort, and development time, etcetera.
Now let us see how, using these terms, we can derive expressions for the desired
parameters such as the program length and volume. Before deriving the expressions, let
us first see some guidelines that will help us identify the operators and operands.
475
(Refer Slide Time: 09:02)
These are the guidelines that may be used for identifying the operators. Assignment,
arithmetic and logical operators are usually counted as operators; so assignment
operators, arithmetic operators such as plus and minus, and logical operators are all
counted as operators.
Similarly, a label is considered to be an operator if it is used as the target of a GOTO
statement. Constructs such as if-then-else-endif, while-do, do-while and so on are each
considered as a single operator.
A sequencing operator such as the statement terminator, the semicolon that you have
very often seen ending statements in C programs, is also considered as a single operator.
476
(Refer Slide Time: 10:25)
We have seen some guidelines for the operators; now let us see the guidelines for
identifying the operands. The subroutine declarations and variable
477
declarations comprise the operands. The function name in a function definition is not
counted as an operator; please remember that.
Similarly, the arguments of a function call are considered as operands. However, the
parameter list of a function in the function declaration statement is not considered as
operands; while declaring a function, the parameters present in the function declaration
are not counted as operands.
Let us look at the common programming language C and its possible operators; the slide
lists examples of operators, like
478
those shown, which are all valid operators in C as far as Halstead's software science is
concerned.
Having defined the operators, let us define the operands. The operands are those
variables and constants which are used with the operators in expressions; whatever
variables and constants you are using in the expressions along with the operators are
treated as operands. Note that the variable names appearing in declarations are not
considered as operands.
These are the guidelines we have given for identifying the operators and the operands.
Now let us take an example. Consider the small expression
a = &b;
In this example, a and b are the operands, whereas the equal to sign, the ampersand sign
and the semicolon are the operators.
479
Similarly, if there is a function call statement
func (a, b);
then the function name func, the comma present between a and b, and the semicolon are
considered as the operators, whereas the variables a and b are treated as the operands. In
this way, given a program, you can easily identify the operators and operands; this will
help in deriving the expressions for the volume, effort, time and so on.
Now let us define two other important things, called the length and the vocabulary. The
length of a program, as defined by Halstead, quantifies the total usage of all operators
and operands in the program. Recall the terms we defined earlier: η1 is the number of
unique operators used in the program, η2 is the number of unique operands used in the
program, N1 is the total number of operators used in the program, and N2 is the total
number of operands used in the program.
480
With this, and with the guidelines for identifying the operators and operands, the length
can be defined using N1 and N2. The length is
N = N1 + N2
because, according to Halstead, the length of a program quantifies the total usage of all
the operators and operands; we simply add the total operator and operand counts.
The program vocabulary is defined as the number of unique operators and operands used
in the program, not the total operators and operands. Thus the program vocabulary is
η = η1 + η2
that is, the number of unique operators used in the program plus the number of unique
operands used in the program. In this way we can compute the length and vocabulary of
a particular program using Halstead's software science.
The next parameter is the program volume. How do we define the program volume? Note
first that the
481
length of a program depends on the choice of the operators and operands used in the
program.
In other words, for the same programming problem, the length would depend on the
programming style, because different programmers write programs in different styles.
This type of dependency would produce different measures of length for essentially the
same problem when different programming languages are used by different programmers;
even if the problem to be coded is the same, different languages and styles will produce
different values for the length.
Thus, while expressing the program size, the programming language used must be taken
into consideration. The program volume is usually specified by the term V, which is
equal to
V = N log2(η)
482
N, you have already known. Total number of N is the what V is the program volume we
are saying, and N just we have seen N is the N 1 plus N 2. Where eta is, that means, total
number of operators and operands used. Where eta is equal to eta 1 plus eta 2 that is the
total number the unique number of operators and operands used. So, program volume is
specified by the term V which is equal to N into log 2 eta.
The idea behind this is that, intuitively, the program volume V is the minimum number of bits required to encode the program. In order to represent η different identifiers uniquely, we need at least log2(η) bits, where η is the program vocabulary. Therefore, the volume V represents the size of the program while approximately compensating for the effect of the programming language used.
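Continuing with the same hypothetical numbers, here is a one-line sketch of the volume computation (assuming the length and vocabulary have already been obtained as above):

import math
N, eta = 43, 17          # hypothetical length and vocabulary
V = N * math.log2(eta)   # program volume V = N * log2(eta), in bits
print(round(V, 1))       # about 175.7 bits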
The next term to discuss is the potential minimum volume. We have already seen the volume; now let us see the potential minimum volume. The potential minimum volume, denoted V*, is defined as the volume of the most succinct program in which a problem can be coded.
For example, the minimum volume is obtained when the program can be expressed using a single source code instruction, such as a call to a small function foo(); it then contains only one statement.
Thus, if an algorithm operates on input and output data d1, d2, ..., dn, the most succinct program would be f(d1, d2, ..., dn), for which η1 = 2 and η2 = n; that is, the number of unique operators is 2 and the number of unique operands is n. Therefore, the potential minimum volume is
V* = (2 + n) * log2(2 + n)
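As a quick check of this expression (a sketch only), the potential minimum volume for an algorithm with n input and output operands can be computed as:

import math
def potential_minimum_volume(n):
    # most succinct form f(d1, ..., dn): eta1 = 2, eta2 = n
    return (2 + n) * math.log2(2 + n)
print(round(potential_minimum_volume(3), 2))   # 5 * log2(5), about 11.61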
Now we will define another term called the program level. The program level, denoted L, measures the level of abstraction provided by the programming language. L is specified as
L = V* / V
where V* is the potential minimum volume and V is the volume. Thus, the higher the level of a language, the less effort it takes to develop a program using that language.
So, if the level of a language is high, it takes less effort to develop a program in that language. This agrees with the fact that it takes more effort to develop a program in assembly language than in a high-level language to solve the same problem. Developing in assembly language takes more effort because it is a low-level language, whereas with a high-level language such as C or C++ we normally put in less effort, which is exactly what L = V*/V implies.
(Refer Slide Time: 23:58)
Next we will define the effort and the time required. We have already seen earlier, using COCOMO, how effort can be estimated; in Halstead's approach, the effort required to develop a program is obtained by dividing the program volume by the level of the programming language used to develop the code. Mathematically,
E = V / L
Here, E can be interpreted as the number of mental discriminations required to implement the program, and also as the effort required to read and understand the program. Thus, substituting L = V*/V into E = V/L, on simplification we get
E = V^2 / V*   (since L = V* / V)
That is, the programming effort E varies as the square of the volume V. Experience shows that E is well correlated with the effort needed for maintenance of an existing program; Halstead and other researchers have conducted many experiments that support this.
Another term we define here is the programmer's time, given by
T = E / S
where E is the effort and S is the speed of mental discriminations. The value of S has been empirically derived from psychological reasoning, and the recommended value for programming applications is 18; that is, for programming applications the speed of mental discriminations S should be taken as 18.
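Putting these relations together, here is a minimal sketch that computes the program level, effort, and time from hypothetical values of V and V*, taking S = 18 as recommended:

V = 175.7        # program volume (hypothetical)
V_star = 11.61   # potential minimum volume (hypothetical)
S = 18           # speed of mental discriminations

L = V_star / V        # program level, L = V*/V
E = V * V / V_star    # effort, E = V/L = V^2/V*
T = E / S             # programmer's time, T = E/S
print(round(L, 4), round(E), round(T))   # about 0.0661, 2659 and 148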
(Refer Slide Time: 27:19)
In this class we have seen Halstead's software science, which is an analytical estimation method. Halstead's theory tries to provide a formal definition and quantification of qualitative attributes such as program complexity, ease of understanding, and level of abstraction, based on some low-level parameters such as the number of operands and the number of operators appearing in the program.
(Refer Slide Time: 28:40)
We have taken the references from these two books.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 25
Project Estimation Techniques (Contd.)
Good afternoon. Today we will see one of the important estimation aspects of projects, namely staffing level estimation.
Here we will see two important models for staffing level estimation: Putnam's model and Jensen's model.
(Refer Slide Time: 00:33)
So, let us look at project duration and staffing. As well as effort, managers must estimate the calendar time required to complete a project and when the staff will be required. Effort estimation is certainly very important, but estimating the calendar time required to complete a project, as well as how many staff will be required and when they will be required, are equally important aspects of project estimation.
The calendar time for a given project can be estimated using a COCOMO 2 formula. Here the development time, that is the calendar time, is given by
TDEV = 3 * (PM)^(0.33 + 0.2*(B - 1.01))
This formula comes from the COCOMO 2 estimation technique; PM represents the effort in person-months and B is the exponent computed earlier (B is 1 for the early prototyping model).
Note that the time required is independent of the number of persons working on the project; the development time does not depend on how many people are working on the project.
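For illustration only, a short sketch of this calendar-time formula with hypothetical values for the effort and the exponent:

PM = 60     # estimated effort in person-months (hypothetical)
B = 1.10    # exponent from the COCOMO 2 effort computation (hypothetical)
TDEV = 3 * PM ** (0.33 + 0.2 * (B - 1.01))
print(round(TDEV, 1))   # roughly 12.5 calendar months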
(Refer Slide Time: 02:05)
Now let us see the staff requirement: how many staff do we generally require for a given project? The staff required cannot be computed simply by dividing the total effort by the required schedule, because the number of people working on a project varies across the different phases of the project.
You might require fewer people during requirements analysis, more people during design, and during coding the number may again increase or decrease depending on the nature of the project, while during testing you may again require more people. So the number of people working on a project varies; it depends on the phase the project is currently in.
Also, the more people who work on a project, the more total effort is usually required; this is quite obvious. And a very rapid build-up of people often correlates with schedule slippage.
(Refer Slide Time: 03:10)
Now, let us see how to estimate the staffing level. The number of personnel required during any development project is not constant; as already mentioned, it depends on the phase. Fewer people may be needed at the initial stage, during requirements analysis and specification; more people will be required during design; and the largest number of people may be required during testing. That is why the staffing level is not constant, it varies.
Norden, in 1958, analyzed many R and D projects and observed that the Rayleigh curve represents the number of full-time personnel required at any time; that is, the number of full-time personnel required at any time on a project can be represented by a Rayleigh curve.
(Refer Slide Time: 04:12)
So, what is a Rayleigh curve? You may have seen it earlier; this is the form of a Rayleigh curve. On the x axis we have taken time and on the y axis the effort. You can see that as time moves on the effort gradually increases, reaches a peak, and after that, even as time increases, the effort falls off and decreases.
(Refer Slide Time: 05:09)
Norden was one of the first persons to investigate the staffing pattern. He considered general R and D kinds of projects and concluded that the staffing pattern for any R and D project can be approximated by the Rayleigh distribution curve.
So, having observed several projects, he concluded that the number of people required for developing an R and D project can be approximated by a Rayleigh distribution curve as shown below; the Rayleigh distribution curve is specified by two parameters, td and K.
(Refer Slide Time: 06:04)
Then came Putnam, who also worked in this area and studied the problem of staffing of software projects. In 1976, Putnam investigated many projects and studied the staffing problem for software projects in particular, since Norden had studied mainly R and D projects. He observed that the level of effort required in software development projects has an envelope similar to the one found for R and D projects. He also found that the Rayleigh-Norden curve relates the number of delivered lines of code to the effort and the development time.
(Refer Slide Time: 07:26)
That is, the Rayleigh-Norden curve also relates the number of delivered lines of code to the effort and the development time. Putnam analyzed a large number of army projects and derived the following expression for L:
L = Ck * K^(1/3) * td^(4/3)
where K is the effort expended, L is the size in KLOC, td is the time to develop the software, and Ck is a constant known as the state of technology constant.
This Ck, the technology constant, reflects the various factors that affect programmer productivity; as we have seen earlier in the COCOMO model, there are many factors that affect programmer productivity, and Ck captures their combined effect.
(Refer Slide Time: 08:25)
Putnam's work established the following. After conducting several experiments he observed that the value of Ck is about 2 for a very poor development environment, where no proper software engineering methodology is used, documentation is poor, and reviews are absent or very weak. For organizations with a good software development environment, where software engineering principles and good programming languages are used, the value of Ck may be set to 8. And if the environment is excellent, using current software engineering practices, modern programming practices, and automated CASE tools, the value of Ck may be set to 11.
Only a very small number of engineers are needed at the beginning of the project, for carrying out planning and specification.
We can see that at the beginning only a small amount of manpower is required, for carrying out planning and requirements specification and analysis activities. As the project progresses, more detailed work is required, so the number of engineers slowly increases and reaches a peak. Now, let us see which phase of software development this peak corresponds to.
Putnam observed that the time at which the Rayleigh curve reaches its maximum value roughly corresponds to system testing and product release. Initially, as I said, we are in project planning and requirements analysis and specification; then design comes, then coding. The peak, at time td, corresponds to the testing phase, specifically the system testing phase and product release.
After system testing is over, the number of project staff falls gradually until product installation and delivery. The point td corresponds to the system testing phase; once the testing phase is over, the system is installed, and the manpower required gradually falls until product installation and delivery are complete.
(Refer Slide Time: 11:46)
From the Rayleigh curve it is also observed that approximately 40 percent of the area under the curve lies to the left of td and 60 percent lies to the right of td.
Now, what can Putnam's model be used to estimate? Putnam's model can be used to estimate the lines of code, represented as Ss. It can also be used to estimate the person-years invested, K, and the time to develop, td, given a constant called the technology constant, Ck.
Putnam has given these formulas. The lines of code can be computed using
Ss = Ck * K^(1/3) * td^(4/3)
where the meanings of the terms are as explained earlier. From this formula you can find the value of K, that is, the person-years invested (the effort):
K = (Ss / (Ck * td^(4/3)))^3 = Ss^3 / (Ck^3 * td^4)
Also, from the same equation you can derive the value of td:
td = (Ss / (Ck * K^(1/3)))^(3/4)
So, there are four quantities: Ss, Ck, K, and td. If Ck, K, and td are given, you can easily estimate the lines of code Ss. Similarly, if Ss, Ck, and td are given, you can easily estimate the effort K; and if Ss, Ck, and K are given, you can easily estimate the time to develop, td.
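These rearrangements can be written directly as code. This is only a sketch of the algebra above; the function names are hypothetical and follow the lecture's notation:

def size_loc(Ck, K, td):
    # Ss = Ck * K^(1/3) * td^(4/3)
    return Ck * K ** (1.0 / 3) * td ** (4.0 / 3)

def effort(Ss, Ck, td):
    # K = (Ss / (Ck * td^(4/3)))^3 = Ss^3 / (Ck^3 * td^4)
    return (Ss / (Ck * td ** (4.0 / 3))) ** 3

def dev_time(Ss, Ck, K):
    # td = (Ss / (Ck * K^(1/3)))^(3/4)
    return (Ss / (Ck * K ** (1.0 / 3))) ** (3.0 / 4)

Given any three of the four quantities, the remaining one follows from the corresponding function.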
Putnam adopted the Rayleigh-Norden curve and related the number of delivered lines of code to the effort and the development time; that is, from the Rayleigh-Norden curve he established a relation between the number of delivered lines of code, the effort, and the time required to develop the product. He also studied the effect of schedule compression on the effort, and hence on the cost:
pm_new = pm_org * (td_org / td_new)^4
Here td_org is the original schedule, the time to delivery you originally planned for the software, and td_new is the new time. Suppose you had estimated that 12 months would be required to develop the project.
Then the customer comes with some urgency and requests delivery within 6 months. Now td_org is 12, the original schedule, and the new schedule td_new is 6. pm_org is the effort required for the original schedule and pm_new is the effort that will be required for the new schedule, because under the new schedule the time has been reduced.
So, if the time is reduced, how much more effort you have to put in to develop the software is represented here: the new, revised effort pm_new equals the original effort multiplied by the fourth power of the original time to delivery divided by the new time.
Now let us see effort applied versus delivery time. According to the Putnam-Norden-Rayleigh curve, there is a non-linear relationship between the effort applied and the delivery time.
What this means is that the effort increases rapidly, not linearly, as the delivery time is reduced. If the delivery time is reduced, the effort does not just increase linearly, it increases very rapidly; I will show an example.
(Refer Slide Time: 17:02)
Let us see how. Initially, this is the optimal time; if this is the time to delivery, the optimal effort required is given at this point. Now suppose there is a compression and the delivery time is reduced to the theoretical limit, t theoretical; then the effort required increases tremendously.
And if you compress the time still further, beyond a certain point you are in an almost impossible region: you cannot get the work done, you cannot complete the project. So there is a certain point up to which you can reduce the time, up to which you can compress the schedule. Beyond that, no matter how many more personnel you hire, you may not be able to complete the project. Let us see.
(Refer Slide Time: 18:05)
Now, what is the effect of a schedule change on the cost? We have already seen Putnam's expression for L. From that equation you can find the value of K:
K = L^3 / (Ck^3 * td^4)
On simplifying, you get
K = C1 / td^4
where C1 is a constant equal to L^3 / Ck^3. For the same product size, L^3 / Ck^3 is constant, so finally K equals C1 divided by td to the power 4.
So, finally, K = C1 / td^4. Now take two schedules: td1, the original schedule you estimated, and td2, the earlier delivery time the customer is now requesting due to some urgency. You then get two values, K1 and K2, and taking the ratio
K1 / K2 = td2^4 / td1^4
This is just a simplification: I have evaluated the expression for the two times td1 and td2, obtained the two values K1 and K2, and divided K1 by K2, which simplifies to td2 to the power 4 divided by td1 to the power 4. What inference can we draw from this?
You can see that a relatively small compression in the delivery schedule can result in a substantial penalty in human effort: the effort varies as the fourth power of the schedule ratio, so if you reduce the delivery time, the effort you have to put in, and hence the cost, increases very rapidly. Even a small compression of the delivery schedule can thus result in a substantial penalty on the human effort.
You have to spend much more human effort and hence much more cost. It is also observed that benefits can be gained by using fewer people over a somewhat longer time span; if you can use only a few people for a longer period of time, the total effort, and hence the cost, will be lower.
Let us quickly take a small example. If the estimated development time is 1 year, and the product must instead be developed in 6 months, what is the consequence for the effort and the cost? The original development time was 1 year; now the customer requests the product in 6 months due to some emergency.
Let us see the effect on effort and cost by putting the values into the equation. Originally td1 is 1 year, that is, 12 months, and the new time td2 is 6 months. So the ratio is 12 by 6, which is 2, and 2 to the power 4 is 16. The effort, and hence the cost, will increase by 16 times if the development time is reduced to half. In other words, if the estimated development time is 1 year and the product is to be developed in 6 months, that is, the time is halved, the total effort and hence the cost increase by 2 to the power 4, which is 16 times. The relationship between the effort and the chronological delivery time is thus highly non-linear.
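A two-line check of this penalty, using the 12-month and 6-month figures from the example (the original effort value is hypothetical):

pm_org = 100.0            # original effort, hypothetical
td_org, td_new = 12, 6    # original and compressed schedules
pm_new = pm_org * (td_org / td_new) ** 4   # Putnam's compression relation
print(pm_new / pm_org)    # 16.0, i.e. the effort (and cost) grows 16 times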
Now let us take a small example and apply Putnam's model for estimating the effort. Given the size Ss = 100000 lines of code (LOC) and Ck = 10040, with td varying, we have to compute K for different values of td. Putting the values of Ss, Ck, and td into the equation shown earlier,
K = Ss^3 / (Ck^3 * td^4)
we can easily compute the effort K. For td = 1, putting Ss = 100000 and Ck = 10040 into the equation gives K of about 988 person-months. Similarly, for td = 1.5 the effort comes to roughly 195 person-months, and for td = 2 the effort is about 62 person-months. In this way, given values for the parameters Ss, Ck, and td, you can compute the effort required using Putnam's model.
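The same numbers can be checked in a few lines (a sketch only, reusing the rearranged effort formula):

Ss, Ck = 100000, 10040
for td in (1.0, 1.5, 2.0):
    K = Ss ** 3 / (Ck ** 3 * td ** 4)
    print(td, round(K))   # roughly 988, 195 and 62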
Now, let us quickly see the limitations of Putnam's model. Putnam's model indicates an extreme penalty for schedule compression, as we have just seen: if the schedule is compressed to half, the effort, and hence possibly the cost, increases by about 16 times. It also indicates an extreme reward for expanding the schedule: if you keep expanding the schedule, the predicted effort keeps dropping.
Putnam's estimation model works reasonably well for very large systems; if your project is very large, the estimates will be fairly accurate. But it seriously overestimates the effort for medium and small sized systems; for a small or medium scale system the estimates will not be accurate, they will be overestimates.
(Refer Slide Time: 26:16)
Here you can see that there is a limit beyond which the schedule of a software project cannot be reduced by buying any more personnel or equipment; I have already shown this in the graph. This is the optimal point of time and the optimal effort. You can reduce the schedule, theoretically, up to a certain point, with the effort increasing tremendously; but if you keep on reducing it, there is a point beyond which you cannot complete the project, you may not be successful.
So up to what point can you go on compressing the schedule before you may no longer succeed? Boehm studied many applications and concluded the following: there is a limit beyond which the schedule of a software project cannot be reduced by buying any more personnel or equipment.
This is because some of the activities are sequential, not parallel; however many personnel you hire, since these activities are sequential, the extra people will simply sit idle. What is this limit? The limit occurs at roughly 75 percent of the nominal time estimate; if you compress the schedule beyond this, your project may not succeed.
(Refer Slide Time: 27:59)
(Refer Slide Time: 29:30)
We have seen that one of the major drawbacks of Putnam's model is that it does not work well for medium and small sized systems. To overcome this, Jensen proposed a model known as Jensen's model. Jensen's model is very similar to Putnam's model, but it attempts to soften the effect of schedule compression on effort.
In Putnam's model the effect of schedule compression on effort is very harsh, going as the fourth power, which is why halving the schedule increases the effort by about 16 times. Because Jensen softens the effect of schedule compression on effort, his model becomes applicable to smaller and medium sized projects as well.
(Refer Slide Time: 30:35)
So, Jensen's model is much less sensitive to schedule compression than Putnam's, which makes it applicable to smaller and medium sized projects. According to Jensen's model, the size can be specified as
Ss = Cte * td * K^(1/2)
Here Ss is the size, Cte is the effective technology constant, td is the time to develop the software, and K is the effort needed to develop the software. You can see that this formula is well suited to small and medium sized problems or projects.
From this equation, similarly, for two schedules, the original and the revised, you can find the two values K1 and K2, where K represents the effort. Dividing K1 by K2 shows the effect of schedule compression on effort.
You can see that the ratio is now of the order of the square, whereas previously it was of the order of the fourth power:
K1 / K2 = td2^2 / td1^2
So here the effort does not increase as the fourth power of the schedule ratio; it increases only as the square.
Now, if you take the same problem we applied to Putnam's model, it works out like this. If the estimated development time is 1 year and the product is to be developed in half that time, 6 months, the ratio 12 by 6 is 2, and 2 squared is 4. So the effort and cost will increase 4 times, whereas in Putnam's model they increased 16 times. This is much less than in Putnam's model. These are the advantages of Jensen's model over Putnam's model: if the schedule is reduced, the effort does not increase as rapidly, and the model can be applied to small and medium sized projects.
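A small side-by-side sketch of the two penalties for the same halving of the schedule:

td_org, td_new = 12, 6
ratio = td_org / td_new
putnam_factor = ratio ** 4   # effort grows as the fourth power under Putnam's model
jensen_factor = ratio ** 2   # effort grows only as the square under Jensen's model
print(putnam_factor, jensen_factor)   # 16.0 versus 4.0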
(Refer Slide Time: 33:12)
In this class we have seen the details of the Rayleigh curve and how it can be applied to software projects. We have discussed Putnam's model for staffing level estimation, and we have also discussed Jensen's model for staffing level estimation. We have presented the effect of a schedule change on the effort, and hence on the cost, using both Putnam's model and Jensen's model.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 26
Project Scheduling
Welcome to this lecture. In this lecture, we will start to discuss about Project Scheduling.
Project scheduling is not only confined to software projects, but to all types of projects.
And, naturally, this is a very old technique, but there have been a lot of developments recently. Project scheduling techniques date back nearly a century. Over the last century many techniques have been proposed, and we will see that several kinds of charts have been developed; a recent phenomenon is the use of computers in the scheduling process.
The charts are automatically updated and the different parameters are computed, so what was once computed manually and very painstakingly can now be done at the press of a switch. But the main ideas of scheduling are still very important. Even though the project manager has a tool at his or her disposal, he or she has to know the nitty-gritty of project scheduling to be able to schedule a project effectively.
Before we look at project scheduling, let us examine the overall project steps. Once the project has been selected, we need to identify the project objectives and the infrastructure; that is, what the project needs to do and also what infrastructure it needs to use, what the team and team organisation will be, the hardware, and so on. Once that is done, we need to analyse the project characteristics: what risks are involved, what resources are required, what life cycle model will be followed based on these factors, and so on.
And, once that has been done we come to estimation and scheduling. For that we need to
first identify all the activities, the different deliverables which we refer to as products,
the different deliverables to the customer and what are the activities to be undertaken to
produce those deliverables. And, we need to do a finer level risk analysis based on every
activity.
Once we do the risk analysis, the activities may change; their effort, duration and so on may change. Once we complete this process we allocate resources and we review and publicize the plan. This forms the planning process, and then comes execution. During execution the activities do change: we notice much finer level activities and therefore do a lower level of planning. We find that there are many new sub-activities and so on, and then we do a detailed lower level plan, repeating the same steps.
Let us look at how we identify the activities and then the precedence relationships and the schedule; that is what goes by the name project scheduling.
(Refer Slide Time: 04:33)
In the project scheduling process, we should have defined the objectives or the scope.
And, the first step here once having developed the project objective is to identify the
finer level activities. From the objective we get more detailed activities or finer level
activities which we represent in the form of a work breakdown structure.
Once we identify the activities, we identify the precedence relationships between them, that is, which activity must complete before another activity can be taken up. We then need to estimate the time duration of each activity, determine the resources needed for each activity, and draw a PERT diagram, which is the focus of this lecture.
(Refer Slide Time: 05:40)
And, once we draw the PERT diagram, we should be able to compute the earliest – latest
starting times, earliest – latest completion times, slack times, critical path and so on.
And, then once these are complete the project is ready for entering the execution phase.
In the execution phase, as we mentioned we are interested in monitoring and control. In
monitoring and control we are interested if let us say a developer becomes unavailable
what will be the impact on the schedule; if a new developer joins how will the schedule
change and so on. And, that becomes easier to do if we have a GANTT chart.
The project manager initially constructs a PERT chart, where the overall schedule is
drawn and various aspects of the project such as the earliest start and completion time of
the tasks, the slack times, the critical paths etcetera are identified. We will see what the
specific terms mean as we proceed in this lecture.
Once having done this overall planning process, the project enters the execution phase.
In the execution phase, we need to do monitoring and control. We need to identify what
if a developer becomes unavailable or additional developers become available and so on,
and that becomes easier to do with the help of a GANTT chart, we will look at the
GANTT chart.
If necessary, using a GANTT chart we can change the developers or hardware. We may put more developers on a task to make it complete if the task is getting delayed, and we can identify the impact of that on the schedule using the GANTT chart. And as we do this, we need to continuously monitor and revise the estimate for the entire project duration.
Basic to project scheduling is the activity, because we schedule an activity and its sub-activities. Let us be clear about what an activity is. An activity is something by which the team achieves or does some work, and naturally each activity has a duration: over that duration a team member will be doing something. It is also defined by a start and end point; it starts at a certain time and ends at a certain time, and in between it has its duration. Of course, the difference between the start and end times must be at least as large as the duration.
These are the characteristics of an activity. An activity is something using which a team member, or a few members, get some important work done. Activities are associated with a duration and identified by a start and end point. Some resources must be assigned to an activity to carry it out. An activity may be dependent on other activities, and this we call the precedence relation: if an activity A can start only after some activity B has completed, we say that B precedes A, or that there is a precedence relationship between B and A.
One of the most fundamental things the project manager does, once the objectives have been identified, is to identify the activities from those objectives. There are two main approaches the project manager uses. One is known as the work-based approach: the project manager identifies the main work that needs to be done to meet the objectives, and from the main work identifies the sub-work that needs to be done.
So basically we identify the activities, break the activities into sub-activities and so on, and this is represented in the form of a work breakdown structure: the main work is broken down into sub-work, the sub-work into sub-sub-work and so on, down to an atomic level. Typically, an atomic level activity is one that a single developer can do in, let us say, a week or so.
It is not a good idea to use a work breakdown structure to break the activities down to the level of hours or minutes, because it would become too much of an overhead for the project manager to develop the schedule minute-wise, hour-wise or day-wise. Typically, the schedule is done in terms of 1 or 2 weeks: the developer is assigned work for 1 or 2 weeks, and the project manager then monitors whether the work is completing on time or is delayed.
We can also argue that a work granularity of several months is not good either. Suppose we have a work breakdown structure where the leaf level work is 2 months. The main problem with this is that the project manager loses control of the project: the fact that the project, or some activity, is getting delayed becomes known only after 2 months, and by then it is too late to control the project and bring it back on schedule. Therefore, the project manager loses control of the project.
To repeat: in a work breakdown structure, the important work items are identified and broken down into sub-work until a granularity of a week or two; those form the atomic level, or leaf level, activities in the work breakdown tree. We will look at this in the next slide.
The second approach that the project manager uses to identify the activities is the deliverable-based, or product-based, approach. In this approach the project manager identifies the deliverables, or products, to be delivered to the customer; first makes a list of the deliverables; then identifies the order in which these have to be created; and for each deliverable identifies the activities that need to be performed to create it. Let us look at some examples of the work-based and deliverable-based approaches.
(Refer Slide Time: 15:33)
In the work-based approach, let us say we need to complete a project. The project manager identifies the main activities that need to be performed: requirement specification, design, test plan, code, testing, coding of two different modules A and B, and so on. This is a hierarchical representation of the important activities and the sub-activities, and at the leaf level each should take about a week or two to complete.
At this point the chart just shows the main activities and their sub-activities; the project manager has not yet accurately determined the duration of these activities, that is done later. Neither does it show any precedence relationships. For example, that the test plan should be created after the code, or that the code must be complete before testing starts, is not represented in this diagram.
The main objective of this diagram is to identify the important activities and then
decomposition of these activities into sub activities till the granularity of the leaf level
activities is about a week or two.
(Refer Slide Time: 17:16)
This is another example, a hierarchical chart for an MIS project, which is represented at level 1. At level 2 are the important activities: feasibility study, requirements analysis, design, programming, testing and so on. For each of these there are sub-activities: for the feasibility study, getting the overall requirements, performing risk analysis, performing cost-benefit analysis and so on; for requirements analysis, requirements gathering, analysis of the gathered requirements, documentation in the form of the SRS document and so on. These form the level 3 activities.
If any of these takes more than a week or two, it is decomposed to a finer level.
(Refer Slide Time: 18:20)
Project managers also use a hybrid approach, which is based both on the activities that need to be carried out and on the deliverables that have to be produced. Let us look at this example where a project has certain deliverables, represented here: the system must be developed and installed, and the different software components must be delivered. So this is a system comprising both hardware and software.
The system must be installed at the customer site; the software and the hardware, let us say, already exist, or perhaps there is an activity here to develop the hardware. Then the software components have to be developed, the user manual has to be written, and the training programme has to be carried out. For the system installation we need to identify the requirements, analyse them, do the outline design and the detailed design, integrate the system, test it, deliver it, and support user testing.
For the software components to be developed, we need to get the requirements, do the design and detailed design, code and test. For the user manual we analyse the requirements, because the manual is written based on the requirements, design the manual, write the text, capture screenshots and insert them, do the page layout, and print the manual. For the training programme we again analyse the requirements, design the course, write the manual, print the handouts, and finally deliver the course.
So this is the hybrid approach, which has both kinds of nodes: some of the internal nodes, or rectangles, are actually deliverables, and from there we have the activities, which can be further split into sub-activities and so on. When you are part of a project, or you are the project manager, do not be surprised to see a work breakdown structure where every node is an activity, as well as the hybrid kind where there are both activities and deliverables in the work breakdown structure.
Once the work breakdown structure has been developed, we need to represent it as a task network, also called an activity network. A task network represents not only the important tasks present in the work breakdown structure but also the dependency relations among the different tasks.
Not only that, we also need to identify the task durations: we label the nodes of the task network not just with the name of the task but also with its duration, as you can see, and we represent the dependencies among the tasks. Naturally, we can then identify which tasks need to be done one after the other and which tasks can be done in parallel. For example, the task over here can proceed in parallel with these two tasks; different developers might undertake these three tasks concurrently.
One important characteristic of a task network is the critical path. You can see that there are many paths from the start node to the end node: a path over here, a path over here, and so on. For each path, when we sum up the durations of its tasks we get the total time needed to complete those tasks; because the dependency arrows force them to be done one after another, the total duration along the path is the sum of the individual durations.
The path having the longest duration from start to end is called the critical path. A path is any single path from the start node to the end node; the critical path is the one whose duration is the longest, and that duration is the minimum time in which the project can complete. Along one path it may be, let us say, 20, along another 25, along another 23; then the path with 25 is the critical path and 25 is the minimum duration for the project. The project may exceed 25 if some other tasks get delayed, but it cannot be done any earlier than 25.
This is an example of a task network; we write the names of the tasks and their durations here. It would be possible to compute the critical path by looking at the various paths, adding up the durations, and finding the one with the maximum total. But the problem with this manual approach is that with many paths we may miss some or make mistakes. Therefore, systematic techniques have been developed to compute the critical path, because it is a very important characteristic of a project.
The critical path is important because the project manager knows that it gives the minimum duration of the project, and the tasks on the critical path are very important: if they get delayed, the project gets delayed. The tasks not on the critical path may get delayed without necessarily affecting the project duration. Another question: is it possible to have two critical paths in a task network? The answer is yes. It can happen that two of the longest paths each have, say, 25 as their duration, or three paths may each have 25, or each have 30; then there are two or three critical paths.
So it is possible to have multiple critical paths, and it is very important to identify them from the project manager's perspective, because he needs to monitor the tasks on these paths very carefully. For that purpose, systematic techniques have been developed over the last century.
Here is an example. If we compute the various paths, the one shown in red, A-B-C-E-K-L-M-N, is the critical path; you can compute the duration along it, 3 plus 3 plus 7 plus 8 and so on, and it will be the longest compared to the other paths present here.
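To make the idea concrete, here is a minimal sketch of finding the critical path as the longest start-to-end path in a task network. The tiny network and its durations are hypothetical, not the one on the slide:

# hypothetical task durations
duration = {"start": 0, "A": 3, "B": 5, "C": 2, "D": 4, "end": 0}
# successors of each task (the precedence arrows)
succ = {"start": ["A", "B"], "A": ["C"], "B": ["D"],
        "C": ["end"], "D": ["end"], "end": []}

def longest_path(node):
    # returns (total duration, path) of the longest path from node to the end
    if not succ[node]:
        return duration[node], [node]
    best = max(longest_path(s) for s in succ[node])
    return duration[node] + best[0], [node] + best[1]

length, path = longest_path("start")
print(length, path)   # 9 ['start', 'B', 'D', 'end'] -- the critical path

For large networks one would memoise the recursion, but the principle is simply to sum the durations along every path and keep the longest one.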
(Refer Slide Time: 27:05)
As I was saying, over the last century many techniques have been developed. Two important ones are the PERT chart and CPM; both are task network based techniques. We will see that at one time these were two distinct techniques, each with its own uses, but they have now merged into a single technique called PERT/CPM, which is available in many automated tools. We will look at these developments, and at how they are used to compute various characteristics of a project, in the next lecture.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 27
Project Scheduling Using PERT/CPM
Welcome, to this lecture. In this lecture, we will discuss about Project Scheduling Using
PERT and CPM. In the last lecture, we had looked at the various issues in project
scheduling; we had discussed about breaking down tasks in the project using a work
breakdown structure and then we had looked at a task network.
Over the years, that is over the last century or so, project scheduling techniques have been used for various types of projects, and the different ideas proposed regarding project scheduling have now merged into a very strong and useful technique, PERT/CPM. The focus of this lecture is to discuss how to use PERT and CPM for project scheduling. Let us proceed from this point.
If you remember, in the last lecture we discussed task networks. In a task network we represent the different tasks in the project and also the precedence relationships, that is, which task must complete before another task can start. Over the last century or so, two main types of task network have gained popularity; one is the activity-on-node diagram.
In the activity-on-node diagram, as the name says, the network consists of nodes and the lines connecting them, and the nodes are labeled with the activity names. Each node represents an activity; that is why it is called an activity-on-node diagram.
On the other hand, we have another variant of the task network, the activity-on-arrow diagram. Here the nodes represent the end or start of activities, while the activities themselves are represented on the arrows, or edges, connecting the nodes. The activity-on-arrow diagram has some issues: it is more complicated than the activity-on-node diagram. The activity-on-node diagram is very intuitive, whereas in the activity-on-arrow diagram nodes represent the start and end of activities and the activities are represented by edges; on that count a few complications arise, and we might have to insert dummy activities and so on to model a project.
Possibly for that reason, over the years the activity-on-node diagram has become very popular and is used overwhelmingly by project managers; it is very simple, and the different CASE tools, the computer tools for project management that are available, also largely use the activity-on-node diagram. In this lecture we will focus on the activity-on-node diagram, but as background knowledge we will keep in mind that there is a variant of the task network, the activity-on-arrow diagram.
In an activity-on-node diagram we might have more than a single start node, that is, a few activities with no dependencies that can start at any time; similarly, there may be many end nodes. But it becomes a bit complicated to carry out the various computations on an activity-on-node diagram that has multiple start nodes and multiple end nodes, and complicated to apply the various project scheduling techniques.
Therefore, it is recommended that we use a dummy start node and a dummy end node, so that there is a single start node and a single end node and the application of the techniques becomes intuitive and easy. We need to keep in mind that whenever there are multiple start nodes, we add a dummy start node so that there is a single start node, and similarly for the end node.
In the activity-on-node diagram, as the name says, the nodes represent the activities. The nodes are labeled with the activities, and we also mark on each node the duration the activity will require; arrows indicate the precedence of one activity over another.
In the activity-on-arrow diagram, the nodes indicate the start and end events of activities, while the activities themselves, along with their durations, are marked on the arrows. Here also a single start node and a single end node make things easier, as already mentioned. Unlike the activity-on-node diagram, where nodes represent the activities, in the activity-on-arrow diagram the arrows represent the activities: the arrows are labeled with the activity names and durations, and the nodes indicate the beginning and end of activities.
Now, let us try to draw an activity-on-node task network diagram. From a work breakdown structure we have identified the activities, and we have also identified the predecessors of each activity: A has no predecessor; B has predecessor A; C also has predecessor A; D has B as its predecessor; E also has B as its predecessor; F has C as its predecessor; G has D as its predecessor; and H has two predecessors, E and F.
So, how do we draw this? Of course, we can straight away draw A, because A has no predecessor. Then B has A as its predecessor, so we draw B after A; C also has A as its predecessor, so we draw C after A as well.
D has B as its predecessor, so we draw D after B; E also has B as its predecessor, so we draw E after B. F has C as its predecessor, so we draw F after C; G has D as its predecessor, so we draw G after D; and H has both E and F as its predecessors, so we draw H after both E and F.
And since there are two end nodes here, we can add a single end node, and this becomes the
task network, the activity-on-node diagram. If I redraw it nicely, we have A; B with A as
predecessor; C with A as predecessor; H with both E and F as predecessors; and Z as the
end node. It is the same diagram, just drawn neatly. As you can see, it is not very difficult,
actually very intuitive, to draw an activity-on-node task network.
I am sure that given any other example you can draw the activity-on-node network diagram.
The activity-on-arrow diagram would have more complications, and we are not going to
discuss it, because activity-on-node diagrams are overwhelmingly the ones being used.
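Such a predecessor table is also the natural starting point if you want to hand the network to
a scheduling routine or a drawing tool. Here is a minimal sketch (in Python, using the labels
above; 'Z' is the dummy end node mentioned in the lecture) of how the same information
might be written down in code:

    predecessors = {
        'A': [],          # no predecessor, so A is a start activity
        'B': ['A'],
        'C': ['A'],
        'D': ['B'],
        'E': ['B'],
        'F': ['C'],
        'G': ['D'],
        'H': ['E', 'F'],
        'Z': ['G', 'H'],  # dummy end node joining the two end activities G and H
    }

    # Each arrow of the activity-on-node diagram is simply a (predecessor, activity) pair.
    arrows = [(p, a) for a, preds in predecessors.items() for p in preds]
    print(arrows)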
(Refer Slide Time: 12:40)
Now, the techniques PERT and CPM: these are two task-network-based techniques which,
as we were saying, have been developed over the last century or so. They are two different
techniques; PERT stands for Program Evaluation and Review Technique and CPM stands
for Critical Path Method. PERT initially used the activity-on-arrow diagram, and because
these techniques have a long history, over the years the activity-on-node diagram was also
adopted.
If we look at the history, both of these were developed based on the Gantt chart. The Gantt
chart is an older diagram and is used for project monitoring, control, resource allocation,
etcetera. Later in this course we will discuss the Gantt chart, but project scheduling is done
using the PERT and CPM diagrams.
These were developed in the 1950s, and the Gantt chart came much earlier, maybe in the
1920s and 30s. CPM was developed from the Gantt chart and was used by the company
DuPont for managing maintenance projects in its chemical plants. PERT was used by the
US Navy for the development of the Polaris missile.
I just want to mention here that CPM was used by DuPont for managing maintenance
activities, and for maintenance activities we know what the work involves; the activities and
their durations can be known quite precisely. Therefore, here the task durations are more
deterministic; whereas the other one, PERT, was developed by the US Navy for the
development of the Polaris missile.
And that was a development project, unlike the maintenance of chemical plants which is
routine work. PERT was meant for development projects, where there are many more
uncertainties regarding the tasks. For example, the task start times, durations and end times
are all random because there can be delays; the durations etcetera are not known
deterministically, unlike in maintenance projects, and therefore PERT uses statistical
techniques for the task durations.
Both these techniques consider the precedence relationships among the tasks, just like any
task network, which represent the interdependencies among the tasks. And as I mentioned,
CPM uses a deterministic estimate of the activity time and PERT uses a statistical estimate
of the activity time.
Let us look at our first diagram. This is the simplest PERT diagram, and here we have used
deterministic times for design, test plan and so on; the test plan can start only after the
design is complete. The design is expected to take 15 weeks, the test plan is expected to take
5 weeks and testing is expected to take 25 weeks. There are two main modules, A and B;
after the design, the test plan can be prepared, or the coding of module A can be done, which
will take 18 weeks, or the coding of B can be done, which takes 23 weeks.
There is parallelism possible here; that is, these three tasks can be carried out in parallel if
sufficient resources exist. But it may so happen that we have only one coder and one tester
available. In that case all three cannot proceed in parallel; the test plan can be prepared by
the tester, and the coder can take up one of the coding tasks and, after completing it, do the
other.
And then, after all of them complete, the test plan and the coding of A and B, the actual
testing activity can be done. But in the PERT diagram we do not show the constraint of
whether resources are available to carry all of these out in parallel and so on. That is done
later in project planning, where the project manager uses a Gantt chart to do the detailed
scheduling and the allocation of resources to the tasks.
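Assuming, as just described, that testing can begin only after the test plan and the coding of
both A and B are complete, and that enough people are available to run those three activities
in parallel, the duration implied by this small network works out to

    15 + max(5, 18, 23) + 25 = 15 + 23 + 25 = 63 weeks.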
The Gantt chart is also used for project monitoring and control, where the schedule needs to
be refined if any variations are observed. For example, if the design completed in 13 weeks
instead of 15, what is the impact on the other parts of the project? We need to redo the
schedule. What if the test plan took 10 weeks instead of 5, what will be the impact on the
schedule? These things are handled using the Gantt chart; resource allocation, monitoring
and control are done with the Gantt chart, and we will discuss that later in this lecture series.
This kind of network, as we said, was used for the development of the Polaris missile in the
1950s. On each node we write the name of the task; we may use just a label instead of the
task name. If there are too many tasks it becomes unwieldy to write the task names like this;
we might use A, B, C, D etcetera as labels and keep the task names for those labels in a
table.
In the PERT chart the tasks have uncertainty, and therefore multiple time estimates are used
for each task. This allows for variation in the activity times, that is, in the completion times
of a task, its preceding tasks and so on. This can be accommodated and represented in the
PERT chart, and statistical inferences can be drawn from it. The activity times are assumed
to be random with some probability distribution.
As we were mentioning, these techniques have been in use over a large number of years; the
technique has developed from its start around 1950, that is about 70 years back, and
originally it used an activity-on-arrow network. But, as I was mentioning, now it is mostly
used as an activity-on-node network.
For each activity in PERT, 3 estimates are provided. One is the most likely time; this is the
project manager's estimate of the time the activity is most likely to take. The optimistic time
is the shortest possible time in which the activity can complete. The pessimistic time is the
longest possible time, if things do not work out: the developer became unwell, the resources
broke down, that is, the computer hardware had problems, and so on.
The project manager thus provides 3 time estimates for each activity: most likely, optimistic
and pessimistic. Based on these, a probabilistic inference about the project duration is made:
what is the likely time the project will take to complete, what is the optimistic time and what
is the pessimistic time.
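For reference, the standard PERT convention for combining these three estimates, writing a
for the optimistic, m for the most likely and b for the pessimistic time (the same symbols
used later in this course), is

    expected time t_e = (a + 4m + b) / 6,    standard deviation = (b - a) / 6.

This is how a single time per activity is obtained for the schedule computations while still
reflecting the uncertainty in the estimates.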
(Refer Slide Time: 23:27)
The CPM, the Critical Path Method, as we were mentioning, was developed for managing
maintenance projects in the chemical industry at DuPont. Maintenance there is a complex
activity; there are many tasks, but the tasks are routine and therefore the task durations are
deterministic. Hence CPM did not need to model variation in the activity times, and the
times estimated for the activities are constant times.
The activities are represented as rectangles or circles. If you look at books and so on, as I
was mentioning, these techniques have been developed over a large number of years, and
there has been an evolution of the notation; you will find that some places use circles and
some places use rectangles, but in our lectures we will use rectangles for the PERT/CPM
technique.
One important thing about CPM is that it allowed the critical path to be computed; the
critical path indicates the project duration, and CPM gave a method for using the task
network diagram to compute the critical path. But nowadays we do the same thing with
PERT as well; we can identify the critical paths there and compute them from the diagram.
(Refer Slide Time: 25:28)
Over the years, even though these were two very separate techniques for project scheduling,
both techniques have merged into a single technique. Both are graphical techniques, they are
similar, and the precedence relations, the sequence of activities, are all represented in them.
Both are used to estimate the project duration; one uses statistical estimates, the other
deterministic ones.
Using both, we are now able to identify the critical activities. The critical activities are the
ones on the critical path of the network, and these are the activities which cannot be delayed
without delaying the project. If a critical activity gets delayed, the project duration will be
affected, and therefore the project manager has to be extra careful about the critical
activities.
And not only that, the technique also indicates to the project manager the amount of slack
associated with the non-critical activities. Some activities, even though they are not critical,
have only a very small slack associated with them; that is, they can be delayed only a little
bit without affecting the project schedule. On the other hand, there may be some activities
which have a large slack or float time; their completion times can vary significantly, they
may get delayed and so on, without really affecting the project completion time.
This is a very important hint to the project manager: he can redistribute the resources, taking
manpower out of a task which has a lot of slack associated with it and putting that manpower
on a critical task, so that the critical task gets completed. The task which has slack, even if it
gets delayed a little because its manpower was withdrawn and redeployed on the critical
activity, does not hurt the overall project; the project will be managed well and will complete
on time.
Therefore, for every project manager it is very important to know what the critical paths are,
which activities on the critical path cannot get delayed without affecting the project
completion time, which are the non-critical paths, and which tasks have the maximum slack,
so that he has a little leeway in redistributing the resources if required, so that the critical
activities at least proceed without delay and the project schedule can be met.
We are almost at the end of this lecture. In the next lecture, we will discuss how to use the
PERT/CPM network to compute, for each activity, the start times, the end times, and the
slack or float time that is available, and also how to identify the critical paths.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 28
Project Scheduling Using PERT/CPM (Contd.)
In the last lecture, we discussed PERT and CPM, some very basic ideas. We discussed their
origin and the main components of the PERT/CPM diagram. We said that these were two
independent techniques used for very different purposes. PERT was used in the development
of the Polaris missile, a challenging project where there was a lot of uncertainty regarding
task completion times; whereas CPM was used by DuPont for its chemical plant maintenance
activities, which were more or less routine; even though there were many activities and it
was a complex set of activities, the activity durations were deterministic.
And the Critical Path Method allowed the critical path to be computed; it gave a technique,
using the task network, for how to compute the critical path. But over the years both
techniques have evolved and both are now equally capable; many of the things that were
there in PERT are found in CPM and vice versa, and they have been merged into a single
technique. We will discuss how these techniques are used in project scheduling, because this
is an important tool for the project manager. Using PERT/CPM, the project manager can
determine the completion time of the project and, for each activity, compute the slack time,
the critical path and so on; that is the focus of this lecture.
As we were discussing, though CPM and PERT were two different techniques, they are now
merged into a single technique, and usually in books and literature this is referred to as
CPM/PERT.
This technique is very useful to the project manager: looking at the diagram, one can easily
identify the precedences, that is, which tasks need to complete before a given task can start.
These techniques have become sophisticated over the years, and many tools are available.
When the projects are complicated, with many activities and various types of dependencies,
these techniques become useful; if the project is very simple, with only 3 or 4 sequential
activities and no other precedence constraints, they may be overkill.
But typical IT projects are complicated. There are hundreds of activities for large projects,
and two or three dozen activities even for moderate-sized projects, which are obtained from
the work breakdown structure. The project manager uses a tool; we will discuss a few tools
which you can easily use based on the concepts discussed here.
Those tools allow us to draw the task network diagram, and they automatically compute the
critical path; they show the critical activities, the slack times and so on. But in this lecture we
will discuss the main concepts, so that we can effortlessly use the tools.
Once we develop the PERT/CPM network, we can answer what the project completion date
will be. At any time we can also look at the PERT/CPM network and determine whether the
project is on schedule and within budget. And because statistical parameters for the activities
can be given, as project managers we can determine the probability of completing the project
by a certain date, what the critical activities are, and what the impact of resource
unavailability or availability is; that is, if some developer becomes unavailable, what will be
the impact, and so on.
(Refer Slide Time: 05:39)
PERT/CPM is used in the early phases of project planning; the plans are more generic in
nature, we do not really represent the exact start dates, and we do not consider intervening
holidays, off days, allocation of resources to the tasks and so on.
We just develop a very high-level plan; a lower-level plan is developed later using a Gantt
chart. To use PERT/CPM, we should have identified the activities using the work breakdown
structure and determined the sequence in which these activities are to be carried out; based
on these two, that is, the activities and their precedence relations, we can create the task
network.
We also need to determine the activity times; these are basically estimates. Once we
represent all of this, our PERT/CPM diagram is ready for use, and based on this diagram we
can determine the earliest and latest start times, the earliest and latest finish times, and also
the slack or float time.
We will first try to understand the earliest and latest start times; even though these are
intuitive, with an example we will make out what we mean by the earliest and latest start
times, the earliest and latest finish times, and the slack.
(Refer Slide Time: 07:39)
But before that, we will discuss some guidelines for developing the task network. As we have
been saying, the project scheduling techniques using PERT/CPM can be easily applied when
we have a single start and a single end node; with multiple start and multiple end nodes it
becomes complicated.
Therefore, we require one single start node and one single end node; if necessary we create
dummy start and end nodes. The other rule that we must enforce is that once we draw the
diagram, there should not be any loops in it. For example, one activity being dependent on
another, say A dependent on B, and again B dependent on A; such things are not very
common, but if we have such a situation we need to avoid it, because it makes it very
difficult to apply the project scheduling techniques that we will discuss subsequently.
We must ensure that there are no loops in the diagram, and also that there are no dangles. In
this diagram there is a dangling activity, but if there is no other activity to be done, we must
have the end node here and connect this edge to it, so that there are no dangling nodes.
If there are dangling nodes, it becomes very difficult to apply the project scheduling
techniques. So, these 3 points we must ensure before applying the project scheduling
techniques: the network should have a single start node and a single end node, there should
not be any loops (in a normal project loops do not exist), and there should not be any
dangling activities.
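These three rules can also be checked mechanically. The following is only a small
illustrative sketch (the activity names are made up), assuming the network is given as an
activity-to-predecessors map:

    def check_network(predecessors):
        activities = set(predecessors)
        # Rule 1: there should be a single start node and a single end node.
        starts = [a for a, preds in predecessors.items() if not preds]
        used_as_pred = {p for preds in predecessors.values() for p in preds}
        ends = [a for a in activities if a not in used_as_pred]
        print("start nodes:", starts, "end nodes (dangles if more than one):", ends)

        # Rule 2: there should be no loops; keep peeling off activities whose
        # predecessors have all been peeled off already.
        done, remaining = set(), set(activities)
        while True:
            ready = {a for a in remaining if set(predecessors[a]) <= done}
            if not ready:
                break
            done |= ready
            remaining -= ready
        print("activities involved in a loop:", remaining if remaining else "none")

    check_network({'A': [], 'B': ['A'], 'C': ['B'], 'D': ['B', 'C']})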
Of course, there can be iterations: let us say there is coding and testing, and testing starts
after coding, but then bugs may be detected and the coding has to be done again; if we try to
represent that, it becomes a loop. And once there are loops in the diagram we cannot apply
the project scheduling techniques; it becomes very difficult.
Therefore, we should avoid this by having coding followed by testing, and then bug
correction as a separate activity, not coding again; that way we can break the loops. Also,
there should not be any dangling activity like this; if there is such a situation, we have to
have an arrow connecting it to the end node.
This is one of the simplest PERT diagrams: there is a single start node, a single end node,
and each node is labeled with the name of an activity. There is a duration with each activity,
and in this example we are using weeks, 1 week, 5 weeks, 2 weeks, as the durations, but you
can also use days or months and so on. Now for the critical path: in this project, one of the
paths from start to end is A, B, C, E, F, and another path is A, B, D, E, F.
Adding up the durations along each path, the path A, B, D, E, F comes to 18 weeks, which is
the longer of the two. From this diagram we can say that the project duration is 18 weeks,
because that is the longest path, and A, B, D, E, F is the critical path, because any delay to
any of the activities on the critical path will delay the project; whereas an activity that is not
on the critical path, for example C, can get delayed a little bit without affecting the final
project schedule.
So that is the intuitive idea, and we will discuss the project scheduling techniques using
which we compute the earliest start, earliest finish, latest start, latest finish, the slack times
and the critical path. For such a simple diagram we really do not need the technique; we can
easily work these out even without it. It is easy to see that there are two paths here: one path
is the critical path and takes 18 weeks to complete, and the other is a non-critical path and
takes 15 weeks to complete. A and B are critical activities because they are on the critical
path.
Similarly, E and F are also critical activities, but C is not a critical activity; it is a non-critical
activity. The project manager has some laxity here, some slack or float time, which he can
exploit to advantage so that the project gets completed on time. He may redistribute
resources, take out manpower allotted to activity C and deploy it on another activity which is
getting delayed, so that the overall project completes on time.
So, there is a single start node, a single end node, and this is the critical path. The CASE
tools, the project management tools that are available, many of them open source and easily
downloadable, let you draw such diagrams and indicate the critical path using a red colour
or something similar.
Now, let us see how activities are characterized on a PERT/CPM chart. One attribute is the
earliest start: the earliest time that a task or activity can start is the earliest finish time of the
preceding activity, because only after the preceding activity completes can the task start.
Therefore, the earliest start of an activity is the earliest finish of the preceding activity.
Similarly, the earliest finish of an activity is its earliest start time plus the activity time; very
intuitive, the earliest start is the time by which the preceding activity completes, and if
everything goes well the activity will finish at the earliest start plus the total duration of the
activity, which we call the earliest finish time.
Similarly, we can define the latest start and latest finish. The slack or float time defines how
long a non-critical task can be delayed without affecting the project, and from this definition
all the critical activities have zero slack time; the slack is the activity's latest finish time
minus its earliest finish time. We will discuss these with the help of an example, which will
make them clear.
Let us look at this diagram. Here the preceding activity can complete by a certain time, and
in the PERT chart we do not give calendar dates; we just give numbers, 10, 20 and so on.
Later, during resource allocation, we give the exact dates; here we just give numbers.
The earliest start is such a number, then there is the activity time, and the latest finish is at
another point; the slack time is the latest finish minus the earliest start minus the activity
time. The earliest finish time is the earliest start time plus the duration; that is, the earliest
start time plus the activity time gives the earliest finish time, and the latest finish time minus
the activity time gives the latest start time.
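Putting these relations together in one place, writing d for the activity duration:

    EF = ES + d,    LS = LF - d,    float (slack) = LF - EF = LS - ES = LF - ES - d.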
For each activity we use this notation; each node in the task network carries the activity name
or label, the duration, the earliest start, earliest finish, latest start, latest finish and the float
time available.
This gives the project manager a lot of information about every activity: you can know
whether it is a critical activity or not, what the amount of float available is, what the duration,
earliest start, earliest finish and so on are; so there is a lot of information on every node of the
diagram.
(Refer Slide Time: 19:49)
Now, let us look at an example. For a certain activity the earliest start is day 5, the latest
finish is day 30 and the duration is 10 days. We want to know: what is the float, what is the
earliest finish and what is the latest start?
The earliest start is 5, that is, the preceding activity can complete by day 5 and only then can
this activity start, and the duration is 10 days. Therefore, the earliest finish will be 15,
because the earliest start was 5 and it takes 10 days, so the earliest it can finish is day 15.
The latest finish is on day 30 and the duration is 10 days, so what is the latest start time? For
the task to finish at the latest on day 30, we must start it by day 20, because it will take 10
days from then and will complete by day 30.
So, the latest start time is the latest finish minus the duration, which is 20. The float or slack
time is the latest finish minus the earliest finish, or equivalently the latest start minus the
earliest start, which comes to 15 days. So we can compute the float in this case.
(Refer Slide Time: 22:17)
Now, one thing needs to be mentioned: what is day 0 for a project? We said that in PERT we
do not really use calendar dates; we do not consider which holidays intervene or on what
exact date an activity completes, we just write day numbers. The project starts on day 0 and
ends on some day; these are numbers rather than actual dates.
This helps us in high-level scheduling, where we ignore the intervening weekends, public
holidays and so on. In the lower-level scheduling, which we will do using a Gantt chart, we
will take those into consideration. The notation here is that a finish on day 1 means the end
of day 1, and a start date of day 1 also means the end of day 1; so if a project's finish is on
day 10, it means the end of day 10.
If an activity's start is on day 10, that means it starts at the end of day 10; and from this we
can see that the project start date, day 0, means the end of day 0, which is the start of day 1.
So that is the notation we will use: day 0 is the end of day 0 or the start of day 1, and for
every other day as well, both the finish date and the start date refer to the end of that day.
(Refer Slide Time: 24:39)
Now, let us look at the importance of the float or slack and the critical path for a project
manager. The slack is how much flexibility or float an activity has, and it tells how long the
activity can be delayed without affecting the project completion date. It is a very important
thing for the project manager to know, and it helps in redistributing the resources.
The activities that are on the critical path are the critical activities and they have zero slack;
the project manager has to be extremely careful with the critical activities and cannot really
take any resource away from the critical path, because it would get delayed. If for some
reason an activity on the critical path, that is, a critical activity, is getting delayed, the project
manager has to bring in manpower from non-critical activities and see that the critical
activities do not get delayed, so that the overall project schedule is met.
Another thing that we must note is that the critical path identifies the minimum time to
complete the project. If we can identify the critical path on a very complex task network,
then we can sum the durations of all the activities on it, and that gives us the minimum time
in which the project can be completed, that is, the best-case project completion time. The
project manager will try to meet that time, which is the sum of the activity times on the
critical path.
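In symbols, if d_i is the duration of activity i, then

    project duration = max over all start-to-end paths P of (sum of d_i for the activities i on P),

and the critical path is a path that attains this maximum.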
We are almost at the end of this lecture. We will take some examples and, based on the
concepts developed, we will discuss a technique by which we can use PERT/CPM to
compute the various parameters of a project. We will stop at this point.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 29
Computation of Project Characteristics Using PERT/CPM
Welcome to this lecture. In the last lecture we were discussing PERT/CPM and got some
general ideas about it. We looked at the task network by which we represent the different
tasks or activities and the dependencies between them, and we distinguished between PERT
and CPM. We said that in CPM we use deterministic task times, whereas in PERT we use
probabilistic task times, because the tasks are uncertain and their completion times are
uncertain.
In CPM we use deterministic times and therefore we draw deterministic conclusions
regarding the task parameters; naturally, CPM is much simpler than PERT with its statistical
times. We will first look at determining various project characteristics using the deterministic
PERT/CPM computation, and then we will look at determining project characteristics using
PERT with statistical times; let us get started.
If you remember, in the last lecture we were saying that during project scheduling the project
manager would like to determine various project characteristics and then manage the project
based on them. One of the important steps is to identify the various activities, the tasks and
the dependencies among them, and also to identify several attributes of the activities. One
attribute is, of course, the duration of the activity; the one that you see in blue here is the
duration.
For example, the activity may be to 'write software documentation' and the project manager
estimates the duration to be, let us say, 7 days. But once the task network, that is, the
PERT/CPM network, is developed, it becomes possible to use it to identify other
characteristics of the activity. For example, the earliest start time, marked here with the red
arrow, is the earliest time at which the activity can start; similarly, the earliest finish time is
the earliest time by which the activity can finish.
These are important characteristics, because based on them the project manager can deploy
resources and start other activities. Another attribute of the activity that is determined from
the PERT/CPM network is the latest start time. This is the latest time at which the activity
can be started without hampering the project completion time; if the activity starts any later
than the arrow you can see here on the time line, the blue line, then the project is going to be
delayed. And the latest finish time is the time by which the activity must end; if it does not
end by this time, the project is going to be delayed.
Naturally, the latest finish minus the latest start time is equal to the activity duration, which
is shown in blue here; similarly, the earliest finish time minus the earliest start time is the
activity duration. We will use PERT/CPM to compute these project characteristics, and based
on them we will also compute the slack time. The slack time is the time for which the project
manager has some liberty with the task; if the slack is 0, the earliest start time and the latest
start time coincide, so there is no flexibility for the project manager to defer the task for
some time. But here you can see that there is a slack time.
The activity can start anywhere between the earliest start and the latest start, and this we call
the slack time: the latest start minus the earliest start, which is also equal to the latest finish
minus the earliest finish. This is the time that is available to the manager for monitoring and
controlling the project; if another activity is getting delayed, he can defer this activity a little
bit and put resources on the other activity.
But he cannot delay it too much, beyond the latest start; it has to start by the latest start time.
Let us look at how to determine these project characteristics; later, when we discuss project
monitoring and control, we will see how these project attributes are actually used by the
project manager for monitoring and control purposes.
Before we get started, let us just look at a few PERT/CPM terminologies which we will need
as we proceed with our discussion. We saw in the last lecture that PERT/CPM is based on a
task network; the tasks are the rectangles here and the arrows are the dependencies. We said
that there has to be a single start node, otherwise the methods that we give for computation
of project characteristics would become very complicated.
We need a single start node; if there are multiple start nodes, we put a dummy node and
make that the single start node. Similarly, we need a single finish node; if we find that there
are multiple finish nodes, we put a dummy node and make it the single finish node. Then we
have the burst point, where a task or activity has multiple successors: once B completes,
both C and D can start; that is the dependency, and here there is a burst point, since multiple
tasks, C and D, are dependent on B.
(Refer Slide Time: 08:41)
We have the merge point, where a task E has multiple predecessors; that means both C and D
must complete before E can start. It may so happen that C completes earlier than D, but E
cannot start until D also completes; both the predecessors of E must complete before E can
start, and that we call a merge point. D is a predecessor of E, C is also a predecessor of E,
and F is a successor of E. These are some of the simple terminologies that we will use as we
discuss the PERT/CPM technique for determining project characteristics.
We had discussed in the last lecture that we will use this kind of notation: a task is
represented in the form of a rectangle, the task or activity name is written in the middle, and
we will have determined the duration of the activity, typically in weeks, 1 week, 2 weeks,
something like that, or it can be in days; rarely is it in hours or months. The quantities shown
in red, the earliest start, earliest finish, latest start, latest finish and the float time, are what we
will compute using the PERT/CPM technique.
We have already seen the definitions of the earliest start, earliest finish and latest start. The
latest start is the point of time such that, if the activity starts anywhere after it, the project is
going to be delayed; the earliest start is the earliest point at which the activity can start. The
float is also computed by our PERT/CPM methodology; the float is the latest finish minus
the earliest finish, which is also equal to the latest start minus the earliest start. This is the
flexibility that the project manager has; we also call it the slack.
Another point we need to mention: here we just write day 1, 2, 3, 4 etcetera rather than
giving calendar dates. Please remember that project scheduling using PERT and CPM is a
higher-level scheduling, done in terms of days: starting from day 0, how many days does
each activity take? Later the project manager does a more detailed scheduling using the
Gantt chart, where we use calendar time. The idea is that at a very high level of scheduling
we make a simpler plan, where we do not take weekends, public holidays and so on into
consideration.
We do not use calendar time here; this is a high-level, simple schedule that we prepare, and
later we consider the various constraints: what the holidays are, and whether an activity can
be started based on resource availability; that we will do using a Gantt chart. The initial
scheduling is done using the PERT/CPM chart.
We will have this kind of task network, with the tasks and the dependencies, and as I was
saying, these are very old techniques, developed over nearly a century. Therefore we have a
divergence of notations; sometimes circles are used, but we said that we will use rectangles
carrying the activity name, and the various attributes will also be indicated alongside for our
computation of the task characteristics.
Now, day 0: let us try to understand what exactly is meant by day 0 or day 10. If we say that
the finish date is day 10, that means the end of day 10, which is also the normal usage in our
sentences; when we say we will do something by day 10, we mean the end of day 10. A start
date of day 1 would then mean the end of day 1, but that is not really what we intend; we
always start at the beginning of day 1, and for that purpose we say that we start on day 0,
which means that we start at the end of day 0, which is the same as the start of day 1.
Let me just repeat: we always say that the project starts on day 0, and the implication of that
is the end of day 0 or the start of day 1. If we wrote day 1 as the start of activity A, it would
mean the end of day 1, but normally we do not do that; we start at the beginning of day 1,
and for that we use the notation day 0.
(Refer Slide Time: 15:44)
One of the important project characteristics that is very useful to the project manager is the
float or slack time available for the various activities, and also the critical path. The project
manager looks at each task or activity and finds out how much flexibility he has; in
particular, he can find how much he can delay the starting of an activity without really
affecting the project completion date. This is very important during monitoring and control,
because to effectively control a project he may find that some activity is getting delayed.
He can then find the activities which have enough slack and maybe take some manpower
from an activity which has enough slack and put it on the activity which is getting delayed,
so that there is no overall project delay. Therefore, PERT/CPM provides very important
results to the project manager: the earliest start time, the earliest completion time, the latest
start time, the latest completion time, the float or slack time, and the critical path.
The critical path is the sequence of activities from the start task to the last task with zero
slack. All the activities on a critical path have zero slack; the project manager should be very
careful about the critical tasks or critical activities, that is, the activities that appear on the
critical path. The project manager gives special attention to the critical activities because
any delay to a critical activity will delay the overall project. Therefore, the project manager
typically monitors the critical activities closely, and at the slightest indication that a critical
activity is getting delayed, takes corrective action.
For example, he may deploy additional resources on the critical activity so that it does not
get delayed. For this reason it is very important to identify the critical path and the critical
activities. Not only that, the critical path also identifies the minimum time to complete the
project; that is, if we sum the durations of all the critical activities appearing on a critical
path, we get the project duration. Let me just repeat: if we sum the durations of all the
critical activities appearing on a critical path, we get the project duration, that is, the
minimum time in which the project can be completed if the project manager monitors and
controls the project well.
Now, let us look at the technique we will be using. Using PERT/CPM we will compute the
earliest start time, the earliest finish time, the latest start time, the latest finish time and the
float. To start with, we have the task network with activity names and their durations; all the
activities have been identified, and the durations and the dependencies between the activities
are present. We will now discuss a technique by which we identify the other project
attributes like earliest start, earliest finish, latest start, latest finish and the float.
Here we have three steps. One we call the forward pass, where we visit the activities, the
nodes appearing in the network, from left to right and compute the earliest start and the
earliest finish. Then we have a backward pass, where we start from the last node, the last
activity, and slowly proceed towards the first activity; we compute the latest start and latest
finish for every activity, and also the float or slack time, which is the difference between the
latest finish and the earliest finish.
After having done that, we identify the critical paths, that is, the paths on which all the
activities have zero slack; there may be more than one critical path. We mark all the critical
paths, typically in red colour, so that the project manager gives attention to the critical path
and the critical activities.
Now, let us look at the forward pass; this is the first step, using which we compute some of
the attributes. In our task network the first task always starts at day 0, that is, the end of day
0 or the start of day 1, and then we work forward following the chain. We assign to each
activity an earliest start date which is equal to the earliest finish date of the previous activity.
We will take an example and explain how we do it: we take the tasks from left to right, and
for each task we set its earliest start date equal to the earliest finish date of the previous task.
So, for any task, we look at the earliest finish date of the previous task, and that becomes the
earliest start date of this task.
But if there is more than one previous task, let us consider the example of H which has two
previous tasks, E and F; then we take the later of the earliest finish times of E and F. The
implication of this is that H can start only after both tasks complete; even if E finishes
earlier, H cannot start. H will start only after both E and F complete, and therefore we have
to consider the latest time among all the preceding tasks to set the start time for H. As an
example, if for one predecessor task the earliest finish time is day 7 and for another it is day
10, and both are predecessors of this task, then this task cannot start until day 10.
So, the later predecessor completes at the end of day 10, and this task will start at the end of
day 10, that is, the beginning of day 11. We have to consider the later of the two preceding
tasks and set the earliest start time to be that. So, the forward pass is simple; let us take an
example and then we will see how to use this.
Once we have done the forward pass, we have to do the backward pass. In the backward pass
we start from the last activity, and here we compute the latest completion time of an activity,
which is equal to the latest start time of its successor. We now look at the successor of a
task; so if we look at G, we look at the latest start time of its successor Z, and that becomes
the latest completion time of G. The idea is that if Z needs to start on some date, then G has
to complete by that date.
So, using that concept we do the backward pass, and if there is more than one successor, the
smallest of the latest start times of the activity's immediate successors has to be used. Let us
look at the case of B; B has two successors, D and E. Now if D has a certain latest start time
and E has a certain latest start time, we have to take the smaller of the two and make that the
latest completion time of B. In other words, B must complete before the earlier of D and E
can start; let us say D has 10 and E has 7, then the latest completion time of B must be 7,
otherwise E cannot start.
So in the backward pass the rule is simple; we will take an example to work through the
forward pass and the backward pass. You can see the rules are extremely simple, and we can
apply them by hand for very small or moderate-sized task networks. But, as I was
mentioning earlier, nowadays there are many tools; we will discuss some of these tools,
where the computation of the project characteristics is done automatically. What we do by
the forward pass, followed by the backward pass and the identification of the critical path, is
done at the press of a button.
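To make the two passes concrete, here is a rough sketch (not the algorithm of any particular
tool) of how the forward pass, the backward pass, the float and the critical activities could be
computed for an activity-on-node network; the durations and predecessor lists are assumed to
come from the work breakdown structure, with dummy start and end activities of duration 0
already added:

    from collections import deque

    def schedule(durations, predecessors):
        # Build successor lists and in-degrees so that we can visit nodes left to right.
        successors = {n: [] for n in durations}
        indegree = {n: 0 for n in durations}
        for n, preds in predecessors.items():
            for p in preds:
                successors[p].append(n)
                indegree[n] += 1

        order, queue = [], deque(n for n in durations if indegree[n] == 0)
        while queue:
            n = queue.popleft()
            order.append(n)
            for s in successors[n]:
                indegree[s] -= 1
                if indegree[s] == 0:
                    queue.append(s)
        if len(order) != len(durations):
            raise ValueError("the task network contains a loop")

        # Forward pass: ES = latest EF of the predecessors (0 if none); EF = ES + duration.
        es, ef = {}, {}
        for n in order:
            es[n] = max((ef[p] for p in predecessors.get(n, [])), default=0)
            ef[n] = es[n] + durations[n]

        # Backward pass: LF = smallest LS of the successors (project end if none);
        # LS = LF - duration.
        project_end = max(ef.values())
        lf, ls = {}, {}
        for n in reversed(order):
            lf[n] = min((ls[s] for s in successors[n]), default=project_end)
            ls[n] = lf[n] - durations[n]

        # Float (slack) = LF - EF = LS - ES; the critical activities have zero float.
        slack = {n: lf[n] - ef[n] for n in durations}
        critical = [n for n in order if slack[n] == 0]
        return es, ef, ls, lf, slack, critical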
But it is important to understand the technique the tools use: the forward pass, the backward
pass, and then the identification of the critical path. For a small task network you do not even
need a computer; we can just draw it with pen and paper, and within a minute or two
compute the task characteristics. We are almost at the end of this lecture; we will stop here
and continue from this point in the next lecture.
Thank you.
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 30
Computation of Project Characteristics Using PERT/CPM: Illustration
Welcome to this lecture. In the last lecture we looked at the PERT/CPM technique by which
we compute the various project characteristics, the activity characteristics, the critical path
and so on. Now we will quickly review what we discussed and then take some examples to
illustrate how to use the methodology; it is very simple, and for moderate-sized networks we
can do it in a minute or two. Let us review quickly first and then get started with the
example.
Just to recapitulate what we discussed in the last lecture: we can compute the project
parameters using a simple methodology. We first do a forward pass through the network; to
start with, we have the activity names, the durations of all the activities, and the precedences
between the different activities in the form of a network. We start with the leftmost activity,
the start activity, and in the forward pass we compute, for every activity, the earliest start and
the earliest finish.
In the next step we do a backward pass, where we start with the last activity and compute the
latest finish and the latest start. Then we compute the float, which is the difference between
the latest finish and the earliest finish. Having done that, we identify the critical paths; that
is, we find all the activities which have 0 float time, and a path through the network on
which the float time is 0 is called a critical path. We mark it in red; these paths are of special
importance to the project manager.
In the forward pass we start from the beginning, the start task, which starts on day 0, and
then we work forward. We set the earliest start date of a task equal to the earliest finish date
of the previous task, and if there is more than one previous activity, we take the later of their
finish times; for example, if the two previous tasks finish on day 7 and day 10, we take 10.
So the earliest start is set to the earliest finish of the predecessor, and once we have the
earliest start for a task, we add its duration to get the earliest finish for that particular task.
(Refer Slide Time: 03:53)
And once we have done that, we take up the backward pass: we start with the last task and
compute the latest completion time, which should be equal to the latest start time of its
successor; for example, the latest completion time of G should be equal to the latest start
time of Z. When we take the example we will see that it is a very simple step, just small
arithmetic. The only case we must be careful about is when there is more than one successor
for a node; then the latest completion time should be equal to the smallest of the latest start
times of the successors.
Now, let us take an example. Here we have a project with several activities which have been
identified: hardware selection, system configuration, install hardware, data migration, draft
office procedures, recruit staff, user training, and install and test. In our task network it
would be difficult to write all these names on the activity nodes, and therefore we use the
labels A, B, C, D, E, F, G, H. The duration in weeks written here is the estimated duration of
every task; these are deterministic times, but later we will see how to use probabilistic
estimates.
Even though A and B have no dependency, we just draw a start node, so as to have a single
start node. Then for E we have B as the predecessor, so we draw E with B as its predecessor;
F has no predecessor, so the start node S becomes its predecessor; for G both E and F are
predecessors, so we draw G with arrows from both E and F; and for H, C and D are the
predecessors. It is worth redrawing this neatly, because here there are intersecting arrows,
which we should not normally have. And since H and G are two end nodes, we put another
node, the finish node, which is their common successor.
Now, having drawn that, we apply our forward pass and backward pass.
(Refer Slide Time: 09:25)
So this is what we get if we redraw the network from the previous slide. I have also written
down the names of the activities, but that is not necessary; A, B, C, D etcetera would be
enough. A, B and F are the successors of the start node; start and finish are dummy nodes
with duration 0, and the start date of the start task is day 0. Then A takes 6 weeks, B takes 4
weeks and F takes 10 weeks, which we can get from the table that was given, and for every
task we write the name of the task and also the duration.
Now, let us apply the forward pass. The start task starts and completes on day 0, and
therefore A can start on day 0; we put 0 as its earliest start time and 6 as its earliest finish
time. Similarly, B and F have earliest start time 0, that is, the end of day 0 or the start of day
1, because the start node completes on day 0, and adding the durations we put 4 and 10 as
their earliest finish times. So for all three tasks A, B and F, in the forward pass we first take
the completion time of the start node, 0, and then add the durations 6, 4 and 10; very simple
arithmetic. In the next step we look at the successors: C is the successor of A, and we put 6
as its earliest start, indicating that the earliest completion time of task A is 6 and therefore
the earliest start time of task C is 6; it takes 3 weeks, so we put 9 as its earliest completion
time.
For task D the earliest start time will be 4 and the earliest completion time 8, and for task E
the earliest start time is 4 and the earliest completion time is 7. So we have written that the
earliest start time of C is 6 and its completion time is 9 (6 plus 3 is 9); for D the earliest start
time is 4 and the earliest completion time is 8; and for E the earliest start time is 4 and the
earliest completion time is 7. In the next step we compute for H, but note that H has two
predecessors, D and C, and both have to complete for H to start. Even though D completes
at the end of the 8th day, H cannot start then; it has to start at the end of day 9, when C also
completes. Similarly, G has two predecessors, F and E, and even though E completes at 7, G
cannot start at 7; it has to start at 10, when F completes.
So we put 9 and 10 as the earliest start times of H and G, and their earliest finish times
become 11 and 13 respectively. That is the end of the forward pass; all the earliest start
times and earliest completion times are now marked on the tasks. Now we start the backward
pass, where we compute the latest start time and latest finish time for all the tasks. We start
from the finish node; its latest finish time is 13, since both predecessors have to complete,
and even its earliest finish time is 13. So for the last node the earliest finish time is equal to
the latest finish time, which is 13.
We start the backward pass by setting, for the last activity, the latest finish time equal to the
earliest finish time, as I was mentioning, and from there we work backwards. The latest
finish time of a task will be the latest start time of the following task, and if there is more
than one following activity we take the smallest of their latest start times. Then, from the
latest finish time, we compute the latest start time by subtracting the duration.
Now, let us do the backward pass. The finish node is at 13; that means both G and H have to
complete by 13, and each has a single successor, so the thing is very simple, we just write 13
as their latest finish times. H has a duration of 2 weeks, so subtracting 2 from 13 we get 11
as its latest start, and G has a duration of 3 weeks, so from 13 we get 10. The next thing we
compute is the slack time, which is the latest finish minus the earliest finish, or equally the
latest start minus the earliest start; for H both give 2.
So we put 2 for H and 0 for G. Now let us compute for C, D and E. C has a single successor,
H, whose latest start is 11, and therefore the latest C can complete by is 11; similarly for D it
is 11, and for E, whose successor is G, it is 10. So we write 11, 11 and 10 as their latest
finish times, and then we subtract the durations: for C, 11 minus 3 gives 8 as the latest start,
and the slack is 8 minus 6, which is 2; for D, 11 minus 4 is 7, and the slack is 7 minus 4,
which is 3; and for E, 10 minus 3 is 7, and the slack is 7 minus 4, which is 3. In the next
step, F is simple: its succeeding task is G, and the latest start time of G is 10.
Therefore F must complete by 10, so we put 10 as its latest finish. But what about B? B has
two successors, D and E, and for both of them the latest start is 7, so we can put 7 as the
latest finish of B. For A, C is the successor and 8 is its latest start time, so we put 8 as the
latest completion time of A. Here both successors of B had the same value, which is why we
put 7; if one had been 8 and the other 7, we would have put 7, and if they had been 8 and 6,
we would have put 6. Then 7 minus 4 is 3 for B's latest start, and 3 minus 0 is 3, which is its
slack time.
Similarly, for F, 10 minus 10 is 0 and 0 minus 0 is 0, so its slack time is 0; and for A, 8
minus 6 is 2 as the latest start, and the slack time is 2 minus 0, which is 2. We have now
obtained all the task characteristics and completed the backward pass. The next task is to
identify the critical path: let us find all the tasks that have slack time 0. Here we find that F
has slack time 0 and G has slack time 0, so we indicate this path as the critical path and draw
it using red colour.
(Refer Slide Time: 20:17)
We drew it using red colour. So this is the critical path; critical path identification becomes
very simple once we have identified all the activity characteristics, and we can also write all
the task parameters in the form of a table.
We can complete this table with A, B, C, D, E, F, G, H as the tasks, and for each of them the
earliest start time, duration, earliest finish time, latest start time, latest finish time and the
float time. We just copy these from the diagram; sometimes this representation is also useful
to the project manager.
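If you want to check the numbers above mechanically, here is a small self-contained Python
sketch with the durations (in weeks) and dependencies of this example; the processing order
is written out by hand so that every activity comes after its predecessors:

    dur = {'start': 0, 'A': 6, 'B': 4, 'C': 3, 'D': 4, 'E': 3,
           'F': 10, 'G': 3, 'H': 2, 'finish': 0}
    pred = {'start': [], 'A': ['start'], 'B': ['start'], 'F': ['start'],
            'C': ['A'], 'D': ['B'], 'E': ['B'],
            'G': ['E', 'F'], 'H': ['C', 'D'], 'finish': ['G', 'H']}
    order = ['start', 'A', 'B', 'F', 'C', 'D', 'E', 'G', 'H', 'finish']

    es, ef = {}, {}
    for n in order:                       # forward pass
        es[n] = max((ef[p] for p in pred[n]), default=0)
        ef[n] = es[n] + dur[n]

    lf, ls = {}, {}
    for n in reversed(order):             # backward pass
        succ = [m for m in order if n in pred[m]]
        lf[n] = min((ls[m] for m in succ), default=max(ef.values()))
        ls[n] = lf[n] - dur[n]

    for n in order[1:-1]:                 # the task table: ES, duration, EF, LS, LF, float
        print(n, es[n], dur[n], ef[n], ls[n], lf[n], lf[n] - ef[n])
    # F and G come out with zero float, so start-F-G-finish is the critical path
    # and the project duration is 13 weeks, matching the numbers worked out above.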
(Refer Slide Time: 21:32)
The critical activities here are the activities F and G; the critical path, as we saw, is the path
through the network with zero float. It is very important for the project manager to pay
careful attention to the critical activities, because any delay on a critical activity will delay
the whole project. But, just out of curiosity, some questions. One: can there be more than
one critical path for a project? In the example we discussed, we had only one critical path;
the slack time was 0 on one path and we identified that as the critical path.
But it may so happen, especially for larger task networks, that there is more than one critical
path; it is possible to have several critical paths, with the slack time available to the activities
on each of them being 0. But is it possible that there is no critical path for a task network?
The way we computed it, with the forward pass and the backward pass, there has to be a
critical path. But suppose we have a situation where the completion time is predefined, that
is, the latest completion time of the last task is given rather than computed by the forward
pass; then it is possible that there is no critical path, and all tasks have some slack time or
the other.
But what about subcritical paths? We identified all tasks or activities that have 0 slack
time and said that the project manager has to pay special attention to those tasks. But
what if, other than the critical path, all tasks have a slack time of 10, 7,
etcetera, but there is one task which has a slack time of 1? It is not 0, but slightly more than
573
0, and therefore such tasks are also subcritical; that means we do not have too
much laxity or too much flexibility with those tasks. The project manager also needs to
consider the subcritical paths, where even though the activities do not have exactly
0 slack time, the slack time available is very
small.
Especially for larger projects it may also be necessary to identify the subcritical paths,
where the slack time for the path, even though it is not 0, is a very small
number.
Here are a few exercises; please try these out: draw the task network and then use the PERT
CPM methodology to identify the project characteristics.
574
(Refer Slide Time: 25:16)
There is another exercise, giving the activity name, activity label, predecessors of the activity
and duration. Please try this out and identify the critical path; if you get stuck, look at the example that
we worked out, it is very simple. You have to use the forward pass to identify the earliest
start and completion times, then start the backward pass, which gives you the
latest start and finish times; then compute the slack, and based on that you can
identify the critical path. We are almost at the end of this lecture; we will stop here and
continue in the next lecture.
Thank you.
575
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 31
PERT, Project Crashing
Welcome to this lecture. In the last lecture we had looked at PERT CPM and how to
determine the project characteristics, the various characteristics of the activities, the
critical path, critical tasks and so on.
In this lecture, we will look at PERT, where statistical times for tasks are considered. We
will just look at an overview of that, we will not go into details, and then we will
look at project crashing, that is, if the customer wants the project duration to be reduced,
how does the project manager go about doing it; and then we look at team management.
Let us start with today's topics. So, these are PERT, project crashing and then
team management; that is the plan for this lecture.
576
(Refer Slide Time: 01:17)
We had said earlier that in PERT CPM the task durations are deterministic; it is
assumed that every task has a single estimated duration. But often in development work
things are uncertain. In routine work like maintenance and so on, the durations are
known quite accurately,
but in development work it often becomes hard for the project manager to estimate the
exact duration of activities. Only statistical values for the activity durations can be given, and
PERT can be used for that. Here the activity durations are uncertain, and the variability of the
activity durations, that is, probabilistic variation, can be handled in PERT. Here
three time estimates are given for each activity; we will look at the three estimates,
and based on them the various project parameters, the critical path and so on are
determined.
577
(Refer Slide Time: 02:55)
The three probabilistic time estimates for an activity are: the most likely time, which we
indicate by m; the optimistic time a, which assumes that everything goes all right, so that the task
gets completed in time a, the shortest possible time; and b, the
pessimistic time, that is, if things do not work out, obstacles appear and so on. This is
the worst-case time for the activity. So b is the worst case, a is the optimistic time, and m is the
most frequent time.
And based on these, the expected task time is given as a plus 4m plus b, divided by 6, and the variance of the task time as b minus a whole squared, divided by 36:
578

te = (a + 4m + b) / 6

s^2 = (b - a)^2 / 36

Based on these statistical parameters, inferencing is done just like the way we were doing it in PERT CPM.
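As a quick illustration of these two formulas, here is a minimal sketch in Python; the activity names and the three time estimates below are hypothetical, chosen only to show the arithmetic.

# Minimal sketch: PERT three-point estimates for a few hypothetical activities.
# a = optimistic time, m = most likely time, b = pessimistic time.
estimates = {"analysis": (2, 4, 8), "coding": (5, 8, 17), "testing": (3, 6, 9)}

for name, (a, m, b) in estimates.items():
    te  = (a + 4 * m + b) / 6        # expected (estimated) activity time
    var = ((b - a) ** 2) / 36        # variance of the activity time
    print(f"{name}: te = {te:.2f}, variance = {var:.2f}")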
As I said, we will not go into the details of the statistical calculation of project times, but
there are several tools available where these computations, both PERT CPM and PERT, can be done easily.
A large number of tools are available; many are open source and some
are priced tools. One tool which we have used, which is easily downloaded, takes
very little space and works reasonably well, is GanttProject.
GanttProject is typically used by the project manager alone on a standalone desktop
machine.
But if there are multiple people who would like to use it, for example, if the team members would like
to examine their task characteristics, the tasks assigned to them, give inputs and so on,
then Redmine is a tool which is again open source. Redmine
579
runs on a server and there can be users on different computers, whereas GanttProject is
a very simple tool; you do not even need a user manual, you can start using it right away.
The project characteristics are computed automatically and the critical path is shown as you
provide the input. And of course, there are many other tools available; it will be nice if, as part
of this course, you download some of these tools and get used to them. If you get used to
one tool, you will see that it becomes very easy to use the other tools.
Now, let us look at project crashing. It is a common situation that the manager estimates a
certain time, let us say 6 months, and then the customer says that 6 months is too long,
can it be done in 5 months? Then what does the project manager do?
The project manager tries to reduce the project duration; that is called project crashing.
As we have already discussed, the longest path in the task network is the critical path.
To reduce the project duration, we need to reduce the time for the critical path; but as
we reduce the total duration of the critical path, we will see that other critical
paths appear, and those have to be considered. But how does the project manager reduce the
duration of the critical tasks that appear on the critical path?
Of course, by deploying more resources: if testing was the task and there was one tester, the manager
might deploy more testers; for coding, additional coders might be deployed. By assigning more
resources to a task, the duration of a critical task can be reduced, but then which critical
580
task to take up? If the requirement is, let us say, a 15-day reduction in a 6-month project, which
critical task should be taken up?
We can do that by a simple approach for project crashing. First, we must identify the
critical path, look at all the critical tasks on the critical path, and then find the
cost per day to expedite each task on the critical path. Then identify the cheapest task
to expedite and gradually reduce it. But it may so happen that as we reduce it
by a day, 2 days, 3 days etcetera, the critical path changes.
The path which was, let us say, 6 months, as it becomes 5 months 25 days, may
not remain the critical path; there may be another path which becomes critical. Then
we need to reduce the current critical path after that reduction, and we keep
on doing these steps until no more reduction is possible or the target is met.
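Just to make this step-by-step approach concrete, here is a minimal sketch in Python of such a greedy crashing loop; the task names, durations, crash costs and limits below are hypothetical, and a real tool would recompute the critical path from the full task network rather than from a fixed list of paths.

# Minimal sketch of greedy project crashing on a tiny hypothetical network.
# Durations, crash costs per week and crash limits below are assumed values.
durations  = {"T1": 10, "T2": 10, "T3": 20, "T4": 8}
crash_cost = {"T1": 500, "T2": 1800, "T3": 2000, "T4": 2500}   # rupees per week saved
max_crash  = {"T1": 2,   "T2": 3,   "T3": 2,   "T4": 2}        # weeks each task can be reduced
paths = [["T1", "T2", "T3"], ["T4", "T3"]]                      # paths through the task network

def path_length(path):
    return sum(durations[t] for t in path)

target, total_cost = 36, 0
while max(path_length(p) for p in paths) > target:
    critical = max(paths, key=path_length)                      # current critical path
    candidates = [t for t in critical if max_crash[t] > 0]
    if not candidates:
        break                                                   # no further reduction is possible
    task = min(candidates, key=lambda t: crash_cost[t])         # cheapest critical task to expedite
    durations[task] -= 1
    max_crash[task] -= 1
    total_cost += crash_cost[task]

print("Crashed duration:", max(path_length(p) for p in paths), "weeks at extra cost", total_cost)

With these assumed numbers, the loop keeps expediting the cheapest task on the current critical path until the target duration is met or no further reduction is possible.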
581
(Refer Slide Time: 10:03)
Just to give a small example of crashing, let us say a very simple project with four
tasks A, B, C, D. The task durations are 10, 10, 20 and 8 weeks; we can easily see the critical
path shown here in red, that is, A, B, C is the critical path with 40 weeks
as the duration. Then the project manager needs additional information:
deploying additional resources here costs how much? The crash cost for A is 500 rupees per week,
but it can at most be reduced by 2 weeks; it will not be possible to do it in less than 8
weeks. For task B, 1800 is the cost per week, but it can at most be reduced by 3 weeks.
For task C, 2000 is the cost per week and it can be reduced by only 2 weeks, and task D is
2500 per week and can at most be reduced by 2 weeks. Obviously, the project manager
will choose A to reduce first; reduce it by 1 week, and A becomes 9, B 10 and C 20. The
critical path becomes 39, the overall project duration reduces by 1 week with an expense
of 500, and the critical path does not change because the other path is 38.
Now, reduce it by 1 more week: the critical path becomes 38 and the other path becomes
36, so the critical path still does not change. The next task to reduce is possibly
B, and the cost is 1800; now let us reduce it by 1. The critical path becomes 8 plus 9 plus
20, that is 37, and it still remains the critical path; the project duration reduces to 37. And if we
reduce it by one more, it becomes 36 and there are two critical paths now; we have to
reduce both. Just reducing this one does not reduce the project duration; you need to reduce D
as well.
582
So, this is basically the approach the project manager takes to reduce the project duration,
that is, project crashing.
A is reduced as much as possible because it is the cheapest task to reduce; the
duration for A becomes 8, the critical path becomes 38, the project duration is 38 now,
and the subcritical path is 36. Now, let us look at the topic of team management: how
does the project manager go about team management?
583
To manage the project team, the project manager has to put in daily effort: track
the team members' performance on a day-to-day basis, motivate the team
members to improve their performance, provide feedback which can help them, and if
there are issues and conflicts among the team members these need to be resolved, so that the
overall project performance improves.
The project manager needs to be aware of some theories and observations which have
been made over the last century or so; these are very fundamental results, applicable even
now, and the project manager needs to be aware of them. One is the Hawthorne effect: way
back in the 1920s, a series of experiments was conducted at the Hawthorne plant of Western
Electric, Chicago. The idea was to see how team performance changes as the
lighting conditions improve. More lights were added to make the room very bright, then the
lights were reduced to make it relatively darker; the experiment was actually repeated across many
workers, and there was a surprising observation.
What was clearly identified was that, regardless of whether the light levels
were raised or lowered, the productivity always increased. It was surprising not only that the
light level did not matter so much, but also that in both cases, whether the light levels
were raised or lowered, the productivity increased. What can be the cause for this? It was
concluded that since the workers were under observation and their productivity was being
measured, they were being paid attention, and that is what increased the productivity.
584
The conclusion from here, the Hawthorne effect, is that if the workers are given attention,
that is, the manager pays close attention to what they are doing, their productivity increases.
Simply showing an interest in a group increased its productivity.
The manager has to select the best people for the project. Here we must note that Belbin
distinguished between eligible and suitable candidates. Eligible candidates are the
ones who have the right qualifications, and suitable candidates are the ones who can actually do the
job.
Maybe the eligible candidate is not suitable, and a candidate who is not eligible, who
does not have the right qualifications, may actually be the suitable candidate. The
danger is to employ somebody who is eligible but not suitable:
as per educational qualifications and other requirements the candidate is eligible, but
for the project he is not suitable; the project suffers and also the candidate suffers, and it does not
help anyone.
The best case is to employ someone who is suitable but not eligible; they would be
cheaper because they do not have the right educational qualifications, they will stay on
the job, and they will be motivated. So, the best case is to employ somebody who is not
eligible but suitable; but then how to identify such a person, that is the problem.
585
(Refer Slide Time: 19:13)
One of the early thinkers in organizational behaviour was Frederick Taylor; long back
he recommended selecting the best people for the job, instructing them in the
best methods, and then giving financial incentives in terms of pieces of work completed.
Of course, this was for manufacturing jobs; for a software job it is not so well defined
what is meant by pieces of work completed. But still, let us see what he really
meant: select the best people for the job, instruct them in the best methods, and give
financial incentives; quite intuitive actually.
586
But then McGregor propounded two theories. Theory X says that there is a need for
coercion, direction and control of
people at work. A Theory X manager assumes that the workers need to be
coerced, monitored closely and told to do the work, because the workers by themselves do
not work; they have a tendency to laze around and not work, and therefore the manager
needs to use coercive techniques and control the people.
Theory Y, on the other hand, describes a different type of manager, who assumes that workers love
to work: work is as natural as rest or play, and one just needs to encourage them. These are two
different ways in which managers work. One is the Theory X manager, who assumes that
people always cheat, they are not motivated, and they will not work unless they are scolded
and forced, and so on.
And in Theory Y the manager assumes that the workers are efficient and love to work. It
is very easy to spot the type of the manager: if you visit a team on a day when
the manager is absent and find that everybody is relaxing, feeling happy that
the manager did not come, then that is a Theory X manager, a dominating one who
thinks that the people do not work and he needs to tell them to work, and so on. And if
you visit and see that even when the manager is not there the workers are
working as usual, then you can think of him as a Theory Y manager.
587
But do software developers have some characteristics which make them do a good
job? That would give us a hint on how to choose a good developer. In a very old study,
from 1968, the difference in time taken by different programmers to code the same program was found to be
1 is to 25: somebody takes 1 hour to code a program while another programmer takes
25 hours, and both are employed by the same organization.
So, the proficiency levels of programmers vary widely; that was the conclusion of the
1968 study. Obviously, this can be interpreted to mean that during selection we
need to find out who is a good coder, who codes very fast and writes good code. But we also
need to remember that nowadays software development is not just coding; coding is a
small part of the activity. Maybe in 1968 coding was one of the major activities in
program writing, but now coding is a small part of the activity of a software engineer.
For example, finding out which reusable libraries to use, which tools to use, etcetera,
and of course testing, design, requirements analysis and so on. So, this study is an
indicator, but we must remember that times have changed; now we have not
only coding but other activities as well. Still, we can interpret this to say that there is a
wide variation in the competency of different engineers in a project, and we need to
identify who is a good developer who can help the project, and select the right people.
One early study found that those who were good in maths had good
software development skills, but later it was found that this may not be so.
Another study found that software developers are less sociable than other workers,
but later surveys found no such difference between IT workers and others; maybe
when the earlier study was done, the programmer just had to write
code and that was the major activity.
But now, as we were mentioning, software development has many other activities, and
possibly that is why the developers have a broader role now and they have
to be social. We are almost at the end of the time for this lecture; we will stop here and
continue in the next lecture.
Thank you.
588
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 32
Team Management
Welcome to this lecture. In this lecture, we will discuss Team Management, some
very basic concepts, and then we will look at the organization structure.
589
In the last lecture, we had seen some very basic theories that the manager must be aware of
for effective team management; today, in this lecture, we will see a few more concepts. The
first one is with respect to motivation. We had said that the software developer has certain
skills and there is a wide variance in the capability of the software developers.
But we must also be aware that motivation and hard work can often make up for
a shortfall in skills. We had said that some studies found that the competency or the
effectiveness of developers varies on a scale of 1 to 25; but the one who is
less competent, if he is very sincere, motivated and a hard worker, can make up for the
gap in competency.
We had discussed Taylor's approach of giving financial incentives as the
major motivator. But Abraham Maslow said that financial incentives often do not
work. Motivation actually varies from individual to individual, and also depends on the
point of time in his career when you are giving him the incentive. For every
individual there is a hierarchy of needs; as the lower items in the hierarchy are filled,
the higher ones emerge.
At the lowest level the requirements are food, shelter and so on. A worker who does not have
enough food, shelter and so on will feel motivated if you provide them; but for
workers who are well off and have been employed for a long time, giving them these
incentives may not motivate them much. At the highest level of the hierarchy is
self-actualization. Let us look at this.
590
(Refer Slide Time: 03:20)
In Maslow's hierarchy of needs we can see that at the lowest level of the hierarchy are the
physiological or survival needs – food, shelter and so on. Then come the
safety needs, freedom from danger and so on. After safety come the social needs, such as
belongingness, liking the organization and being liked by people; once these are filled come the esteem
needs, and finally self-actualization.
As one level is filled for a project member, the higher ones emerge; once the
lower one is filled, more of that does not motivate the team member, and we need to give a
higher level incentive. Let us look at this hierarchy of needs in
some detail.
591
(Refer Slide Time: 04:39)
The physiological needs are the needs for food, water, sleep etcetera. The
safety needs are security, protection from danger and so on. The social needs are
friendship, engaging in social activities, group membership and so on. The esteem needs
are self-confidence, achievement, recognition, awards, appreciation and so on. And at
the top level is self-actualization, where all the previous needs are filled and the
employee's motivation is the desire to develop one's full potential.
592
Herzberg's two-factor theory: here Herzberg suggested that two sets of factors affect
job satisfaction. One set he called the hygiene or maintenance factors; if these are
not right the employees become dissatisfied, for example, the pay, working conditions
etcetera. And there are several things which are motivators. These factors make the job
worthwhile and give a sense of achievement to the
employees.
Let us look at the motivators; these are also called satisfiers. Examples of these
are recognition of effort and performance: the manager or the organization needs to
recognize employees who put in a lot of effort and show superior performance. Another
motivator is the nature of the job itself – is it a routine job where the employee finds no
challenge at all, or does it make him think and put in his best? The sense of achievement, opportunity
for promotion and so on, these are the satisfiers.
593
(Refer Slide Time: 07:23)
Vroom identified three influences on motivation. One was expectancy, the belief that
working harder leads to better performance. Just to give an example where expectancy
does not hold: let us say a software has a bug, and different developers have put in a large
number of hours trying to locate the bug and are still not able to find it. Here they
are demotivated because they do not have the expectancy; they do not believe that if
they put in another 10 hours they would really be able to detect the bug, because already
hundreds of hours have been spent without detecting it. There is no expectancy
that a few hours of additional work will help them detect the bug.
Instrumentality – the belief that better performance will be rewarded. Let us say the
developers are developing a software, and if they implement all the features well, then
this will be accepted by the customer, appreciated and so on. But let us assume that
the customer has already got hold of another software which is working quite well;
since they have already commissioned this project it is being developed, but they may not
actually use it, just because they had commissioned it and are not able to cancel it. Then the project
team feels demotivated.
They do not have the instrumentality, that is, the belief that if they build good software it will be
appreciated; it may just gather dust and be put into cold storage, and nobody will use it
because the customer is already using another software. So, in that case, there is no
594
instrumentality; this is just an example. The third influence is the perceived value of the reward:
what the developers perceive the reward to be.
The job characteristics which make the job more meaningful and motivate the team
members are given by the Oldham-Hackman job characteristics model. It identifies
three aspects of the job; one is skill variety. If the developer does just one type of
thing over the years, there is no skill variety and the developer loses motivation. Somebody who
has been doing only coding for several years may not be as highly
motivated as somebody who has skill variety, for example, a designer or an analyst
who has a much wider range of things to do.
The second is task identity: once the task is complete, can the developer who
contributed to it be identified, or is it that several people contributed and no one in
particular can be identified to be given credit? Here, if the manager gives a
clear objective and clear work to the developers, then there is task identity and this is a
motivator: somebody is responsible for some work, and as he completes it and does
good work, it will be appreciated that this specific work was done by a certain team member.
The third is task significance: whether the task assigned to somebody is significant,
whether it will be useful to the team, to the organization, to the customer. If it is very
insignificant work, like, let us say, just collecting information from various developers
595
and storing it, then it may not be as motivating as doing a crucial piece of the user
interface or certain I/O tasks and so on, which are easily noticed and identified.
Two other factors identified as contributing to job satisfaction are autonomy and feedback.
Autonomy is whether the developer has the freedom to work in his own way or is told every step; if he is told the
exact steps he has to follow and cannot use his own mind, then he lacks autonomy and he will
be less motivated. And feedback from the manager is another factor which
motivates the team members.
What does the project manager do to improve job satisfaction? Set specific
goals; this clearly defines the activity that is given to a developer. Provide feedback on
progress towards meeting those goals. Consider job redesign if the job is becoming too
mundane, and widen the job: job enlargement and job enrichment, that is, giving some decision
power.
596
(Refer Slide Time: 14:16)
Another aspect which the project manager needs to be aware of is the stress that the team
members undergo. Edward Yourdon quotes a project manager saying that once a project
gets rolling, one should expect members to be putting in at least 60 hours a week, and
also that the project manager must expect to put in as many hours as possible. When a project
starts there is not much stress on the team members, but as the project develops,
the project team comes under time pressure and there is stress. Often they are asked to work
long hours, as quoted by Edward Yourdon.
But then the manager must be aware of a 1960 study in the US, where people under
45 who worked more than 48 hours a week had twice the risk of death from coronary
heart disease. And many of the popular development practices, like extreme
programming, clearly prescribe a 40-hour working week.
597
(Refer Slide Time: 15:48)
An effective project manager reduces stress on the team. If the team is under stress it is
partly the project manager who is to blame. A good project manager should have a
reasonable estimate of the effort. The project plan should be proper. There should be
good project control, so that the crises in the project are very few, and what is expected of
the team member should be unambiguous. If the project manager is not able to assign
clear work to the team members, that creates a lot of stress.
The manager should also reduce role conflicts, that is, situations where a project manager assigns different duties or work
to a project team member and those are conflicting. For example, the project
team member has to attend two meetings at the same time because he is a member of the
quality team and also a developer: he has to attend a review and at the same time he
has to attend a quality meeting, and that creates stress. And often the project manager tries to
control the project using bullying tactics, like the Theory X managers we were just discussing; but
Theory X managers are incompetent project managers, they should instead be Theory Y
managers.
598
(Refer Slide Time: 17:41)
Now, let us look at organization and team structures. If you visit a software development
organization, how is the organization structured into teams?
We can visit various companies and see how they have structured
themselves into teams; an organization might be having, let us say, two dozen
projects, and each project will have a project team. So, how are these two dozen
projects organized, to whom do they report, and who are the members of these teams?
The other question is how the teams are individually structured. These are the two aspects:
the way the organization is organized into teams is the organization structure, and the
second thing is how the individual teams are structured, the team structure. First let us
look at the organization structure, then we will look at the team structure.
There are three basic types of organization structures. If you visit an organization you can
identify that some organizations have a functional organization; we will look at what is
meant by functional organization. The second is the project organization and the third is
the matrix organization. Let us look at these three organization structures.
599
(Refer Slide Time: 19:29)
In the functional organization we have various functional groups. For example, we might
have several analysts under one functional group; there may be another group of
designers, other groups of coders, testers, database experts, networking
experts and so on.
The organization might have several project teams, and as a project starts, the
manager determines what type of expertise is needed, contacts the
respective functional group managers, and the required people are assigned to the project. For
example, project team 1 might have some designers and database experts, and as the
designers complete their work, the manager returns the
designers to the functional group and then requests coders, and so on.
This is a well-known structure in the military, police, hospitals etcetera. For example, in a
hospital there may be a functional group of nurses, functional groups of
various types of doctors, specialists, pharmacists and so on, and for each activity or
each patient they might need some of these. Similar is the case for the military: there are
various types of functional groups, such as doctors, hospitals, soldiers,
airmen and so on.
And here it is a typical pyramidal structure within each functional specialization. The
advantage is that the analysts are together as a team reporting to a manager. Similarly,
the designers together form a team, a functional group. There is better coordination
600
among them; they discuss with each other and develop better capabilities. The
specialists in the functional groups share knowledge. Also, the
project becomes cost effective because the project team requests personnel as and when the
need arises and returns them, and takes people from the other functional groups as needed, as happens in a
hospital and so on.
But what about a software organization? Let us look at the other organization structures and
then we will compare how these different structures perform in software
development work.
So, this is the functional organization. We have different functional groups, and as the
project teams are formed and the projects develop, they request people from different functional
groups and return them. And all of them, the project teams and the functional teams,
report to the top management.
601
(Refer Slide Time: 23:39)
This is just a different diagram to explain what happens. The analysts form a
functional group and report to the analyst manager; different projects may
request analysts and the analyst manager assigns them specific analysts. Similarly, the
architects are needed for different projects, and each of these team members
has two managers to report to: one is their functional manager and another is the project
manager.
Similarly, the coders and developers, and the testers who report to the test manager, also report
to the project to which they have been assigned. And all the managers report
to a managing director.
602
(Refer Slide Time: 24:42)
One of the biggest advantages of the functional organization is flexibility in the use
of staff. As we were saying, as and when required the specialists are requested, they
complete their work and go back to their functional group; but in a project organization,
as we will see shortly, the team members are assigned permanently to the project.
Therefore, somebody who is a tester may not have work all the time, and somebody
who is a designer, once the design is complete, what does he do?
Compared to a project organization where the team members are there throughout the
project, in a functional organization they come and go, and this provides better manpower
utilization. We will see more of this as we proceed through the other lectures.
603
(Refer Slide Time: 26:32)
The disadvantage of the functional organization is that each team member has two
managers to report to; they might give different directives and there can be conflict.
Also, the project manager's control over the team decreases, because he might ask
his team members to do something and they might say that they have other work to do,
as given by their functional manager. We will just stop at this point because the lecture
hour is getting over, and we will continue in the next lecture.
Thank you.
604
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 33
Organization and Team Structure
Welcome to this lecture. In this lecture we will discuss the Organization Structure
and the Team Structure.
In the last lecture we had said that if you visit a software development organization, the
first thing you will notice is that the organization is structured into teams; it has many
projects and different teams are carrying out the projects. The way the organization is
structured into teams is called the organization structure.
On the other hand, how the individual teams are structured is what
we call the team structure. We had started to discuss the organization structure,
that is, if you walk into a software development organization, what will you notice, what
are the different structures that the organization might have, how the teams are organized
in the software development organization.
605
Broadly, you will notice that any software development organization will have one of the
following types of organization into teams: the functional organization, the project
organization, and the matrix organization.
In the functional organization we find functional groups or departments. Projects
come and go, projects are started and projects dissolve, but what is permanent is the
functional groups or departments. There is a department for analysts, a department for
designers, coding, testing, database, networking, and each department has a manager
called the functional manager.
Now, as projects are formed, a project manager is appointed; the project manager
plans the project and finds that at different points of time the project needs different
expertise, and then consults the corresponding functional manager. For example, the project might
initially require some analysts; the project manager requests the corresponding functional group
manager for the required number of analysts, and as they complete their work they go
back to their department, and the manager then requests some designers from the design
department.
Even though in a typical software development organization this kind of setup is a bit
rare, there are many places where this is a very well accepted structure, for
example, the military, police, hospitals, even educational institutes. In the military, the hospital
is a department and there are only doctors there; the doctors might be called upon for a
606
specific project. There may be an administration unit, a transportation unit, logistics units,
storehouses, etcetera. Similar is the police, and also the
hospitals, where there are various types of specialists organized into departments;
for example, the surgery department, the medicine department, the administration
department and so on.
But if we look inside the functional groups themselves, for example, the design
department, we will find a pyramidal structure where
there is a manager assisted by some senior designers; the junior designers
report to the senior designers, the senior designers report to the functional manager for the
design unit, and so on. One of the major advantages of this is that all designers are in
one department and therefore they interact with each other, enhance their technical
knowledge, educate each other and so on.
In the specialist groups, since the testers are all there together, any new tool or new testing
technology they share with each other. Another major advantage of the functional
organization is that the projects become cost effective, because each time a project needs
a certain number of persons it gets them, and they return to the respective department
after they complete their work. Just imagine if a person had to be employed throughout
the project duration: the designer initially has work, but later what does he do? They
might ask him to do something else, or he may idle.
So, as far as cost effectiveness is concerned, the functional organization is very cost
effective, because when specific expertise is required it is procured and returned as soon
as the work is complete.
607
(Refer Slide Time: 07:05)
So, here the project team has taken designers and database experts who are working
under the project team. They will be returned as soon as they complete their work, and
possibly the project manager will then need other expertise; here both the functional
group managers and the project managers report to the top management.
This is another view of the functional organization, where each project has got some
analysts, architects and developers, and they have two reporting lines: one to the manager of
the project and the other to the functional group manager.
608
(Refer Slide Time: 08:07)
609
The disadvantages of the functional organization, if we think of it: one is that each employee
reports to two managers, the project manager and the functional manager. What
if they give contradictory directives; the functional manager wants him to work on
two projects and the project manager says you cannot, and so on.
So, this is sometimes a stress on the workers. The project manager's control decreases
because he may not get exactly the persons he needs; the functional manager assigns
any person he thinks suitable. Another big problem is that a designer keeps on
doing design, a tester keeps on testing, a coder keeps on coding, from one project to
another project and so on; so there is a lack of job variety.
Possibly the most popular organization in software development companies is the project
organization. Here, when you walk into the organization you do not see the departments,
a department of designers, a department of coders, a testing department and so on.
Here what you see is the projects. The
company as a whole is structured into a number of projects, the workers belong to a
project and fully report to the project manager, and therefore the project manager has
full control over the employees.
The team members are selected for a project and they stay on for the entire duration. But
then the question is: suppose we have an analyst who is with some project
team, initially he does requirements analysis and so on, but after that what does he do?
610
He also does design, coding, testing and so on. So, in a project
organization typically each member takes on a variety of roles and therefore there is job
variety here. A coder need not be coding always; he will also have the option of testing,
design and so on.
And that is a big advantage of the project organization: job rotation. There
is a lot of manpower turnover in the software industry. If you employ somebody as a coder,
he will always try to change his job and say that maybe he would like to become a
designer or an analyst or a tester, why should he just keep on coding all the time. That
problem would arise in a functional organization but not in a project organization,
because at different points of time in the same project he will be doing different activities.
So, this is the big advantage of the project organization; it leads to job satisfaction and less
manpower turnover.
611
Communicating with the persons who actually did the design or the coding
may become difficult, because they might be working on another project; when you
telephone them or try to meet them and find out, they may say that they are busy on
another project. So, communication with those who did the different parts of the
project becomes difficult.
Another big advantage of the project organization is ease of staffing. The manager has
the option to select the best people for the project and they continue to stay on the
project; they develop domain knowledge and knowledge of the project. On the other hand, in a
functional organization, when the project manager requests a designer after the
analysts have completed their work, the design manager might say that right now there are no
designers available, they are all busy on their respective projects, and they will be given to you after
2 weeks. That problem does not occur in the project organization; so it has much
easier staffing compared to the functional organization.
But let us look at the disadvantages of the project organization. Here the team
members continue to stay on a project, and the things they learn from a project become
difficult to share across other projects, because projects may run for 6 months, a year
and so on; they continue to work on one project, and by the time they move to a different
project, they might have forgotten what they learnt on the earlier project.
612
But possibly the biggest disadvantage is inefficiency. In any project it is well known that
to start with we need few people: during requirements analysis just 1 or 2
persons, during design maybe 3 or 4, during coding maybe 10, and during testing maybe
20. So, the manpower over time in a project is not constant; it varies with time and is
typically given in the form of a Rayleigh distribution. But in a project organization,
the project manager appoints the project personnel at the start of the project and
therefore, by definition, the manpower in the project is constant.
If the project manager employs ten people, initially they have very little work; during
design also not everybody is busy; then during coding everybody becomes
stressed, there are not enough hands and everybody works overtime; and during testing the
situation becomes still worse. Therefore, stress is built into the project organization:
initially everybody relaxes, and as the project picks up the members are under stress.
Initially there is idling; later there is a requirement of overworking, and so on.
The third category of organization structure is the matrix organization. Essentially it
is a functional structure; you can find the functional groups here, these are
the departments, but over the functional groups there is a project structure, you can see
the projects here.
So, here again, just like in a functional organization, the members report to both the functional
group manager and the project manager. You see here that functional group one,
613
which may be the analyst group, has two persons working on project 1, no one
working on project 2, and 3 working on project 3. Here the members report
simultaneously to both the functional and the project manager. One advantage of this is
that a member of a functional group may work on two teams; that also becomes
possible.
On the other hand, in a strong matrix the functional managers are much more
powerful; they decide whom to depute for a specific project and the project managers
have to accept them.
614
(Refer Slide Time: 20:07)
Let us look at the advantages and disadvantages of the matrix organization. The disadvantage
is that, just like in the functional organization, each member has multiple bosses to report to,
and the project managers have reduced control as compared to the project organization.
And in a strong matrix organization the functional manager often finds that some
projects are facing problems and some projects are comfortable. So, the functional
manager would like to shift manpower: he might take a competent designer from one
project halfway, replace him, and take him to another project which is facing difficulty, and
so on.
615
(Refer Slide Time: 21:34)
Now, let us look at the team structure. If you again walk into different software
development organizations and look at the individual teams, you will find that a team is
organized in one of three ways: the democratic team, the chief programmer team, and
the mixed organization. We will see that each of these team structures has its own
advantages and disadvantages, and depending on the project a specific team structure
may be advantageous.
616
First, let us look at the democratic team. These are small teams; we will see that for a
large team the democratic team structure becomes very inefficient, and nobody would
like to have a democratic structure for a large project. Here, as the name implies, all
technical decisions are based on consensus. Everybody can give his opinion on every
issue and they talk to each other, and there is a manager who provides the administrative
leadership.
The technical part is supervised by a technical leader, but the technical leader is one
among the team members, chosen by rotation. And since
everybody is consulting everybody, a lot of time goes into discussion. One good
thing is that they can brainstorm and come up with good solutions, but the other side is
that it wastes a lot of time, especially if the team size is large: you can see that
for a team of size N there are on the order of N squared communication paths. So, it may become inefficient,
too much time may be spent in talking to all the team members, and therefore it is
rarely used for large teams.
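As a small side calculation, not from the lecture itself: the number of distinct pairwise communication links in a team of N members is N(N-1)/2, which grows roughly as N squared; a minimal sketch:

# Pairwise communication links in a team of size n grow quadratically with n.
def communication_paths(n):
    return n * (n - 1) // 2   # number of distinct member pairs

for n in (4, 8, 16, 32):
    print(n, "members ->", communication_paths(n), "communication paths")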
617
explain. But democratic teams are egoless teams; that is, no one owns anything,
there is no individual ownership, and we cannot say that only one person knows a particular part.
Here the ownership is shared: whatever work each team member does is reviewed by the
other members, so everybody has knowledge about every aspect of the project. This is a
big advantage of the democratic team, where we have egoless programming.
The next type of organization is the chief programmer team. This is again suitable for
small teams, where there is a chief programmer; he is responsible for all high-level
decisions and all technical decisions, works out the overall solution, and gives small pieces of
work to the different team members. He then monitors and reviews the work that is completed
and integrates it. We will see that this is an
efficient organization for very simple problems, but for challenging problems this
may not be the right structure. We are almost at the end of the time; we will stop here
and continue in the next lecture.
Thank you.
618
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 34
Team Structure (Contd.) and Risk Management
Welcome to this lecture. In the last lecture we were discussing the organization
structure and the team structure. Regarding team structure, we were discussing that each team has
some structure: the way the members do their job, report to somebody in the team,
and so on. We found that there are three major team organizations. One is the democratic
team, suitable for small projects where there are about 7 or 8 persons.
It is rarely used for large projects, because decisions there are taken collectively and
everybody consults everybody. It is suitable for research types of projects where there is a
lot of innovation and new ideas are required, but it is inefficient in the sense that a lot of
time goes into discussions, meetings and so on.
We were also looking at the chief programmer team. Chief programmer teams are
also suitable for small projects, where there is one person designated as the chief
programmer. He is the expert for the project, he is in overall charge of the project,
provides the high-level solution, divides the project into small pieces of work, assigns them to
the team members, helps the team members, monitors and supervises them, and then
gets the work from them and integrates it.
This is suitable for very simple problems; it is a very efficient team structure for
simple problems because hardly any time goes into discussions, meetings and so on. We
already looked at the democratic team in the last lecture; we will start with the chief
programmer team and then we will look at the hybrid organization.
619
(Refer Slide Time: 02:44)
In the chief programmer team, as the name says, we have one of the programmers designated
as the chief programmer. All other team members report to the chief programmer, and the
chief programmer is responsible for the project. He is the technical lead and also the
managerial lead; he visualizes the solution, works
out the design, and explains the work to the different team members. For example, to the
programmers he might explain exactly what coding they have to do, to the
network specialist he may say what network solution is needed, and to the testers he would
say how to test, what aspects to test, and so on.
So, here the entire project solution is with the chief programmer. The other team members
have only partial views of the project; they do not fully understand what is going on, but
they know the small tasks that have been explained to them, and they do those.
The chief programmer monitors the work and provides any help that
may be needed, further explanation and so on. And as the different team members
complete their work, he reviews it, and once it is done satisfactorily, integrates it,
and slowly the full system develops.
620
(Refer Slide Time: 04:44)
Here the chief programmer is the main person responsible; he is both a manager and a
technical expert. He carries out the architectural design and the detailed design, allocates the
coding among the team members, helps the team members, writes the critical parts of the
code himself, handles all interfacing issues, that is, how to integrate the different parts,
reviews the work of the team members, and if anything is deficient, explains it to them and asks them to
redo those things. The chief programmer alone understands the complete project: every
line of code, every design, everything is known to the chief programmer. All the team
members have only partial views; they know only small parts of the project.
As you can see, one of the major problems here is: what if the chief programmer
becomes indisposed? The project would stall. But this is a very efficient way of
doing simple projects, because hardly any time is wasted in discussions and so
on. The chief programmer does everything, effectively supervises, gets all the work done
on time, and he knows how much time it takes because he is the expert there; therefore,
this is the fastest way of doing simple projects.
621
(Refer Slide Time: 06:31)
But what about large projects? Neither the chief programmer team nor the democratic
team is suitable; we said that both of these work for small projects. The chief
programmer cannot supervise too many other programmers, and in the democratic team everybody
discusses with everybody, so as the size of the team increases, the number of
communication paths increases roughly as N squared.
And quickly it becomes very inefficient for even moderately large projects. For this
reason the hierarchical team structure is used for large projects. As the name says,
this is a hierarchical organization: there are junior programmers who report to a
senior programmer, the senior programmers report to a project leader, the project leader reports
to a senior leader, and so on.
622
(Refer Slide Time: 07:44)
So, as you can see, this is a tree structure: the project leader assigns work to the senior programmers,
and the different parts of the
project are done by different senior programmers, each of whom has his own group of
junior programmers, and they talk to each other.
You can see that there is a democratic setup at the bottom level and a chief programmer setup at
the top level. Unlike in the chief programmer team, the technical leadership here is given by the
project leader and the senior programmers.
623
The chief programmer team is the best structure for simple, low-difficulty projects. The
democratic team is suitable for research-oriented, innovative projects, but the
communication overhead increases rapidly with team size; therefore the democratic
team is also used only for small projects, and the hierarchical team is used for large projects. We
will now discuss risk management issues: how does a project manager carry
out risk management?
The first thing to know here is, what is a risk? A project suffers from many types of
risks. In PRINCE2, risk is defined as the chance of adverse consequences of future
events: as the project progresses many events occur, and the risk is the adverse
consequence of a future event.
For example, one event may be that a team member leaves; that is an event, and it has an
adverse consequence: who will do his work? Another future event may be that the
budget becomes very constrained, the customer faces financial difficulties and is not able
to provide the funding, and so on. Now, let us look at the PMBOK (Project
Management Body of Knowledge) definition: a risk is an uncertain event or condition
that, if it occurs, has a positive or negative effect on a project's objectives.
So, this is again a similar definition, about an event which may or may not occur; it is an
uncertain event or condition which, if it occurs, has a positive or negative effect. So, there is a
positive risk; a positive risk is that something good happens. For example, the
624
customer may say that you can take an additional 2 months because our actual product
launch is delayed, so for your part you may take another 2 months; that is a positive risk.
A negative risk may be that the customer says you need to complete it one
month early because our product launch has been advanced. But in all these
definitions the risks relate to events that may occur in the future, not the current
ones. So, a risk is some event that may occur in the future and the problem that may arise
because of it.
There are many types of risks that a project may suffer; one possible categorization
of the risks starts with market risk. Market risk is when the project completes successfully, but
the market conditions have changed and the product is not accepted in the market:
maybe the competitors have come up with a better product, maybe the technology has
changed and it is no longer acceptable in the market, or the specific technology it
was being developed for has had further advancements and the product is not
acceptable in the market. Those are all examples of market risk.
Financial risk is the risk of the financial commitment by the customer not being honoured; the
customer has gone bankrupt and cannot pay for the project halfway through, and so
on. Technology risk is that the project assumes some technology and the project
proceeds based on that technology, but it may so happen that a better technology gets
developed and comes into use before the project completes. So, technology
625
obsolescence is an example of a technology risk. And then there are project risks, like
project personnel leaving and being unable to find a replacement for a specific role, the schedule getting
delayed, and so on.
Now, let us look at the risk management approaches. There is hardly any project that does
not have risks; every project faces several dozen risks, and it is an important
responsibility of the project manager to handle the risks.
Broadly, two options are available to the project manager. One is the proactive
approach: here the project manager sits down and thinks about the risks
that the project may be susceptible to, anticipates all possible risks, and also plans
what actions to take if a risk actually becomes true; then, as the risk becomes true,
the planned actions are taken and the risk is managed.
This proactive approach is the one which is very popular among managers; but
then there is a reactive approach, where the project manager cannot even anticipate the
risks and therefore takes no action until the event occurs. In the reactive approach,
because these risks are not predictable and cannot be foreseen by the project manager,
no action is taken until the unfavourable event actually occurs.
626
(Refer Slide Time: 16:35)
We said that the reactive approach just handles the risks
as they occur; but in the proactive approach, how does the project manager go about
dealing with risks? The first thing, of course, is to anticipate the risks. Risk
identification is about what risks can be there for this project: the manager notes down all the anticipated risks,
finds out what technical risks can be there, what project risks, like what will happen if people leave the
project, what actions to take if the schedule is delayed, and so on.
So, the first step in dealing with risk is to identify all possible risks at the start of the
project. Once the risks have been identified, we need to analyze them: what will be
the impact of each risk, which is a serious risk, and we need to prioritize all the risks, that is,
determine which risks are the most damaging. Then comes risk planning: what to do when a risk
actually becomes true. And as the project proceeds, risk monitoring: is the risk
probability becoming higher, what is the current state of the risk, and so on.
627
(Refer Slide Time: 18:27)
The first thing in this framework is risk identification. How does the project manager identify the risks, given that these are future, uncertain events? One way is to use a checklist: find out the common types of risks that have occurred across different projects the manager has worked on, or projects that were running in the organization, and then check whether such risks are likely to be relevant for the project at hand.
The second way risks can be identified is by brainstorming: get the team members and other stakeholders together, collect their concerns, and those become the risks. And the third type of risk identification is causal mapping: here we identify the possible events that may occur and the effect that these events will have.
628
(Refer Slide Time: 20:07)
It is important for the project manager to know the different types of risks that befall various projects and the risk reduction techniques suitable for them. Boehm has identified the 10 most frequent risks that development projects suffer, and it is quite instructive for every project manager to know the most frequent risks that other projects have suffered and the best ways available to handle those risks.
The first risk, of course, relates to manpower: personnel shortfall, a lack of competent manpower in the project. We have the project personnel, but they are not able to perform and the project is getting delayed; this is a major risk. The corresponding reduction technique Boehm suggests is to staff the project with top talent, and also to provide training and career development in the areas of the project.
Unrealistic time and cost estimates: the project manager could not estimate the time and cost correctly and quoted a wrong figure to the customer. Now the customer says that, since you said you would do it within this budget and within this time, why are you not able to do it? To get more realistic, accurate estimates, the project manager needs to use multiple estimation techniques. Another way to handle it is by using incremental development, because making a long-term estimate is very difficult and error prone, whereas a short-term estimate, that is, for one feature at a time, is a much more accurate way of estimating. Those are the risk reduction techniques.
629
Developing the wrong software functions: the customer wanted something and finally the development team develops some other functions; the customer is not happy and says we did not want it this way, and that happens very frequently. The risk reduction techniques for this are improved software evaluation, user surveys to find out what exactly the customer needs, and prototyping and getting it evaluated by the customer.
Developing the wrong user interface: this is a risk similar to the wrong functionality; here the customer may not like the interface. The risk reduction techniques here again are prototyping and getting it evaluated by the customer; task analysis, where for performing a specific functionality or task we identify the inputs and outputs that are required; and user involvement in the project.
Gold plating: here the team members think that some features may be appreciated by the customer, even though those are not really documented in the requirements. They start developing something thinking that this work will be highly appreciated and very useful; but it may so happen that the customer does not actually need it. This is just a waste of time; it delays the project and increases the cost. The risk reduction techniques are requirements scrubbing, that is, dropping any requirements that have been added as an afterthought and are not actually required;
630
and prototyping, having the user evaluate a prototype version and say whether the feature is required or not.
Late changes to requirements: the customer may suggest some changes later. The techniques here are change control and incremental development. If a change actually has to happen, then we must have a procedure to handle the change effectively, and that is called change control. With incremental development the customer evaluates incrementally, so the changes do not pile up at the end; changes are troublesome if they have to be done at the end once everything is complete.
Shortfall in externally supplied components: projects do give out contracts to other developers and organizations, and they may turn in inferior work products. Here the risk handling is through benchmarking, inspection, formal specification, contractual agreement and quality control.
Shortfall in externally performed tasks: here it may not be a component but a task that is performed externally, say coding or testing. Here the risk reduction techniques are quality assurance procedures, competitive design, etcetera.
Performance problems: here the risk handling is through simulation, prototyping and tuning.
The development becomes technically too difficult: the risk reduction techniques here are technical analysis, cost-benefit analysis, and prototyping, showing it to the customer and having the team members evaluate it; the prototype helps in developing the actual solution. And then, of course, training.
Project managers need to be aware of the various important risks that past projects have suffered and what the risk reduction techniques were. These are the 10 most frequent risks according to Boehm and the corresponding risk reduction techniques he has identified; every project manager needs to know them. We are almost at the end of the time for the current lecture; we will stop here and continue in the next lecture.
Thank you.
631
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 35
Risk Management (Contd.) and Introduction to Software Quality
Welcome to this lecture. In the last lecture we had looked at some basic concepts of Risk Management; we had discussed what a risk is, the different types of risks, and a framework for risk management. Today we will discuss risk management further, and after that we will start discussing quality management; let us get started.
In the last lecture, we had said that there are two main ways of risk management. One is the proactive approach, where we identify and make plans for the risks. In the reactive approach, we do not make any specific plans, possibly because the risks are hard to anticipate; we handle the risks as they appear during the project. In proactive risk management, the first task is risk identification: we list down all the risks, and we had discussed some guidelines about how to go about identifying them. Once we have a list of risks, we do an analysis of the risks.
The analysis is in the form of: what is the probability of the risk becoming true and what will be the cost implication of that risk. Based on that we can prioritize the risks; the one that will be most damaging to the project will be the most
632
serious risk, and we need to make elaborate plans for that. In risk planning we try to see how the effect of the risk can be either avoided or minimized.
And finally there is risk monitoring, where we track the status of each risk. Once the project manager has anticipated and listed all the risks in a project, we need to analyze them. Analysis of a risk means that we qualitatively assign a probability of occurrence; it would be very hard to give a precise quantitative value, for example, we cannot say that the probability of occurrence of this risk is 66.5 percent.
We also qualitatively estimate the potential damage due to the risk. Based on this, we compute a Risk Exposure, or RE, for every risk, which is the product of the potential damage and the probability of occurrence. For the potential damage we assign an approximate money value to the risk.
For example, suppose a flood would cause 1 billion of damage and the probability of the flood occurring is 0.01, that is, a 1 in 100 chance, a rare chance. For these two values, the potential damage is 1 billion and the probability of the risk occurring is 0.01, so the risk exposure in this case is 1 billion times 0.01, which is 10 million. We can think of this as an insurance premium; insurance is computed in a similar way: what is the probability of an accident occurring and what is the cost
633
implication of the accident, and that gives an indication of the insurance premium to be charged.
It is hard to give an exact quantitative value to the risks; often we can only tell qualitatively whether the chances are high, significant, moderate or low. Based on this qualitative probability of a risk, we can assign quantitative values. When we say that the risk is high, it is something greater than 50 percent; we give a value greater than 50 percent based on the qualitative estimation of the risk probability. Significant is something like 30 to 50 percent, moderate is 10 to 30 percent, and low is less than 10 percent.
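To make the computation concrete, here is a small Python sketch, not part of the lecture, that computes and ranks risk exposures. The probability bands follow the percentages above, but the representative probability chosen for each band, and the risk names and damage figures, are purely illustrative assumptions.

PROBABILITY_BANDS = {        # qualitative label -> representative probability (assumed)
    "high": 0.60,            # greater than 50 percent
    "significant": 0.40,     # 30 to 50 percent
    "moderate": 0.20,        # 10 to 30 percent
    "low": 0.05,             # less than 10 percent
}

def risk_exposure(potential_damage, qualitative_probability):
    """RE = potential damage (money value) x probability of occurrence."""
    return potential_damage * PROBABILITY_BANDS[qualitative_probability]

# Hypothetical risk list for illustration only.
risks = [
    ("Key developer leaves", 200_000, "moderate"),
    ("Flood in the data centre", 1_000_000_000, "low"),
    ("Schedule slips by a month", 50_000, "significant"),
]

# Prioritize: the risk with the highest exposure is planned for first.
for name, damage, prob in sorted(risks, key=lambda r: -risk_exposure(r[1], r[2])):
    print(f"{name}: RE = {risk_exposure(damage, prob):,.0f}")

The ranking produced this way is what drives the risk planning step discussed next: the most exposed risks get the most elaborate plans.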
634
(Refer Slide Time: 06:49)
Once we have prioritized the risks based on the risk exposure computed for each of them, we need to do risk planning for the most damaging risks. The various ways risks can be dealt with are risk acceptance, risk avoidance, risk reduction, risk transfer, and risk mitigation with contingency measures. In risk acceptance, we determine that what we would need to do to handle the risk is more expensive than the risk itself.
Risk acceptance is done for very low risk exposures: the damage is small and the probability is low, so we simply accept the risk. In these cases the effort needed to manage the risk would be much more expensive than the risk itself, so it is better to accept it. And sometimes, even for risks with a high risk exposure, it may so happen that we cannot do much about them.
For example, for the risk of a key person leaving, we cannot really employ another person right from the start of the project; that would be too expensive, more damaging than the risk itself, so in this case we accept the risk. Risk avoidance is used when it is possible to avoid the risk: for example, if we find that one module of the software is very difficult to develop and we are likely to have delays and bugs in this module, we may outsource it. Here we have avoided the risk; we did not have the capability, so we gave the module to a third party with good capability of developing it.
635
In risk reduction, we make plans so that the impact of the risk will be less. For example, if we anticipate that some key persons might leave, we might maintain good documentation and hold reviews so that other members are aware of the work carried out by the key person. In that case, a new person who joins can refer to the documents already prepared and discuss with those who have reviewed the work.
In risk transfer, we might buy insurance so that the risk is transferred to the insurance company; we pay a premium, but the risk now lies with the insurance company. In risk mitigation and contingency measures, we prepare plans for what contingency measures to take if the risk happens.
The risk reduction leverage measures the effectiveness of a risk handling mechanism; it is a quantitative value for the effectiveness of a risk handling technique. Suppose we deploy some risk handling technique. We consider the cost of the technique and the reduction in impact, where the reduction in impact is the risk exposure before deploying the risk handling technique minus the risk exposure after the technique is deployed. That is, risk reduction leverage = (RE before - RE after) / cost of the risk handling technique.
636
So, the numerator is the reduction in exposure after the risk handling technique is deployed, and the denominator is the cost of the technique. For example, suppose initially we have a 1 percent chance of a fire causing 200 k of damage.
So, the potential damage is 200000 and the probability of the risk occurring is 1 percent; the risk exposure before the risk handling technique is deployed is 1 percent of 200000, that is, 2000. Now suppose we install a fire alarm which costs 500 rupees and which reduces the chance of the fire damage by half, to 0.5 percent. The risk exposure after deployment is 0.5 percent of 200000, which is 1000. The reduction in exposure is 2000 minus 1000, that is, 1000, and 1000 divided by 500 is equal to 2.
Therefore, we benefit by deploying the risk reduction technique: the fire alarm costs 500, but the risk exposure reduces by 1000, so the risk reduction leverage is greater than 1. Whenever the risk reduction leverage is greater than 1, we say that the risk handling technique is worth doing. With this discussion, we conclude risk management and move on to software quality management.
How do we define a quality product? Which one would you call a quality product? Traditionally, we say that a product is of quality if it does exactly what the user wants it to do; this is the traditional definition of quality, called fitness of purpose. For example, suppose you want to buy a table fan and you get one from the market.
637
If you find that it runs smoothly and gives good wind, you say that it is a good quality product.
For traditional items like a mixer grinder, a table fan, a car or a bicycle, the fitness of purpose is intuitively clear. But what is the fitness of purpose of a software product? When do you say that a software product has fitness of purpose?
One way to define the fitness of purpose of a software product is to say that all the requirements specified in the SRS document have been satisfied. In other words, for a software product, quality is essentially conformance to its requirements; every software is developed with some requirements in mind, and as long as it meets all the requirements we might be tempted to say that it is quality software. This view is accepted by many, including Philip Crosby in his book “Quality is Free”.
638
(Refer Slide Time: 17:05)
639
The answer is that for a software product, functional correctness is only one aspect of quality; there are many other aspects. For example, what if the product is functionally correct, but users find it almost impossible to use because of a very difficult user interface? We cannot call it quality software.
Similarly, another software may satisfy all the requirements written down in the requirements specification document, but have incomprehensible and unmaintainable code. Any bug fixes or enhancements will be very hard to make for this software. Let us say we install a student grading software and after a few days find that there is a bug. Now we want to maintain this software and request that this bug be removed, but since it has incomprehensible and unmaintainable code, it will be hard to make any changes.
Similarly, suppose all the functionalities have been implemented correctly, but we need a small enhancement; it will be hard to do because the code is very bad, and we cannot call this quality software. So one thing is clear: unlike other products, software products cannot be judged to be quality products based on just fitness of purpose or satisfaction of the requirements. We need to define the quality of software in terms of a set of quality attributes. Of course, one of them will be correctness, but there will be other attributes that need to be satisfied.
640
(Refer Slide Time: 20:41)
Let us look at what the set of quality attributes can be. Correctness is definitely one of the attributes of quality software; reliability concerns the chances of failure during use. Efficiency means the software should not make wasteful use of resources, for example disk, memory space and CPU time, and should satisfy the response time requirements.
Portability: if tomorrow we want to run the software on new hardware, we should be able to do it easily; that is also a quality attribute. Usability: it should have a good user interface; the users should feel comfortable and find it easy to use. Reusability: if we want to reuse parts of this software or modify it for a different purpose, it should be possible to do so.
Maintainability: it should be possible to maintain and extend the code; the code should not be so bad that we cannot even maintain it. So a possible set of quality attributes is correctness, reliability, efficiency, portability, usability, reusability and maintainability, but there can be many other quality attributes that may be considered.
Several practitioners and researchers have come up with quality models; these models basically try to estimate the quality of a software product based on some quality attributes. We will not really look at the quality models here, but we just need to be aware that quality models based on these attributes have been proposed, so that given a software product we can measure its quality using such a model.
641
(Refer Slide Time: 23:11)
Now, let us see how quality systems have evolved over the years. In the last 6 to 7 decades, quality systems have evolved very rapidly. Prior to World War II, that is, in the 1930s and 40s, it was accepted that to have a quality product we must rigorously inspect the finished products and eliminate the defective ones.
For example, a company manufacturing nuts and bolts would define a tolerance for the bolts and the nuts, and if an item was beyond the tolerance it was separated out and rejected; the good products were given to the customer and the bad ones were rejected. This was the only way quality products were produced by vendors before World War II: rigorously inspect and test the product, eliminate the defective items, and give only the good products to the customers.
642
(Refer Slide Time: 24:53)
But after World War II, there have been four stages of evolution, many of which came from the Japanese; these helped resurrect the Japanese economy, and Japan is known for producing very high quality goods.
The initial quality efforts were about testing, that is, identifying the defective products through testing; the products that fail the test are rejected and the good products are passed on to the customer. But after that, we will see that the focus slowly shifted from testing and rejecting bad products to the process of manufacturing. We are almost at the end of this lecture; we will stop here and continue from this point in the next lecture.
Thank you.
643
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 36
Resource Allocation
Good morning. Today we will take up a new chapter on Resource Allocation. We will see some basic concepts of resource allocation: what we mean by resources and what the different types of resources are; then we will discuss the nature and categories of resources, and finally we will see how to identify the resource requirements in a project.
644
(Refer Slide Time: 00:42)
I hope you already know about project scheduling. You must have used the terms critical path method (CPM), PERT (project evaluation and review technique), etcetera; these are used for scheduling projects.
In project scheduling, activity network analysis techniques are normally used to plan when the different activities will take place. People use activity networks or precedence networks; these diagrams are used to plan which activity will take place when.
Now, these plans made during project scheduling do not take into account the availability of resources. If we make a plan without considering what resources are available, the plan will be only idealized; it will not work properly. That is why we will discuss how to match the activity plan we have made to the available resources.
If the available resources are not sufficient, then we might have to change the activity plan, and in that case we will assess, whenever necessary, the efficacy of changing the plan to fit the available resources. So, we will basically see two things here: how to take the resources into account while making the activity plan, and, if the resources are not sufficient, how we can change the plan depending on the available resources.
645
(Refer Slide Time: 02:25)
Now, let us see what the result of resource allocation is. As we know, every phase of the software development lifecycle has an output. Similarly, the final result of resource allocation will be a number of schedules: the activity schedule, the resource schedule and the cost schedule. So, what do we mean by an activity schedule?
This schedule indicates the planned start and completion dates for each activity. For every activity we must have a plan for when it should start and when it should be complete. The activity schedule indicates the planned start date as well as the completion date for each activity. How can it be prepared? It can be prepared by using a special type of diagram or graph, called a precedence network or activity plan.
Another outcome is the resource schedule, which shows the dates on which each resource will be required and the level of that requirement. As I have already told you, in the activity plan we should also take into account the availability of resources. The resource schedule shows the dates on which the different types of resources will be required and the level of that requirement.
Then another output of resource allocation is the cost schedule, which shows the planned cumulative expenditure, that is, the cumulative expenditure that will be incurred by the use of the resources over time during project development.
646
(Refer Slide Time: 04:00)
Now, first of all, let us see what we mean by a resource. Conventionally, a resource is any item or person required for the successful execution of the project. Some resources will be required for the whole duration of the project and some will be required only for a single activity. For example, take the project manager.
He is required for the whole duration of the project, whereas a programmer will be required only for a specific activity, maybe coding. A designer will be required only when we perform the design, and similarly, the system analyst will be required during the requirements analysis phase. So, some resources are required for the whole duration of the project, whereas others are used only for a single activity or a couple of activities.
A project manager is vital to the success of the project, but he does not require the same level of scheduling as a programmer; for coding you might require, say, two programmers, but a project manager does not need the same level of scheduling as a programmer, a system designer or a tester.
The manager may request the use of a programmer who belongs to a pool of resources at the programme level. We have already seen the difference between a project and a programme; the manager is responsible for one project.
647
But a programmer may belong to a pool of resources at the programme level. When you require the help of a programmer, you have to consult the programme head, who will release a particular programmer for your project. That is why the scheduling level of a programmer is not the same as that of the project manager.
Now, let us see the categories of resources; this is very important. The important categories of resources are labour, equipment, materials, space, services, time and money. Let us see them one by one. Labour normally includes the members of the development project, persons such as the project manager, system analyst, programmer, system designer and tester; these all come under the category of labour. Basically, they are the members of the development project.
Then comes equipment. The various computing and other equipment required for the successful execution of the project comes under this category: for example, servers, workstations, desktops, PCs, laptops, keyboards, printers and scanners. Accessories like furniture etcetera also come under the category of equipment.
648
Materials are items that are consumed rather than used. Some items can be used for some period of time, but materials are consumed once used. For example, if you want to copy software from one system to another, you have to use disks, CDs, etcetera; once used, we treat them as consumed. Paper is another example of an item that is consumed rather than used. Space is very simple: you may require some office or working space in order to execute your project, where your programmers, system designers etcetera will sit.
Services: some projects may require the procurement of specialized services. Along with your own specialities, you may have to hire some specialized services. For example, if you are developing a wide-area distributed system, you may require scheduling of telecommunication services; you may also require postal or courier services, etcetera. Even though they are services, they are considered resources and come under the services category.
Time is also considered an important resource; here we consider time as elapsed time. Elapsed time can be treated as a resource, and it can be reduced by adding more staff. If the
649
deadline is approaching very soon, the elapsed time can be reduced by hiring more staff members.
Money is also considered a very important resource, but normally we do not consider it a primary resource; it is treated as a secondary resource, because money is used to buy other resources. It is similar to other resources in the sense that it is available at a cost, in this case the interest charges.
For example, in order to purchase something we may have to borrow money as a loan from the bank with some interest charges. That is why money is also similar to other resources: it is available at a cost, which in this case is the interest rate.
Now, let us see how to allocate the resources to the different activities. The first step in resource allocation is to identify the resources needed for each activity and then create a resource requirement list. We have already seen the different types of resources a few minutes ago; the next step is to identify the resource types, some of which we have already discussed.
650
In some cases the individuals are interchangeable within a group; for example, VB programmers may be interchangeable with other software developers. After identifying the resource types, we have to allocate them to the various activities and examine the resource histogram.
We will see how to allocate them; then we prepare a diagram called a resource histogram and examine it to see whether the resources are properly and uniformly allocated. I hope you have already studied histograms earlier, at the 10th or plus-2 level; basically these are a kind of bar graph.
As I have already told you, one of the important steps is identifying the resource types, and I have already listed several of them. Now let us see what the possible resource requirements in a software development project could be. Basically, in software project development the resource requirements are: identify the hardware, identify the software that will be required to develop the project, and identify the supporting staff required to carry out the project.
651
(Refer Slide Time: 11:58)
Let us first discuss identifying the hardware. What are the steps that must be followed? First, we have to prepare a list of all the hardware needs of the project: what kinds of hardware will the project need? List all of them.
For example, list whether your project requires desktops, laptops, servers, backup media, keyboards, switches, networking ports and cables, display units, printers, plotters, touch screens, etcetera. Examine what kinds of hardware devices your project requires and list them.
Then, for each piece of hardware, specify what it needs to do. For each piece, say a desktop or a server, what kind of activity will it do; what is its functionality? Next, for each piece of hardware, specify what it needs in order to function properly.
That is, in order to achieve its functionality, what does it need? For example, does the hardware require some special software tools, or some cables for
652
interconnection, etcetera? So, for each piece of hardware, we also specify exactly what it needs in order to function properly.
Then, we have to specify who needs what hardware for his task. Several persons are associated with the project, so which person needs what kind of hardware to perform his task? Please do not provide everyone with every piece of hardware. We have seen the different types of hardware a project may require, but do not provide all of them to every person.
They may not require all these things: someone may require only a desktop and not a server; similarly, they may require just a display monitor and not a touch screen, etcetera. That is why you should not provide everyone with every piece of hardware if they do not require it. Finally, specify when the team needs the hardware, preferably in advance.
Before execution begins, specify when the team or the project needs the hardware; if you can identify this a little in advance, it will help in asking for the resources from the top management. So, specify when the team needs the hardware, if possible in advance. These are the different steps required to identify the hardware requirements of your project.
653
Next is how to identify the software. We know that in order to develop a software project, much other supporting software is also needed. Some examples are basic software like operating systems, compilers, linkers, loaders, etcetera; they may be required for developing the project, and if this basic software is wrongly installed, or you are not using the proper, latest, upgraded version, it may also cause problems.
Different versions of software, mismatched service packs and different releases of libraries may also create problems. If we ignore these issues, they will lead to severe problems during software development.
Experienced project managers know that software upgrades during the project cost the team productivity and may create serious problems. That is why they take proper care in deciding when to go for upgrades and which versions to use; they decide judiciously. There is a need to plan for the upgrades; we should not go for them randomly. The project manager should plan the upgrades at the right time, judiciously and properly, and schedule them so that the impact on the team and the project is minimized.
So, when the project manager sees that the impact of the upgrades on the team and the project can be minimized, he may go ahead with the plan for upgrades and schedule
654
them. Project managers can get a handle on software support by following the steps below; let us see what the steps are.
We are now discussing the steps for identifying the software. The project manager first has to list the software requirements, just like listing the hardware requirements. For any software project, the project manager has to prepare a list containing the software requirements. Then, specify the software versions: what are the different versions that your software will use?
Identify the upgrades and service packs: what are the different upgrades and service packs, and when should you go for them? Also identify who upgrades which software: who is responsible for upgrading which software must be made clear, because not everyone can be responsible for upgrading the software. Then, specify when to upgrade which software: here we identify which software needs upgrades, but we also have to specify at what point of time each piece of software needs to be upgraded.
655
(Refer Slide Time: 17:54)
Let us see the first step, that is, listing the software requirements: what kind of software may additionally be required for developing your software project. The team may need a well specified set of software products so that product inconsistencies can be minimized, upgrades can be planned properly, and licensing can also be done properly. To get this set, the project manager needs to manage the details of all the software to be used in the project. He may start by specifying the software that the team will need, including the following.
656
(Refer Slide Time: 18:51)
What operating systems will the team members use; what compilers will be used in the project; the configuration management system (we will see configuration management towards the end of the session); what kind of email system will be used in the project; and the supporting software such as FTP, web browsers, etcetera.
Then the different libraries, such as DLLs (dynamic link libraries), standard template libraries, graphical user interface libraries, etcetera, that will be used in your project. Then the office software such as a word processing package, spreadsheets, presentation software, etcetera, that will be required. Also any software tools required by your project, such as various CASE tools and design tools like RSA, Visual Paradigm, etcetera.
Testing tools such as JUnit, Jumble, Selenium, etcetera; and finally, the SPM (software project management) tools such as Gantt chart tools, Libra project, etcetera. Whatever software tools are required also have to be listed. These are the additional or supporting components for which the project manager must prepare the list.
657
(Refer Slide Time: 20:18)
The next step is to specify the software versions. Given the list of software products, the project manager has to specify exactly which version of each piece of software the team needs, because there are different versions. We also have to specify when changes to these versions might become available.
658
The next step is to identify upgrades and service packs. Software is not a static entity; there will be upgrades and enhancements from time to time. The project manager needs to know which pieces of software need upgrades and when these upgrades will occur during the project. He may have to answer the following questions for each software product the team will use.
For example, what is the length of the project in relation to the upgrades, service packs or new versions of the software products? Will the team need to upgrade? If yes, when will the upgrade be available? Once these questions are answered, he can identify the upgrades and service packs accordingly.
Then, identify who upgrades which software. The project manager has to identify who is responsible for installing and upgrading each piece of software, because not everyone is responsible for that. Is the system administrator responsible for the whole system, or, for your particular project, is it the project manager, a project developer or a system designer?
The project manager has to identify who is responsible for these upgrades. Make sure that the project manager has identified who does what and that everyone agrees on who performs the installations and upgrades. The project manager must identify the correct person who will make these upgrades, and everyone must agree on which
659
person is doing the installation and upgrades; everybody must know it and agree on it.
Then, specify when to upgrade which software. As I have already told you, once the project manager has a handle on the list of software and an idea of what upgrades the team will need to make during the project, he can decide when to upgrade; we have to specify at what point of time to upgrade which software.
Normally, if you upgrade the software just before a major milestone, it is very risky because the upgrade may not work properly and you may miss the milestone or the target date. Upgrading immediately after a milestone allows some time to troubleshoot the problems that arise, but it may invalidate the testing run prior to the milestone.
660
(Refer Slide Time: 23:32)
Consider all the software that the team will use in the project. If you want to upgrade the software, plan to upgrade when the team and the project can best handle unexpected problems. Do not go for an upgrade just before a major milestone or target date. Never plan on a best-case scenario when upgrading the software; do not be optimistic that no problem will occur at that point of time.
Always expect problems to occur and plan time to handle them. If the problems do not pop up, that is fine; the team will get extra time. If they do pop up, then we have already done the planning, and the planning will provide time to handle the problems. This is how we specify when to upgrade which software.
661
(Refer Slide Time: 24:48)
Then, identify the support. The steps are as follows: identify the support needed from each group of persons or stakeholders. Then, specify when the support will be required, at what point of time, so that the project can make progress without delay.
Then, specify how the support will occur: whether the staff will be available physically, via phone, by email, etcetera. Then gain commitment from each group for the support required; for example, you may get a verbal commitment, a contract, or a commitment letter. Finally, maintain a good relationship with the support staff; unless you do, it is very difficult to complete the project.
662
(Refer Slide Time: 25:43)
Then, after identifying the resources, we have to prepare a list called the resource requirement list. The first step in producing a resource allocation plan is to prepare a resource requirement list, which contains the different resources that will be required along with the expected level of demand. The preparation of the resource requirement list can be done by considering each activity present in the precedence network; I hope you have already studied precedence networks, or activity networks, in project scheduling.
So, the resource requirement list can be prepared by considering each activity present in the precedence network and identifying the resources required. We have already seen the different types of resources. Now we will prepare a sample resource requirement list from a given activity network or precedence network.
As I have told you, some resources are not activity specific but are part of the whole project infrastructure. For example, the project manager is required for the full duration of the project, or is required to support some other resources, etcetera.
663
(Refer Slide Time: 26:57)
This is a sample precedence network. The development of precedence networks, or activity networks, we have already seen in project scheduling. Here the different stages are given, along with the earliest start time, earliest finish time, latest start time and latest finish time.
The float, that is, the free time, is also given. Float means the amount of time by which the activity can be delayed without the project completion date being affected. In this way a precedence network is shown, where the different activities are given, such as specify the overall system, then design module A, design module B, design module C and design module D, and similarly coding and other activities are mentioned in the diagram.
Since you already know this from project scheduling, I am not discussing its details. My objective is to show how to create a resource requirement list from this given precedence network.
664
(Refer Slide Time: 28:11)
Looking at this, we will prepare the list as follows. The resource requirement list contains items like the stage, the activity, the resource, for how many days the resource is required, the quantity if any, and any notes or remarks.
As I have already told you, whatever the stage or activity, for all stages and activities we require a resource called the project manager. He is needed for the whole project, and since the total duration of the project is 104 days, we have written that he must be available for 104 days full time.
Similarly, we record the workstations: for stage one, for all activities, a number of workstations are required, and for those workstations we must also check whether the associated software is available or not. Then comes activity 1. For stage 1, activity 1, we require one senior analyst, and for how many days? As shown there, 34 days; he may be required for 34 full days. Then we can go to stage 2. For stage 2 we require some
665
workstations as resources, say maybe 3 workstations, and for activity 2 you can see how many days are required; I think it is 20 days.
So, for activity 2 we require an analyst or designer for 20 full-time days. Like this you go through all the stages and all the activities, identify the different resources required, for how many days and in what quantity, and write any notes or remarks. In this way, you can prepare the sample resource requirement list.
So, this sample requirement list has been prepared using the given precedence network. In this way we can prepare the resource requirement list from a given precedence network. This is the first step: before resource allocation, you have to prepare the resource requirement list by using the given precedence network.
666
(Refer Slide Time: 31:13)
So, today we have discussed the different categories of resources. We have explained the identification of resource requirements, covering the different types of resources such as labour, time, money, equipment, etcetera. We have also presented the steps for identifying the hardware, software and supporting staff, and discussed how to prepare a resource requirement list from a given precedence network. So, today we have seen the basic concepts, or at least the first step, of resource allocation. These are the references we have taken.
667
Thank you very much.
668
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 37
Resource Allocation
(Contd.)
Good morning. Now let us take up the next phases of Resource Allocation.
We have already seen the identification of resources in the previous class. Now we will see the scheduling of resources; resource histograms; how to smooth the resources, which is known as resource smoothing; then the resource classes and how to prioritize the different activities.
669
Let us first discuss scheduling the resources. I have already told you that the first step in resource allocation is preparing a resource requirement list. After preparing it, the next stage is to map this list onto the activity plan. Why? To assess the distribution of resources required over the duration of the project.
This mapping can be done by representing the activity plan as a bar chart; then, using this bar chart, we can produce another diagram called a resource histogram for each resource. In the next figure I have taken an example which illustrates an activity plan as a bar chart and the corresponding resource histogram for the analysts and designers.
Normally, in this mapping, each activity is scheduled to start at its earliest start date. I have already told you that in the activity network there are four important values: earliest start, earliest finish (or earliest completion), latest start and latest completion.
In this kind of scheduling, each activity is scheduled to start at its earliest start time. Let us see what the problem with earliest start date scheduling is. When you use earliest start dates for scheduling, you will see that it frequently creates resource histograms that start with a peak and then tail off; we will see how many ups and downs there will be.
670
For the previous example, from the activity network or precedence network we have taken, the project manager can create a bar chart in which the week numbers are written on the x axis and the different activities are listed, like specify the overall system, then specify module A, specify module B, etcetera, then design module A, design module B, and so on. The white space shows the scheduled time, and the shaded portion is known as the total float. Total float is a measure of how much free time, or slack time, is available, so that the activity can be delayed a little without the completion date of the whole project being affected; float means the slack time, the free time available to the activity.
You can see that for specify module A, the shaded portion is slack; it is free. That means its starting time can be delayed a little bit without affecting the completion of the project, because this free time can be absorbed so that the activity still finishes in proper time. So, for the previous diagram, this is the sample bar chart, where the week numbers are written on the x axis and the different activities are taken on the y axis; each activity's duration is divided into two parts, the scheduled
671
time and the float time. Float time is the free or slack time available, by which the activity can be delayed by some amount without the whole project duration or completion date being affected.
The upper part is the bar chart and the lower part is called the histogram; I hope you have already studied histograms, maybe at the matriculation or plus-2 level. From this sample bar chart we have prepared the histogram. On the x axis we again have the weeks, and on the y axis we have the number of developers, maybe the number of system analysts or programmers.
We are now scheduling the various activities and estimating how much manpower will be required. This scheduling is based on the earliest start date; we have assumed that each activity starts at its earliest start time. With this kind of scheduling you get graphs like this, with so many ups and downs: there is a peak, then suddenly it falls; again there is a peak, then again it falls.
So it is not uniform, it is not even; this is the problem of scheduling based on earliest start time. As I have told you, this figure illustrates the example activity plan as a bar chart and the
672
histogram for the analysts and designers. The upper part is the bar chart; the lower part is the histogram, as I have already told you.
Here each activity has been scheduled to start at its earliest start date. The problem with earliest start date scheduling, as I have already told you, is that it frequently creates resource histograms that start with a peak and then tail off: it starts with a peak, then falls down, again rises to a peak, then falls down. It is not uniform, it is not even. So, this is the problem with earliest start date scheduling.
Changing the level of resources on a project over time, particularly the personnel or manpower, generally adds to the cost of the project. Why? Because recruiting staff has a cost, and even when staff are transferred internally, some time is required for familiarization with the new project environment, which also adds cost.
We will see that the resource histogram shown in the previous figure, which I have already shown you, poses some kinds of problems. Let us see what kind of
673
problems it faces? It leads to what kind of problems? It poses what kinds of problems?
Some analysts or designers may sit idle for some days ok.
Let us see between the specification and design stage of the histogram, you see here. So,
this is the specification phase of 2 and then, they started the design. You can see some
people are idle during this see here you take this example. So, initially four people were
required 1 2 3 4. Now, you see after this what analysis phase is over; now only 2
designers are used, another 2 designers are sitting idle. How many weeks? 2 weeks 2
designers are sitting idle.
So, similarly here only 1 is used. Then, I mean in 3 designers or 3 analysts, they are
sitting idle and similarly here. So, this is the problem due to or since we have used the
scheduling based on the earliest start date scheduling that is I have already told you the
resource histogram shown in the previous figure poses some problems. The problems are
due to the fact that we have used what kind of thing, we have used? The scheduling
based on the earliest start date scheduling. Here, some analyst or designers may sit idle
for some dates.
For example, between the specification the design phases stages of the histogram as I
have told you. Initially, 4 system designers or this analyst were used; then, 2 are used and
the 2 are sitting idle. Then, 1 is in the job and 3 are sitting idle. So, what will happen that
it is it may not be possible that these idle persons, they will be required in another project
requiring their skills for exactly those particular periods of time.
So, what will happen? So, this idle time then even if they sit idle, but their salary etcetera
so that the idle time may also be charged to the project. So, that is why what we require?
We have to smoothen this because see there are so many peaks and the ups and downs.
So, if you can make it even we can make it uniform, then probably this idle time will be
less.
So, now let us see how we can minimize the idle time; what should be the idle resource
size histogram? The idle resource histogram should be smooth, should be nearly even
with an initial build up and may be staged run down later on.
674
You can see an example here: this is a resource histogram which is not even, with so many ups and downs and peaks; it is a sample histogram which is neither even nor uniform. So, what do we have to do? We have to smooth it. We call this process resource smoothing, and it is done so that the idle time is minimized.
Why is resource smoothing needed? It is usually difficult to get specialist staff who will work only odd days to fill in gaps; we cannot keep everyone busy all the time. The staff also need some time to learn about the application. So, staff often have to be employed for a continuous block of time.
Since staff often have to be employed for a continuous block of time, it is desirable to employ a constant number of staff on a project. A good project manager will always try to employ a constant number of staff on a project, who as far as possible are employed full time. Hence, there is a need for resource smoothing.
That is why there is a need for resource smoothing, and here is an example. In the previous example you saw how many ups and downs there are; it is not even, and here one person is sitting idle, and similarly others are sitting idle elsewhere. So, we have to make it uniform and even; the histogram has to be made smooth. After smoothing, it is more or less even and you can see that the idle time is very small.
So, this is the resource histogram obtained by smoothing the previous example. The histogram has been smoothed here; now let us see how we can smooth resource histograms.
Before smoothing, let us see another problem with an uneven resource histogram: it may call for a level of resources beyond those available. Say 10 resources are available, but this kind of uneven histogram may require 15 people. So, what do we do?
The next figure shows how, by adjusting the start of some of the activities and splitting others, the resource histogram can be smoothed so that the resource demand is contained within the available levels, subject to constraints such as precedence requirements. So, the need now is to smooth the histogram; the question is how to do it.
The histogram can be smoothed using two important methods. First, by adjusting the start dates: we will not necessarily start from the earliest start date; wherever some slack or float time is available, we take advantage of it and delay the starting time a little. Second, the activities need not always be carried out continuously; wherever possible, we may split some of the activities.
By using these two methods, that is, by delaying the start date a little and by splitting some of the activities, the resource histogram can be smoothed to contain the resource demand within the available levels: we will use only as much manpower as is actually available. Of course, this is subject to constraints such as the precedence requirements, which must also be satisfied.
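As a rough illustration of the first method, delaying activities within their float, here is a small sketch that is not from the lecture; the data and the simple greedy rule are hypothetical, and real planning tools use more sophisticated heuristics.

# Sketch: greedy smoothing by delaying activities within their float.
# Each activity: (earliest_start, duration, staff, total_float). Hypothetical data.

activities = {
    "A": (0, 3, 2, 0),   # critical, no float: cannot be moved
    "B": (0, 2, 2, 4),
    "C": (2, 3, 1, 3),
}

def histogram(schedule):
    horizon = max(s + d for s, d, _ in schedule.values())
    demand = [0] * horizon
    for s, d, staff in schedule.values():
        for week in range(s, s + d):
            demand[week] += staff
    return demand

def smooth(acts):
    # Start from the earliest-start schedule, then try delaying each
    # non-critical activity within its float whenever doing so
    # lowers the peak weekly demand.
    schedule = {k: (s, d, staff) for k, (s, d, staff, _) in acts.items()}
    for name, (s, d, staff, flt) in acts.items():
        for delay in range(1, flt + 1):
            trial = dict(schedule)
            trial[name] = (s + delay, d, staff)
            if max(histogram(trial)) < max(histogram(schedule)):
                schedule = trial
    return schedule

before = {k: (s, d, staff) for k, (s, d, staff, _) in activities.items()}
print("before:", histogram(before))            # peaked profile
print("after: ", histogram(smooth(activities)))  # flatter profile, same float limits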
Now take the first example we considered; you can see it looks very uneven, it is not smoothed. We will take the same example, and here you will see that many activities have float, that is free or slack time, shown as the dotted portion. So, instead of starting at the very beginning, we can delay the start date a little and the project completion date will not be affected, because some free time is available. In this way we can meet the resource requirement and make the resource histogram even or uniform. We will take the same example and show it here.
Here you can see the different activities, like A, B, C, D, and a sample histogram; suppose, for a project, the testers available are as shown. You can see this histogram is not smoothed properly; it is not even, and many times the resources, the manpower, are simply sitting idle. That is why we want to smooth it.
How can we smooth it? I have already told you we can follow two important methods: we can delay the start date wherever float or slack time is available, and we can split some of the activities, so that the resource histogram can be smoothed. Here, the different letters in the figure, A, B, C and so on, represent staff working on a series of distinct module testing tasks.
For example, 1 person is working on task A, 2 persons are working on task B, 2 persons are working on task C, and so on. Now, by delaying some of the activities and by splitting some of them, this resource diagram which was uneven has been made even. Which activities? You can see that some of the activities start a little late, where free time is available; and let us see which tasks are split.
You can see that C is split: it runs up to this point, and after some weeks C is started again. Then D: you can see it is performed in these weeks, there is a break, and it is performed again later. So, activities or tasks C and D are split. This histogram shows the demand for staff before smoothing, and this is the histogram after smoothing.
The histogram is smoothed by delaying some of the activities where total float is available, and by splitting some of the activities or tasks, such as C and D. The histogram obtained is much more even, much more uniform than the previous one. There is another advantage: before smoothing, how much manpower did we require? You can count 9 people, but only 5 staff members, A, B, C, D and E, are available. It required 9 staff members, but after smoothing the requirement is only 5.
So, now only 5 staff members are required, and the resource scheduling is performed using just those 5 staff members, with no extra staff. This is the advantage of resource smoothing.
(Refer Slide Time: 20:25)
In this figure, the original histogram was created by scheduling the activities at their earliest start dates. As I have told you, earliest start date scheduling gives the resource histogram its typical peaked shape, and here it calls for a total of 9 staff, whereas only 5 are available for the project. You can count them: 1 for A, 2 for B, 2 for C, and so on.
So, 9 staff members are required, but only 5 are available; that is the problem with an uneven resource histogram. As I have already told you, we can smooth it by 2 techniques: one is delaying the start of some of the activities a little, where float is available, and the other is splitting some of the activities.
By delaying the start of some of the activities, it is possible to smooth the histogram and reduce the maximum level of demand for the resource; here you can see that after smoothing we require only 5 people, 5 designers or 5 testers.
And, as I have already told you, in this process we have to revise the precedence network. We have to delay activities slightly from their earliest start dates where total float is available. The original precedence network was given earlier, and this is the revised precedence network, where the earliest start dates have been slightly delayed wherever float is available.
Please note that some activities such as C and D have been split. As I told you in the last example, C is not continuous here: it is delayed and split, other activities follow, and then C resumes; similarly D is split and taken up in other weeks. So, here activities C and D are split.
Notice that where non-critical activities can be split, they provide a useful way of filling troughs in the demand for a resource. Please remember not to split the critical activities; only the non-critical activities should be split, and splitting them helps to fill the troughs and make the histogram of resource demand even.
But in software projects, it is very difficult to split tasks without increasing the time they take; that is another problem. Some of the activities may call for more than 1 unit of the resource at a time. For example, activity F requires 2 persons for 2 weeks; if you have less manpower, this can be replaced by 1 person working on activity F for 4 weeks.
Of course, we have not shown it here, but that is another possibility: activity F required 2 programmers each working for 2 weeks, but it is possible to reschedule this activity to use only 1 programmer over 4 weeks.
Next, let us talk about resource clashes. A resource clash occurs when the same resource is needed in more than 1 place at the same time. This problem can be resolved by following one of these approaches.
First, you may delay one of the activities so that it waits for the resource to be freed, and the resource can then be allocated to it. You may take advantage of the float to change the start date; as I have already told you, we can use the float or free time so that the start date can be delayed a little.
The second possibility is delaying the start of one activity until the other activity that the resource is being used on has finished; but this will put back the project completion, that is, the project completion might be delayed.
Another possibility is moving the resource from a non-critical activity: if a critical activity is affected, we can take away, or migrate, some of the resources from a less critical or non-critical activity to the critical activity. In this way too the resource clash can be resolved.
The last topic is prioritizing the different activities. In practice, resources are allocated to a project on an activity-by-activity basis; normally, in a real-life project, the project manager allocates the resources activity by activity. But finding the best allocation is very time consuming and difficult.
Allocating a resource to one activity limits the flexibility for resource allocation and scheduling of other activities: once you have allocated a resource to one activity, you no longer have the flexibility to allot that resource to another activity and schedule the other activities freely. So, there is a need to prioritize the different activities based on their importance, so that the resources can be allocated to the competing activities in some rational order. The priority must always be to allocate resources to critical path activities.
So, topmost priority must be given to allocating resources to the critical path activities. The next priority goes to the activities which are most likely to affect other activities, and the least priority goes to the activities that are least important. In this way the lower priority activities are made to fit around the more critical, already scheduled activities.
So, we first schedule the highest priority activities. Some time gaps then remain around the critical activities, and those gaps or free times can be filled by the lower priority activities during scheduling.
There are two main ways of prioritizing the activities: one is total float priority, and the other is ordered list priority.
In total float priority, we first have to recall what total float means. As I have already told you, total float is a measure of how much the start or completion of an activity may be delayed without affecting the end date of the project, that is, how much free or slack time is available. So, the float tells us by how much time the start or completion of an activity can be delayed without affecting the end date or the completion date of the project.
In the total float priority method, the activities with the smallest float, 0 days, 1 day, 2 days of float and so on, have the highest priority, because they have little or no free time; if you delay them, then obviously the project completion date will be delayed. So, in this method the activities with the smallest float are given the highest priority, and the activities are allocated resources in ascending order of their float.
So, in this method you arrange the activities in ascending order of their floats. As scheduling proceeds, some of the activities may be delayed because the resources are not available at their earliest start dates; this delay consumes part of the total float, and hence their total float is reduced. So, it is desirable to recalculate the floats, and hence you have to reorder the list, each time an activity is delayed: the float changes as it is consumed, and the list may change, so you have to reorder the list.
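A minimal sketch of this ordering, with hypothetical activities and float values, might look as follows.

# Sketch: total float priority - allocate resources to activities
# in ascending order of their total float (hypothetical data).

activities = [
    {"name": "design",      "float_days": 0},   # critical: highest priority
    {"name": "testing",     "float_days": 2},
    {"name": "user manual", "float_days": 7},
]

def total_float_order(acts):
    # Smallest float first; ties keep their original relative order.
    return sorted(acts, key=lambda a: a["float_days"])

for act in total_float_order(activities):
    print(act["name"], act["float_days"])

# As noted above, the floats should be recomputed and the list reordered
# whenever an activity is delayed during scheduling.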
That is how total float priority works. The next is ordered list priority. In this method, the activities that can proceed at the same time are ordered according to some simple criteria; one example is Burman's priority list. This method takes into account the duration of the activity as well as the float, whereas in the previous case we considered only the float.
In Burman's priority list, priority is given in this order: first priority goes to the shortest critical activities, then to the other critical activities. Next priority goes to the shortest non-critical activities, that is, the non-critical activities having the shortest duration; then to the non-critical activities with the least float; and finally, the lowest priority is given to the remaining non-critical activities.
So, this is how Burman's priority list works: first priority goes to the shortest critical activities, next to the other critical activities, next to the shortest non-critical activities, next to the non-critical activities with the least float, and the least priority goes to the remaining non-critical activities.
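One way such an ordering might be expressed in code is sketched below; this is not from the lecture, the tiering is a simplified interpretation of the criteria just listed, and the threshold and sample data are hypothetical.

# Sketch: Burman-style priority ordering (hypothetical, simplified).
# Tiers: 1) shortest critical, 2) other critical,
#        3) shortest non-critical, 4) other non-critical
#        (within each tier, shorter duration and smaller float first).

activities = [
    {"name": "A", "critical": True,  "duration": 2, "float": 0},
    {"name": "B", "critical": True,  "duration": 5, "float": 0},
    {"name": "C", "critical": False, "duration": 1, "float": 3},
    {"name": "D", "critical": False, "duration": 4, "float": 1},
]

SHORT = 2  # hypothetical threshold for what counts as a "short" activity

def burman_key(act):
    if act["critical"]:
        tier = 1 if act["duration"] <= SHORT else 2
    else:
        tier = 3 if act["duration"] <= SHORT else 4
    return (tier, act["duration"], act["float"])

for act in sorted(activities, key=burman_key):
    print(act["name"])   # prints A, B, C, D in priority order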
What are the alternatives to resource smoothing? Resource smoothing is not always possible, because deferring activities to smooth out resource peaks often puts back the project completion date; that is, the project completion date may be delayed. In this case the project manager needs to consider alternative solutions: he may have to increase the available resource levels, in which case more cost will be incurred, or he may have to alter the working methods.
Regarding resource usage, the project manager needs to maximize the percentage usage of resources; that is, he should allocate the resources in such a way that the idle periods between tasks are reduced. There is a need to balance the cost against an early completion date, and also to allow for some contingency in hand.
Scheduling the resources can create new critical paths, because delaying the start of one activity for lack of a resource will cause that activity to become critical if it uses up its float. Further, a delay in completing one activity can delay the availability of the resources required for a later activity; if the later one is already critical, then the earlier one might now have been made critical by this resource link. So, while scheduling the resources, it is possible to create new critical paths, because the slack or float periods are changing.
Scheduling the resources can also create new dependencies between the activities. Even so, it is better not to add dependencies to the activity network to reflect resource constraints. Why? Because it will make the activity network very messy, and a resource constraint may disappear during the project while its link still remains in the network. So, do not add dependencies to the activity network to reflect the resource constraints; rather, amend the dates on the schedule to reflect them.
So, we have seen the scheduling of resources, the resource histogram and resource smoothing. We have also seen the fundamental concept of resource clashes, and we have discussed the methods for prioritizing activities: two methods, one based on total float and the other on an ordered priority list. These are the references.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 38
Resource Allocation (Contd.)
Good afternoon. Let us see the remaining portion of Resource Allocation. In the last class, we saw that there are different outcomes of resource allocation: the activity schedule, the resource schedule and the cost schedule. Last class we discussed the activity schedule and the resource schedule; today the main focus will be the cost schedule. We will see how to allocate resources to individuals, how to publicize the resource schedule and in what forms, then the cost schedule, and finally the scheduling sequence.
So far we have concentrated on trying to complete the project by the earliest completion date with the minimum number of staff. As I have already told you, the resource scheduling was based on the concept of earliest start date and earliest completion date; we have tried to complete the project by the earliest completion date with the minimum number of staff. Doing this places constraints on when the activities can be carried out, and it may also increase the risk of not meeting the target dates.
There are some alternatives: the project manager can consider using more manpower, that is, additional staff, or he may lengthen or extend the overall duration of the project.
Of course, the additional cost of employing extra staff needs to be compared with the cost of delay: if we add some extra manpower, what will it cost; and if the delivery is delayed, how much will that cost and how much risk will be involved. All these things have to be compared. The additional cost of employing extra staff has to be compared with the cost of delayed delivery, as well as with the increased risk of not meeting the scheduled date.
Then the project manager can take the appropriate action: either he delays the delivery date, or, if it is so urgent that the target date must be met, he employs extra manpower so as to complete the project in time as per the prescribed schedule.
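As a tiny worked illustration of this trade-off, one might compare the two options as below; all the figures are hypothetical, and the added risk mentioned in the lecture is noted only as a comment because it is hard to price.

# Hypothetical trade-off: hire extra staff vs. accept a delayed delivery.

extra_staff_cost = 2 * 4 * 8000      # 2 extra programmers for 4 weeks at 8000/week
delay_weeks = 3
delay_penalty_per_week = 15000       # e.g. contractual penalty or lost business
delay_cost = delay_weeks * delay_penalty_per_week

if extra_staff_cost < delay_cost:
    print("Hiring extra staff is cheaper:", extra_staff_cost, "<", delay_cost)
else:
    print("Accepting the delay is cheaper:", delay_cost, "<=", extra_staff_cost)

# The increased risk of still missing the date is harder to express in money
# terms; that part of the comparison remains a judgment call.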
Now, let us see how the project manager can allocate resources to individuals. Allocating resources and smoothing the resource histogram are relatively straightforward where all the resources of a given type can be considered more or less equivalent. But in practice, all the resources cannot be considered equally important or equivalent.
When allocating labourers to activities, let us take a small example: suppose you are allocating labourers in a large building project, say the construction of a 50-storeyed building. There we need not distinguish among the individuals; we treat the labourers on a par, as equals so far as their skills and productivity are concerned. But this is not the case while developing software projects.
Because of the nature of software development, the skill and experience of different personnel cannot be treated as equal. Skill and experience play a significant role in determining the time taken and, potentially, the quality of the final product. So, depending on the situation, we have to assign persons with different experience to different jobs, tasks or activities. Usually the project manager should allocate individual members of staff to activities as early as possible, so that if some problem arises there is still some time in the manager's hands to take remedial action.
In allocating individuals to tasks, several factors need to be considered. Number one is availability: the project manager needs to know whether a particular individual will be available when required. You may require certain programmers now, but those programmers are busy developing some other project, so they are not available when you need them. That is one of the most important factors you have to take into account: availability.
Whenever you as a project manager need a particular individual, you should know whether he or she is available or not. That is why, sufficiently in advance, you have to tell the central pool, for example, that you require 5 programmers, so that those 5 programmers can be employed on your project in due course. The project manager needs to know whether a particular individual will be available when required, or whether they are busy on some other project and cannot be allotted to your project immediately. The next factor is criticality: allocation of more experienced personnel to activities on the critical path often helps in shortening the project duration, or at least reduces the risk of overrun.
So, identify the more critical activities; as a project manager you should allocate more experienced personnel to those critical activities. This will help in shortening the project duration and will also reduce the risk of overrun.
Another factor which may affect the allocation of individual personnel to different tasks is risk. Identifying the activities which pose the greatest risks, and knowing the factors which influence them, will help the project manager to allocate the staff to the different tasks or activities.
The rule is to allocate the most experienced staff to the highest risk activities. As a project manager you should allocate the most experienced staff to the highest risk activity, because this has the greatest effect in reducing the overall project uncertainties: the risk and the uncertainties will be reduced if you allot the most experienced staff to the highest risk job. But, as you know, more experienced staff are usually more expensive; they command higher salaries or charges.
Another factor that may affect the allocation of individuals to tasks is training; it will benefit the organization if junior staff are allocated to non-critical activities. For the very critical, very important activities we must allot senior and experienced staff; but for activities that are not so critical, the non-critical or ordinary activities, we can allocate junior staff members.
Because there will be sufficient float, slack or free time in those activities, the junior staff members can learn, train and develop their skills. There can even be a direct benefit to the particular project, since some of the cost may be allocated to the training budget. So, training is another factor to consider while taking the decision regarding allocation of individuals to different tasks.
And the last factor is team building: the selection of individuals should also take into account the final shape of the project team, what its structure will look like and how its members will work together. You know that there can be different team structures, such as the chief programmer team structure, the democratic team structure or the mixed control team structure.
In the chief programmer team structure there is a single programmer as the chief, the boss; under him some programmers work, and under each individual programmer some subordinate staff work. The subordinate staff report to their programmer, and in turn the programmers report to the chief programmer. This is one type of structure, called the chief programmer team.
Similarly, another one is the democratic team structure, in which each developer or programmer can communicate with every other developer; there is no restriction that they must report to a single boss, and all can communicate among themselves. Another is the mixed control team structure, a hybrid structure which combines the features of both the chief programmer team and the democratic team.
Each team structure has its own advantages and disadvantages, and the mixed control team structure is followed in order to get the advantages of both. That is why, while allocating individuals to different tasks, you must look at the team structure in which the individual will work: whether it is a project based on a chief programmer team structure, a democratic team structure or a mixed control team structure. Based on this team structure, or the final shape of the project team, you should decide which individual to allocate to that particular project. So, team building also plays an important role while allocating individuals to tasks.
So far we have seen how to allocate the resources: first we prepared the resource requirement list and then mapped it onto a resource histogram. Now let us see how we can publicize the resource schedule efficiently.
In allocating and scheduling the resources, we have used the activity plan or precedence network, the activity bar chart and the resource histogram. We have already discussed in the last class how to construct these three things for allocating and scheduling resources. But these are not the best way of publishing and communicating the project schedules.
Let us see an efficient way, perhaps the best way, of publishing and communicating the project schedules, which we call a work plan or work schedule. For publishing the project schedules, the project manager needs some form of work plan or work schedule. So, what is a work plan? A work plan is commonly published as either a list or a chart; one example I will show in the next slide.
Here is an example of a work plan or work schedule. We write down the resources, the manpower, that is the names of the developers, analysts or testers; then the activities they are supposed to do; then the week numbers, and under each week the individual days. For each activity, as you know, there are two parts, the scheduled time and the float or slack time, and both are shown. So, this is a better way of representing or publishing the resource allocation; work plans or work schedules are among the best ways of publishing the resource schedules.
Basically, we show the name of the employee or personnel, then the activities they are supposed to do, then the weeks and, under each week, the days, and against these we show the schedule and the progress. In this way the resource schedule can be shown in the form of a work schedule.
Now we have to see something about the cost schedule. We have already seen the activity schedule and the resource schedule; now it is time to discuss the cost schedule. Cost schedules normally show the weekly or monthly cost over the life of the project.
The weekly or monthly cost over the life of the project can be shown in the form of a figure or diagram that we call a cost schedule; this provides a more detailed and accurate estimate of the cost. It also serves as a plan against which the progress of the project can be monitored: if you want to monitor the progress of the project, you can use the cost schedule. Calculating the cost is normally straightforward where the organization has standard cost figures for different staff members and other resources. Where this is not the case, the project manager first has to estimate the costs and then prepare the cost schedule.
At the beginning of this course I already told you about the different types of costs classified in different ways: direct cost versus indirect cost, tangible cost versus intangible cost, fixed cost versus overhead cost; these we have already seen earlier. For software development, the costs are normally categorized as staff costs, overheads and usage charges. Staff costs normally include the staff salaries as well as other direct costs of the employees, such as the employer's contribution to social security, pension scheme contributions, holiday pay and sickness or medical benefits.
All of these come under staff costs, and they are commonly charged to the project at hourly rates based on weekly work records completed by the staff. Overheads represent the expenditure that an organization incurs which cannot be directly related to individual projects or jobs; they apply to all the projects that the organization undertakes. Examples are the rent you pay for the building or space, the interest charges if you have taken a loan from the bank, and the cost of service departments such as the human resources department. These cannot be earmarked for a particular project; they apply to all the projects undertaken by the organization, so they are all examples of overhead costs. Similarly, usage charges: in some organizations, projects are charged directly for the use of resources.
For example, if you are using a computer you may pay some amount per hour; if you are using the internet for some hours there is a charge; if you are using some online tools or online software you have to pay on a per-hour basis. These are costs charged based on usage. So, in some organizations, projects are charged directly for the use of resources such as computer time or internet usage, and these charges are normally made on an as-used basis. These are some of the costs that the software development organization may have to incur, and based on these costs you can prepare the cost schedule.
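As a toy illustration of turning these three cost categories into a weekly cost schedule, here is a short sketch; the rates, the overhead figure and the staffing profile are hypothetical, not taken from the lecture.

# Sketch: weekly cost = staff cost + fixed overhead share + usage charges.
# All figures are hypothetical, purely for illustration.

HOURLY_RATE = 50          # staff cost per person-hour
HOURS_PER_WEEK = 40
WEEKLY_OVERHEAD = 2000    # this project's fixed share of organizational overheads
USAGE_RATE = 5            # charge per hour of computer/internet usage

staff_per_week = [2, 4, 5, 5, 3, 1]            # week-by-week staffing
usage_hours_per_week = [10, 40, 60, 60, 30, 5]

def weekly_costs(staff, usage):
    costs = []
    for people, hours in zip(staff, usage):
        staff_cost = people * HOURS_PER_WEEK * HOURLY_RATE
        usage_cost = hours * USAGE_RATE
        costs.append(staff_cost + WEEKLY_OVERHEAD + usage_cost)
    return costs

print(weekly_costs(staff_per_week, usage_hours_per_week))
# Plotting this list against the week numbers gives a cost schedule of the
# kind shown in the next figure.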
This is an example of a cost schedule. On the x axis we have taken the weeks, and on the y axis the estimated weekly cost. First we have the overheads, which may be charged as a fixed amount: say the total overhead cost is 1 lakh and there are 5 projects running simultaneously, so the overhead may be divided equally among the five, coming to 20,000 per project. Normally this is charged at a fixed rate, which is why the overhead appears as a fixed amount here. Then, in this project, the staff cost is initially roughly constant; then, as more critical jobs come up and more manpower is required, the staff cost increases rapidly, and later it gradually comes down. So, for this cost schedule, the overheads and the staff costs are shown for the different weeks.
This figure shows the weekly costs over the 20 weeks that the project manager expects the project to take. It is a typical cost profile: the cost is initially almost constant, then it builds up slowly to a peak, and then it tails off quite rapidly at the end of the project. Towards the end the staff cost falls, because by that time we are at the verge of completing the project. So, this is how you can prepare a cost schedule, maybe a weekly cost schedule, for your organization, taking into account the staff costs and the overheads.
This is another example; here, instead of the weekly cost, we take into account the cumulative project cost. Again the x axis shows the week number, but instead of the weekly cost we have plotted the cumulative cost, which normally increases steadily as the number of weeks increases. So, this is another example of a cost schedule, plotting the cumulative project cost as a graph. These cost schedules can give an indication of whether your progress is on the right track as planned, whether the cost is exceeding your expectations, or whether it is as expected or below expectations; accordingly, you can take remedial action by looking at the cost schedule.
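Continuing the earlier hypothetical sketch, the cumulative cost curve is simply the running total of the weekly costs, for example:

from itertools import accumulate

# Hypothetical weekly costs, as in the earlier sketch.
weekly = [6000, 10000, 12000, 12000, 8000, 4000]

cumulative = list(accumulate(weekly))
print(cumulative)   # each entry is the total spend up to that week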
Now, let us see the scheduling sequence. We have already seen in the last class that we first have to prepare an activity plan or precedence network. Going from the ideal activity plan to the cost schedule can be represented as a sequence of steps. Let us see which steps you should follow from constructing the ideal activity plan to the cost schedule. Usually the project manager should start with the activity plan: first he should create or build the activity plan, and then this activity plan can be used as the basis for the risk assessment, just as this figure shows.
Initially you have to prepare the activity plan, a kind of graph, as we already did for our problem in the last class. This activity plan then serves as the basis for the risk assessment. Next, the activity plan and the risk assessment together provide the basis for the resource allocation: based on the activity plan you can create the resource requirement list, and from the resource requirement list and the risk assessment, considering both, you can prepare your resource allocation, as we have already shown. Then, from the resource allocation, you can prepare the final cost schedule.
So, this is the sequence you must follow to prepare the cost schedule starting from the activity plan: the project manager should start with the activity plan and use it as the basis for the risk assessment; the activity plan and the risk assessment then provide the basis for the resource allocation and the resource schedule; and from the resource allocation and the resource schedule you can prepare the final cost schedule.
But you can see that these arrows are bidirectional, which means each step affects the others. If there is a change in the activity plan, there will also be a change in the risk assessment, and when you are assessing the risks you might accordingly have to revise the activity plan. Similarly, based on the activity plan and the resource allocation you prepare the cost schedule; whenever there is some change in the activity plan or in the resource allocation, the cost schedule or the estimated cost will be different. And if you want to revise the cost, making it higher or lower, then accordingly you have to revise the activity plan and the resource allocation. That is what I want to say: these components, these steps, are dependent on each other, and if you make a change at any step, the other steps have to be revised accordingly.
(Refer Slide Time: 26:19)
That is why I say that a successful resource allocation often necessitates revisions to the activity plan: if you want to make a successful resource allocation, it might be necessary to revise the activity plan. And if you are revising the activity plan, then of course this will affect the risk assessment; you have to revise the risk assessment because it may be affected.
Similarly, the cost schedule might indicate the need to reallocate the resources or revise the activity plan, particularly where the schedule indicates a higher overall project cost than originally anticipated. What this means is that the cost you obtain from your previous activity plan, risk assessment and resource allocation plan is a very high value which you cannot afford; so what do you do? You have to reduce the cost. And if you want to reduce the cost, then effectively you have to revise the activity plan and the resource allocation, maybe by reducing some of the staff members. That is what I want to say here: the cost schedule might indicate a need to reallocate the resources or revise the activity plans, when the schedule indicates a higher overall project cost than originally anticipated.
If you cannot afford this higher overall project cost, you have to revise the cost schedule; and since you are revising the cost schedule, you also have to revise the activity plan and the resource allocation plan. You may have to reduce some of the resources used, and you might cut down some of the manpower as well.
The interplay between the plans and the schedules is complex; you can see that everything is interconnected with everything else. The links are bidirectional: every component affects the other components, and any change in one schedule will affect each of the others. While preparing the schedules, some factors can be directly compared in terms of money: the cost of having additional staff can be balanced against the cost of delaying the project end date.
If it is possible to delay the project's completion date a little, then we may not go for hiring additional staff; but if the project's completion date has to be met strictly, then we have to hire additional staff members. That is why some factors can be directly compared in terms of money; for example, the cost of hiring additional staff should be compared with the cost incurred if the project's completion date is extended or delayed.
Some factors, however, are very difficult to express in terms of money; for example, the cost of increased risk. If we do not meet the deadline, what is the risk: will some customers go away from our company and migrate to another company? How do we measure this kind of increased risk? For instance, nowadays everybody is going for computerization, and if we are still doing our activities manually there is a risk that the clients may migrate to other companies. It is very difficult to express the cost of such risks in terms of money, and it will include an element of subjectivity; these factors are subjective in nature and we cannot easily quantify them. So, some factors can be directly compared in terms of money, such as the cost of hiring additional staff balanced against the cost of delaying the project's end date; but other factors, such as the cost of increased risk, are difficult to express in terms of money and include an element of subjectivity.
There are many project planning software packages available. While good project planning software will assist greatly in demonstrating the consequences of changes and keeping the plans synchronized, successful project scheduling is largely dependent upon the skill and experience of the project manager: his skill and experience matter a lot in juggling the many factors which, as we have seen, are involved in project development. That is all about the cost schedule.
In this way, we have seen the different schedules involved in resource allocation. We have discussed the allocation of resources to individuals and the factors to be considered during allocation. We have also presented how to publicize, or publish, the resource schedule using a work schedule or work plan, which is a better way of publishing it. We have also presented how to represent the cost schedules, and we have explained the scheduling sequence, starting from building the activity plan through to the cost schedule.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 39
Project Monitoring and Control
Good afternoon. Now, let us take up a new chapter, Project Monitoring and Control. So far we have seen how to prepare activity schedules, resource schedules and cost schedules, and how we can represent or display the schedules, maybe in the form of a work plan or work schedule. Now, let us see how to monitor and control the progress of a project.
We will see a little introduction to project monitoring and control, then a cycle known as the project control cycle for controlling the progress of the project, then the project reporting structures, and then how to assess the progress of a project.
(Refer Slide Time: 01:15)
Once the work schedules have been published and the project has started, the project manager must pay attention to, must focus on, the progress. I have already told you how we can publicize the outcome of resource scheduling, maybe with the help of work schedules. So, once the work schedules have been published and the project has been started, attention must be given to the progress of the project.
How do we focus on the progress of the project? This requires monitoring what is happening in the project, comparing the actual achievement against the scheduled one, and, wherever necessary, revising the plans and schedules to bring the project as far as possible back on target. If something is proposed in the schedule and we are drifting away from it, then we have to revise the earlier plans so as to bring the project back to the target date.
(Refer Slide Time: 02:48)
Once the project is undertaken, the project manager has to start monitoring it. For project monitoring, the project manager designates some key events, such as the completion of some important activity, for example completion of requirement analysis, completion of design, completion of coding and so on; these may be set as milestones.
So, for monitoring purposes, the project manager designates certain key events, such as the completion of important activities like requirement analysis, design, coding or testing, as milestones.
(Refer Slide Time: 04:13)
Once a milestone is reached, the project manager can assume that some measurable progress has been made. If there is any delay in reaching a milestone, or it is predicted that there will be a delay in reaching a milestone, then corrective action has to be taken by the project manager. So, if you are expecting a delay in reaching a milestone, the project manager has to take remedial action; this may entail reworking the schedules: whatever schedules were prepared earlier may have to be revised and a fresh schedule produced, because some delay is expected.
(Refer Slide Time: 04:53)
There are some tools and techniques that are helpful for monitoring the progress and controlling the project. One such technique is PERT, which, as you have seen earlier, stands for Project Evaluation and Review Technique; this technique is especially useful in project monitoring and control. In a PERT chart there are several paths, and a path, as defined in graph theory, is any set of consecutive nodes and edges from the starting node to the last node; a path in the PERT chart is defined in the same way. Among all the paths there can be some that we call critical paths. How do we define a critical path? A critical path is a path along which every milestone is critical to meeting the project deadline; in other words, if any delay occurs along a critical path, then the entire project gets delayed. Therefore, it is very necessary to identify all the critical paths in a schedule.
(Refer Slide Time: 06:07)
There can be more than one critical path in a schedule. The tasks along the critical paths are called critical tasks. The critical tasks need to be very closely monitored and controlled, because if any of them is delayed it will miss its deadline, and hence the overall project completion will be delayed.
So, the critical tasks need to be closely monitored, and corrective action needs to be initiated as soon as any delay is noticed. Whenever the project manager predicts that some delay will occur, or some delay has already occurred, he has to initiate corrective actions to bring the project back on to the scheduled track. If necessary, the project manager may switch resources from a non-critical task to a critical task so that all milestones along the critical path are met.
So, if a critical task is getting delayed, as a remedial action the project manager can divert some of the staff allocated to non-critical tasks to the critical task. The non-critical tasks have some free time, that is, some slack or float; even if they are delayed a little, the delay is absorbed by the slack and does not affect the overall project completion date.
718
So, if you find that a critical task is getting delayed, then as a project manager you may switch some of the resources, some of the manpower, from the non-critical tasks to the critical task so that all milestones along the critical path are met.
So, several tools are available for project monitoring; we will look at some of them now. These tools can help you figure out the critical paths in an unconstrained schedule, but figuring out an optimal schedule with resource limitations and a large number of parallel tasks is a very hard problem.
There are several commercial products for automating the scheduling activity. For example, a popular tool that helps draw schedule-related graphs such as CPM and PERT charts is the MS-Project software, which is available on most personal computers.
719
(Refer Slide Time: 09:09)
So, in the previous classes we have discussed the importance of producing plans which can be monitored, and we have seen how to produce the different plans. Now we will discuss how information about the progress of the project is gathered.
We have already made the plan; but how far has the project progressed, and is it progressing according to the schedule mentioned in the plan? If not, what action do we have to take? That is what we will discuss now: how information about the progress of the project can be gathered, and what actions the project manager must take to ensure that the project meets its targets. If he expects some delay to occur, how should he take precautionary measures so that the delay can be avoided and the project can meet its target dates? That is the subject matter of today's class.
720
(Refer Slide Time: 10:11)
Let us see how to create the framework. Exercising control over a project and ensuring that targets are met is a matter of regular monitoring: finding out what is happening now and comparing it with the targets, that is, with what should have been achieved by this time as per the schedule.
There may be a mismatch between the planned outcomes and the actual ones. If they do not match, then replanning may be needed to bring the project back on target; some remedial actions have to be taken to bring the project back to its original target. If replanning cannot bring the project back on target, then the targets themselves have to be revised.
721
(Refer Slide Time: 11:14)
So, let us see a specific model of the project control cycle, that is, how to control the progress of a project. First, as the project manager you publish the initial plan you have made. Then you gather information about the current progress of the project, and you compare the progress against the targets mentioned in the plan.
Is the progress satisfactory, that is, is the project moving according to the plan? If the progress is not satisfactory, you have to take remedial action: revise the plan, publish the revised plan, and then again gather the current progress information and compare it against the targets.
This loop continues. Whenever you are satisfied that the progress is according to the schedule, you check whether the project is completed or whether some work is still left. If the project is completed, you declare the project ended; if not, you go back, gather the progress information again, and follow the loop once more.
After the project has ended, you review the project and document the conclusions, that is, you draw the inferences and note them down, because in future we may get similar types of projects, and the lessons learnt here will help us in executing similar projects. So, this is how a model of the project control cycle looks; in this way you can control the progress of a project. A compact sketch of this cycle is given below.
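The following is a compact, self-contained sketch, my own illustration of the cycle rather than the lecturer's figure: publish the plan, then monitor in a loop, replanning whenever progress is unsatisfactory, until the work is done. All the figures in the toy usage are invented.

```python
# A sketch of the project control cycle: publish, monitor, replan, repeat.
def control_cycle(plan, gather_progress, satisfactory, revise, complete):
    print("Publish initial plan:", plan)
    while True:
        progress = gather_progress()                  # gather current progress data
        if not satisfactory(plan, progress):
            plan = revise(plan, progress)             # remedial action: rework the plan
            print("Publish revised plan:", plan)
        elif complete(progress):
            break                                     # all work finished
    print("Review project and document conclusions")  # lessons for future projects

# Toy usage: a 10-unit project that slips once and needs one replanning step.
state = {"done": 0}
def gather():           state["done"] += 3; return state["done"]
def ok(plan, done):     return done >= plan["expected"] or done >= plan["size"]
def revise(plan, done): plan["expected"] = done; return plan
def finished(done):     return done >= 10

control_cycle({"size": 10, "expected": 4}, gather, ok, revise, finished)
```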
So, what I have explained is also written here: the project control cycle shows that, once the initial project plan has been published, project control is not a one-time job; it is a continual process of monitoring progress against the plan and, wherever necessary, revising the plan to take account of deviations. It also illustrates the important steps that must be taken after completion of the project: we have to review the project, document the conclusions, etcetera.
723
So, this framework also illustrates the important steps that must be taken after completion of the project, so that the experience and lessons gained in one project can feed into the planning stages of similar future projects; thus it allows us to learn from past mistakes.
In practice, the project manager is normally concerned with four types of shortfall. Number one, delays in meeting target dates: why is there a delay in meeting the target dates? Second, shortfalls in quality: is the quality we anticipated being maintained, or is there a shortfall? Third, inadequate functionality: is the functionality of the final product sufficient as per our plan, or is it inadequate? And finally, costs going over target: is the cost within what we planned in the cost schedule, or has it overrun? These are the four types of shortfall that a project manager is normally concerned with during development of a project.
724
(Refer Slide Time: 16:09)
Now, let us see the responsibilities: whose responsibility is what? The overall responsibility for ensuring satisfactory progress on a project is the role of the project steering committee, sometimes known as the project management board. The day-to-day responsibility rests with the project manager; as you know, for every project there is a project manager. Other routine aspects of the project can be delegated to the team leaders.
725
(Refer Slide Time: 16:49)
And this is how the reporting structure looks. The steering committee looks after the overall progress of the project. Under the steering committee the project manager works, and under the project manager there are several team leaders, each with other personnel working under them.
For example, under one team leader are the staff of the analysis and design section; under another team leader are the programmers of the programming section; under another, the testers of the quality control section; and under another, the user documentation staff.
The bottom-level employees report to their team leaders; the team leaders prepare summaries and report to the project manager; the project manager adds his comments, revises the reports given by the team leaders, and sends them to the steering committee; and the steering committee forwards these reports to the clients. Sometimes the project manager may also communicate the progress reports directly to the client.
So, this is how the reporting structure looks: the bottom-level personnel report to the team leaders, the team leaders report to the project manager, and the project manager reports to the steering committee.
As I have already told you, the previous figure illustrates the typical reporting structure found in medium and large projects. For small projects, with half a dozen or fewer staff, the individual team members usually report directly to the project manager. In most cases, however, the team leaders collate reports on their sections' progress and forward summaries to the project manager. These, in turn, are incorporated into the project-level reports for the steering committee and, via them or in some cases directly, into the progress reports for the client. So, the progress reports for the client may be sent directly by the project manager or via the steering committee.
727
(Refer Slide Time: 19:37)
So, reporting could be oral or written; let us see the different kinds of reporting. We have already shown the reporting hierarchy, that is, the reporting structure. Now, how do the staff members report? Reporting can be oral or written, formal or informal, and regular or ad hoc. Informal communication is also necessary and important, but wherever informal reporting of project progress is followed, it must be complemented by formal reporting procedures.
728
These are the categories of reporting:
- Oral, formal, regular: for example, weekly or monthly progress meetings. Although the reports are oral, formal written minutes should also be kept.
- Oral, formal, ad hoc: for example, end-of-stage review meetings.
- Written, formal, regular: for example, job sheets, progress reports, etcetera.
- Written, formal, ad hoc: for example, exception reports, change reports, etcetera.
- Oral, informal, ad hoc: for example, discussions in the canteen, social interaction, etcetera.
So, these are the different categories of reporting.
Now, let us see how, as a project manager, we can assess the progress of a project. Some of the information used to assess project progress should be collected routinely, perhaps weekly or monthly, while other information will be triggered by specific events. For example, when the software requirements specification (SRS) document is complete you may have to collect some information, and when the design is complete you may collect some other information; whenever such a specific event occurs, you collect the corresponding information.
Wherever possible this information should be objective and tangible: for example, whether or not a particular report has actually been delivered. Please try to make the information objective and tangible as far as possible.
It is essential to set a series of checkpoints in the initial activity plan. What do we mean by a checkpoint? Checkpoints are predetermined points at which progress is checked.
Checkpoints can be of two types: event driven and time driven. For an event-driven checkpoint, the check takes place when a particular event has been achieved, for example the production of some report or other deliverable, or the completion of coding or testing; when the event is completed, you check whether you have achieved whatever is mentioned in the schedule.
The other type of checkpoint is time driven: here the date of the check is predetermined, that is, the checks happen regularly, perhaps on a weekly or monthly basis. These are known as time-driven checkpoints; a tiny illustration of representing the two kinds of checkpoint is given below.
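The following is a small, illustrative sketch only; the dates and event names are invented, and the representation is an assumption rather than anything prescribed in the lecture.

```python
# Representing event-driven and time-driven checkpoints (hypothetical data).
from datetime import date

checkpoints = [
    {"type": "event-driven", "trigger": "SRS document delivered"},
    {"type": "event-driven", "trigger": "design complete"},
    {"type": "time-driven",  "trigger": date(2024, 4, 5)},    # regular weekly check
    {"type": "time-driven",  "trigger": date(2024, 4, 12)},
]

def is_due(checkpoint, today, events_achieved):
    """A checkpoint is due when its date has arrived or its event has been achieved."""
    if checkpoint["type"] == "time-driven":
        return checkpoint["trigger"] <= today
    return checkpoint["trigger"] in events_achieved

due_now = [c["trigger"] for c in checkpoints
           if is_due(c, date(2024, 4, 6), {"SRS document delivered"})]
print(due_now)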
Now, let us see the frequency of reporting: how frequently should the reporting be done? The frequency of progress reports will depend on the size of the project and on its degree of risk. If it is a real-time or safety-critical system where the risk is high, the reporting will be very frequent, perhaps every day; if it is a very large project, again the progress may have to be checked very frequently.
As a rule, the higher the management level, the less frequent and the less detailed the reporting needs to be, and the longer the gaps between the checkpoints; high-level management is not interested in detailed reports, they will only see a summary.
731
At lower levels of management the reporting is more frequent and the reporting documents are more detailed. For example, the team leaders may assess progress daily; we have already seen the reporting hierarchy. At the bottom level, the team leaders may require reports every day, whereas the project manager may require a report every week or so, and the steering committee may require reports every month. So, the higher the management level, the lower the frequency of reporting and the less detailed the reports.
As I have already told you, the higher the management level, the less frequent and the less detailed the reporting needs to be; in other words, the longer the gap between the checkpoints. The team leaders may assess progress daily, the project manager weekly, and the steering committee monthly.
Project-level progress reviews generally take place at particular points during the life of a project, perhaps every month or so; these points are commonly known as review points or control points. So, this is how the frequency of reporting works: at the bottom of the management hierarchy the reporting is frequent and detailed, and as we move towards the higher levels of management the reporting becomes less frequent and less detailed.
So, this is about how frequently reporting should be done in a project organization.
732
(Refer Slide Time: 27:57)
So, to conclude, we have discussed the importance of and the need for project monitoring and control in project development. We have also explained the project control cycle, including what we should do after the project is completed, such as the project review and the documentation of conclusions, from which we can gain lessons and experience that will help us in handling similar projects in future. We have also presented the project reporting structures and the frequency of reporting, and we have explained how to assess the progress of a project. This is what we have seen in today's class.
733
(Refer Slide Time: 28:45)
734
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 40
Project Monitoring and Control
(Contd.)
Good afternoon to all of you. Now let us start with some other aspects of Project Monitoring and Control. We will first see how to collect the progress details of a project, then a technique called partial completion reporting, and then the project review.
735
(Refer Slide Time: 00:39)
So, there is a need to collect data about the progress details: the achievements that have been made, the costs that have been incurred in the project, and so on. But a big problem is how to deal with partial completions, that is, the 99 percent complete syndrome: some activities always appear to be 99 percent complete, with 1 percent forever remaining.
There are some possible solutions to this problem. Number one, we should have controls on the products, not on the activities. Similarly, if there are large, complex activities, we should subdivide them into a number of sub-activities.
736
(Refer Slide Time: 01:39)
Now, let us see how to collect the data. Normally the managers should try to break down long, complex activities into more controllable tasks of one or two weeks' duration. Still, it will be necessary to gather information about partially completed activities, in particular a forecast of how much work is left to be completed; but it is very difficult to make such a forecast accurately. Where there is a series of products, partial completion of an activity is easier to estimate.
737
So, where there is a series of products, partial completion of an activity is easier to estimate. For example, counting the number of record specifications or screen layouts produced can provide a reasonable measure of progress: if you know the total number of records to be specified and how many have been specified so far, or the total number of screen layouts and how many have been produced, that gives a reasonable measure of the progress. In some cases intermediate products can be used as in-activity milestones. For example, the first successful compilation of a program, even though it is only an intermediate result and not the completed product, can be treated as a milestone.
So, now let us see partial completion reporting: we have to collect data about the partially completed activities and report their status. Many organizations use standard accounting systems with weekly timesheets to charge staff time to individual jobs; the weekly timesheet records how much of each staff member's time is booked against each job and hence what is to be paid. The staff time booked to a project indicates the work carried out and the charges to the project, but it does not tell the project manager what has been produced or whether the tasks are on schedule. That is why it is common to adapt or enhance the existing accounting data collection systems to meet the needs of project control.
So, let us see how these weekly timesheets look. The weekly timesheets are frequently adapted by breaking the jobs or tasks down to activity level, and they require information about the work done in addition to the time spent.
739
In the next slide you will see an example of a weekly timesheet. This figure shows an example of a report form requesting information about the likely slippage of completion dates as well as estimates of completeness. In the timesheet you can see the staff name and the week for which it is being prepared. There are mainly two types of charged hours, rechargeable hours and non-rechargeable hours. Then, for each project the staff member has worked on, the form lists the activities performed, the corresponding activity codes and descriptions such as coding and documenting, and the number of hours worked on each activity this week.
For example, on the coding activity he has spent 12 hours, the percentage of work completed is 30 percent, the scheduled completion date is 24 April, and the estimated completion date is also 24 April. For the next activity, documentation, he has worked 20 hours this week and 90 percent is complete; as a result, although the scheduled completion was 6 April, the estimated completion is two days earlier, 4 April, because 90 percent of the work has already been done. The total rechargeable hours come to 32. There are also some non-rechargeable hours, 8 hours in this week, for example for time taken in lieu. So, the rechargeable hours are 32; if you know the price per hour, multiplying by 32 gives the amount to be charged for John Smith's work this week. This is how a timesheet looks.
Other reporting templates are also possible and are in use. For example, rather than asking for an estimate of the percentage complete, some managers prefer to ask for the number of hours already worked on the task and an estimate of the number of hours needed to finish it. Instead of saying that 30 percent is complete and 70 percent is left, or that 90 percent is complete and 10 percent is left, the staff member reports how many hours have been spent and how many hours are still required to finish the task; a small sketch of this style of reporting is given below.
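The sketch below is my own and is only loosely modelled on the John Smith example in the slide: each task reports hours already spent and an estimate of hours still needed, from which the percentage complete and the rechargeable hours can be derived.

```python
# Hours-based progress reporting (hypothetical entries).
timesheet = [
    {"activity": "Coding module X",      "hours_spent": 12, "hours_to_finish": 28},
    {"activity": "Documenting module Y", "hours_spent": 20, "hours_to_finish": 2},
]

for entry in timesheet:
    total = entry["hours_spent"] + entry["hours_to_finish"]
    percent_complete = 100 * entry["hours_spent"] / total
    print(f"{entry['activity']}: {percent_complete:.0f}% complete")

rechargeable_hours = sum(e["hours_spent"] for e in timesheet)
print("Rechargeable hours this week:", rechargeable_hours)   # 32, as in the sample sheet
```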
These are alternatives to the weekly timesheet form shown earlier. Now, let us see a particular technique, the red-amber-green reporting method, for dealing with this partial completion reporting problem.
741
(Refer Slide Time: 08:47)
One popular way of overcoming the objections to partial completion reporting is to avoid asking for estimated completion dates, and instead to ask the team members for an estimate of the likelihood of meeting the planned target date: what is the probability or likelihood of meeting the planned target date? One way of doing this is by using a traffic-light method. As you know, a traffic light has three colours: red, amber (or yellow) and green. The same idea can be used here to overcome the problems of partial completion reporting. The approach works as follows.
742
(Refer Slide Time: 09:27)
The first step is to identify the key elements, or first-level elements, for assessment in a piece of work. Then break these first-level elements down into constituent elements, which we call the second-level elements. Next, assess each of the second-level elements on a scale of three colours: green means the element is on target, so there is no problem; amber means it is not on target but is recoverable, that is, if we take some remedial action we can still meet the target; and red means it is not on target and is recoverable only with difficulty, that is, it may be recovered only with much more effort. In this way we assess each of the second-level elements on the three-colour scale.
After assessing each second-level element, we review all the second-level assessments to arrive at the first-level assessment, and then we review the first-level and second-level assessments to produce an overall assessment for the project. This is how we can overcome some of the problems of partial completion reporting; a minimal sketch of such a roll-up is given below.
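The sketch below shows one possible roll-up policy, namely that the worst colour among the second-level elements drives the summary. This policy is my own assumption for illustration; in practice the project manager exercises judgement, as the sample assessment sheet discussed next shows, and the element names here are hypothetical.

```python
# Rolling up traffic-light (RAG) assessments: worst colour wins (assumed policy).
SEVERITY = {"green": 0, "amber": 1, "red": 2}

def roll_up(colours):
    """Return the worst traffic-light colour in the collection."""
    return max(colours, key=lambda c: SEVERITY[c])

second_level = {                      # hypothetical second-level elements
    "screen handling procedures": "amber",
    "file update procedures":     "green",
    "housekeeping procedures":    "green",
    "compilation":                "red",
    "program documentation":      "amber",
}

activity_summary = roll_up(second_level.values())
print("Activity summary:", activity_summary)    # 'red' -> needs remedial action
```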
743
(Refer Slide Time: 11:09)
Following completion of the assessment forms for all activities, the project manager uses them as a basis for evaluating the overall status of the project. As I have told you, any critical activity classified as amber or red will require further consideration, and this often leads to a revision of the project schedule; as long as a critical activity is green there is no problem, it can meet its target.
Non-critical activities are likely to be considered a problem only if they are classified as red. If a non-critical activity is green or amber there is no problem; but if it is red, its float or slack may soon be consumed, after which it effectively becomes a critical activity and it will be difficult to meet the target dates. So in that case too we may have to revise the plan and take remedial actions to bring the project back on schedule.
744
(Refer Slide Time: 12:55)
This is a sample traffic-light assessment. You can see the name of the staff member, say Justin, the reference number of the work, and the activity, say code and test module C, together with the week numbers 13, 14, 15, 16 and so on. Look first at the components: the first component, the screen handling procedures, is green, and the file update procedures are also green; since all the components are green, the activity summary at the end of week 13 is green.
At the end of week 14 the screen handling procedures are amber; as we have already said, amber means not on target but recoverable. The file update procedures are red, the other components are green, and the program documentation is amber; the overall activity summary at the end of week 14 is amber. Similarly, at the end of week 15 the different components have different values, green, amber or red, and accordingly the activity summary is assessed as amber.
At the end of week 16 one of the important components, compilation, which is a critical activity, is red, and the program documentation is also red; as a result the activity summary for week 16 is assessed as red. That means we now have to do something, otherwise the project will slip beyond the schedule; we have to take remedial actions so that we can still meet the target dates, though with a lot of difficulty. This is how a sample assessment based on the red, amber and green scale works.
Next we will take up another important technique called review. From a manager's perspective, review of work products is very important; it is an important mechanism for monitoring the progress of the project and ensuring the quality of the work products. Every project develops, through iterations, a large number of work products. What could the work products be? They could be requirements documents, design documents, the project plan, code, test plans, and so on.
Each of these work products can contain a large number of defects, because they are prepared by human beings, and the development team members may commit mistakes. That is why it is necessary to eliminate as many defects as possible from these work products in order to realize a product of acceptable quality.
746
(Refer Slide Time: 15:43)
747
In fact, review has been acknowledged to be more cost effective in removing defects than testing. Early review techniques focused on code, and systematic review techniques were developed for that specific purpose; but over the years the review techniques have become extremely popular and have been generalized for use with other work products as well.
Review is a very cost-effective defect removal mechanism. What does a review usually help with? It helps to identify any deviation from the standards, including issues that might affect the maintenance of the software. The persons associated with the review, whom we call the reviewers, also suggest ways to improve the work product, such as using algorithms that are more time and space efficient, using specific work simplifications, or exploring better technology opportunities. The reviewers can make these suggestions during the review process, and hence the quality of the product may be improved.
748
(Refer Slide Time: 17:47)
749
So, now let us see the candidate work products for review: what things can be reviewed? All interim and final work products are usually candidates for review. The work products usually considered suitable candidates are the SRS (software requirements specification) document, the user interface specifications, and the design documents of different types, such as the architectural design document, the high-level design document and the detailed design document, as well as the test plans and the test cases that have been designed, the project management plan, and the configuration management plan. All of these can be reviewed; these are the possible candidates for review.
Now, let us see the review roles: which persons are associated with a review, and what roles are assigned during the review process? In every review meeting a few key roles need to be assigned to the review team members. These roles are moderator, recorder and reviewer. For a given work product, the project manager will assign one person as the moderator, one person as the recorder, and a group of persons who will be designated as the reviewers.
750
(Refer Slide Time: 20:03)
So, let us see the role of the moderator: what is his job? The moderator plays the key role in the review process. His principal responsibilities include scheduling and convening the different meetings, distributing the review materials to the reviewers, leading and moderating the review sessions, and ensuring that the defects are tracked to closure, that is, ensuring that all the issues raised by the reviewers have been addressed. These are the responsibilities of the moderator.
751
Similarly, the job of the recorder, as the name suggests, is to record: he records the defects found, the total time spent, and the effort put into conducting the review.
And what is the role of the reviewer? We have already seen the moderator and the recorder. The review team members, as the name suggests, review the candidate work product and give specific suggestions to the author of the work product about the defects present; they also point out how the defects can be eliminated and how the work product can be improved.
752
(Refer Slide Time: 21:37)
Now, let us see how the review process works. The review of any work product consists of the following four important activities: first planning, then review preparation and overview, then the review meeting, and finally rework on the suggestions and follow-up. These activities can be put in the form of a figure like this.
The candidate work product is supplied as the input. The first activity is planning: in the planning stage the project manager appoints one moderator and one recorder, and forms a review team; normally five to seven members are selected for the review team. The output of this stage is the review team and the schedule. Then there is a preparation meeting: the moderator appointed by the project manager convenes a brief meeting, in which the documents submitted by the author of the work product are circulated among the review team members. The review team members individually carry out the review of the submitted work product, and their findings are recorded in a document called the reviewers' log.
Then the actual review meeting takes place. Here the reviewers give their suggestions on the work product to the author, the author responds to the issues raised by the reviewers, the other reviewers also participate, and all the details are written down. The document prepared at the end is called the defect log, and this defect log is handed over to the author. The author then sees what issues have been raised, reworks and revises the work product to address all the issues, and prepares a rejoinder stating what points were raised by the reviewers and how he has addressed them.
Then a final follow-up meeting is convened. The reviewers examine how the author has addressed the issues, made the corrections and incorporated the suggestions. The moderator finally confirms that all the defects present in the earlier version have now been removed, and then the summary report is prepared; the summary report gives the details of the defects, the time spent, etcetera.
So, this is how the review process model works: these are the different activities carried out in the review process, these are the interim or intermediate products, and the summary report is the final product of the review process.
754
(Refer Slide Time: 24:59)
So, what I have already explained is also written here: all these stages, planning, preparation, the review meeting, and rework, which I have already explained with the diagram, are mentioned here.
755
The final step is data collection. In data collection, the results of the review meetings are properly recorded, because we are human beings and after some days we may forget; the results of the discussion may otherwise be lost. Not only that, the data about the time spent by the reviewers in the review meeting also has to be captured.
A record of the defect data is needed for tracking defects in the project. Why do we record all the details of the defect data? Because this record is needed, and is very helpful, for tracking the defects in the project and making the product defect free.
756
(Refer Slide Time: 26:13)
So, now let us see what kinds of reports are prepared during the review. The different reports in which the review data are captured are as follows. The first is the review preparation log, which contains data about the defects: their locations, that is, the points at which they were identified; their criticality, that is, whether they are serious or normal; the total time spent in doing the review; etcetera.
Then the review log contains the defects that are agreed to by the author, that is, the defects the author has accepted and will remove in the next revision. Then the review summary report, as its name suggests, summarizes the review data and presents an overall picture of the review: the outcome of the review meeting is documented here, along with information on the total defects present in the work product and the total time spent in the review process. So, these are the different reports prepared during the review process; the review preparation log and the review log are intermediate reports, while the review summary report is the final report. An illustrative sketch of these artefacts is given below.
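The sketch below is illustrative only; the field names and figures are my own assumptions and not a standard format. It shows how individual reviewers' findings might be merged into the defect log handed to the author, with a summary report closing the review.

```python
# Artefacts produced during a review (hypothetical representation).
from dataclasses import dataclass

@dataclass
class Defect:
    location: str             # where the defect was found
    description: str
    criticality: str          # e.g. "serious" or "normal"
    agreed_by_author: bool = False

@dataclass
class ReviewSummaryReport:
    work_product: str
    defects_found: int
    total_review_hours: float

def merge_reviewer_logs(reviewer_logs):
    """Combine the individual reviewers' findings into the defect log."""
    return [defect for log in reviewer_logs for defect in log]

reviewer_logs = [
    [Defect("design section 3.2", "missing error-handling path", "serious")],
    [Defect("design section 4.1", "ambiguous interface name", "normal")],
]
defect_log = merge_reviewer_logs(reviewer_logs)
summary = ReviewSummaryReport("High-level design v0.3", len(defect_log), 5.5)
print(summary)
```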
757
(Refer Slide Time: 28:01)
So, the model that we have presented captures the sequence of activities that need to be carried out, the inputs to the activities and the outputs produced from them. We have shown the input to the process, namely the candidate work product; the intermediate products, namely the review team and schedule, the reviewers' log and the defect log; and the final product, the summary report.
758
So, we have seen that this review process model captures the sequence of activities: the four activities that need to be carried out for the review process, the input to these activities, which is the candidate work product, and the outputs produced from them, that is, the intermediate outputs I have already described and the final output, which is the summary report.
In this lecture we have discussed how to collect the progress details of a project. We have also explained the partial completion reporting method, and how its drawbacks can be overcome using the red-amber-green method. We have also presented how to carry out a project review, the activities involved in a project review, and the inputs and outputs of the review.
759
(Refer Slide Time: 29:49)
We have also seen some of the logs, such as the review preparation log and the review log, which are the intermediate outputs of the review process, and the final output of the review process, the review summary report. These are the outputs of the review process. So, we have seen how to carry out the project review; this is all about the project review.
760
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture – 41
Project Monitoring and Control (Contd.)
Good morning to all of you. In the last class we discussed the project review technique. Today, we will discuss source code review. We will see two types of code review, namely code walkthrough and code inspection. We will also discuss a relatively new technique called the cleanroom technique.
761
(Refer Slide Time: 00:36)
So, yesterday we discussed that all the interim and final work products are usually candidates for review, and that in particular the following work products are suitable candidates: the requirements specification (SRS) document, the user interface specifications, and the different design documents such as the architectural design, high-level design and detailed design documents; obviously, source code can also be reviewed. The various test plans and the test cases that have been designed can also be reviewed, as can the various project management plans, the configuration management plan, etcetera. Right now we will discuss how source code can be reviewed.
762
(Refer Slide Time: 01:25)
We have also seen that review is a very effective technique for removing defects from the source code; in fact, review has been acknowledged to be more cost effective in removing defects than testing. Over the years various review techniques have become extremely popular and have been generalized for use with other work products as well.
So, now let us see when the code review should start. The code review for a particular module is undertaken after the module has been successfully compiled, that is, after all the syntax errors have been eliminated from the module. Obviously, then, code review does not target the detection of syntax errors; please remember, code review is not aimed at detecting syntax errors in a program. It is aimed at detecting logical errors, algorithmic errors and programming errors. That should be kept in mind.
Code review has been recognized as an extremely cost-effective strategy for eliminating coding errors and for producing high-quality code; it is much more cost effective than testing for detecting and eliminating such errors.
Now, let us see why code review is so much more cost effective than testing. The reason why code review is a much more cost-effective strategy for eliminating errors from code, compared to testing, is that a review directly detects the errors, whereas testing only helps detect failures. With testing you can only observe the failures, and you then have to spend a lot of effort to locate the underlying errors during debugging; with a review you directly detect the errors themselves.
764
(Refer Slide Time: 03:55)
The rationale behind the above statement is as follows. Eliminating an error from code involves three main activities: testing, debugging and correcting the error. Testing is carried out to detect whether the system fails to work for certain types of inputs and under certain circumstances; once a failure is detected, debugging is carried out to locate the error that is causing the failure, and then the error can be removed. In a code review, on the other hand, you directly detect the error itself, and that is why review is much more cost effective than the testing process.
765
(Refer Slide Time: 04:57)
So, as I have already told you, three activities are carried out while eliminating errors: testing, debugging and correcting the errors. Out of these three, debugging is the most laborious and time-consuming activity.
We will see that there are two types of code review, code walkthrough and code inspection. In code inspection the errors are directly detected, whereas testing only helps you detect failures, after which locating the errors is again a time-consuming and laborious activity. Since inspection detects the errors directly, it saves the significant effort that would otherwise be required to locate them; that is why review is much more cost effective than testing.
766
(Refer Slide Time: 05:55)
Normally there are two types of code review technique: code walkthrough and code inspection. Let us look at these. As I have already told you, code inspection or code walkthrough can start after a module has been coded and successfully compiled.
Code inspection and code walkthrough ensure that proper coding standards have been followed, for example that standard headers, proper variable names, etcetera have been used in the software being developed. They also help to detect as many errors as possible before testing, that is, before integration testing and system testing start.
These techniques detect as many errors as possible during the inspection or walkthrough process, and the errors detected at this stage require less effort for correction. To summarize, the errors detected during code inspection and code walkthrough require much less effort to correct than the same errors would require if they were detected only during integration or system testing.
768
(Refer Slide Time: 07:58)
Let us first look at code walkthrough. This is basically an informal code analysis technique, not a formal one, and it is undertaken after the coding of a module is complete: after one module or unit has been coded, you can go for a code walkthrough. Now, let us see how the code walkthrough process works.
A few members of the development team select some test cases and then simulate the execution of the code by hand using those test cases: not on a computer, but mentally, by hand. That is, the team members mentally trace the execution through the different statements and functions of the code.
That is why it is called a code walkthrough: you walk through the code, executing it by hand and mentally tracing its execution. An illustrative example of such a hand trace is given below.
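The example below is mine, not from the lecture; it simply shows the flavour of a walkthrough record: the reviewers pick a test case and follow the statements by hand, noting the values as they go.

```python
# A small function under walkthrough, and the hand simulation recorded as comments.
def max_of(values):
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

# Hand simulation with the chosen test case [3, 9, 2]:
#   largest = 3
#   v = 9 -> 9 > 3, so largest becomes 9
#   v = 2 -> 2 > 9 is false, largest stays 9
#   return 9  (matches the expected output, so no defect is recorded for this case)
print(max_of([3, 9, 2]))
```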
769
(Refer Slide Time: 09:27)
While executing the code by hand, the members note down their findings and discuss them in a walkthrough meeting: in that discussion they describe the defects they have found and what the causes might be.
Even though code walkthrough is an informal technique, several guidelines have evolved over the years which make this naive but useful analysis technique more effective. These guidelines are based on the personal experience of programmers, on common sense, and on several subjective factors.
770
(Refer Slide Time: 10:24)
These guidelines should be considered as examples rather than as rules to be applied dogmatically: as I have said, they have evolved over the years, and they are not hard and fast rules that must be followed exactly.
The team performing the code walkthrough should be neither too big nor too small; ideally, it should consist of between three and seven members.
771
(Refer Slide Time: 11:13)
Now, let us see what the members should focus on in the code walkthrough meeting. The discussion should focus only on the discovery of errors; the members should not discuss how to fix the discovered errors.
Also, to foster cooperation, the members should not feel that they are being evaluated in the code walkthrough meeting. That is why managers should not attend walkthrough meetings; only the developers or programmers, that is, the author's peers, should attend, so that the fear of being evaluated does not arise.
772
(Refer Slide Time: 12:14)
We have seen code walkthrough; now, what is code inspection? In contrast to code walkthrough, code inspection aims mainly at the discovery of commonly made errors, the errors that usually creep into code due to programmer mistakes and oversights. I am repeating this: in a code walkthrough the programmers execute the test cases by hand, mentally tracing the execution of the code, whereas in code inspection the objective is to discover the kinds of errors that developers, being human, commonly make.
773
(Refer Slide Time: 14:23)
So, the aim of code inspection is to discover those commonly made errors that usually creep into the code due to the programmer's mistakes and oversights. Code inspection also aims at checking adherence to coding standards: whether header files, global variables and so on have been used properly or not.
So, another objective of code inspection is to check adherence to the coding standards. Now, let us see what is performed during code inspection. During code inspection the code is examined for the presence of certain common kinds of errors.
So, in code inspection the code is examined, it is checked, for the presence of certain common kinds of errors that programmers might commit due to oversight. In code walkthrough, on the other hand, the programmer or the team members perform a hand simulation of the code execution; in code inspection the code is examined for the presence of commonly made errors.
What are the benefits of code inspection? As I have already told you, the objective of code inspection is to find commonly made errors, so several such errors committed by the programmers are detected here.
774
There are side benefits as well: the programmer receives feedback on his programming style, on the choice of algorithm and on the programming techniques he has used. So, these are the side benefits of code inspection.
Now, let us take a simple example of the kind of error code inspection catches. Consider the classical error of writing a procedure that modifies a formal parameter, while the calling routine calls the procedure with a constant actual parameter. So, the procedure modifies a formal parameter, but the caller has passed a constant as the actual parameter.
This is obviously an error, and it can be detected by code inspection. It is more likely that such an error will be discovered by looking for this kind of mistake in the code during inspection, so that it comes to the eye of the developer, than by simply hand simulating the execution of the code. So, for this kind of error code inspection works better than code walkthrough, where you simply hand simulate the execution of the procedure.
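As a rough illustration of this classical error, here is a small C sketch; the function and variable names are my own and purely hypothetical. The callee writes through its parameter while the caller passes a string literal, which is effectively a constant. Hand simulating a couple of test cases can easily miss this, but an inspector scanning the code for "modification of a formal parameter whose actual parameter is a constant" will spot it.

    /* the procedure modifies its formal parameter */
    void set_default_name(char *name)
    {
        name[0] = 'X';               /* writes through the parameter */
    }

    int main(void)
    {
        set_default_name("guest");   /* the actual parameter is a constant (a string
                                        literal); modifying it is undefined behaviour
                                        and typically crashes at run time            */
        return 0;
    }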
775
(Refer Slide Time: 16:21)
Good software development companies collect statistics on the errors committed by their engineers: from their previous projects, what kinds of errors the engineers have committed on similar kinds of projects. They collect all this information and record it.
From these statistics they identify the types of errors most frequently committed. Then they prepare a list of the common errors made by the programmers, and this list can be used during code inspection. So, a checklist is prepared from the previous experience of the programmers.
This checklist of common errors is then given to the programmers while doing code inspection on other projects, and it is used during code inspection for searching for the possible types of errors. Now, let us look at some commonly made errors that programmers might commit during the development of a software.
776
(Refer Slide Time: 17:56)
These are some examples of commonly made errors. First, use of an uninitialized variable: the programmer has used a variable but has forgotten to initialize it. That can be searched for during code inspection. Similarly, use of incorrect logical operators or incorrect precedence among operators: due to oversight the programmer has used a wrong logical operator or has relied on the wrong operator precedence; that too can be detected during code inspection.
Next, non-terminating loops: the programmer has written a loop, say a while, do-while or for loop, but has forgotten to write the termination condition correctly, so the loop becomes an infinite loop. These kinds of non-terminating loops can also be detected by code inspection.
Similarly, array indices out of bounds: you have declared, say, a 100-element array, but you are accessing a position beyond 100. Such out-of-bounds array indices can also be detected. Then incompatible assignments: in an assignment statement the left hand side must be compatible with the right hand side; if the left hand side is an integer and the right hand side is a floating point value, the assignment is incompatible. Those types of errors can also be detected by code inspection.
777
Similarly, improper storage allocation and de-allocation: memory allocation and de-allocation calls such as malloc and free, if used improperly or mismatched, can also be caught during code inspection.
Similarly, actual and formal parameter mismatch in procedure calls: when you write a procedure call, the number of actual parameters should match the number of formal parameters, and the types of the actual parameters should match the types of the formal parameters. Any mismatch in either the number or the types of the parameters can easily be found during code inspection. Similarly, jumps into loops: you have used goto statements that jump into the body of a loop, which is not desirable. This is also one of the commonly made errors, and code inspection can detect it.
Then, improper modification of loop variables: the loop control variable is being modified inside the loop body where it should not be. This is an error, and it can also be detected using code inspection. So, these are some of the commonly made errors that programmers commit due to common mistakes or oversight, and these types of errors can easily be detected using code inspection.
So, for these types of errors it is better to go for code inspection rather than code walkthrough. As a programmer you might have experienced similar examples yourself; all such commonly made errors can be detected using code inspection.
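Just as a hedged illustration and not an exhaustive checklist, the short C fragment below deliberately packs in several of the errors named above; the names and numbers are invented for this sketch. An inspector working from the checklist would flag each commented line.

    #include <stdlib.h>

    #define N 100

    int sum_all(int *a, int n)
    {
        int i, sum;                       /* 'sum' is used below without being initialized */
        for (i = 0; i <= n; i++) {        /* off-by-one: index i == n is out of bounds      */
            sum += a[i];
            if (sum > 1000 & i > 0)       /* '&' used where the logical '&&' was intended   */
                i = 0;                    /* improper modification of the loop variable:
                                             this can turn the loop into a non-terminating one */
        }
        return sum;
    }

    int main(void)
    {
        int *data = malloc(N * sizeof(int));   /* storage allocated ...                      */
        int total = sum_all(data, N);
        return total;                          /* ... but never freed: improper de-allocation */
    }

None of these would necessarily be caught by hand simulating a few test cases, which is exactly why the inspection checklist is valuable.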
778
(Refer Slide Time: 21:48)
We will now take up a new technique called the cleanroom technique. This technique was pioneered at IBM; let us see why they used the term cleanroom and how it was coined. The term cleanroom was first coined at IBM by drawing an analogy with semiconductor fabrication units.
In semiconductor fabrication units, defects are avoided by manufacturing in an ultra-clean atmosphere: the idea is not to allow defects to arise at all, rather than to remove them later. IBM drew the same analogy with software development, trying to avoid defects from the beginning, and that is why they gave the name cleanroom technique, or cleanroom testing.
The cleanroom technique relies heavily on the various review techniques we have seen: code walkthroughs, code inspection and formal verification for bug removal. Here the programmers are not allowed to test any of their code by executing it.
779
As the name suggests, the programmers try to avoid defects rather than test them out. So, programmers are not allowed to test any of their code by executing the code; they have to use walkthroughs, inspections and formal verification for detecting and removing the bugs.
The only exception is syntax checking, which they may do using a compiler, but normally they are not allowed to test their code by executing it. They can only perform code walkthrough, code inspection and formal verification for bug identification and bug removal. This is the essence of cleanroom testing, or the cleanroom technique.
Now, let us quickly see the advantage of the cleanroom technique. The cleanroom technique reportedly produces documentation and code that are more reliable and maintainable than other development methods that rely heavily on code-execution-based testing.
So, although the documents and code it produces are more reliable and maintainable in comparison with methods based on execution-based testing, the disadvantage is that the testing effort is increased, because walkthroughs, inspections and formal verification are time consuming even for detecting simple errors.
780
Think about a code walkthrough: hand simulating even a few inputs takes a long time. Inspecting the code line by line is again a time consuming process, and formal verification is also time consuming. So, the testing effort increases to a large extent, since all these techniques, walkthroughs, inspections and verification, are very time consuming even for detecting simple errors, and the cleanroom technique is based only on these non-execution-based review techniques.
Another problem is that some errors might still escape during manual inspection. All these techniques, walkthrough, inspection and so on, are manual; some errors might escape the eyes of the team members or the programmers, because after all human beings may make mistakes.
Those escaped errors could have been detected easily if testing-based error detection techniques had been used. So, that is another disadvantage of the cleanroom technique.
781
So, today we have discussed two important source code review techniques: code walkthrough and code inspection. We have also presented a list of some commonly made errors which can be detected by code review, particularly by code inspection. We have also briefly discussed the cleanroom technique, which relies heavily on code walkthrough, code inspection and formal verification.
We have taken the materials from these books.
782
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 42
Project Monitoring and Control (Contd.)
Good morning to all of you. Now we will take up how to visualize work schedules. We will see how work schedules can be visualized using different charts such as the Gantt chart, the slip chart and the timeline chart, and then we will see how to monitor the costs.
783
(Refer Slide Time: 00:37)
Let us see how to visualize the progress of a project. In the last class we saw how to collect data about the progress of a project. After collecting that data, the manager needs some way of presenting it to the greatest effect. There are several methods for presenting a picture of the project and its future. Some methods provide a static picture, a single snapshot, for example the Gantt chart; other methods try to show how the project has progressed and changed through time, that is, they represent the dynamic aspects, for example the timeline chart. We will see first the Gantt chart, then the slip chart and then the timeline chart.
784
(Refer Slide Time: 01:35)
The Gantt chart has been named after its developer Henry Gantt. A Gantt chart is a form of bar chart; you must have seen bar charts earlier in school. Here the vertical axis lists the tasks to be performed, and one bar is drawn for each task along the horizontal time axis. The Gantt charts used in software project management are actually an enhanced version of the standard Gantt chart: as we will see in the example, each bar consists of two parts, a shaded part and an unshaded part.
785
Now, let us see what the shaded part and the unshaded part represent. The shaded part of a bar shows the length of time the task is estimated to take. The unshaded part shows the slack time, or lax time.
I have already discussed slack time, also called float or free time, in the last class. The slack or lax time represents the leeway or flexibility available in meeting the latest time by which a task must be finished. Some free time is available, so the task can be delayed a little and still be completed before its target time.
A Gantt chart, as we have discussed, is a special type of bar chart where each bar represents an activity and the bars are drawn along a timeline. The length of each bar is proportional to the duration planned or expected for the corresponding activity. How is the Gantt chart representation of a project schedule helpful? It is helpful in planning the utilization of resources.
786
Those of you who have already seen CPM and PERT can compare: while the Gantt chart is helpful in planning the utilization of resources, the PERT chart is more useful for monitoring the timely progress of the different activities.
Gantt charts are therefore useful for resource planning, that is, for allocating resources to activities. As discussed in the last class, the different types of resources that need to be allocated to the various activities could be staff, hardware, software, other equipment, space and even time.
How the different types of resources, such as staff, hardware and software, are allocated to the activities can easily be represented using Gantt charts.
787
(Refer Slide Time: 05:30)
Normally a Gantt chart can easily be constructed from the activity network or the precedence network. The first step is to construct the activity network or the precedence network; from that you can easily construct the Gantt chart.
This is a sample activity network for an MIS problem, where the various activities and the times they are expected to take are shown: requirements specification takes 15 days, designing the database takes 45 days, designing the GUI takes 30 days, writing the user manual takes 60 days, coding the database takes 105 days, coding the GUI takes 45 days, and integration and testing takes 120 days, after which the project is complete.
In this activity network the dependencies are also shown: before coding the database, the design of the database must be complete; before coding the GUI, the design of the GUI must be complete; and before integration and system testing, the coding of both the database and the GUI must be complete. So, given such an activity network, you can easily construct the Gantt chart.
788
(Refer Slide Time: 06:54)
Let us construct the chart step by step. The first activity is the requirements specification, which takes 15 days. Suppose the project starts on January 1; then the specification bar runs for 15 days, up to January 15. We know that only after the specification is complete can three activities start in parallel: design the database, design the GUI and write the user manual.
So, from January 15 these three activities, designing the database, designing the GUI and writing the user manual, are started in parallel, and their bars are drawn from that date.
Design of the database takes 45 days, design of the GUI takes 30 days and writing the user manual takes 60 days. So, starting from January 15, the database design runs for 45 days, roughly to the beginning of March; the GUI design runs for 30 days, roughly to mid February; and writing the user manual runs for 60 days, roughly to mid March.
789
The exact dates printed on this slide may be slightly off, so please count the days from January 15 and correct them accordingly; the construction procedure is what matters.
After the database design is over, coding the database can start, and it takes 105 days. Similarly, coding the GUI can start only after the GUI design is over, and it takes 45 days. Finally, integration and testing can start only after both the database coding and the GUI coding are over. Since coding the database takes the maximum time, 105 days, even though coding the GUI takes only 45 days, integration and testing has to wait until the database coding is complete.
So, you can see the slack or float time: the gap after coding the GUI is free time, and if sufficient resources are not available you can delay the GUI coding a little to consume this free time. Likewise, writing the user manual takes 60 days, but it also has free time after it, so if sufficient staff are not available you can delay writing the user manual without affecting the end date; the project can still finish by its target date. This is how, from the activity network, we are able to construct the Gantt chart for this MIS problem.
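To make the construction mechanical, here is a small hedged C sketch. The task names and durations are the ones in this MIS example, but the code layout, the two-predecessor limit and the field names are my own. It computes each activity's earliest start and finish day from the dependencies, which is exactly the information the Gantt bars encode.

    #include <stdio.h>

    #define MAXPRED 2

    struct task {
        const char *name;
        int duration;                 /* in days                            */
        int pred[MAXPRED];            /* indices of predecessors, -1 = none */
        int start, finish;            /* computed earliest start and finish */
    };

    int main(void)
    {
        struct task t[] = {
            {"Specification",       15, {-1, -1}},
            {"Design database",     45, { 0, -1}},
            {"Design GUI",          30, { 0, -1}},
            {"Write user manual",   60, { 0, -1}},
            {"Code database",      105, { 1, -1}},
            {"Code GUI",            45, { 2, -1}},
            {"Integrate and test", 120, { 4,  5}},
        };
        int n = sizeof t / sizeof t[0];

        for (int i = 0; i < n; i++) {            /* tasks listed in dependency order */
            int es = 0;
            for (int j = 0; j < MAXPRED; j++)
                if (t[i].pred[j] >= 0 && t[t[i].pred[j]].finish > es)
                    es = t[t[i].pred[j]].finish; /* earliest start = latest predecessor finish */
            t[i].start  = es;
            t[i].finish = es + t[i].duration;
            printf("%-20s start day %3d  finish day %3d\n",
                   t[i].name, t[i].start, t[i].finish);
        }
        return 0;
    }

Here day 0 corresponds to January 1, so day 15 is January 15, day 60 is the end of the database design, and so on; the chart dates follow directly from these numbers.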
790
(Refer Slide Time: 11:05)
As I have already told you, the Gantt chart is one of the simplest and oldest techniques for tracking project progress. It is essentially an activity bar chart indicating the scheduled activity dates and durations, frequently augmented with activity floats. I have already shown how the floats are represented: the unshaded parts are the float times or slack times, and the float of an activity is the time by which it may be delayed without affecting any subsequent activity.
The reported progress is recorded on the chart, normally by shading the activity bars: the shaded part of a bar shows the progress achieved, and the unshaded part is the slack or free time. A today cursor then provides an immediate visual indication of which activities are ahead of and which are behind the schedule. This is how you can prepare and use the Gantt chart.
791
(Refer Slide Time: 12:12)
Another example is shown here. The different activities in this plan are coding and testing of the modules: code and test module A, code and test module B, code and test module C and code and test module D. On the x axis we have taken the weeks, week 12, week 13, week 14 and so on, along with the days of those weeks. The shaded part shows, say, that code and test module A takes this much period, and the unshaded part is the free time.
If today is a day in week 17, we can say that module A is ahead of schedule, and code and test module D is also ahead of the planned schedule. But if you look at the other two activities, code and test module B and module C, and compare them with today's date, we are lagging behind the planned schedule for them.
So, accordingly you have to take remedial action to meet the target, maybe by deploying more manpower. This is how you can prepare and read a sample Gantt chart.
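Purely as a hedged sketch of how the today-cursor reading could be automated, the C fragment below uses hypothetical module names, planned start weeks, durations and reported progress; it flags which activities are ahead of or behind schedule at a review date by comparing the work that should be done by today with the work actually done.

    #include <stdio.h>

    struct activity {
        const char *name;
        double plan_start;    /* week the activity is planned to start   */
        double plan_weeks;    /* planned duration in weeks               */
        double done_weeks;    /* weeks' worth of work actually completed */
    };

    int main(void)
    {
        double today = 17.0;                        /* review point: week 17 */
        struct activity a[] = {
            {"Code & test module A", 12.0, 3.0, 3.0},
            {"Code & test module B", 13.0, 4.0, 2.5},
            {"Code & test module C", 14.0, 4.0, 2.0},
            {"Code & test module D", 15.0, 3.0, 2.5},
        };
        int n = sizeof a / sizeof a[0];

        for (int i = 0; i < n; i++) {
            double planned = today - a[i].plan_start;   /* work that should be done by now */
            if (planned < 0)               planned = 0;
            if (planned > a[i].plan_weeks) planned = a[i].plan_weeks;
            const char *status = (a[i].done_weeks >= planned) ? "on/ahead of schedule"
                                                              : "behind schedule";
            printf("%-22s planned %.1f wk, done %.1f wk -> %s\n",
                   a[i].name, planned, a[i].done_weeks, status);
        }
        return 0;
    }

With these made-up figures the output mirrors the visual reading of the chart: modules A and D are at or ahead of where the today cursor says they should be, while B and C are lagging and call for remedial action.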
792
(Refer Slide Time: 13:41)
Now, let us see another type of chart called the slip chart. A slip chart is a very similar alternative to the Gantt chart, favoured by some project managers who believe that it provides a more striking visual indication of those activities that are not progressing to schedule.
793
The slip chart gives a more striking visual indication of the activities that are not progressing as per the schedule: the more the slip line bends, the greater the variation from the plan. Let us take a small example.
The chart is similar to a Gantt chart, but here a line is drawn through the current progress points of the different activities at today's date. The more this line bends, the greater the variation from the schedule; had it been a straight line, we would be moving exactly as per the plan. If the slip line is heavily bent, there is a large deviation from the planned schedule, which should not be there, and you should take remedial action to overcome the problem.
That is what is being said here: the chart provides a more striking visual indication of those activities that are not progressing to schedule, and the more the slip line bends, the greater the variation from the plan, and hence the greater the need for remedial action. Additional slip lines can also be added at intervals, and as they build up the project manager gets an idea of whether the project is improving, that is, whether the subsequent slip lines bend less or not.
So, you draw one slip line, take remedial action, and after, say, two weeks you draw another slip line and see whether it has become straighter or not. If it is straighter, we are meeting the targets and moving as per the schedule; if it bends more, there is more deviation from the schedule.
So, periodically, after every one or two weeks, draw the slip lines and observe whether you are moving according to the planned schedule or not. If you are deviating, take remedial action, such as deploying more manpower, so that you can meet the target dates and move as per the schedule. A very jagged slip line indicates a need for rescheduling: if the slip line is very jagged, the activities need to be rescheduled.
794
(Refer Slide Time: 16:58)
Next is the timeline chart. One drawback of the Gantt chart and the slip chart is that they do not clearly show the slippage of the project completion date through the life of the project. Analysing and understanding the trends in the project so far allows us to predict the future progress of the project, so there should be some mechanism for recording and analysing those trends.
For example, if a project is behind schedule because productivity so far has not been as high as was assumed at the planning stage, then it is likely that the scheduled completion date will be pushed back even further, unless action is taken to compensate or to improve the productivity. The timeline chart is a technique for recording and displaying the way in which targets have been changed throughout the duration of the project. I have already told you that the Gantt chart is a static figure, it represents the static aspect of the project, but the timeline chart shows how the targets move with respect to time, the dynamic aspect.
795
So, the timeline chart represents the dynamic aspect: it is a method of recording and displaying the way in which the targets have changed during the execution of the project.
We will take a simple example. This is a sample timeline chart. Along the horizontal (x) axis we have taken the planned time in week numbers, and down the vertical (y) axis we have taken the actual, elapsed time, also in week numbers. There are 11 weeks on each axis, and how the project is progressing can be visualized from this chart. Let us see what it is telling us.
796
(Refer Slide Time: 19:34)
The figure in the previous slide shows the sample timeline chart as it stands at the end of week 6. Suppose we want to see the progress at the end of week 6; that is what is shown here. The planned time is plotted along the horizontal axis and the elapsed, actual time down the vertical axis. The lines meandering down the chart represent the scheduled activity completion dates.
At the start of the project, analyse existing system is scheduled to be completed by the Tuesday of week 3, obtain user requirements by the Thursday of week 5, and issue tender by the Tuesday of week 9. So, initially it was planned that analysing the existing system would finish in week 3, obtaining the user requirements in week 5, and issuing the tender in week 9.
797
(Refer Slide Time: 21:25)
At the end of the first week the project manager reviews the progress and the target dates and leaves them as they are; the lines are therefore drawn vertically downwards from the target dates to the end of week 1 on the actual time axis. In other words, at the end of week 1 the lines go straight down: we still expect every activity to meet its deadline.
Then what happened? At the end of week 2 the project manager observes that obtain user requirements will not be completed until the Tuesday of week 6. After the first week he has seen that obtaining the user requirements is getting delayed; it will take roughly one more week, into week 6. So, at the end of week 2 he decides that obtaining the user requirements cannot be completed until the Tuesday of week 6, and he therefore extends that activity line diagonally to reflect this.
798
That is why the line is straight down up to week 1, and then, once he found that the activity cannot be finished before the Tuesday of week 6, he bends the line across to week 6, after which it continues straight down again. Similarly, other activities are also expected to be delayed, and accordingly he bends their lines after week 2 as well.
Then, by the Tuesday of week 3, analyse existing system is completed. A blob, a filled circle, is put on the diagonal timeline to mark that the activity is completed: by the Tuesday of week 3 the analysis of the existing system is finished, and the project manager puts a blob on the timeline to indicate this. At the end of week 3 he decides to keep the existing targets, and at the end of week 4 he adds another three days to draft tender and issue tender.
So, whenever he expects that there will be a further delay, he again bends the corresponding line; every week he reviews the progress, and if an activity is slipping he bends its line accordingly. Note that by the end of week 6 two activities have been completed and the other three are still unfinished. The first activity was completed in week 3, and at the end of week 6 he sees that the second one is also completed, though it was originally supposed to finish in week 5.
799
Since it was expected that this activity could not be finished in week 5, its line was bent and it was expected to be over in week 6, perhaps by the Tuesday; accordingly the line was bent. At the end of week 6 the project manager again reviews the progress and observes that the first two activities are finished and three are still not finished. So, after every week he reviews the progress; if activities are expected to be delayed he bends their lines, and when an activity is completed he marks a dark circle, a blob, to indicate that it is finished.
Periodically he reviews the progress of the activities, and if they are not moving as per the plan or the schedule he bends the lines as and when required. This is how the timeline chart represents the dynamic aspects: as activities are expected to take more time, the lines keep bending.
So, the timeline chart represents the dynamic progress of the project, whereas the Gantt chart represents the static aspect. We have observed that by the end of week 6 two activities have been completed and three are still unfinished. Up to this point the manager has revised the target dates on three occasions, and the project as a whole is running seven days late.
As a whole the project is running almost seven days late: it should have been finished in week 9, but it is now heading towards week 10. So, the whole project is delayed by about seven days; that is what the timeline chart shows.
800
(Refer Slide Time: 27:13)
So, we have seen that the timeline chart is very useful both during the execution of a project and as part of the post implementation review. Analysis of the timeline chart, and of the reasons for the changes, can indicate failures in the estimation process or other errors that might be avoided in future with that knowledge.
If you analyse the timeline chart and the reasons why the changes were made, you can identify failures in the estimation process or other errors, and with that knowledge such errors, and such changes to the timelines, can be avoided in future projects.
801
(Refer Slide Time: 28:09)
We will now quickly look at cost monitoring. A project could be late because the staff originally committed have not been deployed at the right time; in that case the project will be behind time but under budget. On the other hand, a project could be on time, but only because additional resources have been added; it is completed on time only by hiring additional resources, and of course it will then be over budget, since more money has to be spent. That is why you need to monitor both the achievements, whether you are completing on time or not, and the cost that you are spending.
802
Expenditure monitoring is thus an important component of project control, not only in itself, but also because it provides an indication of the effort that has gone into, or at least has been charged to, a project. A project might be on time, but only because more money has been spent on the activities than originally budgeted. We originally expected a certain amount of money to be spent, but since the project is running late we may have had to add more manpower and hence spend more money.
In that case the project might be on time, but at the cost of additional money. A cumulative expenditure chart provides a simple method of comparing the actual expenditure with the planned expenditure. By itself it is not particularly meaningful: a project might be late or on time while the chart shows substantial cost savings, so the expenditure figures have to be interpreted together with the project's actual progress.
A sample cumulative expenditure chart is shown here. On the x axis we have taken the time in weeks and on the y axis the cumulative cost. You can see the planned cost curve and the actual cost curve; the chart shows the comparison between the actual and the planned cost, so you can track the cumulative expenditure by looking at this graph and take remedial action accordingly.
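As a small hedged sketch, with weekly figures that are entirely made up for illustration, the following C fragment accumulates planned and actual weekly spend and prints the cumulative variance, which is essentially what the cumulative expenditure chart plots.

    #include <stdio.h>

    int main(void)
    {
        /* illustrative weekly spend figures, in thousands of rupees (hypothetical) */
        double planned[] = {10, 12, 15, 15, 18, 20};
        double actual[]  = {11, 14, 14, 17, 21, 24};
        int weeks = sizeof planned / sizeof planned[0];

        double cum_plan = 0.0, cum_act = 0.0;
        printf("week  cum.planned  cum.actual  variance\n");
        for (int w = 0; w < weeks; w++) {
            cum_plan += planned[w];
            cum_act  += actual[w];
            printf("%4d  %11.1f  %10.1f  %+8.1f\n",
                   w + 1, cum_plan, cum_act, cum_act - cum_plan);
        }
        return 0;
    }

A positive variance here means more has been spent than planned up to that week; as discussed above, whether that is a problem can only be judged alongside how much work has actually been completed.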
803
(Refer Slide Time: 30:29)
We also need to take account of the current status of the project activities before attempting to interpret the recorded expenditure. The cost charts become much more useful if we add the projected future costs, which are calculated by adding the estimated cost of the uncompleted work to the costs that have already been incurred.
Where a computer based planning tool is used, revision of the cost schedules is generally provided automatically once the actual expenditure has been recorded.
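A minimal hedged sketch of that calculation, with all figures hypothetical: the projected cost at completion is simply the cost already incurred plus the estimated cost of the uncompleted work, and comparing it with the original budget shows whether the project is heading over budget.

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical figures, in thousands of rupees */
        double spent_so_far    = 101.0;   /* actual expenditure recorded to date    */
        double est_remaining   =  74.0;   /* estimated cost of the uncompleted work */
        double original_budget = 160.0;   /* baseline (planned) cost at completion  */

        double projected_at_completion = spent_so_far + est_remaining;
        printf("projected cost at completion: %.1f\n", projected_at_completion);
        printf("variance vs original budget : %+.1f\n",
               projected_at_completion - original_budget);
        return 0;
    }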
804
(Refer Slide Time: 31:20)
There is another diagram showing the cumulative expenditure; here again the time in weeks is on the x axis and the cumulative cost on the y axis. The cumulative expenditure chart can also show the revised estimates of the cost and of the completion date. This was the original estimate, and since during the progress of the project you might add more manpower or incur additional cost, the cost changes.
The dotted line shows the revised estimate, so you can compare the original cost estimate with the revised estimate. Similarly, this was the original completion date, and since you are lagging behind, the revised completion date is shown further along; as the project progresses you can draw, and hence find out, the revised completion date.
805
(Refer Slide Time: 32:22)
This figure illustrates the additional information available once the revised cost schedule is included. In this case you can see that the project is behind schedule and over budget: the revised completion date is later than the original completion date, which means the project is behind schedule, and the revised cost is higher than the original cost, which means more money is expected to be spent. So, in this case it is very apparent that the project is behind schedule and over budget.
806
So, we have discussed the various ways to visualize the progress of a project using Gantt charts, slip charts and timeline charts. We have also presented the basic concepts of cost monitoring and how a cumulative cost chart can be used to visualize the progress of the project with respect to time and cost.
807
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 43
Project Monitoring and Control (Contd.)
Good morning. Now let us look at another aspect of Project Monitoring and Control. We will take up a somewhat different concept called earned value analysis and then see the fundamental concepts of a baseline budget.
808
(Refer Slide Time: 00:32)
In the last class we discussed cost monitoring. Earned value analysis has nowadays gained in popularity, and it can be viewed as a refinement of the cost monitoring process that we discussed in the last class. Earned value analysis originated in the USA's Department of Defense as part of a set of measures to control the various projects being carried out by contractors for the department. The concept is based on assigning a value to each of the tasks or work packages. You have studied the WBS, the work breakdown structure, where you have seen how a job is divided into several work packages.
So, earned value analysis is based on assigning a value to each task or work package of the WBS, based on the original expenditure forecasts, that is, on the original expenditure prediction.
809
(Refer Slide Time: 01:43)
Let us see an analogy for earned value analysis. Assigning a value to a task is similar to the price that might be agreed with a contractor for doing some unit of work. The assigned value is the original budgeted cost for that item or task, and it is known as the planned value (PV), also called the budgeted cost of work scheduled (BCWS).
So, the planned value can be considered as the original budgeted cost for some item or task. A task that has not yet started is normally assigned an earned value of zero, and when it has been completed, the task, and hence the project, is credited with the original planned value of that task.
810
(Refer Slide Time: 03:07)
We have seen what the planned value is; now let us see the earned value. The total value credited to a project at any particular point of time is known as the earned value (EV), otherwise known as the budgeted cost of work performed (BCWP). The earned value can be represented as a money value, as an amount of staff time, for example person-months, or as a percentage of the PV; the EV can be expressed in any of these ways.
Just as we saw an analogy for the planned value, there is an analogy for the earned value: the earned value can be thought of as the agreed price that has to be paid to the contractor once the work is completed.
811
(Refer Slide Time: 04:26)
Now, let us see how to assign earned values when a task has been started but is not yet completed. Where tasks have been started but are not yet complete, some consistent method of assigning an earned value should be applied. In software project development the common methods applied to such tasks are the 0/100 technique, the 50/50 technique, the 75/25 technique, the milestone technique and the percentage complete technique.
812
(Refer Slide Time: 05:19)
Before going through the techniques one by one, let us note some general points about them. The 0/100 technique is normally preferred for software development; we will see why it is more useful. The 50/50 technique may give a false sense of security by over-valuing the reporting of activity starts: an activity may not really be 50 percent complete, but the developer may claim that 50 percent has been done and ask for that much credit, so it over-rates activity starts.
The milestone technique is suitable for activities with a long duration estimate. For such long-duration activities the milestone technique is more appropriate, but when applying it, it is better to first break the activity into a number of smaller tasks and then use the milestones for assigning the earned value.
813
(Refer Slide Time: 06:27)
Now, let us see the first technique, the 0/100 technique. In this technique a task is assigned a value of zero until such time as it is completed; until it is completed no value is given. When it is completed, it is given a value of 100 percent of the budgeted value. So, only when the task is completed is the full 100 percent of the budgeted value credited to it. This is known as the 0/100 technique.
814
The 50/50 technique says that a task is assigned a value of 50 percent of its budgeted value as soon as it is started, and the remaining 50 percent is credited once the task is completed, so that it carries a value of 100 percent on completion. This matches some contractual arrangements where a contractor is given half the agreed price when starting the work; consider a building construction, for example.
The contractor has agreed to some price, and 50 percent of it is given when he starts, because he may have to pay for raw materials, labour and so on. So, the first 50 percent is paid as soon as he starts the work, and the rest is paid upon successful completion of the project. This is the 50/50 technique, and as I have said it may work for building construction and similar projects.
But for software development it may not work, because it is difficult to measure whether 50 percent of the work has really been done; what constitutes 50 percent is difficult to compute. That is why it may not be suitable, and for software development we normally use the 0/100 technique.
815
The next technique is the 75/25 technique: here a task is credited with 75 percent of its value when it starts, or when the item is delivered, and with the remaining 25 percent on completion of the project. This is often used when a large item of equipment is being bought, for example a very large generator or a huge server. While purchasing, the contractor may say that he wants 75 percent immediately, because he has to purchase and deliver the item, and the remaining 25 percent may be given on successful installation.
So, this technique may be used when a large item of equipment, such as a big generator, a big server or cloud infrastructure, is being bought. In those cases 75 percent of the total value is credited when the equipment is actually delivered, and the remaining 25 percent may be credited after successful installation and testing.
The next technique is the milestone technique. Here a task is given a value based on the achievement of milestones rather than on fixed 50/50 or 75/25 splits. What could the milestones be? One milestone could be that the requirements analysis is over and the SRS has been successfully prepared; another could be that the design is over, with the data flow diagrams, UML diagrams and so on successfully prepared; another could be that coding is over; another that testing is over.
816
So, in this technique a task is assigned a value based on the achievement of milestones that have been assigned values as part of the original budget plan.
The next technique is percentage complete. As the name suggests, here we measure the amount of work completed and express it as a percentage: 10 percent, 20 percent, 50 percent and so on. In some cases there is a way of objectively measuring the amount of work completed out of the total work. Let us take an example. As part of the implementation of an information system, a number of data records have to be manually typed into a database. Say in an institution there are 10,000 students and a record has to be created and typed in manually for each student.
So, in this case the value can be given in terms of a percentage. After one month, say only 1,000 student records have been entered; that means 10 percent is completed, and based on that 10 percent we can credit the corresponding earned value. After two months, say 2,000 records have been completed; that means 20 percent of the work has been done.
So, basically we are counting the number of data records entered, and based on the amount of work, the number of records completed, we assign the value. This is the percentage complete technique. As this example shows, when a number of data records have to be manually typed into the database as part of implementing an information system, the actual number completed so far can be objectively counted and measured, and the value can be assigned accordingly.
817
This is known as the percentage complete method.
Now, I have already introduced two important terms, planned value and earned value. The planned value (PV) can be considered as the original estimate of the effort or cost to complete a task: the original estimate, in terms of effort, person-months or cost, of what it will take to complete the task. The earned value (EV), or budgeted cost of work performed, can be considered as the total of the planned values for all the work completed up to this point in time. With this we will move further.
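Purely as a hedged illustration of how these crediting rules differ, and with a hypothetical task list and numbers, this small C fragment computes the earned value of a few tasks under the 0/100 convention and under the percentage complete convention.

    #include <stdio.h>

    struct task {
        const char *name;
        double pv;          /* planned value, e.g. in person-days         */
        int    complete;    /* 1 if the task is finished, 0 otherwise     */
        double pct_done;    /* fraction of work done, for %-complete rule */
    };

    int main(void)
    {
        struct task t[] = {
            {"Specify system",  34.0, 1, 1.00},
            {"Design database", 45.0, 0, 0.60},
            {"Design GUI",      30.0, 0, 0.20},
        };
        int n = sizeof t / sizeof t[0];

        double ev_0_100 = 0.0, ev_pct = 0.0;
        for (int i = 0; i < n; i++) {
            if (t[i].complete)
                ev_0_100 += t[i].pv;           /* 0/100: credit only finished tasks    */
            ev_pct += t[i].pv * t[i].pct_done; /* %-complete: credit measured progress */
        }
        printf("EV under 0/100 rule      : %.1f person-days\n", ev_0_100);
        printf("EV under %%-complete rule : %.1f person-days\n", ev_pct);
        return 0;
    }

Under the cautious 0/100 rule only the finished task earns value, which is why it is usually preferred for software work, where claimed partial progress is hard to verify.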
818
(Refer Slide Time: 13:24)
We will use the terms earned value and planned value in the subsequent slides. Now let us talk about the baseline budget. What do you mean by a baseline budget? The first step in setting up an earned value analysis is to create a baseline budget.
So, in preparing the earned value analysis, the first step is to create a baseline budget. The baseline budget is based on the project plan, which we have already discussed earlier, and it shows the forecast growth in earned value through time: as time passes, what is the forecast, the estimated or predicted, growth in earned value. That is what the baseline budget represents.
How can the earned value be measured? Earned value can be measured in monetary values, but for staff-intensive projects such as software development, where a number of staff are working, it is usual to measure the earned value in person-hours or person-months, or in workdays. So, the earned value can be measured in monetary terms such as rupees, but software development organizations normally express it in person-hours, person-months or workdays.
819
(Refer Slide Time: 15:08)
Now, let us see how to prepare a baseline budget. A baseline budget can be prepared easily if a Gantt chart is already given. A simple Gantt chart is shown here, of the kind we have already seen in the earlier classes. On the x axis the week numbers are taken, the resources are listed, and the chart shows from which day of the week each activity starts. Each bar has two parts, a shaded part and an unshaded part, and the unshaded part represents the float, the free time or slack time; this we have already seen while discussing the Gantt chart. Now let us see how to prepare a baseline budget from a given Gantt chart.
820
(Refer Slide Time: 16:08)
We want to calculate the earned value, and for that we first prepare the baseline budget. First you list the tasks; they are already shown here on the left hand side. Then you note the budgeted workdays, that is, the planned workdays, which can be read off the Gantt chart. The first task is specify the overall system, and you can see that it takes 34 days starting from the beginning.
So, we list that here: specify the overall system takes 34 days; its scheduled completion, since it starts from day 0, is 0 plus 34, that is day 34; and the cumulative workdays are also 34. Next, you can see from the diagram that specify module B, specify module D and specify module A are three tasks that start in parallel.
They are performed in parallel after the 34 days of specifying the overall system are over. Specifying module B takes 15 days, specifying module D also takes 15 days, and specifying module A takes 20 days. So, we write those down: specify module B 15 days, specify module D 15 days, specify module A 20 days. These are the budgeted, or planned, workdays. Now, what will the scheduled completion be?
The first activity, specify the overall system, starts from day 0, so its scheduled completion is day 34. Specify module B and specify module D can be started only after the overall system specification is over, that is, after 34 days. How many days does module B take? Its budgeted workdays are 15, so 34 plus 15 is 49. Specifying module D is started in parallel along with B.
Specifying module D also takes 15 days, and since it too can be started only after the first 34 days, 34 plus 15 is again 49. Now, what are the cumulative workdays? The first job is over after 34 days, and the second job takes 15 days, so 34 plus 15 is 49; the third task, specify module D, also takes 15 days.
So, 49 plus 15 is 64; up to the third activity the cumulative workdays are 64. Similarly, the fourth one, specify module A, takes 20 days after the first job is completed, because these three tasks start in parallel. So, 34 plus 20 gives a scheduled completion of day 54. Now, what are the cumulative workdays? This activity takes 20 days.
So, 64 plus 20 is 84 days; the cumulative workdays are 84. In this way you can proceed: the budgeted workdays are already available in the Gantt chart, from which you can find the scheduled completion, and by adding the values appropriately you get the cumulative workdays. The total cumulative workdays for completing all the tasks comes to 237 days.
Now, we should find out the percentage of cumulative earned value. The total cumulative workdays required is 237 days. Take the first activity: it takes 34 days, so what is the percentage? 34 divided by 237, into 100, comes to 14.35 percent. Similarly, up to the second and third activities the cumulative workdays are 64 out of a total of 237, so 64 by 237 into 100 comes to 27 percent. For the fourth activity the cumulative workdays are 84, so 84 by 237 into 100 comes to 35.44 percent.
In this way you can compute the percentage of cumulative earned value. I have already told you what the baseline budget is: it is based on the initial project plan and it shows the forecast growth in earned value through time, that is, as time passes, what the forecast growth in earned value will be. We have now seen this for the sample example; this is the forecast growth in earned value.
So, in this way you can get the percentage of cumulative earned value, and of course, finally, the total earned value will be 100 percent. In this way you can prepare the baseline budget for a given project. If the Gantt chart is given, preparing or calculating the baseline budget becomes much easier.
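To make the arithmetic concrete, here is a minimal sketch of the same calculation in Python (only the first four tasks from the example are listed; the variable names and the printed format are mine, not from the lecture):

    # Sketch: cumulative workdays and percentage of cumulative earned value.
    tasks = [
        ("Specify overall system", 34),
        ("Specify module B",       15),
        ("Specify module D",       15),
        ("Specify module A",       20),
        # ... remaining tasks, adding up to 237 cumulative workdays in total
    ]
    total_workdays = 237   # total cumulative workdays for the whole project
    cumulative = 0
    for name, budgeted_days in tasks:
        cumulative += budgeted_days
        percent_ev = 100 * cumulative / total_workdays
        print(f"{name:24s} {budgeted_days:3d} cum={cumulative:3d} EV={percent_ev:6.2f}%")
    # Prints 14.35% after the first task, 27.00% after the third, 35.44% after the fourth.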
The same data has been shown in the form of a graph: from the baseline budget table a graph can be drawn; let us analyze and interpret that graph.
(Refer Slide Time: 22:09)
In this baseline budget, out of the crediting techniques we discussed, the 0/100 technique is used for crediting the earned value to the project. What does the 0/100 technique mean in this context? You can see that the project is not expected to be credited with any earned value until day 34, when the activity specify overall system is to be completed. The first task is expected to be completed on day 34, so until day 34 no value is assigned. This is the 0/100 technique: initially no value is assigned, and when the job is completed we assign the full value. What is the full value here? 34 workdays, because, as I have already told you, earned value can be expressed in monetary terms, in person-months, or in workdays.
Here workdays are taken into account. Initially no earned value is given, a value of 0 is given, and when the job is completed on day 34, the value of 34 workdays is credited as earned value. You can see this in the graph as well: up to day 34 the value is 0, and on day 34 the value jumps.
The y-axis shows the cumulative workdays and the x-axis the elapsed days, the actual days passed. On day 34 the cumulative workdays become 34; that means an earned value of 34 is credited there, and as time proceeds the next values follow from the table: the next cumulative value, 64, is credited on day 49.
When 49 days have passed, you can see that the value credited rises to 64, because both module specifications finish at day 49. In this way you can assign the different values: from the table you find the cumulative workdays and credit them on the corresponding scheduled completion days.
So, up to day 34 no value is given, and on day 34 the full value of 34 is given. In this way the 0/100 technique is used for assigning earned values. As I have already told you, the project is not expected to be credited with any earned value until day 34, when the first activity, specify the overall system, is to be completed.
This activity was forecast to consume 34 person-days and hence it is credited with 34 person-days of earned value when it has been completed; you can see the earned value of 34 assigned there. The other steps in the baseline budget chart coincide with the scheduled completion dates of the other activities.
So, today we have seen the 0/100 technique, where initially zero value is assigned and the full value is assigned once the job is completed; this is most suitable for software development projects. In the 50/50 technique, once the work is started 50 percent of the value is assigned, and the remaining 50 percent is assigned after the job is completed; this is suitable for areas like building construction, for paying labour charges, purchasing raw materials, and so on.
The 75/25 technique is used in cases where you want to purchase large equipment: when the contractor supplies the equipment, 75 percent of the value may be credited, and the remaining 25 percent may be credited, or the remaining 25 percent of the payment made, after successful installation and testing. In the milestone-based technique, value is credited on the achievement of milestones: some value when the SRS is completed, some value when the design is completed, some value when coding is completed, and some value when testing is completed.
In the percentage-complete technique, you have to measure the work progress. We have seen the example of developing a database where thousands of student records have to be entered: if at the end of one month the total number of records to be entered is 10,000 and you have entered only 1,000, that means 10 percent of the data entry has been done.
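The crediting techniques described above can be summarized in a small sketch like the following (the function and its parameter names are purely illustrative, not from the lecture):

    # Sketch: how much earned value each crediting technique assigns to one task.
    def credited_value(budget, technique, started, completed, fraction_done=0.0):
        if technique == "0/100":
            return budget if completed else 0.0
        if technique == "50/50":
            return budget if completed else (0.5 * budget if started else 0.0)
        if technique == "75/25":
            return budget if completed else (0.75 * budget if started else 0.0)
        if technique == "percent-complete":
            return fraction_done * budget   # e.g. 1000 of 10000 records -> 0.1
        raise ValueError("unknown technique")

    # Under 0/100, the 34-day task earns nothing until it is complete.
    print(credited_value(34, "0/100", started=True, completed=False))  # 0.0
    print(credited_value(34, "0/100", started=True, completed=True))   # 34.0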
Accordingly, the value can be assigned. We have also presented the concept of a baseline budget, which helps in the earned value analysis. In the next class we will see how the PV values, the EV values, and the baseline budget are helpful in monitoring the progress of the project.
(Refer Slide Time: 28:09)
This material has been taken from the reference book shown here; you can refer to it for further details.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 44
Project Monitoring and Control (Contd.)
Good morning. In the last class we discussed earned value analysis; now let us see how to monitor the earned value.
We have already discussed the baseline budget in the last class. After creating the baseline budget, the next job is to monitor the earned value as the project progresses through time. This can be done by monitoring the completion of tasks, or, in the case of the other crediting techniques, activity starts and milestone achievements.
We have now seen how a value can be assigned to the earned value. As well as recording the earned value, the actual cost of each task can also be collected; this is known as the actual cost or AC, also called the actual cost of work performed.
Now, we can use a chart called the earned value tracking chart for monitoring earned value. Three different lines are shown here: the lower dotted line represents the earned value, the middle solid line represents the baseline budget, which is also known as the planned value, and the upper dotted line represents the actual cost to date. The vertical line is the time now, today's date. Given this, we want to measure various parameters. On the x-axis we have taken the month number and on the y-axis the cumulative cost, maybe in thousands. The bottom line represents the earned value, and this is today's point.
This is the earned value, but what was the planned value? You can see the corresponding planned value here. The difference between the planned value and the earned value is known as the schedule variance, which may be expressed in terms of cost.
Similarly, the difference between the actual cost and the earned value is known as the cost variance. And you can see that the work was planned to finish at about 9.25 months but is actually finishing at about 11 months; this difference, about 11 minus 9.25, that is 1.75 months, is known as the time variance.
So, basically this earned value tracking chart shows us these variances: the schedule variance, the cost variance, and the time variance. Let us see how you can compute these variances so that we can monitor the progress of a project.
This figure illustrates the following performance statistics, which can be shown directly or derived from the earned value: the schedule variance, the time variance, the cost variance, and, as we will see, some performance ratios. All these variances are shown here: the schedule variance is the difference between PV and EV, the cost variance is the difference between the actual cost and the earned value, and this one is the time variance.
Now, let us define these terms mathematically. The schedule variance is measured in terms of cost; mathematically, the schedule variance is equal to the earned value minus the planned value:
SV = EV - PV
What does it indicate? It indicates the degree to which the value of the completed work, that is the earned value, differs from the planned value. Let us take a small example: suppose work with a planned value (PV) of 40,000 pounds should have been completed by now, by today.
In fact, some portion of the work has not been done, so the earned value as of today is only 35,000. What does this mean? The schedule variance is earned value minus planned value, that is 35,000 minus 40,000, which is minus 5,000 pounds; it is negative. What does a negative SV represent? A negative schedule variance means the project is behind schedule. Why is the earned value not 40,000? Because some portion of the work has not been done, which means the project is lagging behind the schedule.
So, a negative SV means the project is lagging behind schedule. This I have already shown in the graph: the schedule variance is EV minus PV, and when it is negative the earned value is less than the planned value, so the project is lagging behind the planned schedule.
Now, let us see the time variance. How do you calculate it? The time variance is defined as the difference between the time when the achievement of the current earned value was originally planned to occur and the time now. You can see that the current earned value was planned to be achieved at roughly 9.25 months, but as of today it is being achieved at month 11. So, the time variance is 9.25 minus 11, coming to about minus 1.75 months; that is what is shown here.
In this case, as I have already told you, the current EV should have been achieved in the early part of month 9, around 9.25, but it is achieved only in month 11. So, the time variance is 9.25 minus 11, coming to minus 1.75 months; again it is negative. A negative time variance indicates that the project is running late: it should have reached this point by 9.25 months but is reaching it only at month 11, so the project is running late.
The next variance is the cost variance. We denote it as CV and it is calculated as EV minus AC; from the graph you can see the cost variance:
CV = EV - AC
What is the actual cost as of today? This point is the AC, and this one is the earned value. EV minus AC gives the cost variance, as shown here. What does it indicate? It indicates the difference between the earned value, or budgeted cost, and the actual cost of the completed work.
So, the cost variance is defined as the difference between the earned value, that is the budgeted cost of the completed work, and the actual cost of the completed work: CV equals the earned value of the completed work minus its actual cost. Now let us take the previous example again. We have already seen that the SV came to minus 5,000 pounds. Say that, in addition, 55,000 pounds had actually been spent to get this earned value.
What will the CV be then? CV in this case is 35,000, the EV, minus 55,000, the actual cost, coming to minus 20,000 pounds; again it is negative. A negative CV means the project is over cost: it is costing more than the budget you planned. Had it been positive, the project would be below cost, under control. The CV can also be an indicator of the accuracy of the original cost estimate: how accurate the original cost estimate was is also indicated by the CV.
Now, with these three terms, the cost variance, the schedule variance, and the time variance, we can compute some performance ratios. Two ratios are commonly tracked. One is the cost performance index, known as CPI, defined as EV divided by AC, and the other is the schedule performance index, SPI, defined as EV divided by PV:
CPI = EV / AC
SPI = EV / PV
Now, let us take the previous example again and calculate the CPI. CPI equals EV by AC; in the earlier example, EV is 35,000 and AC, the actual cost, is 55,000, so the ratio is 35,000 by 55,000, coming to 0.64. Similarly, SPI equals EV by PV, earned value by planned value: the earned value is again 35,000 and the planned value, as we have seen, is 40,000, so 35,000 by 40,000 comes to 0.88. Now, what is the interpretation, what is the inference? The two ratios can be thought of as value-for-money indices.
There are two possibilities: the values may be less than 1 or greater than 1. A value greater than one indicates that the work is being completed better than planned, whereas a value less than one indicates that the work is costing more than planned, or that the work is proceeding more slowly than planned.
So, it is desirable for the value to be greater than one; that means we are doing better than planned. If it is less than one, then either the cost is overshooting, the project is over cost, or the project is moving more slowly than planned. In this way these ratios help in monitoring the progress of the project.
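A small sketch of these calculations with the figures used in the example above (the variable names are mine):

    # Sketch: variances and performance indices for the example above.
    pv = 40_000   # planned value (pounds)
    ev = 35_000   # earned value
    ac = 55_000   # actual cost

    sv  = ev - pv   # schedule variance = -5000  (negative -> behind schedule)
    cv  = ev - ac   # cost variance     = -20000 (negative -> over cost)
    cpi = ev / ac   # cost performance index     ~ 0.64 (< 1 -> over cost)
    spi = ev / pv   # schedule performance index ~ 0.88 (< 1 -> behind schedule)
    print(sv, cv, round(cpi, 2), round(spi, 2))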
(Refer Slide Time: 14:31)
We have already seen that CPI equals EV by AC; now let us see how it is used. CPI can be used to produce a revised, updated cost estimate for the project, called the estimate at completion or EAC. EAC can be represented mathematically as BAC by CPI:
EAC = BAC / CPI
CPI we already know, and BAC stands for budget at completion.
So, EAC equals BAC by CPI, where BAC, the budget at completion, is the current projected budget for the project. Let us take the previous example again, or treat this as a fresh one: if the BAC was 100,000 pounds, then a revised estimate can be prepared by computing EAC = BAC / CPI = 100,000 divided by 0.64, the CPI we computed earlier.
That means, initially the projected budget for the project was 100,000, but as the project progresses we can revise the estimate by observing the current progress, and since CPI is 0.64 the revised estimate now exceeds the 100,000 value.
Now it is 156,250; that means we require more money to complete the job. Similarly, the current value of SPI, calculated as EV by PV, can be used to project the possible duration of the project given the current rate of progress. The revised duration that the project will take to complete can also be computed using the current SPI value.
Now, let us take another performance parameter. Suppose the planned total duration is 23 months; initially it was planned that the job would take 23 months to complete. In earned value terminology this is called the schedule at completion. Using this term, another parameter can be calculated, known as the time estimate at completion.
The time estimate at completion, written in short as TEAC, can be calculated as SAC by SPI:
TEAC = SAC / SPI
SAC, as we have just seen, is the schedule at completion, and SPI is the schedule performance index.
What will this be equal to? SAC is given as 23 months, and SPI, as seen in the previous slide, is 0.88, so the ratio comes to 26.14 months. That means, from the current progress, we can see that the work cannot be completed in 23 months; it will require about another 3.14 months, so the revised duration for completing the project would be 26.14 months.
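The two revised estimates can be sketched in the same way (figures as in the lecture; the names are mine):

    # Sketch: revised cost and duration estimates from CPI and SPI.
    bac = 100_000          # budget at completion (pounds)
    sac = 23               # schedule at completion (months)
    cpi, spi = 0.64, 0.88  # values computed earlier

    eac  = bac / cpi       # estimate at completion      ~ 156250 pounds
    teac = sac / spi       # time estimate at completion ~ 26.14 months
    print(round(eac), round(teac, 2))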
This is not a hard and fast rule; it is only an approximate guide. Where several parallel chains of activities are being carried out concurrently, the projected duration will depend on the degree to which the activities that have been delayed lie on the critical path. How many of the delayed activities are on the critical path will also decide what the revised duration will be.
Once you have computed the SPI and related quantities, you can revise the forecast. As I have already told you, the CPI can be used to revise the cost estimate; similarly, the SPI can be used to revise the project duration. Once CPI and SPI have been found, you can revise the whole estimate: the earned value chart and the other parameters can be revised, and you can prepare a revised earned value chart.
This is an example of an earned value chart with a revised forecast. The vertical line is today's date. This was the earned value and this is the actual cost; based on the planned value and the current progress, you can forecast that the actual cost will keep increasing and may come to this level.
Similarly, according to the planned value it was previously estimated that the work would be completed here, but observing the progress, the project will now be completed at about this later time; this gap is called the forecast project completion delay. Likewise, this was the original plan regarding the cost, and this is the revised estimate for the expenditure, which we call the revised expenditure forecast. So, this was the original completion date and this is the revised completion date; the gap between them is the forecast project completion delay.
Similarly, this value is the actual cost to date and this value is the revised cost at completion; the gap between them is the estimated future cost, shown by the revised expenditure forecast. So, if you know the CPI and SPI, you can prepare a revised earned value chart, and from it you can estimate the forecast project completion delay as well as the estimated future cost.
Up to this point we have also covered the revised forecasting. Now, earned value analysis has not yet gained universal acceptance for use with software development projects. Why? It might have worked perfectly well for other kinds of projects, but it has not yet gained universal acceptance for software development projects.
The reason is perhaps largely the attitude that whereas a half-built house has value, reflected by the labour and material that have gone into it, a half-completed software project has virtually no value at all. It is very much true that a half-built building has some value in terms of the labour and material used, but unless software is fully completed, installed, and tested, nobody will use it and it has no value at all. That is why earned value analysis has not gained universal acceptance for software development projects.
However, this is to misunderstand the purpose of earned value analysis, which is a method for tracking what has been achieved on a project, measured in terms of the budgeted cost of the completed tasks or products.
Now, let us quickly take a small example. Suppose there are three tasks to be performed in a project: specifying a module, which is planned to take 5 days; coding the module, planned to take 8 days; and testing the module, planned to take 6 days. Let us say we observe the progress at the beginning of the 20th day. The planned value will be 5 plus 8 plus 6, that is 19 days. If everything but testing is completed, then the EV becomes 5 plus 8, that is 13 days.
Now we compute the schedule variance: SV equals EV minus PV, that is 13 minus 19, which is minus 6, negative. The schedule performance index SPI equals EV by PV, that is 13 by 19, which is 0.68.
Here you can see that SV is negative and SPI is less than 1. We actually want the schedule variance to be positive and the schedule performance index to be greater than 1. Since SV is negative and SPI is less than 1, this indicates that the project is lagging behind schedule.
Now, let us use the actual cost in this example; we know that the actual cost is also known as the actual cost of work performed or ACWP. Consider the previous example again: suppose specifying the module was planned for 5 days but actually took 3 days, and coding the module was planned for 8 days but actually took 4 days. The actual cost will then be 7 days; as I have already told you, cost here can also be expressed in workdays. Now you can compute the cost variance CV, which equals EV minus AC.
The EV we have already calculated is 13 days, and the AC, the actual cost, is 3 plus 4, that is 7 days, so the CV is 6 days. This time it is positive, which is a good sign. The cost performance index CPI equals EV by AC, that is 13 by 7, which is 1.86. So, in both cases the cost variance is positive and the cost performance index is greater than 1, and this indicates that the project is within budget; it will not exceed the planned budget, so this is under control.
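Here is a compact sketch of this workday-based example (the figures are those of the lecture; the dictionary and variable names are mine):

    # Sketch: EV metrics in workdays for the three-task example, observed on day 20.
    planned = {"specify": 5, "code": 8, "test": 6}   # budgeted workdays
    actual  = {"specify": 3, "code": 4}              # workdays actually spent
    done    = ["specify", "code"]                    # tasks completed so far

    pv = sum(planned.values())            # 19: all three tasks were due by now
    ev = sum(planned[t] for t in done)    # 13: 0/100 crediting of completed tasks
    ac = sum(actual.values())             # 7

    print(ev - pv, round(ev / pv, 2))   # SV = -6, SPI ~ 0.68 -> behind schedule
    print(ev - ac, round(ev / ac, 2))   # CV = +6, CPI ~ 1.86 -> within budget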
CPI can be used to produce new cost estimates, as we have already shown in the earlier slide. The budget at completion, as I have already told you, represents the current budget allocation for the total cost of the project, and the estimate at completion, also defined earlier, is the updated estimate, equal to BAC by CPI. For example, if the budget at completion is given as 19,000 and CPI is 1.86, then EAC can be computed as BAC by CPI, which equals 10,215.
Now you can see that the projected cost is reduced: the budget at completion was estimated at 19,000, but the estimate at completion comes down to 10,215. The projected cost is reduced because the work is being completed in less time.
Why is the cost performance index 1.86? Because the completed work was planned to take 5 plus 8, that is 13 days, but it has actually taken 3 plus 4, that is 7 days; it is taking fewer days. We have used that previously computed CPI of 1.86 here, and hence, even though the budget at completion was estimated at 19,000, we are finishing the work much earlier, and that is why the projected cost is reduced: the work is being completed in less time.
Let us explain the time variance again with an example. As I have already told you, the time variance is the difference between the time when a specified EV should have been reached and the time it actually was reached. For example, say an EV of 19,000 pounds was supposed to have been reached on 1st April, and it was actually found to be reached on 1st July. Then TV equals 1st April minus 1st July: April is the fourth month and July the seventh, so TV is minus 3 months. The negative sign indicates that the project is running late; that is the inference when TV comes out negative.
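As a tiny sketch of that time-variance calculation (the year is arbitrary, since the lecture only gives the dates; the simple month-difference formula is my own illustration):

    # Sketch: time variance in months for the 1st April / 1st July example.
    from datetime import date

    planned_date = date(2024, 4, 1)   # EV of 19000 was planned to be reached here
    actual_date  = date(2024, 7, 1)   # it was actually reached here

    tv_months = (planned_date.year - actual_date.year) * 12 \
                + (planned_date.month - actual_date.month)
    print(tv_months)   # -3 -> negative, so the project is running late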
Let us quickly take another small example. Suppose a project has to be completed in 1 year at a cost of about 100,000 rupees. After 3 months, you realize that the project is only 30 percent complete at a cost of 40,000. Let us assess the performance of the project.
We have to find out the planned value, the earned value, CPI, SPI, and so on. What is the planned value? The budgeted cost is 100,000, but we are evaluating after 3 months. A year is 12 months, so 3 months means 25 percent. The planned value is the planned percentage completion of the work into the budgeted cost: 25 percent of 100,000, coming to 25,000. The earned value, as I have already told you, is the percentage of work actually completed into the budgeted cost. Within the problem, only 30 percent has been completed, so EV is 30 percent of 100,000,
which comes to 30,000. Now you can easily compute CPI and SPI. CPI equals EV divided by the actual cost; EV is 30,000 and the actual cost, given in the problem, is 40,000, so 30,000 by 40,000 comes to 0.75. The schedule performance index SPI equals EV by PV: EV is 30,000 and PV is 25,000, so 30,000 by 25,000 comes to 1.2. Here you can see that CPI is less than 1 and SPI is greater than 1.
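A brief sketch of this assessment (figures as stated in the problem; the variable names are mine):

    # Sketch: assessing the one-year, 100000-rupee project after 3 months.
    budget         = 100_000
    actual_cost    = 40_000
    months_elapsed = 3
    total_months   = 12
    fraction_done  = 0.30

    pv = (months_elapsed / total_months) * budget   # 25000 planned by now
    ev = fraction_done * budget                     # 30000 earned so far

    cpi = ev / actual_cost   # 0.75 (< 1) -> over budget
    spi = ev / pv            # 1.2  (> 1) -> ahead of schedule
    print(cpi, spi)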
What conclusion can we draw? The assessment of project performance is as follows: since CPI is less than 1, when it should be greater than 1, it can be inferred that the project is over budget. A CPI of 0.75 means that for every rupee spent we are getting only 0.75 rupees' worth of work. And since SPI is greater than 1, it is 1.2, this indicates that the project is ahead of schedule; we are not lagging behind.
If the project moves at this rate, if progress is made at this rate, then the project will be delivered ahead of schedule, much before the scheduled time, but over budget; the cost will be more. Since the cost will be more, we have to do something. Corrective action needs to be taken because, even though the project will be delivered much earlier than the planned schedule, it is exceeding the cost, it is over budget. So, necessary corrective action needs to be taken in order to reduce the cost.
So, to summarize, we have discussed the monitoring of earned value through different parameters such as the schedule variance, the time variance, the cost variance, and the performance ratios, and we have explained how to calculate all of the above with suitable examples.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 45
Project Monitoring and Control (Contd.)
Good afternoon. In the last class we discussed cost monitoring and monitoring the earned value. Now, let us see how we can prioritize the monitoring activities.
We will first see prioritizing the monitoring; then we will see, if a project is getting late, how to get the project back onto the planned track, the original target.
(Refer Slide Time: 00:41)
So far we have assumed that all aspects of a project will receive equal treatment in terms of the degree of monitoring applied; that is, during monitoring we are treating all the activities on a similar footing. In practice that does not happen: some activities require more priority and some require less.
Now, let us see how we can assign priorities to the different activities. Critical path activities should be given the highest priority; next, we can give priority to activities with no free float, then to activities with less than a specified float; we should also give priority to high-risk activities, and we should consider activities that are using critical resources.
In that way we will assign priorities to the different activities depending upon certain parameters.
(Refer Slide Time: 01:55)
We will first look at critical path activities. We know that any delay in an activity lying on the critical path will obviously cause a delay in the completion of the whole project. So, critical path activities have to be given very high priority for close monitoring; otherwise the completion date of the project will be delayed and we cannot meet the target date.
In other words, activities lying on the critical path must be given very high priority.
Similarly, activities with no free float should also be given some priority during monitoring; we have already discussed free float in some of the earlier classes.
Free float represents the amount of time by which an activity may be delayed without affecting any subsequent activity. Some activities have free time available, so that if some urgency occurs the activity can be delayed a little and that free time consumed.
So, we define the free float as the slack time or free time: the amount by which an activity can be delayed such that the subsequent activities, their target and completion dates, are not affected. Now, a delay in an activity which has no free float will delay at least some subsequent activity.
If the delay is less than the total float value, it might not delay the project completion date. But if the delay is more than the total float, then it will affect the other activities and will delay the completion date of the whole project.
These subsequent delays can have serious effects: if one activity affects the next, and the execution dates of the subsequent activities shift, then in turn the end date of the whole project is affected and delayed.
Such subsequent delays can also have serious effects on the resource schedule, because a delay in a subsequent activity can mean that the resources for that activity become unavailable before it is completed, and other activities will then be waiting for the resources held up by the delayed activity. So, activities with no free float should also be given priority during the monitoring process.
(Refer Slide Time: 04:51)
Next are activities with less than a specified float. Say the float is one week; if the activity is delayed by more than one week, then obviously the deadlines of the subsequent activities will be missed. That is why we should also give some priority to activities with less than a specified float, maybe one or two weeks. If any activity has very little float, maybe one week or less, it might use up this float before the regular activity monitoring brings the problem to the project manager's attention.
And then, if the whole free time of such an activity is consumed due to delay, it effectively becomes a critical activity and this will create problems. That is why it is common practice to closely monitor those activities which have less than a specified float, maybe one week of free float.
(Refer Slide Time: 05:51)
Similarly, high-risk activities are also given some priority during monitoring. A set of high-risk activities should have been identified as part of the initial risk profiling exercise; these are the activities where, if any deadline is missed, severe consequences will occur. I hope you have already read, or will read, about the risk handling methodologies.
So, first we have to identify the set of high-risk activities. You already know about PERT; if you are using PERT, you can easily identify the high-risk activities. We should designate as high risk those activities that have a high estimated duration variance.
The activities with high variance are treated as the most risky and should be given priority while monitoring. These activities should be given very close attention and very closely monitored, because they are the most likely to overrun the schedule or overspend the budget. So, you should give sufficient attention and more priority to these activities while monitoring them.
(Refer Slide Time: 07:23)
Then come activities using critical resources. There are some resources which are very critical and very scarce, and the activities using these critical resources also have to be monitored very carefully. Activities can also be critical because they are very expensive; consider the example of specialized contract programmers, who are very expensive.
The activities using those specialized contract programmers are therefore very critical. These contract programmers are hired for only a few days; if within that period, say one month, you cannot complete the job, then they cannot stay beyond that month, or if they do stay they will again charge a very high amount.
That is why activities can also be critical because they are expensive. Staff or other resources might be available only for a limited period; as I have already said, specialized contract programmers may be available for an activity for maybe one month, and if you do not finish the work within that month they will not stay, or even if they stay they will charge a very high amount.
So, you have to give special priority to such activities while monitoring and should closely monitor their progress, especially if the staff or other resources are controlled outside the project team.
In any event, an activity that demands a critical resource requires a high level of monitoring: if there are activities in a project that demand a critical resource, they should be monitored with high priority.
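As a minimal sketch of how these rules might be combined into a single monitoring priority score (the weights and thresholds are purely illustrative assumptions of mine, not values from the lecture):

    # Sketch: scoring an activity for monitoring priority using the criteria above.
    def monitoring_priority(on_critical_path, free_float_days, duration_variance,
                            uses_critical_resource, float_threshold_days=5,
                            high_variance_threshold=4.0):
        score = 0
        if on_critical_path:
            score += 4                                  # critical path activity
        if free_float_days == 0:
            score += 3                                  # no free float
        elif free_float_days <= float_threshold_days:
            score += 2                                  # less than a specified float
        if duration_variance >= high_variance_threshold:
            score += 2                                  # high risk (large PERT variance)
        if uses_critical_resource:
            score += 2                                  # scarce or expensive resource
        return score

    # Example: a critical-path activity with no float and a large duration variance.
    print(monitoring_priority(True, 0, 6.0, False))     # 4 + 3 + 2 = 9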
Now we will see another aspect of project monitoring: getting the project back on target. Suppose you have made a plan, but the project is not moving according to schedule; it is lagging behind. What steps can we take to bring the project back to the original target? Almost any project will be subject to delays and unexpected events that are not under our control, and one of the tasks of the project manager is to recognize when this is happening, that is, when these delays and unexpected events are occurring.
It is an important task of the project manager to identify when these unexpected events or delays are happening, and, with minimum delay and disruption to the project team, to attempt to mitigate the effects of the problem. In most cases, at least initially, the project manager should try to ensure that the scheduled project end date remains unaffected; whatever happens in between, the project end date should not change.
He should take the utmost care to see that the project end date remains unchanged. This can be done in several ways; we will see some of the ways in which, whatever problem might happen in between, the project end date can still be met.
One way is by shortening the remaining activities: some activities have already been executed and some are left, and since time is short we can reduce the execution time of some of the remaining activities, or shorten the overall duration of the remaining project. For example, if the total project is one year and by six months 50 percent of the activities are not yet over, we have to reduce the overall duration of the remaining work. These are different ways by which we can get the project back to the original schedule, the original target.
But this is not always the most appropriate response to a disruption of the plan. There is little point in spending considerable sums on overtime payments, or in always hiring extra manpower or employing your existing staff on an overtime basis to speed up a project; this may not be viable every time. In particular, if the customer is not overly concerned with the delivery, if he accepts that the delivery date can be a little late, or if there is no other valuable work waiting for the team members, then why employ them on overtime and pay extra money? It is not required.
Now let us see how the project can get back to the original target; there are two main strategies. One is shortening the critical path, and the other is altering the activity precedence requirements. Let us first see how to shorten the critical path length so that we can try to get back to the original target date.
There are several ways in which this can be done. Number one, by adding resources: you can add more resources, especially more staff members; of course, this will lead to more cost. Two, increase the use of concurrent resources: if some activities can run in parallel, assign concurrent resources, maybe concurrent manpower, computers, or input and output devices, so that those activities can run in parallel.
Three, reallocate staff to critical activities: take away some of the staff from noncritical activities and reallocate them to the activities lying on the critical path. Four, reduce the scope: if it is not possible to achieve the full original scope, we can reduce it. And the last alternative is to reduce the quality: if it is not possible to achieve the desired quality within the limited amount of time, you may be forced to reduce quality so that the project can be completed on the original specified date.
By such means we can attempt to shorten the timescale of the critical activities until either we have brought the project back to schedule or further efforts prove unproductive or not cost-effective. Following these ways, the time required for executing the critical activities is reduced.
These efforts should be continued until either we have brought the project back to the original schedule, the original target, or we can justify that doing more would be unproductive or not cost-effective, in which case we must stop. Remember that shortening a critical path often causes some other path or paths to become critical.
When you shorten a critical path, it can happen that one or more other paths become critical during the process, and you have to handle such cases accordingly.
Now, let us look at the option of adding resources, especially staff members, in more detail. Exhorting staff to work harder might have some effect, although frequently a more positive form of action is required, such as increasing the resources available for some critical activity. Whatever other resources are required, you have to increase those resources.
For example, suppose you are doing requirements analysis, conducting the fact-finding process and collecting the requirements; this activity can be speeded up by allocating an additional analyst to interview users. You know the different fact-finding techniques such as interviews, questionnaires, on-site observation, record review, and so on. If you are using the interview method for collecting the facts, then in order to speed up the process you can allocate another analyst to help in the interviews, so that the deadline can be met.
It is unlikely, however, that the coding of a small module would be shortened by allocating an additional programmer. If one programmer is doing the coding and you decide to speed it up by allotting a second programmer so that two programmers do it, that will not help in speeding up the activity; it might rather be counterproductive, because the additional time needed for organizing, allocating tasks, and communicating will be more. You should not do this kind of thing.
So, you have to think judiciously about which activities can be speeded up by allocating additional staff.
While adding more staff may speed up the progress, it comes at an additional cost, as I have already told you: you can take on more staff members, but obviously it will cost more. In the last class we discussed terms like EV, the earned value, PV, the planned value, SV, the schedule variance, and CV, the cost variance. Expressed in earned value terms, adding staff may reduce a negative schedule variance, but at the price of increasing a negative cost variance.
Next, increase the use of concurrent resources. Resource levels can be increased by making the resources available for longer, that is, for a longer period of time each day.
Thus the staff might be asked to work overtime: if a staff member is working, say, 8 hours daily, he may be requested to work another 2 or 4 hours of overtime for the duration of an activity. The computing resources needed during that overtime must also be made available at those times, such as evenings, weekends, or holidays, when they might otherwise be inaccessible.
(Refer Slide Time: 19:21)
Another option is to reallocate staff to critical activities. The project manager may consider allocating more efficient staff to activities on the critical path, or swapping resources between critical and noncritical activities. As you know, there are two kinds of activities, critical and noncritical. If you observe after some days that the project is getting delayed, that the critical activities are getting delayed, you can transfer or swap some of the staff members from the noncritical activities to the critical activities in an attempt to meet the deadlines of the critical path activities.
When a project is actually executed, the critical path may change, because the actual
durations of activities will vary from the original estimates, and staff allocations may be
adjusted to reflect this. Initially you have drawn the activity network and identified the
critical path at the planning stage. But when the project is actually running, the actual
execution times of the activities may differ, the durations of the critical activities may
change, and hence the critical path itself may change. If the critical path changes, it will
differ from the original estimate, and the staff allocation that you originally made may
have to be changed.
So, you have to change the plan accordingly: re-estimate the resources and allot staff to the
critical activities based on the changed critical path.
I have already told you about reducing scope: the amount of work to be done can be reduced
by reducing the scope of the functionality to be delivered. If you see that the project is
getting delayed and we cannot achieve all the planned scope, then we can reduce the scope.
The amount of work to be done is reduced by cutting the scope, and the client may in fact
prefer to have a subset of the promised features on time.
The client may well want to have a subset of the promised features, those agreed during
software requirements specification, delivered on time, especially if they are the most
useful ones, rather than have the delivery of the whole application delayed. So, instead of
delaying the whole application, the customer may want some subset of the promised features
to be delivered on time.
(Refer Slide Time: 22:07)
Another option for getting the project back onto its original track is reducing quality.
Some quality related activities, such as system testing, could be curtailed because very
little time is left. This would probably lead to more corrective work having to be done on
the ‘live’ system once it has been implemented. So, by reducing quality also we can try to
get back to the original target.
But the customer may not agree to a reduction in quality; if we skip system testing, the
customer may not accept it. So, there is a dilemma about what to do in this case. The last
option, if none of the above helps and you are not able to shorten the critical activities,
is to reconsider the precedence requirements.
The original project network would most probably have been drawn up assuming ideal
conditions and normal working practices. While drawing the original activity network we
assumed two things: that the project would run under ideal conditions and that normal
working practices would be followed; but in practice that may not happen. That is why we may
have to reconsider the precedence requirements. To avoid delaying the project delivery date,
it is now worth questioning whether as yet unstarted activities really do have to await the
completion of others.
In other words, to avoid delaying the end project delivery date, we can question whether the
unstarted activities really need to wait for the completion of others.
(Refer Slide Time: 25:01)
In a particular organization it may be normal to complete system testing before commencing
user training. Let us take this as an example of how we can reconsider the precedence
requirements. Normally, system testing should be completed before commencing user training:
first complete the system testing, then you give training to the users.
That means user training is normally dependent on system testing: after system testing,
start the user training. But if you do not have time, then in order to avoid late completion
of the project the normal practice can be altered: start the training first, and after the
training is completed, perform the system testing. That is, you can alter the normal practice
and change the precedence requirements. We now assume that training is not dependent on
system testing, so that the training can be performed first and then system testing can be
done.
(Refer Slide Time: 26:17)
Now, one way to overcome a precedence constraint is to subdivide an activity into two
components: one which can start immediately, and one which is still constrained as before.
For example, suppose a user handbook is to be prepared. A draft can be drawn up from the
system specification, and then revised later to take account of subsequent changes. So, if
we want to prepare a user manual, we can prepare a draft from the system specification and
revise it later to accommodate the subsequent changes.
Whether the customer can accept this compromise on quality or not is another question; we
have to take a judicious decision on whether we can compromise quality when we are changing
the precedence requirements.
It is equally important to assess the degree to which changes in work practices increase
risk. While reconsidering the precedence requirements, we must assess the extent to which the
changes in work practices increase the risk. It is possible, for example, to start coding a
module before its design has been completed. Normal practice says that first we should do the
requirements analysis, then design, then coding, then testing. If somebody says we will start
coding first and do the design later, you can see what problems will occur.
That is why, before changing the precedence requirements, please assess the degree to which
the changes in work practices increase the risk. Starting the coding before the design is a
risky thing; you may have to redo a lot of work if the design changes, and quality may also
be compromised. So, while making these changes to the precedence requirements or to the
normal practices, think judiciously and then take action.
Now, let us summarize what we have studied about how to get back on track. First,
renegotiate the deadline with the client: if the client is happy to wait a few more days,
then there is no problem. If he does not agree, if negotiation regarding the deadline is not
possible, then try to shorten the critical path by allowing your staff to work overtime, by
reallocating staff members from noncritical or less pressing work to the critical activities,
or by buying in or hiring more staff. You can also reduce the scope of the work to get the
project back on track.
Another option is to reduce quality: by compromising on quality we may be able to finish the
project on time. The last option is to reconsider the activity dependencies; wherever
possible you can alter them. You can reconsider the activity dependencies by overlapping
activities, so that the start of one activity does not have to wait for the completion of
another, and by splitting activities; splitting an activity also alters the dependencies
among the activities.
In this way the project can be brought back to the original target or the original track; it
may be possible to meet the target date by following these options.
So, in this class we have discussed the priorities that might be applied while monitoring
different activities, such as giving top priority to the critical path activities and to the
activities having no float. We have also discussed the different options for bringing the
project back on target, which I have just described.
(Refer Slide Time: 32:07)
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 46
Project Monitoring and Control (Contd.)
Good afternoon. Now let us discuss another important aspect of Project Monitoring and
Control, known as change control and software configuration management. First let us discuss
change control, then we will go to configuration management.
(Refer Slide Time: 00:28)
When a document such as a user requirements specification is being developed, there may be
many different versions of that document, because it undergoes cycles of development and
review across the different phases of the software development process. Any change control
process at this point would be very informal and flexible; later on we will see the details.
At some point, what is assumed to be the final version has to be frozen: a point will come
when the document is assumed to be final, and then we must freeze the document development
process there. This will be treated as the baseline version, and it is effectively frozen at
this point.
(Refer Slide Time: 01:19)
Baseline products are normally the foundation for the development of further products or
documents; for example, if you want to develop the interface design documents, you can
develop them from the baselined user requirements documents.
Now, any change to a baseline document may create knock-on effects on other parts of the
project; that means other parts of the project might be affected if you make changes to the
baseline document. But sometimes these changes are required. For this reason, subsequent
changes to baseline documents have to be very stringently controlled and monitored.
Now, let us see the typical change control process and what steps are followed in it. First,
one or more users might perceive that there is a need for a change. Then user management
decides whether the change perceived by the users is valid and worthwhile, and if so passes
it on to development management. Next, a developer from the development group is assigned to
assess the practicality and the cost of making the change; if it is practically feasible,
they may go to the next step.
They also evaluate how much cost will be involved in the change. Then development management
reports back to user management the cost required for the change, and user management takes
a decision on whether to go ahead with the change based on that cost. If the cost is too
high, they may not go further; if the cost is acceptable to them, they give the green signal
to the development team to go ahead with the change.
(Refer Slide Time: 03:17)
Then, after user management has agreed, one or more developers are authorized to take copies
of the components that have to be modified. The copies are modified by these developers;
after initial testing, a test version is released to the users for acceptance testing. The
users conduct the acceptance testing, and when they are satisfied that the change is
acceptable, the operational release of that product or document is authorized and released,
and the master configuration items are updated with the new changed ones.
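To make the sequencing of these steps concrete, here is a minimal Python sketch; the stage
names and the example change request are my own invention, not part of the lecture, and a
real change control tool would of course record much more detail:

    # Hypothetical model of the change control steps described above.
    STAGES = [
        "requested",            # users perceive a need for a change
        "validated",            # user management judges it worthwhile
        "assessed",             # a developer assesses practicality and cost
        "approved",             # user management accepts the cost
        "modified_and_tested",  # developers change copies, initial testing
        "acceptance_tested",    # users run acceptance tests
        "released",             # operational release, master items updated
    ]

    class ChangeRequest:
        def __init__(self, description):
            self.description = description
            self.stage = STAGES[0]

        def advance(self, to_stage):
            # Only the next stage in the sequence may be entered.
            expected = STAGES[STAGES.index(self.stage) + 1]
            if to_stage != expected:
                raise ValueError(f"cannot jump from {self.stage} to {to_stage}")
            self.stage = to_stage

    cr = ChangeRequest("add report export facility")
    cr.advance("validated")
    cr.advance("assessed")
    print(cr.stage)  # assessed

The point of the sketch is simply that no stage can be skipped: a change cannot be released
before it has been approved and acceptance tested.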
To support this change control process, every organization normally has a staff member known
as the configuration librarian. Let us see the duties of the configuration librarian. First,
he identifies all the items that need to be subject to change control, that is, which items
and documents need to undergo change control. He is also responsible for the establishment
and management of a central repository of master copies: all the master copies of project
documents and software products are stored in a central repository, and it is the
configuration librarian's job to set it up and manage it.
He is also responsible for setting up and running a formal set of procedures to deal with
changes, and for maintaining records of who has access to which library items, when, and what
the status of each library item is: whether it is under development, under test, or released.
Those kinds of records are maintained by the configuration librarian.
Now, let us see the important aspect that is software configuration management. Changes in a
project can take place in any of the work products, such as the requirements document (SRS),
design documents, code, test plans, etcetera, and this may be due to many reasons such as bug
fixing, changes on account of work simplification, efficiency considerations, etcetera; you
might have to change these various documents.
Change management can actually be done manually, by a designated configuration librarian. In
the last slide I described the duties of a configuration librarian, and the configuration
librarian can do the change management manually. But the manual change management process
gets overwhelmed, becoming very difficult and time consuming, when we consider the changes
taking place across all the work products.
In a project there are several work products, and if you want to manage changes to all of
them it is very difficult to do manually. Also, when there are multiple variants of the
product, and several persons may simultaneously work on and try to change these variants,
problems will arise. That is why manual change management is very difficult, and you have to
go for automating it.
Software configuration management is concerned with how to track and how to control the
different changes made to a software product.
In a team development environment, each member of the development or maintenance team will
be assigned to handle some modification requests. So, both in the development team and in the
maintenance team, some members will be assigned to handle modification requests. Therefore,
every work product may have to be accessed and modified by several members.
Every work product first has to be accessed to see whether it can be changed; if a change is
viable, it may then be modified by several members. But if several members simultaneously
want to modify a single document, there will be a problem: it may lead to inconsistency. In
such a situation, unless you use a proper configuration management system, several problems
can appear. We will see what problems can appear if you do not use a proper configuration
management system.
Let us see what configuration management is. As I have already told you, configuration
management is concerned with tracking and controlling the changes to a software product. We
can define configuration management as the set of activities through which the configuration
items are managed; we will see what configuration items are in the next slide. So,
configuration management is the set of activities through which the different configuration
items are managed and maintained as the product goes through its lifecycle phases.
(Refer Slide Time: 09:31)
Now, let us see the contexts in which configuration management is necessary. Configuration
management is used in two important phases: during the development phase as well as during
the maintenance phase. During the development phase, the work products may get modified as
the development activities are carried out, so you have to go for configuration management.
Similarly, during the maintenance phase, the work products may change due to various types of
enhancements and adaptations that are carried out, including bug fixing etcetera. For this
also you may have to change the configuration, so configuration management is also necessary
at this point. Thus the state of the work products changes continually, both during the
development period and during the maintenance period.
(Refer Slide Time: 10:29)
Now, what do we mean by configuration? The state of all work products at any point of time
is called the configuration. The state of all the work products, such as the SRS document,
design documents, code, test plans, etcetera, at any point of time is called the
configuration of the software product. Software configuration management, as I have already
told you, deals with how to effectively track and control the configuration of a software
product during its entire lifecycle.
For effective configuration management, it is necessary to deploy a configuration management
tool. Many configuration management tools are available; some are open source software, which
you can get free of any licensing fees, and others are commercial tools. Towards the end I
will list some commercial tools and some free open source tools for software configuration
management. Configuration management practices include two important things: version control
and the establishment of baselines.
Now, let us see some of the important terminology associated with configuration management.
First, configuration: as I have already told you, the configuration of a software product is
the state of the various work products that are under configuration control.
So, the configuration of a software product is the state of the various work products, such
as the SRS document, design documents, code, test plans, etcetera, that are under
configuration control. The work products that are under configuration control are usually
referred to as configuration items. It is convenient to think of a configuration as a set of
files representing the various work products.
For example, the configuration of a sample software product shown in the next slide consists
of the following configuration items or work products: the configuration items here are W1,
W2, W3, up to Wn.
Now, let us see the next term, version. As development and maintenance activities are carried
out on a software product, its configuration keeps changing. As I already told you, during
both the development period and the maintenance period, as activities are carried out on the
software product, one or more of the configuration items keep changing. It is often necessary
to refer to the configuration that existed at a certain point of time; for instance, we may
want to know what the configuration was two days ago.
So, very often it is necessary to refer to the configuration of the software that existed at
a certain point of time. For example, we may want to refer to last week's configuration of
the software, or last year's configuration. Therefore, a version is a configuration that
existed at a certain point of time. More technically, versioning is a numbering scheme that
helps us identify a specific configuration at a certain point of time. How is this numbering
scheme achieved? It is achieved by the configuration tool tagging the files representing the
configuration items with the version name.
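As a rough illustration of these definitions, a configuration can be pictured as the set of
work product files, and a version as a named snapshot of that whole set; the file names and
version labels in this Python sketch are invented, not taken from the lecture:

    # Hypothetical sketch: a configuration as a set of work product files,
    # and a version as a tagged snapshot of the whole configuration.
    from copy import deepcopy

    configuration = {
        "SRS.doc":    "requirements text ...",
        "design.doc": "design text ...",
        "main.c":     "source code ...",
        "tests.doc":  "test plan ...",
    }

    versions = {}

    def tag_version(name, config):
        """Record the state of every configuration item under a version name."""
        versions[name] = deepcopy(config)

    tag_version("1.0", configuration)             # snapshot at one point of time
    configuration["main.c"] = "fixed source ..."  # a configuration item changes
    tag_version("1.1", configuration)             # the configuration at a later time

    print(versions["1.0"]["main.c"])  # still the original source code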
(Refer Slide Time: 14:53)
As a software product is released and used by customers, many errors are discovered and
enhancements are made to improve the functionality. So, several releases of the software come
out over time; a new release of the software is an improved system which is intended to
replace the old one. You know that many releases of software keep coming, for example from
Microsoft. A product is usually described as version m, release n, or simply as version m.n,
where m refers to the version and n refers to the release.
Now, let us quickly see what a revision is. A revision system is a numbering scheme that is
used to identify the state of a configuration item at any time; you must be clear about the
difference between a version and a revision. Each time a work product is updated, its state
obviously changes. We can thus think of a work product going through a series of updates
until it reaches a desired state, and the successive states of a work product are its
successive revisions.
Thus, each time a configuration item such as the code, a design document or the SRS document
is updated, a new revision gets formed. It then becomes possible to refer to a specific state
of a work product by using its revision number: every time you make an update you assign a
number, and the specific state of a work product can be referred to by its revision number.
Now, what is a baseline? A baseline is a software configuration that has been formally
reviewed and agreed upon, and it serves as a basis for further development. That is, the
software configuration has been formally reviewed by the developers, the users have seen it,
both have agreed upon it, and it can then serve as a basis for further development of the
software product.
Variants are versions that are intended to coexist. Different variants may be needed to run
the software on different operating systems, on different hardware platforms, or maybe on
different browsers.
Let us take a small example: one variant of a mathematical computation package might run on
Unix-based machines, another on Microsoft Windows machines, another on Solaris, etcetera.
Another example is LaTeX, the text processing software: LaTeX is normally used on Unix, while
the variant for Windows is called MiKTeX. So, on Unix you use the LaTeX variant, and for
Windows the variant is MiKTeX.
(Refer Slide Time: 18:47)
Variants may also have to be created when the software is intended to be used with different
levels of sophistication of functionality; for example, for novice users there is a novice
version, for enterprises an enterprise version, and for experienced or professional users a
professional version.
Variants are often created during the development and operation phases, as and when software
products with overlapping functionality are required. Even the initial delivery of the
software might consist of several versions, and more variants may be created later on.
(Refer Slide Time: 19:32)
Now, let us see the purpose of SCM: why software configuration management is used. There are
several purposes, and I have written the important ones here. First, avoiding the problems
associated with concurrent access.
If many users or developers simultaneously try to access one configuration item, it will lead
to inconsistency; to avoid the problems associated with concurrent access we should go for
software configuration management. Similarly, undoing changes: several times we may have done
something wrong and want to undo it, and software configuration management is required in
order to undo changes. Another purpose is system accounting: keeping track of who made a
change, when it was made and why it was made.
Keeping track of all these details is what we call system accounting, and that is another
reason why SCM is required. Similarly, handling the different variants and helping to fix
bugs in them: I have already told you about variants, and software configuration management
also helps in handling the variants and fixing bugs in them.
Similarly, accurate determination of project status: we want to know the accurate current
status of the project, and for that too you can use software configuration management. And
similarly, preventing unauthorized access to the work products: only the project manager
gives permission to the developers to access the work products, and without that permission
nobody else can access them. If anybody tries to access the work products in an unauthorized
way, that will be prevented if you have software configuration management. So, these are the
different purposes of using software configuration management; they are explained in more
detail on the slides, which are easy to read yourself, so I am skipping them.
Now, let us recap what we have seen so far. We first saw the need for change control, why
change control is required at all. Then we saw the steps of change control, how change
control can be achieved in an organization, and discussed the change control procedure. We
have also explained the concept of software configuration management: what it is, what the
need for it is, and what its purposes are. We have also discussed the contexts in which
software configuration management is necessary.
For example, I have told you that it is required in the development phase as well as in the
maintenance phase, and I have already told you the important purposes for which software
configuration management is required. We have also defined the common terms which, as a
student, you must know: the different terminologies used in configuration management, such as
what a configuration is, what a configuration item is, what a baseline is, what a revision is
and what a release is.
Those kinds of basics we have discussed here. Next, having seen the basic concepts of
configuration management, we must see how to do it, what process should be followed for
doing configuration management. I have already told you that doing configuration management
manually, by a configuration librarian, is very difficult; you have to automate it.
So, what tools are available for automating this? Some tools are freely available and some
are commercial; for example, freely available tools include SCCS and RCS. We will discuss the
details of these tools in the next class; but first we will discuss the steps that must be
followed for carrying out software configuration management, and those steps too will be
discussed in the next class.
(Refer Slide Time: 24:36)
And these things we have mainly taken from these two books ok.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 47
Project Monitoring and Control (Contd.)
Good afternoon. Now, let us see the remaining portion of software configuration
management.
We will see first the configuration management process, then we will see some of the
tools available for configuration management, and then at last we will see some other
project management tools.
(Refer Slide Time: 00:33)
(Refer Slide Time: 01:27)
We will first see configuration identification in detail, and then configuration control.
Project managers normally classify the work products associated with a software development
process into three main categories. As I have already told you, the work products could be
the SRS document, design documents, code, test plans, test cases, etcetera. They can be
divided into three categories: controlled products, precontrolled products and uncontrolled
products.
Let us see what controlled work products are. Controlled work products are those that have
been put under configuration control; team members must follow formal, accepted procedures to
change them. We will discuss that procedure shortly. Precontrolled work products are those
which are not yet under configuration control, but which will eventually be brought under
configuration control, perhaps in the near future.
(Refer Slide Time: 02:40)
The last category is uncontrolled work products: these are work products which will not be
subject to configuration control. The controlled work products and the precontrolled work
products together are known as controllable work products.
Let us see which work products may come under controllable work products. As I have already
told you, these include the requirements specification document; design documents such as
DFDs, UML diagrams, etcetera; the tools used to build the system, such as compilers, linkers,
loaders, lexical analyzers, parsers, etcetera; the source code for each module; the design,
test cases and test plans; and finally the problem reports, which can also come under
controllable work products.
Next we will see configuration control, that is, how to perform this part of configuration
management. Configuration control is the part of a configuration management system which most
directly affects the day-to-day operations of the developers. Configuration control allows
only authorized changes; the authorizing persons might be a project steering committee or the
project manager. So, configuration control allows only changes made by authorized persons to
the controlled objects, and it prevents any unauthorized changes. Nobody else can make a
change; they have to get permission either from the project manager or from the steering
committee.
That is why configuration control allows only authorized changes and prevents unauthorized
ones. The project manager can give authorization to some specific members, who will then be
able to change or access the specific work products; others cannot make any unauthorized
change or gain any unauthorized access.
In order to change a controlled work product, such as a piece of code, a developer has to get
a private copy of the module through an operation called a reserve operation. He cannot just
change the master copy. He has to first reserve a private copy; once his request is approved,
he is given a private copy in which he makes the changes, and then that copy is tested. If
the change control committee or the project manager approves the changes, only then is the
master copy replaced by the privately developed copy. This is how the configuration control
process works.
Let us explain how this configuration control process works. As I have already told you,
there are two main operations in the configuration process: reserve and restore. Suppose this
is the developer's workspace, and one developer wants to make some changes to the
configuration item W3. He has to send a reserve request to the project manager or to the
steering committee, saying that he wants to make some changes to configuration item W3. If
his request is approved by the project manager or by the steering committee, then a private
copy of the configuration item W3 is given to the developer. The developer then makes
whatever justifiable changes he wants on this copy of W3.
Please remember, he will not work directly on the master copy: a copy of W3 is given to him
as a private copy. The developer makes the changes in this private copy, and the changes he
has made are then sent to the project manager or to a committee called the CCB (change
control board), which I will describe shortly. That committee investigates the various
aspects of the changes and, if satisfied, approves them. After everything is approved, that
is, the changes are correct, the developer replaces the master copy of W3 with the privately
developed copy through an operation called the restore operation. So, through the reserve
operation he reserves a copy of W3 and is issued a private copy; and with the restore
operation, if the changes are approved by the CCB, the master copy is replaced with this
private copy. This is how the configuration control process takes place, using the two
important operations reserve and restore.
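The single-writer discipline described here can be sketched in a few lines of Python; the
repository, item and developer names are invented for illustration, and this is not how any
particular configuration management tool is actually implemented:

    # Hypothetical sketch of the reserve/restore discipline described above.
    class Repository:
        def __init__(self, items):
            self.master = dict(items)   # master copies of configuration items
            self.reserved_by = {}       # item -> developer holding the reservation

        def reserve(self, item, developer):
            """Hand out a private copy; only one developer may hold an item."""
            if item in self.reserved_by:
                raise RuntimeError(f"{item} already reserved by {self.reserved_by[item]}")
            self.reserved_by[item] = developer
            return self.master[item]    # private working copy

        def restore(self, item, developer, new_content, ccb_approved):
            """Replace the master copy only if the CCB has approved the change."""
            if self.reserved_by.get(item) != developer:
                raise RuntimeError(f"{developer} has not reserved {item}")
            if not ccb_approved:
                raise RuntimeError("change not approved by the CCB")
            self.master[item] = new_content
            del self.reserved_by[item]  # release the reservation

    repo = Repository({"W3": "original contents"})
    copy = repo.reserve("W3", "dev1")                 # a second reserve would fail
    repo.restore("W3", "dev1", copy + " + change", ccb_approved=True)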
As I have explained, configuration management tools allow only one team member to reserve a
module at any time. We have already seen the problem of concurrent access; the tool will not
allow more than one person at a time to reserve a module. Once a work product is reserved,
nobody else is allowed to reserve the same module until the reserved module is restored. In
this way the tool prevents more than one developer from simultaneously reserving a module,
and thus the problems associated with concurrent access are taken care of.
Now, let us see how modifications to a work product are made under configuration control. I
have already told you that when a developer needs to change a work product, he first makes a
reserve request. That reserve request is honoured only if the appropriate authorization has
been given by the project manager to that member for that specific work product; otherwise
the request is not honoured.
After the reserve command successfully executes, a private copy of the work product or
configuration item is created in his local directory. The developer can then carry out the
necessary changes to that work product on the private copy only, not on the master copy; any
changes he makes are made in the private copy only.
Once the developer has satisfactorily completed all the necessary changes, the changes need
to be restored into the configuration management repository. However, restoring the changed
work product to the system configuration requires the approval of a board called the change
control board (CCB). The change control board is usually constituted from among the
development team members. For every change that needs to be carried out, the CCB reviews the
different aspects of the changes made to the controlled work product and certifies certain
aspects of them.
(Refer Slide Time: 10:42)
The aspects they certify include: that the change is well-motivated; that the developer has
considered and documented the effects of the change, whether adverse or positive, so that the
committee is aware of any negative effects; that the changes interact well with the changes
made by other developers, that is, the committee checks whether the changes a particular
developer has made fit well with those made by others; and that appropriate people, possibly
from the CCB, have tested and validated the change.
For example, someone has tested the changed code and verified that it is consistent with the
requirements. If the CCB is satisfied with every aspect of the changes made to the work
product, they approve them and give permission, and then the software developer can restore,
that is, replace the master copy with the changed copy.
(Refer Slide Time: 11:58)
Now let us see who will be on the CCB. The change control board is seldom a large group of
people; for smaller projects the project manager himself normally plays the role of the CCB.
Except for very large projects, the functions of the change control board are normally
discharged by the project manager or by some senior member of the development team. Once the
CCB reviews the changes to the module and approves them, the project manager updates the old
configuration item with the new one through a restore operation.
A configuration tool does not allow a developer to replace a work product in the
configuration with his local copy unless he gets authorization from the CCB. Unless the CCB
approves the changes, even if the software developer has a private copy, he cannot restore
this modified copy. That is why the configuration control tool does not allow any developer
to replace a work product in the configuration with his local copy without CCB authorization;
therefore, incompletely or improperly modified work products cannot get into the
configuration.
In the last class we have already seen what a version and a release are, and how versions can
be managed through software configuration management. Let us now quickly see how the
different releases can be managed, that is, release management. The release management
process systematizes the work carried out by developers to provide a new release of the
software, and the work on the part of the users to smoothly obtain and use a new release.
So, two important things are systematized by the release management process: the work of the
developers in providing a new release of the software, and the work of the users in smoothly
obtaining and using it.
Release management has become very important now that users have the facility of easily
downloading releases from the internet. The release process should involve minimal effort
from both of the important stakeholders, the developer and the user: minimal effort on the
part of the developer to upload a new release of the software, and minimal effort on the part
of the users to download and install it.
For example, when a new version of the system has to be released, the tool should
automatically determine the changed components and the dependencies among them, and all the
interdependent components should be retrieved as a group, so that there is no possibility of
inconsistency. Retrieval of unnecessary and unchanged components should also be avoided by
the release management tool.
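One plausible way to picture this grouping, assuming the tool knows which components depend
on which, is to take everything reachable from the changed components in the dependency graph
and retrieve exactly that group; the component names and the dependency map below are
invented for illustration, and real release management tools are considerably more elaborate:

    # Hypothetical sketch: retrieve the changed components plus everything
    # that depends on them, and nothing else.
    dependents = {                      # component -> components that depend on it
        "parser":         ["compiler_front"],
        "compiler_front": ["ide_plugin"],
        "ide_plugin":     [],
        "docs":           [],
    }

    def release_set(changed):
        """Changed components together with all components that depend on them."""
        to_retrieve, stack = set(), list(changed)
        while stack:
            comp = stack.pop()
            if comp not in to_retrieve:
                to_retrieve.add(comp)
                stack.extend(dependents.get(comp, []))
        return to_retrieve

    print(release_set({"parser"}))
    # parser, compiler_front and ide_plugin are retrieved as a group;
    # docs is unchanged and unrelated, so it is not retrieved.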
(Refer Slide Time: 15:40)
Now, as I have already told you, several configuration management tools are available; some
of them are free, that is, open source tools, and some are commercial tools. Today we will
discuss some open source tools. Two popular configuration management tools on UNIX systems
are SCCS, which stands for Source Code Control System, and RCS, which stands for Revision
Control System. Either SCCS or RCS can be used for controlling and managing different
versions of text files.
Please remember that they can be used for managing different versions of text files only;
they do not handle binary files. For example, executable files, binary documents, diagrams,
etcetera, cannot be handled by SCCS and RCS. SCCS and RCS provide an efficient way of storing
the different versions that minimizes the amount of occupied disk space.
(Refer Slide Time: 16:56)
Now, suppose a module mod is present in three versions, say mod 1.1, mod 1.2 and mod 1.3.
Then the tools SCCS or RCS store the original module, mod 1.1, together with the changes
needed to transform mod 1.1 into mod 1.2 and mod 1.2 into mod 1.3. Please note that they do
not store all the versions mod 1.1, mod 1.2 and mod 1.3 in full; they store the original
baseline module, mod 1.1, along with the changes required to transform mod 1.1 into mod 1.2
and mod 1.2 into mod 1.3.
The changes needed to transform each baselined file to the next version are stored and are
called deltas; for example, the changes needed to transform mod 1.1 into mod 1.2 form one
delta. The main reason for storing deltas rather than full revision files is to save disk
space.
If you store all the versions mod 1.1, mod 1.2 and mod 1.3 in full, the disk storage
requirement will be very high. If instead you store only the baseline version mod 1.1 and the
changes required to transform it into mod 1.2 and mod 1.3, the required disk space is much
less. RCS and SCCS follow this approach.
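The delta idea can be illustrated with Python's standard difflib module; this is only an
analogy for how such tools save space, not the actual storage format used by SCCS or RCS:

    # Illustration of delta storage using difflib; not the real SCCS/RCS format.
    import difflib

    mod_1_1 = ["print('hello')\n", "print('world')\n"]
    mod_1_2 = ["print('hello')\n", "print('world!')\n", "print('bye')\n"]

    # Store only the baseline plus the delta, not both full versions.
    delta_1_1_to_1_2 = list(difflib.ndiff(mod_1_1, mod_1_2))

    # Any later version can be rebuilt from the baseline and its delta chain.
    rebuilt_1_2 = list(difflib.restore(delta_1_1_to_1_2, 2))
    assert rebuilt_1_2 == mod_1_2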
Now, the change control facilities provided by SCCS and RCS: they include the ability to
incorporate restrictions on the set of individuals who can create new versions, so proper
authorization is required, and facilities for checking components in and out, that is, for
controlling who can perform the reserve and restore operations.
Facilities for controlling who can perform the reserve and restore operations, and how, are
provided by these tools. Individual developers check out the components and modify them.
After they have made all the necessary changes to a component, and after the changes have
been reviewed and approved by the CCB, they check the changed module back into SCCS or RCS,
and the master file is replaced with the changed module.
(Refer Slide Time: 19:54)
Now, let us see some features of SCCS. I have already told you that SCCS is a configuration
management tool available on UNIX systems, and that it controls and manages text files only.
It implements an efficient way of storing versions: it minimizes the amount of occupied disk
space by storing only the baseline version and the changes required for transformation, not
all the revisions in full.
Suppose a module is present in three versions, as I have already told you, like mod 1.1,
mod 1.2 and mod 1.3. SCCS stores the original baseline version 1.1, along with the changes
needed to transform it into mod 1.2 and mod 1.3; these changes or modifications are known as
deltas.
The access control facilities provided by SCCS include facilities for checking components in
and out. Individual developers can check out components and modify them, and after they have
changed the module as required and the module has been successfully tested and verified by
the CCB, they can check the changed module back into SCCS. The master copy is then replaced
with the modified copy.
Revisions are normally numbered in ascending order, like revision 1.1, revision 1.2, 1.3,
etcetera. It is also possible to create different variants of a component by creating a fork
in the development history. These, then, are two configuration management tools; many other
configuration management tools exist, and you can find out about them from other books or on
the internet.
Now, before concluding, I will give some examples of other project management tools. RCS and
SCCS are used specifically for software configuration management, but there are other project
management tools that support other activities, such as drawing Gantt charts and dealing with
CPM and PERT.
I have listed here some other project management tools for performing other project
management related activities. For example, GanttProject is a freeware, GPL-licensed, open
source project management software that runs under the Windows, Linux and Mac operating
systems. Another tool is Microsoft Project, the basic project management software from
Microsoft Corporation; if you are using a Microsoft system you may find that Microsoft
Project is already available to you.
Another product is Primavera, a widely used suite of project management software. There are
two versions: SureTrak, which is the entry level software, and Primavera 6, which is the
advanced software. These too can be used for project management related activities. Using
SureTrak's Project KickStart, a project manager can define the different project phases,
establish the goals, anticipate the obstacles and delegate the different assignments. I am
only giving basic information here; you can see the details in books and on the internet.
(Refer Slide Time: 24:37)
Let us just compare the different features of these project management tools. GanttProject
can handle project scheduling using CPM, PERT, etcetera, so it supports scheduling. It also
supports resource management; as I have already told you, GanttProject is one of the better
tools for resource management, and it is open source software.
Primavera SureTrak also supports portfolio management and is a web based software. It
supports scheduling activities through CPM, PERT, etcetera, and it also supports cost
management and resource management, but it is not open source software. These are a few of
the project management tools listed here; many others exist, for example another project
management tool called Libre Project, which you can also use for different project management
related activities.
(Refer Slide Time: 26:22)
So, in this class today we have discussed the detailed steps of the configuration management
process. We have presented some configuration management tools such as SCCS and RCS, and also
some other project management tools such as Microsoft Project, Primavera, GanttProject,
Libre Project, etcetera.
Thank you very much.
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 48
Contract Management
So, first we will see a brief introduction to managing contracts, and then we will discuss
today the different types of contracts that one might come across in an organization.
(Refer Slide Time: 00:41)
You know that many organizations choose to obtain their software externally, because their
own capability for developing new and reliable software is limited. So, getting it from
outside, from external suppliers, seems sensible and viable.
These organizations think it is more cost effective to employ outside software developers for
new development, and to purchase software from a third party supplier, while a reduced group
of in-house software development staff maintains and supports the existing systems. That is,
the few in-house staff members they retain can be deputed to maintain and support the
existing systems.
(Refer Slide Time: 01:37)
Buying goods and services, rather than doing it yourself, is attractive in the following
situation: when money is available but other resources, especially staff time, are in short
supply. If you have very few staff but you have money, it is better to buy the goods and
services from suppliers in the market. It is essential, however, that the customer
organization finds time to clarify its exact requirements at the beginning and to ensure that
the goods and services delivered are satisfactory.
Even if you give the contract to a third party to develop the software, it is you who know
the detailed requirements of your organization. That is why the customer organization has to
work out its exact requirements at the very beginning and communicate those requirements to
the developer; only then can the developers deliver a good quality product or service.
(Refer Slide Time: 02:51)
Potential suppliers are likely to be more accommodating before a contract is signed than
afterwards. This is a normal tendency: suppliers tend to be very accommodating before any
contract is signed, but after signing they may become casual and may not listen to complaints
or requests from the client. This problem arises particularly acutely if the contract is for
a fixed price. Thus, as much forethought and planning are needed with an acquisition project
as with internal development.
In other words, almost the same forethought and planning that you require when acquiring a
project are also required for internal development. Now let us see the different types of
contracts; there are different classifications based on certain parameters. According to the
first classification there are three types: the bespoke system, the off-the-shelf package,
and the customized off-the-shelf package, popularly known as COTS software.
So, a bespoke system is a system which is created specially for one customer, for one
organization, because their requirements are very much specialized. The third party, that
is, the developer or supplier, may create a special piece of software for that customer
only. So, a bespoke system means the software or service is created specially for one
customer. An off-the-shelf package, on the other hand, is generalized software available
in the market; you just go to the market and buy it as it is.
The off-the-shelf package is also known as shrink-wrapped software. Actually, this
off-the-shelf idea is very common in the case of hardware. Suppose you want to build a
motherboard. What do you do? Go to the market, see what components are available,
purchase those components and ICs, and then assemble them on a board.
So, here we are going to the market, buying the components as they are, and then
assembling them. Similarly, with an off-the-shelf package, several generalized software
packages are available; we buy them as they are and then link or interface them. Then
another category is customised off-the-shelf or COTS software; this is basically a core
system which is customized to meet the needs of the client.
So, it is basically a core module, a core system in which all the fundamental facilities
are there; you purchase it and customize it according to your requirements, just to meet
the needs of the client. You may have seen that the government does this in its housing
schemes: it prepares only core houses, and then the purchaser, after buying the core
house, customizes it according to his needs. He may put marble, or make
923
different arrangements for ACs, different arrangements for lighting, etcetera. Similarly,
in customised off-the-shelf or COTS software, you purchase a core system which is then
customized to meet the needs of the client; either you can do the customization yourself,
or the supplier will customize the software according to your needs. So, this is known as
customised off-the-shelf or COTS software.
Where equipment is purchased, this is referred to as a contract for the supply of goods,
ok. So, whenever equipment or any hardware device is purchased, it is normally
referred to as a contract for the supply of goods; equipment comes under goods. But the
supply of software is known as supplying a service. In the case of equipment we call it
supply of goods, but in the case of software we call it supply of a service.
That means the vendor has to develop or write the software for you, or it can be the
granting of a license, that is, the granting of permission to the user to use software
which remains in the ownership of the supplier. The supplier retains ownership of the
software; he may give you a license, that is, permission to use it for a particular period
of time depending on your contract. So, this is known as supply of a service.
924
So, let us see again: if you are going to purchase equipment, we normally call it a
contract for the supply of goods, and when you are going to purchase software, we call
it supplying a service or granting a software license.
Now, let us see another classification, another way of classifying contracts, based on
how the payment to the supplier is calculated. According to this, there are three
categories of contracts: fixed price contracts, time and materials contracts, and fixed
price per delivered unit contracts.
925
Let us first see what a fixed price contract is. In a fixed price contract, as the name
suggests, the price is fixed when the contract is signed. So, at the time of signing the
contract, the price is fixed; some terms or clauses are mentioned in the contract at the
time of signing. The customer knows that if there are no changes in the contract terms,
then this is the price they will pay on completion of the contract; there is no change in
the price.
So, the customer's requirements have to be fixed at the outset. In this type of contract,
the customer has to fix the requirements at the very beginning, that is, the detailed
requirements analysis and specification must be done before signing the contract or at
the beginning of the contract. It will be very difficult to change them later on, because
the developer quotes the fixed price based on those requirements.
So, once the development is under way, the customer cannot change their requirements;
please note that this is the drawback of this type of contract. Once the development is
under way, the customer cannot change their requirements without renegotiating the
price of the contract. If you want to change some of the terms or some of the
requirements, then you have to renegotiate the price with the developer, and he may
charge some extra money for incorporating the new requirements. So, this is something
about fixed price contracts.
926
(Refer Slide Time: 10:15)
Let us see the advantages of this type of contract. Here the customer knows the
expenditure: the budget is fixed, say 1 lakh rupees for the service, and that is what has to
be paid. Also, the supplier is motivated to be cost effective, since he has quoted a known
price and has to work within it.
Let us see the drawbacks; there are several disadvantages of this type of contract. First
of all, the supplier will increase the price to meet contingencies and absorb the risks.
Because the supplier knows the price is fixed and cannot be changed later on, and
927
several risks may arise during the development of the project, in order to absorb those
risks he will put some contingencies into the quoted price. So, he will definitely try to
increase the price to meet the contingencies or to absorb the risks.
The next drawback is that it is difficult to modify the requirements; this I have already
told you. In this type of contract the customer has to perform the requirements analysis
and state all the requirements at the beginning; later on they cannot be changed, and if
you do change them the supplier will ask for more money.
The cost of changes is likely to be higher: once the price is fixed, if you then ask for a
few changes, the developer will charge you a high price for them. There is also a threat
to system quality; obviously, since the price is fixed, the developer may not be
motivated to deliver a high quality product, because whatever level of quality he puts in,
his price stays the same. So, it is a threat to the quality of the system. These are some of
the drawbacks; more can be found in the books or in internet sources.
The next type is time and materials contracts. In this kind of contract, the customer is
charged at a fixed rate per unit of effort; the rate is fixed per unit of effort, which may be
per staff-hour or per person-month.
928
The supplier may provide an initial estimate of the cost, based on the current
understanding of the customer's requirements. So, looking at the customer's
requirements, the supplier may provide an initial estimate of the cost. But this is not the
basis for the final payment. The supplier will usually send invoices to the customer for
the work done, at regular intervals, say monthly or quarterly. So, the supplier will send
bills, that is, invoices, to the customer for the work done in that period, at regular
intervals, maybe monthly or quarterly.
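Purely as an illustration of this billing arrangement, here is a minimal Python sketch of how a monthly invoice could be computed under a time and materials contract; the rate, hours and materials cost used below are hypothetical figures, not taken from the lecture.

```python
# A minimal sketch (not from the lecture's slides) of a monthly invoice under a
# time and materials contract; the figures below are hypothetical.

def time_and_materials_invoice(hours_worked, rate_per_hour, materials_cost=0):
    """Charge at a fixed rate per unit of effort, plus any materials used."""
    return hours_worked * rate_per_hour + materials_cost

# 160 staff-hours in a month at Rs. 1500 per hour, plus Rs. 20000 of materials.
print(time_and_materials_invoice(160, 1500, 20000))  # 260000
```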
Now, let us see what are the advantages of this approach? The advantages are that it is
easy to change requirements.
929
(Refer Slide Time: 13:25)
Now, the total price is not fixed; rather, the customer pays a fixed rate per unit of effort,
maybe per person-hour or per person-month, so the total charge varies with the effort.
Since the customer is charged at a fixed rate per unit of effort, if you want to add some
new requirements, you simply work out how many additional staff-hours or
person-months are required and multiply by the rate. So, there is no problem in
changing the requirements; it is easy to change the requirements under this method.
Another advantage is that the lack of price pressure can assist product quality.
The developer is not under price pressure here, because he knows that if the customer
asks for new requirements, the customer will have to pay more. So, the developer can
concentrate on improving the product quality, which is not the case with the previous
approach, that is, fixed price contracts.
930
(Refer Slide Time: 14:27)
Now, let us see the drawbacks. The first drawback is customer liability: the customer
absorbs all the risk. In the fixed price contract, the supplier absorbs the risk, and that is
why he keeps some contingency and may demand more money. But in this case the risk
is absorbed by the customer; if the customer asks for any changes to the requirements,
the developer will charge an extra price for them.
So, all those risks have to be absorbed by the customer; that is why this is the
customer's liability. The customer absorbs all the risks associated with poorly defined or
changing requirements, or with adding new requirements; everything has to be borne by
the customer. The other drawback is the lack of incentive for the supplier to be cost
effective.
931
(Refer Slide Time: 15:21)
The next category is fixed price per unit delivered contracts. Let us see what is meant by
this. This approach is often associated with function point counting; while discussing
cost estimation we have already discussed function points, LOC (Refer Time: 15:55),
etcetera.
So, this technique is associated with function point counting. Here the size of the system
to be delivered is estimated at the outset of the project. The size could be estimated in
terms of lines of code, that is, LOC or SLOC, but we know SLOC has several
drawbacks, and function points remove some of those drawbacks. Moreover, FPs, or
function points, can be more easily derived from the user's requirements or from the
requirements documents. Then a price per unit is quoted, ok.
So, based on the function points, a price per unit is quoted, and the final price is then the
unit price multiplied by the number of units. You see how many units your software will
require, and the total price will be the unit price multiplied by the total number of units;
it is very simple.
932
(Refer Slide Time: 17:03)
Let us take a small example. Suppose you have to develop a software product and the
charges are as follows. Up to 2000 function points, the design cost is 200 rupees per
function point, the coding cost is 700 rupees per function point, and the total
development cost is 900 rupees per function point.
Similarly, for function point counts greater than 2000 and up to 2500, the design cost is
240, the coding cost per function point is 760, and the total is, say, 1000 rupees per
function point. In that way, for different ranges of function point counts, the unit prices
for the design cost as well as the coding cost are given, and from these the total cost is
found. Now suppose you are required to develop a software product which is estimated
to contain, say, 2700 function points, ok.
933
(Refer Slide Time: 18:01)
Suppose the company that produced this table charges a higher fee per function point
for larger systems; you can see that up to 2000 FPs the rate is lower, and for higher
function point counts the cost per FP increases. Now, suppose you require a system to
be implemented, and you have told your requirements to the developer, the supplier; the
supplier has estimated that the system to be implemented will contain 2700 FPs, and he
will charge for the system based on these rates. So, for 2700 FPs you first use the slab
up to 2000, then another rate applies for the next 500, and the third slab applies for the
remaining 200.
So, what will the total price be? The total charge that the supplier will quote to the
customer is worked out by breaking the 2700 FPs across the slabs: for the first 2000 FPs
one rate is applicable, for the next five hundred another rate is applicable, and for the
remaining 200 function points the third rate is applicable. So, this will be 2000 into 900,
plus 500 into 1000, plus the remaining 200 charged at the rate of 1040.
So, this comes to 25,08,000 rupees. For developing a system which will be implemented
at the customer site and which is estimated to contain 2700 function points, the price
will be approximately 25,08,000 rupees
934
based on this fixed price per unit delivered approach. Here we have chosen FP as the
unit; we could also choose LOC, but LOC has several drawbacks, so you can use FP in
place of LOC for this type of contract, that is, for fixed price per unit delivered
contracts.
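To make the slab-wise arithmetic concrete, here is a small Python sketch of the calculation; the total-cost rates per FP come from the example above, while treating the last slab as open-ended is an assumption made only for illustration.

```python
# Slab-wise price calculation for the 2700 FP example. The per-FP rates for the
# total development cost follow the lecture's figures; the open-ended last slab
# is an assumption for illustration.

SLABS = [
    (2000, 900),           # first 2000 FPs at Rs. 900 per FP
    (500, 1000),           # next 500 FPs (2001 to 2500) at Rs. 1000 per FP
    (float("inf"), 1040),  # FPs beyond 2500 at Rs. 1040 per FP
]

def quoted_price(function_points):
    remaining = function_points
    total = 0
    for width, rate in SLABS:
        units = min(remaining, width)   # FPs billed in this slab
        total += units * rate
        remaining -= units
        if remaining <= 0:
            break
    return total

print(quoted_price(2700))  # 2508000, i.e. Rs. 25,08,000
```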
The scope of the application can, of course, grow during development; during
development you may require some more scope, some more requirements. But you can
see that it would be unrealistic for a contractor to be asked to quote a single price for all
the stages of a development project. For all the stages of a development project it is
very difficult for the contractor to quote one value; it is unrealistic.
See, how can they estimate the construction effort needed when the requirements are
not yet established? You do not yet know what the requirements will be. If the
requirements are evolving, the supplier cannot quote a single price for all the stages.
Rather, with this kind of scheme, whatever number of function points there turn out to
be, even if it increases, it can be accommodated within these slabs, or the slabs can be
extended.
So, if your software development consists of many stages, then, since it is difficult for
the supplier to give a single price, one approach would be to negotiate a series of
contracts. What you can do is
935
make a series of contracts, each covering a different stage of system development, and
for each stage you can negotiate a price. So, for each of the possible stages you
negotiate a price; you negotiate a series of contracts, and the prices are then charged to
the customer.
What are the advantages of this approach? One is customer understanding of how the
price is calculated: everything is given, maybe in the form of a table, unit-wise, so how
the price is computed is very simple mathematics. The advantage is that the customer
can understand how the price is calculated. Next is comparability between different
pricing schedules; the customer can easily compare different pricing schedules.
Emerging functionality can be accounted for: if you want to add new functionality,
depending on the situation it can be easily added and the charge can be easily calculated
by the supplier. And there is a supplier incentive to be cost effective.
936
(Refer Slide Time: 23:15)
But the drawbacks are the difficulties with software size measurement; you may need an
independent FP counter. How do you measure the software size? We have seen the
drawbacks of LOC, and how function points remove some of those drawbacks, but FPs
also have some drawbacks of their own. That is why you may find some difficulties
while measuring the software size, and you may need an independent function point
counter.
Similarly, changing, as opposed to new, requirements: how do you charge for them? If
you add a new requirement it is easy; you can use the per unit rate, or the range can be
extended, so the supplier can easily calculate the extra charge. But what about changing
requirements? The requirement is already there and you want to change it a little; then
how do you calculate the charge? That is difficult in this approach.
So, these are the different types of contracts based on the second classification, that is,
the classification based on how the payment is calculated.
937
(Refer Slide Time: 24:31)
There is still another type of classification, based on the approach that is used for
contractor selection. According to this classification there are: number one, the open
tendering process; then the restricted tendering process; and then the negotiated
procedure, which we also call the single tender or single quotation process.
The open tendering process is very simple: here any supplier can bid in response to the
invitation to tender, ok. Basically, the client first floats an invitation to tender, or
938
advertises a request for proposals, popularly known as an RFP. In the open tendering
process, any supplier can submit a bid in response to the invitation to tender or request
for proposals. Here all tenders must be evaluated in the same way; that is one important
point. However many tenders have been submitted by different bidders, all of them must
be evaluated in the same way. And government bodies may have to do this by local or
international law.
So, government bodies have to follow this open tendering process while purchasing
equipment or services, and they may have to follow it in order to observe local and
international laws. With a major project, this evaluation process can be time consuming
and expensive. That is a drawback: for major projects the evaluation process can be very
time consuming and expensive. For the publication of the RFP or advertisement you
have to allow some days, usually a minimum of one month; then you have to evaluate
the technical proposals, then do the cost evaluation, and then finally award the contract,
and the rules prescribe time gaps between these steps. So, this takes a huge amount of
time.
So, if you do not have sufficient time, then this method may not be suitable. On the
other hand, where the client is a public body, an open tendering process may be
compulsory. So, for government organizations and public bodies, the open tendering
process is normally compulsory.
939
(Refer Slide Time: 26:53)
Then we will see the next one, that is, the restricted tendering process. In this case, bids
are received only from those suppliers who have been specifically invited. Please mark
the difference: in open tendering anybody can submit a bid, but here bids are received
only from those suppliers who have been specifically invited. You may know, say, 5 or
10 suppliers in your locality; you send your request for proposals only to those
suppliers, and only the bids from those to whom you sent the request are considered;
others cannot apply. That is why you can say this is a special case of the open tendering
process, where not everybody can submit; only those who have been invited can submit
a bid, and you evaluate their bids.
Unlike the open tendering process, the customer can reduce the number of suppliers
being considered at any stage. In open tendering you cannot reduce the number of
suppliers, but in the restricted tendering process, and that is why the name is restricted,
at any time you can reduce the number of suppliers being considered. Normally you
have to invite bids from a minimum number of firms, perhaps six; then you see how
many respond, evaluate them, and select the appropriate supplier for awarding the
contract.
940
(Refer Slide Time: 28:29)
Then the last one is known as the negotiated procedure, or the single tendering process.
The open or restricted tendering process may not be suitable in some particular sets of
circumstances, particularly when time is not available. Let us take two examples here.
Suppose a fire has occurred in your computer centre and has destroyed some of the ICT
equipment. Then your major objective is to get replacement equipment up and running
as quickly as possible, and there may not be sufficient time to embark on a lengthy
tendering process. I have already told you that open or restricted tendering takes a long
time, because the prescribed time gaps have to be maintained as per the rules, and you
need these things urgently. In this case you do not have time to go for a lengthy
tendering process. So, what to do? The solution is the single tendering process, or
negotiated procedure.
941
(Refer Slide Time: 29:23)
Let us quickly take another example, where you have just recently obtained a new
software application supplied by an outside supplier, and now you require some
extensions to this newly purchased system. You know that, as the original supplier's
staff are familiar with the existing system, they can do it easily.
But if you say, no, I will go for the tendering process, then even if you get a firm with a
lower price, they are not conversant with the software, they are not familiar with it, so
you will face many technical problems. That is why in this case also it is inconvenient
to approach other potential suppliers through a tendering process. The solution is to go
for the single tendering process: you should approach the supplier who developed and
supplied the software to you, following this single tendering process.
So, in such cases an approach to a single supplier may be justified; that is, you negotiate
with only one supplier. That is why it is a single tendering process: you negotiate the
price with the supplier from whom you originally purchased the software.
942
(Refer Slide Time: 30:35)
However, the negotiated procedure or single tendering process has a drawback:
approaching a single supplier could expose the customer to charges of favoritism. You
may get complaints that you have shown favoritism to that particular supplier in
awarding the contract, so you may face that sort of criticism.
So, whenever you go for the single tendering process, you have to give a full, clear,
proper justification of why you have gone for single tendering; I have already shown
two such examples. With a proper justification you can go for the single tendering
process, so that the drawbacks of restricted or open tendering, mainly the time they
consume, can be avoided. If there is very little time available, you should go for the
single tendering process with a clear and proper justification.
943
(Refer Slide Time: 31:31)
So, in this class we have given a brief introduction to managing contracts. We have
discussed the different classifications of contracts along with their advantages and
disadvantages. We have also discussed which type of contract is best suited to which
circumstances.
This is the reference from which we have taken this material; and we will stop here.
944
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 49
Contract Management (Contd.)
Good morning. Now let us see the remaining portion of contract management. We will
first see the different stages in placing a contract, then some of the terms associated
with the contract placement process, that is, the contract checklist. Then we will see
contract management, that is, how to manage the contracts.
945
(Refer Slide Time: 00:44)
These are the different stages in contract placement. First you have to analyze your
requirements; only then can you initiate the contract process. Then you plan how to
evaluate the proposals: if you invite proposals, you have to think about how you will
evaluate them. After making this plan, you can send the invitation to tender, that is, the
request for proposals or RFP, and after receiving the proposals from different vendors,
you apply the mechanism you decided on to evaluate the different proposals.
946
Now let us explain all those steps in detail. The first one I have already told you: before
thinking of giving out a contract, you have to analyze your requirements internally.
After analyzing the requirements, you specify exactly what your requirements are; you
write down all the requirements in a document that we call the SRS, the software
requirements specification.
Normally this requirements document should contain the following sections. There
should be an introduction section containing an introduction to the proposal, that is, the
problem that needs to be automated, then a description of the existing system and the
current environment, and then the future strategy or plans.
One section should give the system requirements for the system you want to get; out of
these, some might be mandatory, that is, compulsory, and some might be desirable.
Then there are the expected deadlines by which you expect the firm to deliver the
product, maybe the whole product by one date, or phase-wise in an incremental manner,
and finally any additional information required from the bidders. So, for the contract
you have to prepare the requirements document, and it must contain these sections.
947
What should the requirements include? The requirements that you mention in the
requirements document should include the functions of the software, with all the
necessary inputs and outputs. Then the standards to be adhered to: which standard must
the software meet, an ISO standard, the WTO, World Trade Organization, standard,
etcetera. The requirements should also state these standards to be adhered to. Then the
other applications with which the software is to be compatible: once you get this
application, which other software should it be compatible with?
Suppose you are trying to automate something. Previously the railway reservation
system was manual, and when a proposal is made for automating it, you have to see
what it must be compatible with: while you are booking, the railway reservation system
has to be compatible with the banks' payment gateways, because the money you have to
pay goes through a bank gateway. So, it has to be compatible with that. Similarly, there
are quality requirements such as the response times: what kind of quality you require,
what the response time should be, and what the throughput should be; all those things
must be clearly mentioned in the requirements document.
The second phase is the evaluation plan. You have to think about how the
948
proposals that come from different vendors are to be evaluated. There are different
methods that you can use for evaluating the proposals, and you have to plan for them:
for example, just reading the proposals and seeing which one is the best; conducting
interviews, that is, inviting all the suppliers and interviewing each of them; or
demonstrations, where, if you are purchasing new hardware or software, you call the
suppliers and ask each of them to give a demonstration, either with everyone present or
confidentially one by one.
Another method is site visits: they may visit your site where the work has to be
automated, or you can visit their sites and see what kind of work they are doing, and you
can evaluate them based on that. The last one is a practical test: you can ask them to
give a test, a sort of live demonstration, at your place, where you say, we want this,
show us how it works. Those kinds of methods can be used to evaluate the proposals
submitted by different vendors.
While you are making the evaluation plan, there is a need to assess value for money, or
VFM, for each desirable feature. This VFM approach is an improvement on the previous
949
emphasis on simply accepting the lowest bid. You see, in earlier days proposals were
evaluated purely financially: the supplier who quoted the lowest amount was selected
and awarded the contract.
So, the VFM approach is an improvement on this traditional approach based on the
lowest bid. An example of the kind of reasoning involved: suppose a feeder-file feature
saves data input effort of about 4 hours of work a month, at roughly 20 pounds an hour,
and the system is to be used for 4 years; if the cost of that feature is 1000 pounds, would
it be worth it? Considerations of this kind can be taken into account. So, basically, this
evaluation approach is an improvement on the previous approach, which emphasized
the lowest bid.
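As a small illustration of the arithmetic behind that example, here is a Python sketch; the figures are the ones mentioned above, while the function itself is only ours, written for illustration.

```python
# A minimal sketch of the value-for-money arithmetic in the example above: the
# figures (4 hours saved per month, 20 pounds per hour, a 4-year system life,
# a 1000 pound feature cost) follow the lecture; the function is illustrative.

def is_feature_worth_it(hours_saved_per_month, rate_per_hour,
                        system_life_years, feature_cost):
    total_saving = hours_saved_per_month * rate_per_hour * 12 * system_life_years
    return total_saving >= feature_cost

# 4 * 20 * 12 * 4 = 3840 pounds saved against a 1000 pound feature cost.
print(is_feature_worth_it(4, 20, 4, 1000))  # True
```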
The next step is the invitation to tender. After deciding on the plan you will use for
evaluating the proposals, you can invite tenders from different vendors. Note that the
bidder is making an offer in response to the ITT. When we issue the ITT, the invitation
to tender, also called an RFP or request for proposals, then in response different bidders,
different firms, can send offers, that is, quote and send proposals to you. The proposals
are then evaluated, and the one that is
950
selected is awarded the contract; acceptance of the offer creates a contract. The customer
may need further information.
The information the firms have mentioned in their proposals may not be sufficient, so
the customer may need further information. In that case the supplier has to be informed
of what additional information is required; they have to provide that information, and
then the evaluation of the proposals can take place. A disadvantage is that different
technical solutions to the same problem may exist: for the same problem, different
vendors may offer different technical solutions. Then how do you evaluate them?
It is a little difficult to evaluate, because for the same problem different technical
solutions have been quoted by different vendors. In the lowest-bid approach this
problem does not arise, since whoever quotes the lowest amount is awarded the contract;
but in this case, where different technical solutions exist for the same problem, it is
difficult to evaluate.
During this invitation to tender, proposals will be submitted, and we have to know one
important document called the Memorandum of Agreement, or in short the MoA. Here
the customer asks for technical proposals. A proposal can actually be divided into two
parts: one the technical part, the other the financial part. So, the customer may ask for
the technical proposals.
951
The technical proposals will then be submitted by the firms; these technical proposals
are examined and discussed, maybe by an internal committee in which some experts
may be involved, and the agreed technical solution is recorded in the MoA. The
technical solution which is agreed by all serves as the MoA. Tenders are then requested
from suppliers based on the MoA.
So, after preparing the technical proposal, which is now agreed by all and serves as the
MoA, tenders are requested from the suppliers based on this MoA, because it contains
all the agreed technical details, and then the tenders are judged on price.
Because the technical proposals are now fixed, that is, frozen, the tenders are judged
based on price, since every vendor has to provide the technical content as mentioned in
the MoA. So, tenders are requested from the suppliers based on the MoA, and after
receiving the tenders they are judged based on price, because there is now uniformity in
the technical content; it has been frozen. A fee may be paid by the customer for the
technical proposals, that is, the amount spent in preparing a technical proposal may be
paid by the customer.
952
(Refer Slide Time: 11:56)
The last step is the evaluation of proposals. Here it is necessary to produce an
evaluation plan describing how each proposal will be checked against the selection
criteria. On what criteria will you select the best firm and award it the contract? That is
why an evaluation plan is needed, describing how each proposal will be checked against
the selection criteria. This reduces the risk of requirements being missed and ensures
that all proposals are treated consistently; otherwise there would be no consistency
across the evaluations.
It would be unfair to favour a proposal because of the presence of a feature that was not
requested in the original requirements. If a feature is not in the original requirements or
in the MoA, but some supplier has mentioned that feature in its quotation, and you select
that vendor on the basis of that proposed feature, then this would be unfair, because the
feature was not mentioned in the original requirements or in the MoA. So, it would be
unfair to favour a proposal just because the supplier has mentioned a feature that was
not present in the original requirements.
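The lecture does not prescribe any particular scoring formula; purely as one possible sketch, evaluating every proposal against the same pre-agreed selection criteria could look like the following, where the criteria names, weights and ratings are all hypothetical.

```python
# One possible sketch of consistent proposal evaluation: only criteria agreed
# before the proposals are opened are counted, so a feature nobody asked for
# cannot tip the decision. All names, weights and ratings are hypothetical.

CRITERIA_WEIGHTS = {
    "meets mandatory requirements": 5,
    "price": 3,
    "supplier track record": 2,
}

def score_proposal(ratings):
    """ratings maps a criterion to a 0-10 rating; unlisted criteria are ignored."""
    return sum(weight * ratings.get(criterion, 0)
               for criterion, weight in CRITERIA_WEIGHTS.items())

print(score_proposal({"meets mandatory requirements": 9,
                      "price": 7,
                      "supplier track record": 6,
                      "extra unrequested feature": 10}))  # 78; the extra feature adds nothing
```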
953
(Refer Slide Time: 13:31)
An application could be of three types; we have already seen in the last class that an
application could be bespoke, that is, built for a particular user; off the shelf, that is,
generalized; or off the shelf but customized. In the case of off-the-shelf packages, the
software itself can be evaluated, and it might be possible to combine some of the
evaluation with the acceptance testing.
With bespoke development, however, it is a proposal that is evaluated, while COTS
software may involve elements of both, that is, elements of bespoke as well as elements
of the off-the-shelf package. Thus, different approaches are needed, because the COTS
approach combines features of both the bespoke and off-the-shelf methods.
954
(Refer Slide Time: 14:40)
Now, let us see the steps in the evaluation of the proposals, that is, the process of how
you will evaluate them. The process of evaluation may include the following: first you
scrutinize the submitted proposal documents; then you may conduct interviews with the
suppliers or their representatives; then, if something is not clear, you may request
demonstrations; there may be site visits; and there may be practical tests, where you ask
them to show everything working practically. Based on such a practical test, you can
award the contract to whoever shows the best results.
955
(Refer Slide Time: 15:31)
The proposal documents provided by the suppliers can be scrutinized to see if they
contain all the features mentioned in the MoA, that is, whether they satisfy all the
original requirements. If they have missed some of the points, then clarification might
be sought over certain points. It is important to get a written, agreed record of these
clarifications; do not accept anything merely verbally or orally.
It has to be agreed and recorded. The customer might take the initiative by preparing
minutes of the meetings and then writing afterwards to the suppliers to get them to
confirm their accuracy. So, after recording the minutes, you can send them to the
suppliers in order to get them to confirm their accuracy.
A supplier could, in the final contract document, attempt to exclude any commitment to
representations made in the pre-contract negotiations; the terms of the contract need to
be scrutinized for this. Where there is an existing product, it should be demonstrated.
956
(Refer Slide Time: 16:44)
A frequent problem is that while an existing application works well on one platform
with a certain level of transactions, it does not work satisfactorily with the customer's
ICT configuration or level of throughput. An existing application may be working on
one platform, maybe Windows, but when you take it to the customer site, which is
based on, say, Unix, the application may not work, or the throughput or the response
time may not be the same.
Demonstrations might not reveal this kind of problem. Such things can be better
evaluated by making visits to operational sites: visits to sites already using the system
could be more informative, and you can get more insight, which will help you in
selecting the firm. As a last resort, a special volume test could be conducted; you can
see which proposal passes this volume test, and that one may be selected for awarding
the contract.
957
(Refer Slide Time: 18:00)
So finally, a decision has to be made to award the contract to a supplier. The reason for
following such a structured and objective approach to the evaluation is that we want to
demonstrate that the decision that has been made is impartial and unbiased. Please
remember that a project manager cannot be expected to be a legal expert.
He needs advice; there may be legal matters involved, so the project manager, not being
a legal expert, needs advice, but he must ensure that the contract reflects the true
requirements and expectations of both supplier and client. Whatever contract you finally
award should reflect the true requirements of the client and the expectations of both the
supplier and the client.
958
(Refer Slide Time: 18:51)
Let us quickly see the contract checklist, that is, some of the terms associated with the
tendering process. You must give proper definitions of the terms: who is a supplier,
who is a user, what is an application; proper definitions should be there in the contract.
Similarly, you must specify the form of agreement: for example, is this a contract for a
sale, or for a lease, or is it a license, that is, just permission to use a software
application; and can the license be transferred to another party or another person. All
those forms of agreement must be properly spelled out.
959
Another item is the goods and services to be supplied. As an organization you might
require some goods or services; we will consider two things, the equipment and the
software to be supplied. If you are purchasing or hiring some equipment and software,
then this should include an actual list of the individual pieces of equipment to be
delivered. Do not write just "server"; give the exact model number with the exact
specification and the different components of the server. It has to be complete, with
specific model numbers, version numbers, etcetera.
Similarly, if you are purchasing or hiring services, this should cover such things as
whether the service requires training and whether that will be provided by the firm or
not; documentation; installation; conversion of existing files during automation;
maintenance agreements, that is, for how many years the supplier will provide free
maintenance, maybe 1, 2 or 3 years; and similarly warranty and transitional insurance
arrangements. All those things must be specified when you are going to purchase goods
and services.
Similarly, the ownership of the software: who the owner of the software is also has to
be clearly mentioned. Can the client sell the software to others? Can the supplier sell the
software to others? It could specify that the customer has
960
exclusive use, or not; there are two possibilities, that the supplier can sell it to others or
cannot.
So, it could specify whether the customer has exclusive use. Does the supplier retain the
copyright, or is it transferred to the customer? That also has to be clearly mentioned.
Where the supplier retains the source code, there may be a problem if the supplier goes
out of business; to circumvent this, a copy of the code could be deposited with an
escrow service.
This can become a serious problem: if the relationship does not remain good, or the
supplier is not given the next contract, he may create problems, and you are left merely
using the application while the source code remains with him. You will then be in
serious difficulty, because maintenance etcetera cannot easily be done when the source
code lies with the supplier.
Similarly, the environment: for example, where the equipment has to be installed, and
who is responsible for various aspects of site preparation such as the power supply,
backup facilities, etcetera; who is responsible has to be clearly mentioned. Similarly,
customer commitments: for example, providing access, supplying information, and
approving intermediate results, etcetera. The customer's commitments should also be
properly mentioned in the contract.
961
(Refer Slide Time: 22:48)
Acceptance procedures: it is good practice to accept a delivered system only after a user
acceptance test. So, you have to carry out the user acceptance test, and when it will be
held and who will perform it should all be clearly mentioned in the contract.
Part of the contract should specify such details as the time that the customer will have
to conduct the tests, the deliverables upon which the acceptance tests depend, and the
procedure for signing off the testing as completed; all those things must be clearly
mentioned in the contract.
962
(Refer Slide Time: 23:36)
Similarly, the standards to be met: the product should meet ISO standards, or the WTO,
World Trade Organization, standard, etcetera; that also has to be clearly mentioned in
the contract. Then project and quality management: the arrangements for the
management of the project must be agreed, ok. The arrangements for the management
of the project must be agreed by the client as
well as the supplier.
963
These include the frequency and nature of the progress meetings and the progress
information to be supplied to the customer. All those things should be clearly
mentioned. There is also the timetable of activities: this provides a schedule of when the
key parts of the project should be completed, that is, when the different milestones will
be achieved. A clear-cut, unambiguous, complete timetable of the different activities
should be mentioned in the contract.
Then the price and payment method: obviously, the price is the most important item in
your contract. What also needs to be arranged is when the payments are to be made:
whether the payment will be made monthly, quarterly, half-yearly or yearly, or when
some milestone is achieved or some deliverable has been made, or on an incremental
basis; that has to be clearly mentioned. Payments may be tied to the completion of
specific tasks: say, when task A is completed you release, say, 10 percent; when task B
is completed you release another 20 percent, and so on. So, payments can also be tied to
the completion of specific tasks or events.
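As a small sketch of such milestone-tied payments, here is one way the released amount could be computed in Python; task A's 10 percent and task B's 20 percent follow the example above, while the remaining milestones and their percentages are assumptions added only for illustration.

```python
# Payments tied to completion of specific tasks or events. Only the task A and
# task B percentages come from the example above; the rest are assumed.

MILESTONE_RELEASE_PERCENT = {
    "task A completed": 10,
    "task B completed": 20,
    "acceptance test passed": 50,
    "installation completed": 20,
}

def amount_due(contract_price, completed_milestones):
    """Total payment released so far for the completed milestones."""
    percent = sum(MILESTONE_RELEASE_PERCENT[m] for m in completed_milestones)
    return contract_price * percent / 100

print(amount_due(2508000, ["task A completed", "task B completed"]))  # 752400.0
```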
964
(Refer Slide Time: 25:10)
Then we will quickly look at contract management. Some terms of the contract will
relate to the management of the contract itself: for example, progress reporting, that is,
how frequently the progress will be reported; that should be clearly mentioned.
Then decision points: some of these decision points could be milestones, and they could
be linked to the release of payments to the contractor; for example, when testing is
completed release, say, 50 percent, and when installation is completed another 25
percent, and so on. So, decision points, or milestones, can be linked to the release of
payments to the contractor. Then variations to the contract, that is, how changes to the
requirements are dealt with.
The original requirements were stated, and after some days some changes are made to
the requirements; how these changes will be dealt with by the supplier should also be
mentioned. That is part of contract management. Similarly, the acceptance criteria: as I
have already told you, you should go for a user acceptance test, and if it is successful
you accept, otherwise you reject. What the acceptance criteria are should also be clearly
mentioned.
965
(Refer Slide Time: 26:32)
How do you evaluate the following: what is the usability of an existing package, and
what is the usability of an application yet to be built? If the application already exists,
what is its usability; and if it is yet to be developed, what will its usability be? Then the
maintenance cost of the hardware: the software will not run alone, it will require some
hardware, so what maintenance cost will be incurred for the hardware?
Similarly, what time will be taken to respond to requests for software support? You are
facing some problems and you inform the supplier who delivered the software; how
much time will they take to respond to your query, to a request for service or support?
That response time is also very important.
Similarly, training: when you are purchasing new software, your end users do not know
how to run it, so you require some training. That training should be included as part of
the contract; they should give free training to the users, with any cost for it built into
their proposal, so that the users are educated in how to use the software. All these things
you should evaluate while looking at and evaluating the contracts.
966
(Refer Slide Time: 27:55)
The contract should include some agreement about how the customer-supplier
relationship is to be managed; this is very important. The contract should include
agreements on what the relationship between customer and supplier should be and how
it will be managed.
For example, the decision points or milestones on the basis of which payment will be
released; the quality reviews, that is, what the quality of the software or of the
intermediate products given by the supplier should be; and changes to requirements,
because in any organization changes will certainly occur. If for every change the
supplier says that they require more money, it becomes very difficult; so minor changes
they should do freely, and whenever there are major changes, how they will charge for
them should also be clearly mentioned in the agreement.
967
(Refer Slide Time: 28:49)
So, today we have discussed the different stages in contract placement. Then we have
discussed the typical terms of a contract, that is, the contract checklist. We have also
explained some terms of a contract which relate to the management of the contract.
These are the references we have taken.
968
Software Project Management
Prof. Durga Prasad Mohapatra
Department of Computer Science and Engineering
National Institute of Technology, Rourkela
Lecture - 50
Project Close Out
Good afternoon. Today, we will discuss the last phase of the software project
management life cycle, that is, Project Close Out or project closure: how to close a
project.
Today, we will first give a brief introduction to project closure, then discuss the possible
reasons for project closure, then the process to be followed for closing a project, then
performing the financial closure, and finally publishing the project closure report.
969
(Refer Slide Time: 00:47)
You know that every project must come to an end, sooner or later. This is the last phase
of the project management life cycle. It is the responsibility of the project manager to
decide the appropriate time to close a project; when the project will be closed is the
project manager's decision.
Normally, project closure activities can be divided into two types: administrative
closure and contract closure. Let us see what administrative closure is.
970
Administrative closure activities consist of ensuring that all the project deliverables are
achieved, and that the project know-how is transferred to other personnel and is
properly documented and archived, ok. As the name suggests, these activities consist of
ensuring that all the project deliverables, such as the requirements specification, design
documents, code, test plans, tested modules, etcetera, are properly archived, and that the
project know-how, that is, the project knowledge, details, etcetera, is transferred to the
other personnel associated with the project and is properly documented and archived.
The other type of activity is known as contract closure activities. These activities verify
that all the terms and conditions of the contracts with the customer, as well as with the
various subcontractors, have been met, that all those terms and conditions have been
successfully met, and that the project contracts are satisfactorily closed.
Now, let us see why we need to close a project: what are the possible reasons for project
closure? There are two main reasons for closing a project. One: all the project goals and
objectives have been successfully accomplished, the project is completed, and now we
have to close it. The other: it has been found that the project is unlikely to achieve its
stated objectives; it is very difficult to achieve the stated objectives or goals, it is
unlikely that we can
971
achieve them, and hence we have to prematurely terminate the project. These are the
two possible reasons why a project needs closure.
I have already told you there are two reasons: one, that the project is successfully
completed; the other, that the project is not successful, it is unlikely to achieve its stated
objectives and goals, and hence we have to prematurely terminate it. Let us see some of
the possible reasons why we may have to prematurely terminate a project before its due
date.
There are many reasons for prematurely terminating a project, but we will discuss a few
important ones below. Number one, lack of resources: the project requires certain
resources and we do not have those resources right now. We started the project, but
after starting it we find that we cannot arrange the resources, maybe due to financial
deficits or technical problems; we lack the resources, we do not have sufficient
manpower, etcetera. In that case we have to prematurely terminate the project.
Similarly, changed business needs of the customer: the customer required something, but
after the project started the need no longer exists. Suppose the customer has given us a
project to develop an inventory management system, and after the project starts they
decide to hand over the development of the inventory control system to a
972
third party. The business need no longer exists; there has been a change in the business
needs of the customer. In that case you have to forcibly terminate the project.
Number three, the perceived benefits accruing from the project may no longer remain
valid. For example, while starting the project you expected a certain benefit from
automating a process, but in the meanwhile several competitors and competing products
have come into the market, and you fear that the expected benefit will not materialize.
In that case the project has to be prematurely terminated.
The next possible reason is incomplete requirements. If the requirements are incomplete,
we may start the project assuming that they can be clearly stated within a few days of
starting. However, even after considerable time has passed, the customer is still not
able to clearly specify the requirements.
In that case the project has to be prematurely terminated. Similarly, changes to
regulatory policies: your organization is bound by the rules, policies and regulations
of the government. When you started there were no such binding policies, but after the
project has started the government has
973
imposed some policies or restrictions. Suppose you have planned to develop a product
based on satellite communication and release it to the market, and the government then
imposes a rule that nobody can use a satellite communication based product in the
country. Then you have to forcibly, prematurely terminate the project even though you
have started it.
Then, some key technologies used in the project may become obsolete during project
execution. The technologies that were current when the project started may, after a few
months or years, be superseded by newer technologies. For example, suppose you have
started developing a product built around 4G, and after some time 5G comes to the
market. Once 5G arrives, 4G becomes obsolete, and if your product was designed around 4G
nobody will purchase it. So, you have to forcibly, prematurely terminate the project.
Similarly, risks becoming unacceptably high: when you started the project you anticipated
only minor risks, but after a while the risks, whether technical, financial or
operational, become very high and can no longer be tolerated. In that case also you have
to terminate the project. These are some of the possible reasons why a project may have
to be terminated before its time, that is, prematurely terminated.
974
(Refer Slide Time: 07:41)
Now let us see why it is necessary to properly close a project, and why projects are
often not properly closed, perhaps due to negligence or other reasons. After this we will
see what problems arise when a project is not properly closed.
Let us first see why projects are not properly closed. Number one, lack of interest by
the project team members: after either successfully completing the project or prematurely
terminating it, the team members no longer have any interest in the project, and due to
this lack of interest they do not formally and properly close it. Another reason is
underestimation of how fast know-how can get lost and how much implicit knowledge exists
with the team members.
After developing the project you gain some knowledge, and once the project is over this
knowledge is quickly forgotten and lost. People underestimate, or fail to imagine, how
quickly the know-how and knowledge regarding a project can get lost, and therefore they
may not do a proper closing of the project. The implicit knowledge that exists within the
team members is also
975
underestimated, and as a project manager you may therefore not properly close the
project; this is another reason.
Another reason is emotional factors. Suppose a project has continued for three, four or
five years. During those years strong relationships and emotional bonds develop among the
teammates, and people do not want to lose these relationships so quickly. Due to these
emotional factors the project is not promptly and properly closed.
Another reason is indecision regarding the project closure. As a project manager you are
not able to decide when to close the project and keep delaying; you are in a dilemma
whether to close the project today, tomorrow or after a few days. Due to this indecision
you are not able to properly close the project. These are some of the reasons why
projects are not properly closed.
These points are also explained in detail in the slides, which you can read yourself.
976
(Refer Slide Time: 10:33)
Now, let us see what problems we may face if a project is not closed properly. Number
one, time and cost overrun: if project termination is delayed, the project as a cost
center keeps running up expenditure in the meanwhile, leading to cost overrun. No work is
being done, because the project is already complete, but since it has not been officially
closed some expenses still continue, and these lead to cost overrun. Also, the project
duration appears to be longer than it actually should be. You may have finished the
project in four years, but if for another two or three months you are unable to
officially close it, the project duration is counted as four years and three months.
Unnecessarily, the project duration appears longer than it should actually be.
977
(Refer Slide Time: 11:33)
The next problem is locking up valuable human and other resources. The project is over,
but since it has not been officially closed, the staff members and other resources such
as computers are still held up. Otherwise, those staff members and resources could have
been assigned to some other project. So, by not officially marking the closure of the
project, you unnecessarily lock up valuable staff members and other resources. This is
another problem that may occur if a project is not properly closed.
978
(Refer Slide Time: 12:11)
Then, stress on the project personnel is another problem. The project personnel often
lose out on the experience that they could have gathered on other projects to which they
might have been deployed, had the project closeout occurred at the appropriate time. If
the project had been closed at the right time, these staff members could have been
assigned to other projects and gained experience there; instead, they unnecessarily miss
out on that experience. The feeling of not doing anything challenging, missing out on
learning opportunities and sitting idle, and the impact of this on their future careers,
can be very stressful for the team members. This is another problem you will observe if
projects are not properly closed.
979
(Refer Slide Time: 13:03)
Now, let us quickly look at some of the issues associated with project termination. The
problems with project termination are two-fold: one, emotional problems; two,
intellectual problems. The emotional issues can concern both the team members and the
customer or client.
The emotional issues that the team members may experience include uncertainties
980
and apprehensions concerning their assignment to the next project. If the project is not
properly closed, the team members may apprehend that they will not be assigned to the
next suitable project.
This may manifest as a general loss of interest in work and a lack of enthusiasm to
perform the remaining project work; the remaining work will not be done sincerely.
Similarly, there can also be diversion of attention. If the project is not officially
closed, the staff members will sit idle and their attention will be diverted; they will
pay more attention to issues such as getting reassigned to a project of their choice, and
the project work may take a back seat.
On the client side, there can be a sudden change in attitude and a loss of interest in
the project; that is another important issue associated with project termination. Since
the project is not being officially closed or terminated, the client may even change the
personnel
981
dealing with the project, thereby causing further disconnect and difficulties in project
closure. The official project closure will then face further difficulties and problems.
Now we come to the next type of problem, the intellectual problems. These may include
handling some sensitive issues. When a project is to be prematurely terminated, the terms
of the contract and the list of deliverables need to be renegotiated; whatever terms and
conditions were mentioned in the contract and whatever deliverables were planned
previously have to be renegotiated. Also, even when some deliverables and tasks are
considered to be no longer necessary, before dropping them this needs to be verified with
the client. In addition, the closure decision, whether after successful completion or
premature termination, has to be effectively communicated to all stakeholders: the
project manager, developers, designers, clients and so on.
982
(Refer Slide Time: 16:53)
Now let us see how the project closure is carried out: what is the project closure
process and what steps must be followed to close a project. Before that, let us note some
fundamental points. Before the project closure process can be initiated, the decision
regarding closing the project needs to have been taken in consultation with the top
management; the project manager alone is not competent to close the project and has to
get approval from the top management.
There are again two possible cases: you might want to close a project after successful
completion, or after premature termination. Let us take these two cases individually. For
successful projects, it is expected that the requisite technical documents, user manuals,
testing documents, user training, etcetera have been completed properly, and it should
have been ensured that the project outputs are usable by the customer without any
difficulty. So, two things apply for successful projects: all the documents have been
completed, and the final outcome is satisfactorily usable by the customers without any
difficulty. It also needs to be ensured that the administrative activities, such as
settling claims and archiving the deliverables for future use, have been accomplished
successfully.
983
(Refer Slide Time: 18:29)
Now, let us see the second case, that is, projects which are to be prematurely
terminated. For a project facing premature termination, the project manager, in
consultation with the top management and the customers, has to decide whether to
terminate the project immediately or to keep it under watch for some more time and then
close it. For both the normal successful closure and the premature termination categories
of projects, the project manager must ensure that there are no further obligations
pending.
984
(Refer Slide Time: 19:33)
Now let us quickly see the steps to be followed in the project closure process. Number
one, get the client acceptance: you have to obtain the client's acceptance that the
project is being closed. Then, archive the project deliverables: whatever the project
outcomes and deliverables are, such as the SRS document, design documents, code, test
cases, etcetera, all of these have to be archived properly. Next, preserve the project
know-how: the project details, the knowledge and the experience gained from the project
have to be preserved properly so that they can be used in similar projects in the near
future. Then, perform a financial closure: all the financial documents, balance sheets,
income and expenditure statements, etcetera must be closed properly. There should be a
financial audit, and all financial obligations should be settled; the financial closure
has to be performed before the final closure.
Then, perform a post implementation project review: whether the project has been
successfully completed or is being prematurely terminated, you must conduct a post
implementation project review, also called a post mortem analysis, because through this
process you can learn the plus points of your development, the negative points and the
lessons you have
985
learnt from the project development, and these can be used to benefit similar types of
projects in the near future. After carrying out this post implementation analysis or
review, you have to prepare a post implementation review report and archive it. Finally,
you have to release the staff members who were associated with the development of this
project. After the official closure of the project, it is the project manager's duty to
release them and see that they are successfully allocated to other suitable projects,
depending upon their experience and skills.
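Purely as an illustration of how these closure steps could be tracked, here is a minimal Python sketch modelling the checklist described above. The step names and the ProjectClosure class are hypothetical, not part of any standard or tool mentioned in the course.

```python
from dataclasses import dataclass, field

# Closure steps exactly as listed above (illustrative names only).
CLOSURE_STEPS = [
    "get client acceptance",
    "archive project deliverables",
    "preserve project know-how",
    "perform financial closure",
    "perform post implementation project review",
    "prepare post implementation review report",
    "release staff",
]

@dataclass
class ProjectClosure:
    project_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, step: str) -> None:
        if step not in CLOSURE_STEPS:
            raise ValueError(f"Unknown closure step: {step}")
        self.completed.add(step)

    def pending_steps(self) -> list:
        # Remaining steps, in the order they are normally carried out.
        return [s for s in CLOSURE_STEPS if s not in self.completed]

    def is_closed(self) -> bool:
        return not self.pending_steps()

closure = ProjectClosure("Inventory Management System")
closure.mark_done("get client acceptance")
closure.mark_done("archive project deliverables")
print(closure.pending_steps())   # steps still pending before official closure
print(closure.is_closed())       # False until every step is complete
```

A checklist like this is only a memory aid; the actual closure decisions remain with the project manager and top management, as described above.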
These points are explained in a couple of slides. I am skipping them since they are easy
to follow; you can read them yourself. But I want to say something about the post
implementation project review, because it is very important. The goal of a post
implementation project review, sometimes called a post mortem analysis, is to perform a
critical analysis of the project in order to learn, improve and avoid repeating the same
mistakes in future projects. You can learn many things, take steps to improve in similar
projects and avoid the mistakes made in this project; you can learn from the mistakes and
avoid similar
986
mistakes in future projects. By analyzing past mistakes, project teams can learn to do
better by improving their methods and practices. Not only successful projects but also
unsuccessful ones implicitly hold a lot of information that can be identified, documented
and disseminated to benefit other similar projects in the near future.
Now, let us quickly see the steps of a post implementation project review. First, conduct
project surveys to collect various types of information: after the project is over you
should conduct surveys, for example by preparing questionnaires and collecting
information from the stakeholders, to gather information on performance issues,
organization structure and so on. Then collect objective information, that is, project
metrics: for example cost metrics, schedule metrics and quality metrics can be collected
during the post implementation project review. Then hold a debriefing meeting: before the
final project review meeting, hold a debriefing or preparatory meeting where the project
manager and some senior management persons analyze what has
987
happened, what the advantages, experiences and lacunae were, and arrive at a consistent
view. After this you can hold the final project review meeting, discuss all of these, see
what experiences have been gained, what advantages, what skills have been learnt and what
mistakes have been committed, analyze all of this and then prepare a report based on the
post implementation review.
This report is archived, and its findings will help in improving similar types of
projects in the near future. The report will also give some recommendations for similar
projects; if those recommendations are followed, more benefit will definitely be obtained
from similar projects in the future. Finally, after preparing the review report, you have
to publish it and send it to all the stakeholders, so that they can analyze the plus
points, the drawbacks and the mistakes that were made, and use these findings in future
projects of a similar type and nature.
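To make the idea of collecting objective project metrics for this review concrete, here is a small hedged Python sketch that compares planned and actual cost, effort and schedule figures. The numbers and field names are invented purely for illustration; a real review would gather many more metrics.

```python
# Hypothetical planned vs. actual figures gathered for the post implementation review.
planned = {"effort_person_months": 60, "duration_months": 12, "cost_lakhs": 80.0}
actual  = {"effort_person_months": 72, "duration_months": 15, "cost_lakhs": 95.0}

def variance_report(planned, actual):
    """Percentage overrun of each actual figure relative to its planned value."""
    report = {}
    for key in planned:
        overrun = (actual[key] - planned[key]) / planned[key] * 100.0
        report[key] = round(overrun, 1)
    return report

# e.g. {'effort_person_months': 20.0, 'duration_months': 25.0, 'cost_lakhs': 18.8}
print(variance_report(planned, actual))
```

Variances computed like this feed directly into the "reasons for variances from the baseline plan" part of the closure report discussed below.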
As already mentioned, the outcome of the post implementation project review is a report.
We have also seen that after the project is closed out, a final report has to be
prepared; this is known as the project closure report. Now, let us see what it contains.
The project closeout report documents the
988
important results obtained from the various project closeout tasks and activities. It
starts with a historical summary of the project's deliverables and of the baseline
activities over the course of the project. Subsequently, it presents a summary of the
survey results; as mentioned, a survey has already been conducted before this, and
whatever is obtained from it is summarized here, along with the quantitative data
gathered about the project's performance.
Finally, the results of the final project review meeting are presented in this document;
the findings of that meeting must be contained here. The reasons for variances from the
baseline plan, that is, the reasons for deviations from the plan prepared earlier, as
well as the lessons learnt, the best practices followed and the disposition of project
resources, etcetera, are also highlighted in this report.
It also contains recommendations for improvement for other similar projects: from the
experience of this project, recommendations can be prescribed for improving similar
projects, and these recommendations are written in the closeout report so that benefit
can be obtained from them in the next similar projects.
989
(Refer Slide Time: 28:15)
Finally, you have to publicize the results. The project leader summarizes the positive
and negative findings as well as the prescriptions or recommendations for improvement.
The summary is published so that all the team members can refer to it, and the management
can take initiative for any necessary corrective actions based on it.
The important findings of the post implementation project review audit can be published
in a document. This document has to be archived and can be used to disseminate the
lessons learnt from the project and to serve as a reference for similar projects in the
near future.
990
(Refer Slide Time: 29:21)
A typical way in which the post implementation project review report can be organized is
as follows. First, write down the project description, that is, information about the
project, and the context in which it was carried out. Then record what worked well: the
good things, the positive aspects and the advantages. Then record the factors that
impeded the performance of the project, that is, the negative aspects, including the
mistakes that were committed. Finally, give a prescription for other projects to follow:
the recommendations, based on what has been learnt from this project, that similar
projects in the near future should follow. These prescriptions or recommendations should
also be written in the project review report.
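One possible way of capturing this outline in a machine-readable form is sketched below in Python; the section names simply mirror the structure above, and the class, its fields and the example values are hypothetical, not something prescribed by the course.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PostImplementationReviewReport:
    project_description: str                                    # information about the project
    context: str                                                 # the context in which it was carried out
    what_worked_well: List[str] = field(default_factory=list)    # positive aspects
    impeding_factors: List[str] = field(default_factory=list)    # negative aspects, including mistakes
    prescriptions: List[str] = field(default_factory=list)       # recommendations for similar projects

report = PostImplementationReviewReport(
    project_description="Inventory management system for a retail chain",
    context="Fixed-price contract, 12-month schedule",
    what_worked_well=["Weekly client demos kept requirements stable"],
    impeding_factors=["Late staffing of the testing team"],
    prescriptions=["Staff the test team from the design phase onwards"],
)
```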
991
(Refer Slide Time: 30:25)
Finally, the last step in project closure is releasing the staff. Now that the project is
officially closed, you have to release the staff; this is the final step of the project
closure process. A last meeting is held before the project team members disperse to
different projects, and the project manager should ensure that the team members who
worked on this project are assigned to suitable projects according to their expertise and
skills. This meeting is also an occasion for celebration before the team members
disperse, for recognizing exceptional performance by team members, and for recognizing
the experience and proficiency gained by them. Different team members have gained
different experience and proficiency; as a project manager you must recognize this and,
if possible, reward them.
992
(Refer Slide Time: 31:35)
So, the final step is to release the team members, and as a project manager you must
ensure that they are assigned to other projects depending upon their experience and
skills. Today we have discussed the reasons for project closure, that is, why there is a
need to close a project. We have also discussed the problems of improper project closure,
the problems faced if a project is not closed properly.
We have also explained the issues associated with project termination, presented the
steps to be carried out for closing a project, and presented the structure of a typical
project closure report, what its contents should be, and how it can benefit similar
projects in the near future. These are the things we have discussed today.
993
(Refer Slide Time: 32:33)
These are the references; the contents have been taken from these books.
994
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 51
Software Quality Management
Welcome to this lecture. In the last lecture, we had discussed risk management and then
started discussing Software Quality Management. We had discussed some very basic concepts
about software quality. For example, we discussed what quality software is, and we said
that fitness of purpose is a good definition of quality for traditional products, but for
software products fitness of purpose is interpreted as satisfaction of the requirements
document.
However, this is only a basic requirement for a quality product, one of the attributes of
quality software; a software product is said to be a quality one when it has several
other desirable attributes in addition to correctness or fitness of purpose. We had
discussed the various quality attributes, and said that there are several quality models
which define what the quality attributes are and how to measure them for a given software
product.
After that we had just started discussing the evolution of quality systems over the
years. Let us proceed from that point and continue with the evolution of software
quality.
995
(Refer Slide Time: 02:15)
We had seen in the last lecture that before World War II, the only way quality products
were produced was by testing: test a product rigorously, identify the bad products and
eliminate the defective ones. This was the only way quality was ensured in manufacturing.
Since then, quality systems have rapidly evolved; there are four stages of evolution, and
many of the advances originated with the Japanese.
996
(Refer Slide Time: 03:06)
The initial inspection and testing was superseded by quality control; we will see what
quality control and statistical quality control are. Later the quality assurance
techniques evolved, then the total quality management techniques, and there are still
further developments in this area. As you can see, there are various stages of growth of
quality systems, because customers are becoming quality conscious and it is very
important for an organization to produce quality products.
The manager has a very important role in an organization producing quality software. If
we look at the different developments that have occurred, from inspection to total
quality management, we will see that initially there was too much emphasis on the
product: the product was tested thoroughly. Later, we do not really look at the product
so much; we look more at the process.
The basic assumption is that if the process is good, it is bound to produce quality
products. We need not focus on inspecting the product and rejecting the bad ones; the
emphasis should be on having a good process. Over the time that quality systems have
evolved, the emphasis has shifted from product assurance to process assurance.
997
(Refer Slide Time: 05:14)
The initial technique deployed for quality was testing and inspection, and later the
quality control (QC) technique started appearing. The basic principle of quality control
is to not only test the manufactured products to detect and eliminate the bad ones, but
also to determine what causes the defects, that is, to look at the manufacturing process.
If the rejection rate is high for a nuts and bolts manufacturer, check what is causing
the defect: is it that the temperature at which manufacturing is done is expanding the
cast, and the cast is not tolerant to that temperature, thereby causing the defect? Then
use a different cast. So, with the quality control principle you can see a gradual shift
towards the process. Inspection and testing are still there; they remain basic techniques
even in modern quality systems, and defective products are eliminated, but we also look
at the process to see why the defective products arose in the first place. This forms the
basic quality control principle; but then we had statistical quality control.
998
(Refer Slide Time: 07:13)
Quality control not only targets rejecting defective products, but also aims at
correcting the causes in the manufacturing process that result in defective products.
Slowly, quality control techniques evolved into statistical quality control (SQC). The
idea here is that testing each and every product is often difficult; can we instead use
statistical techniques to make inferences about the quality of the products, make
variations to the process and check whether the quality is improving?
So, statistical quality control techniques essentially use statistics, based on samples
of the output produced, to make inferences about the quality of the output and of the
process, and then make changes to the process and see whether the quality is improving.
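As a small worked illustration of this statistical idea, the sketch below computes the control limits of a simple p-chart from sample defect fractions and flags samples that fall outside them. The sample data are invented, and real SQC practice involves considerably more than this.

```python
import math

# Fraction of defective items found in each inspected sample (hypothetical data).
sample_size = 200
defect_fractions = [0.02, 0.03, 0.025, 0.09, 0.02, 0.03]

p_bar = sum(defect_fractions) / len(defect_fractions)        # average defect rate
sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)         # std. deviation of the fraction
ucl, lcl = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)    # 3-sigma control limits

for i, p in enumerate(defect_fractions, start=1):
    if p > ucl or p < lcl:
        # An out-of-control point signals that the process itself should be examined.
        print(f"Sample {i}: defect rate {p:.3f} outside limits ({lcl:.3f}, {ucl:.3f})")
```

With these made-up numbers only the fourth sample is flagged; the conclusion drawn in SQC is not "reject that batch" alone, but "investigate what happened in the process when that batch was produced".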
999
(Refer Slide Time: 08:39)
After quality control, the next breakthrough was quality assurance.
The main assumption in quality assurance is that to produce a quality product, looking at
the finished product and then testing it is not that important; the major focus of the
quality team should be on the process. If a good process is in place, good products are
bound to result; if an organization's manufacturing process is good and is followed
rigorously, then the products are bound to be of good quality. Just looking at the
products and rejecting the bad ones is not a good way of ensuring quality.
1000
Rather, the quality team needs to concentrate on having a good process and then seeing
that the process is followed rigorously; in that case the products are bound to be good.
The modern quality paradigms are based on the quality assurance principles, and they
provide guidance for recognizing, defining, analysing and improving the production
process. So, note the shift from the product to the process: quality assurance techniques
have shifted from purely testing based techniques to recognizing, defining, analysing and
improving the production process.
1001
(Refer Slide Time: 10:48)
This further evolved into the total quality management (TQM) principles, the next step of
evolution. Here, process measurements are made: how effective is the process, is it
producing good products, is it efficient, and does it have the other desirable
characteristics? Based on these measurements, changes are made to the process so that it
improves; the focus is continuous process improvement through process measurements.
We need to collect metrics about the process, see where the problems are, change the
process itself, and thus continually improve the process.
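The following minimal Python sketch, with invented figures, shows the kind of process measurement TQM relies on: tracking a process metric, here review effectiveness (defects caught in review versus total defects), across successive projects to see whether process changes are actually improving it. The project names and numbers are hypothetical.

```python
# Defects found in reviews vs. total defects, per project (hypothetical figures).
projects = [
    {"name": "P1", "review_defects": 40, "total_defects": 100},
    {"name": "P2", "review_defects": 55, "total_defects": 110},
    {"name": "P3", "review_defects": 75, "total_defects": 115},
]

for proj in projects:
    effectiveness = proj["review_defects"] / proj["total_defects"] * 100.0
    print(f'{proj["name"]}: review effectiveness {effectiveness:.0f}%')

# A rising trend suggests the process change (for example, stricter reviews) is working;
# a flat or falling trend points to a bottleneck worth redesigning.
```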
1002
(Refer Slide Time: 12:03)
So, this is the total quality management principle. It is not just about documenting a
good process; the emphasis here is, as projects run, to collect statistics and metrics
regarding the process performance, find the bottlenecks in the process and optimize them
through process redesign. One thing is clear: over the years the quality paradigm has
shifted from product assurance to process assurance.
Initially, software testing was the primary means of producing good quality products. It
is the basic technique and is still there, but then we had quality control
1003
as the next development. It incorporated software testing, no doubt, but the defective
products were also analysed to identify the problem in the process that was resulting in
the bad products, and the process was modified; that is software quality control.
In software quality assurance, the basic premise is: do not focus on product testing,
focus on having a good process, document the process and then follow it rigorously; then
the product is bound to be good. That was the next development, and you can see the
shift: testing becomes a smaller activity and the emphasis moves to analysing the
process, and then finally, in software quality assurance, to having a good process so
that good products are bound to follow.
The next step of development is total quality management, where we not only start with a
good documented process but also, through process measurements, continuously improve the
process. The diagram emphasizes that software testing is still there, but it is now a
smaller part; many other things happen.
For example, in software quality control we not only do testing and inspection but also
analyse the defects and find out why they are arising. In software quality assurance
these things are still there, but we also have a documented process. In total quality
management we have process measurements, find out the bottlenecks and continuously change
the process.
1004
Modern quality systems emphasize process improvement: changes are made to the process to
improve product quality, reduce costs and accelerate schedules. We find the process
bottlenecks which are causing poorer product quality, higher cost, delays and so on, and
fine tune the process to improve the product quality, reduce cost and accelerate the
schedule.
One thing that is well accepted now is that a good process is required to produce a good
product. But we must remember that, given a good process, a product manufacturing plant
can run the process again and again, and each time good quality products will come out;
there is no doubt about that for traditional manufacturing.
Say we look at an iron manufacturing company: it consumes iron ore and other raw
materials and produces good quality iron, maybe flat sheets or rods. Once the plant has
been set up and the process is followed, such as getting good quality raw material and so
on, it will keep on producing good quality products. But for software development and
other design activities, we cannot say that if we set up a good process, good products
will always come out; there are other factors.
For example, consider the complexity of the problem. A process that worked well and
produced good quality products for accounting software may not carry over when we are
doing something very different in the telecom domain; maybe we cannot replicate that
1005
process to get a good product in telecom. Also, the capability of the designers is an
important factor in producing good quality; even if the process is there, if the
designers and coders are not good, the product cannot be good. So, the point here is that
for traditional products having a good process is enough to get good quality products,
but for software development having a good process is only part of the story; there are
other factors which affect quality.
One of the early quality standards is the ISO 9000 series. ISO, the International
Organization for Standardization, is a consortium of a large number of countries
established to formulate and foster standardization. The ISO 9000 series of quality
standards was published quite some time back, in 1987. These are basically sets of
documents which give guidelines for the process; they do not look at the product, they
are not product specific, they are based on the process. ISO 9000 came with three
documents: the ISO 9001, ISO 9002 and ISO 9003 standards.
1006
(Refer Slide Time: 20:02)
ISO 9000 is based on the quality assurance principle that if a good production process is
followed, good quality products are bound to follow; you can see that this is essentially
the quality assurance principle, and ISO 9000 is based on the quality assurance model.
Now, let us see the difference between ISO 9001, 9002 and 9003. ISO 9001 is applicable to
organizations engaged in design, development, production and servicing of goods. Here,
for manufacturing, there is an initial design, and the design is used for development
1007
and then for production and servicing. Therefore, this is the right standard for software
development organizations, because they carry out requirements specification, design,
development, testing and servicing.
Then there are the other standards. ISO 9002 is applicable to organizations which do not
do product design but are involved in production and servicing. For example, a consultant
organization may have designed the plant; the company itself did not design it, and once
the plant was set up and the process was in place, the company just continued producing
goods and servicing them.
Here the plant and the process are purchased and the company only manufactures. The 9002
standard is applicable to such organizations, and of course it is not applicable to
software development organizations, because a development organization does requirements,
design, coding, testing, everything; it does not just get the design and code and keep on
manufacturing.
1008
(Refer Slide Time: 22:42)
The ISO 9003 standard is another document produced by ISO; it is applicable to
organizations which do only inspection and testing of products.
This is represented pictorially here: ISO 9001 is applicable to organizations which do
design, development, production, installation and servicing, and software companies come
under 9001; 9002 is applicable to those who do production, installation and servicing;
whereas 9003 applies only to those who do inspection and testing.
1009
But 9001 is a very generic standard applicable to many types of industries which design
and produce goods, and software is different. Here there is no raw material that is
consumed, and even as the software is being developed it is not visible; only when the
software is finally running can you see it.
Therefore, software needs a very different interpretation of the guidelines given in the
ISO 9001 document. This was felt because it became very difficult to apply the clauses of
9001 to the software industry, and ISO later came up with a new document called ISO 9000
part 3, which provides guidelines for software design, development and maintenance. This
was specific to software.
1010
(Refer Slide Time: 25:43)
For example, during software development the only raw material consumed, if we can think
of one, is data or inputs, whereas other types of manufacturing and development consume a
lot of raw material, for example iron ore, coal, limestone and so on. Therefore, many of
the clauses of 9001 pertaining to raw material control and input control are not
applicable to the software industry; these clauses are not relevant for a software
organization.
1011
(Refer Slide Time: 26:44)
Therefore, ISO came up with a new document applicable to the software industry. It
recognized that there is a radical difference between software and other products, and a
few years after the original documents were published in 1987, ISO 9000 part 3 came out
in 1991.
It helped to interpret the ISO standard for the software industry, but the official
guidance is still inadequate and is interpreted by various consultants.
1012
We will stop here; we are almost at the end of the time. Next, we will briefly look at
the requirements of ISO 9001 for the software industry, what areas it emphasizes and what
model it proposes, and subsequently we will look at the Software Engineering Institute's
Capability Maturity Model. We will stop now.
Thank you.
1013
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 52
ISO 9000
Welcome to this lecture. In the last lecture we had looked at quality systems and said
that there are some popular quality systems for which guidelines have been formed; one of
them is ISO 9000 and another is SEI CMM. These two we will discuss in some depth, and
then we will look at the Personal Software Process and the six sigma concepts. Let us
discuss ISO 9000.
In the last lecture we had said that ISO 9000 is a series of guidelines to set up a
quality system, and that it came with a series of volumes: ISO 9001, 9002 and 9003,
initially published in 1987. ISO 9001 gives the guidelines for the quality system model
for quality assurance in design, development, production, installation and servicing.
This is a generic set of guidelines applicable to a wide spectrum of industries,
including the software industry.
ISO 9002, on the other hand, is a set of guidelines for setting up a quality system model
for quality assurance in production, installation and servicing. ISO 9003 is the quality
system model for quality assurance in final inspection and test. But, as we were
1014
mentioning in the last lecture, 9001 is applicable to a wide spectrum of industries, and
the working of software industries is very different from other industries.
There was difficulty in interpreting the ISO 9001 clauses for the software industry; for
this reason ISO came up with a new volume, a new document, ISO 9000 part 3. This
interprets ISO 9001 in the context of the software industry; it is specific to software
and gives guidelines for the application of ISO 9001 to the design, development and
maintenance of software. Our discussions here will be based on the ISO 9000 part 3
document.
The ISO 9000 part 3 document interprets the ISO standard for the software industry, but
the official guidance provided for setting up the quality system is still inadequate, and
therefore there are many quality consultants who provide help in setting up the quality
system according to ISO 9001.
1015
Later, in the year 2000, ISO 9000 came up with a new revision. Here an important goal was
to improve the effectiveness of the process by collecting performance metrics; this was
not there in the original ISO 9001. It identified many activities where numerical
measurements need to be taken, measurements of the effectiveness of the process
activities, and these measurements are to be used to improve the process. Tracking of
customer satisfaction is also made explicit. As you can see, in the original ISO 9001 of
1987 the main goal was quality assurance, whereas the ISO 9000:2000 revision moves
towards total quality management, where process performance measurements are taken and,
based on these, continual process improvement is addressed.
1016
But before we discuss the details of the ISO 9000 quality system, let us first discuss
why it is necessary to get ISO 9000 certification, that is, what benefits accrue to an
organization if it gets the ISO 9001 certification. The first thing is that for a
software organization to participate in a bid for some work, it is sometimes necessary to
have ISO 9000 certification; without it they would not qualify to even participate in the
bidding process. This is a good enough reason for many organizations to go for ISO 9000
certification.
But there are other benefits of getting the ISO 9000 certification. The ISO 9000
guidelines mandate that a well documented software production process should be in place,
and this yields benefits in the quality of the projects completed, cost effectiveness,
meeting the schedule and so on.
Therefore, organizations not only get a chance to participate in bids, but also the
quality of their work, cost effectiveness, schedule adherence and so on improve. In other
words, it contributes to repeatable and higher quality software.
Over the next few slides we will be using the term repeatable, so let me say one sentence
about what it means. Repeatable means that an organization can repeat its success on a
project in similar projects; if an organization had success in one project, it can repeat
that success in similar projects. If the organization does not have a disciplined,
1017
documented process, it may succeed in one project but fail even in a similar project,
because the success is not repeatable.
The success might have come from individual efforts, from some very enterprising,
energetic individuals who worked hard to make one project a success; but another similar
project may be a failure if a well documented software production process is not in
place.
Therefore, ISO 9000 helps in repeating the success of one project in another and also
results in higher quality software. It fine tunes the development process, because the
process gets documented and reviewed, and the development process therefore becomes more
focused, efficient and cost effective.
But let us be clear about one thing before we discuss the ISO certification process. ISO
9001 certification is not actually awarded by ISO itself; ISO does not certify and does
not issue any certificates. ISO by itself also does not accredit, approve or control the
certification bodies. There are certification bodies in every country which award the ISO
certificates; every country has its own certification body which awards the ISO
certificate.
1018
If we think about it, there can be wide variation in the requirements to get ISO 9001
certification across countries, and it sometimes becomes very important who has
certified; for example, an Indian company may also get certified by a European
certification body. Since there is wide variation in ISO certification, it becomes
important who has certified.
Even though ISO does not certify, it develops the standards, documents them and also
guides the certification bodies towards uniform accreditation and certification; but the
certifying bodies can relax or overlook certain of these requirements and still give
certifications.
The ISO 9000 certification also points out weaknesses of an organization, recommends
remedial action and sets the basic framework for a continually improving process.
1019
We have so far seen the benefits of getting ISO 9000 certification for a software
company, and also some issues regarding the certification itself. For example, we said
that ISO by itself does not award certificates, it just produces the guidelines and
standards, and every country has its accreditation body to which companies apply to get
certified. The next question that comes to mind is: suppose a software development
company wants to get ISO 9000 certification, what does it have to do?
Naturally, it has to apply to an ISO 9000 registrar for registration. Every country has a
set of registrars identified by the accreditation body of that country, and the registrar
carries out the registration process. There are several stages of this; let us have an
overview of these stages.
1020
As we said, every country has several registrars. The company needs to identify a
registrar; typically a company takes the help of a consultant who sets up the company's
documents and processes and helps in selecting the registrar, and then the company
applies to the registrar for registration through the application process and provides
the required documents. The registrar in turn looks at the uploaded documents and
suggests improvements and corrective actions.
The registrar again looks at the corrected documents, and this process may go on until
the uploaded documents are to the satisfaction of the registrar. After this the registrar
performs a full assessment of the quality system, the process and so on. If it is
successful, the certificate is awarded and the company enters a surveillance mode, that
is, every few years it needs to be reviewed again to retain the ISO 9001 certificate.
If the registrar is not satisfied, it again suggests corrective actions and a fresh
registration application needs to be made. But one thing to note here is that, as
discussed, the entire ISO certification process is based only on the documents: the
process documents, the quality system documents and so on.
We never mentioned anything about the product; the registrar does not even look at the
product. In other words, the ISO 9000 certification is not given to a product of a
company; it is given to the company or organization as a whole, not to any specific
product. It is given to the process, the manufacturing or development
1021
process. The assumption is that a good process should result in a good product, but the
products are not even looked at when giving the certificate.
The first stage is the application stage: apply to the registrar for registration. Then
comes the pre-assessment stage, where the registrar makes a rough assessment of the
organization based on the documents uploaded.
1022
Next is the adequacy audit: the process and quality related documents are uploaded, and
the registrar reviews them and makes suggestions for improvements.
Once the company revises the documents as per the registrar's suggestions, a compliance
audit is done by the registrar, in which it reviews whether its suggestions have been
complied with; this goes on until the registrar is satisfied.
1023
Once the registrar is satisfied, it awards the ISO 9000 certificate. But it is not given
permanently; continuous monitoring by the registrar is required for the organization to
retain the ISO 9000 certification.
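Purely as an aid to remembering the order of these stages, here is a tiny Python sketch that models the registration workflow as a fixed sequence. The stage names follow the description above; the code is only illustrative and is not part of any official ISO process.

```python
from enum import Enum, auto

class RegistrationStage(Enum):
    APPLICATION = auto()       # apply to the registrar
    PRE_ASSESSMENT = auto()    # rough assessment from uploaded documents
    ADEQUACY_AUDIT = auto()    # document review and suggested corrections
    COMPLIANCE_AUDIT = auto()  # verify that the suggestions were complied with
    CERTIFICATION = auto()     # ISO 9000 certificate awarded
    SURVEILLANCE = auto()      # periodic monitoring to retain the certificate

def next_stage(stage: RegistrationStage) -> RegistrationStage:
    stages = list(RegistrationStage)
    index = stages.index(stage)
    # The last stage (surveillance) simply repeats until the certificate lapses.
    return stages[min(index + 1, len(stages) - 1)]

print(next_stage(RegistrationStage.ADEQUACY_AUDIT))  # RegistrationStage.COMPLIANCE_AUDIT
```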
Since the certification is given based on the process and quality system of the
organization and not on the product, it is all right for the company to state in its
corporate advertisements that the organization has been certified to ISO 9000, but it
cannot advertise its products that way. It cannot say that its products are as per ISO
9000, because the certificate is given to its process, not to the product, and if any
organization uses the ISO 9000 certificate in its product advertisements its certificate
may be withdrawn.
1024
So far we have looked at some basic aspects of ISO 9000: what benefits it provides, how
to get the certification and so on. Now let us look at the ISO 9000 requirements: what
the company has to do and which specific aspects of its process it must clearly state in
the process documents so that it will be awarded the certificate. The first is management
responsibility: the management should be committed to the quality of the products, and
the quality system should be independent of the development system.
That is, the people who are checking quality should not be working under the project
manager; they have to be independent. The quality manager should report to the top
management only. It should not be the case that the quality manager reports to the
project manager, because in that case quality may get compromised; this is one of the
requirements on the quality system.
The quality system itself must be described: how it is formed, the specific documents it
maintains and so on. Contract review: during project execution an organization may take
the help of subcontractors who do some part of the work.
In this case, are the terms of the subcontract reviewed? Also, when an organization
enters into a contract with a party to develop software, is the contract reviewed
properly, is it understood what needs to be done by what time, and who reviews it;
1025
these things must be detailed here. The main idea is that unless the contract is reviewed
properly before the work is accepted, the project may not succeed.
Design control: as the design is made, it undergoes many changes for various reasons. Is
the design controlled so that the latest design is available and the changes that have
occurred to it can easily be traced? In plain words this means configuration control, that
is, using a configuration management tool to keep the design documents under
configuration control.
Document and data control: all documents and data must be under configuration control.
Notice how strongly configuration control is emphasized; in other words, without using
configuration control an organization may not get the certification. It is a very important
requirement that configuration management be used, and we will discuss configuration
management as part of this course. Further requirements include review of purchasing,
control of customer supplied product (again through configuration control), and product
identification and traceability.
Inspection and testing: testing is a very important requirement for getting a quality
product. For both inspection and testing, the organization has to specify the steps of
inspection, where inspections occur, and the stages of testing: unit, integration and
system testing. Control of inspection, measuring and test equipment is also required; if
test equipment is used, how it is calibrated needs to be discussed as well.
We have just looked at a very brief overview of the ISO 9000 requirements: the specific
items that the organization must address and describe in the process documents to be
uploaded. These are only the highlights; there are many other requirements for ISO 9000,
and we have covered some of the most important ones. We will look at a few more
requirements in the next lecture and then proceed further.
1026
Thank you.
1027
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture - 53
ISO 9001 (Contd.) & SEI CMM
In the last lecture, we said that ISO 9000 is a very important quality standard and that
there are many benefits to an organization that gets ISO 9000 certification. We discussed
what benefits accrue to an organization, how to apply to a registrar, get registered and
finally be awarded the ISO certificate, and after that we were discussing the major
requirements that the organization must fulfil.
We had identified several important points; now let us proceed from there. Some of the
important requirements are: management responsibility (how is the management set up,
is it committed to quality, how does it demonstrate this); the quality system itself (how is
it set up, is it independent of the development system); contract review (before entering
into a contract, is the contract reviewed for capability, schedule and so on); design
control; document and data control; control of supplier product; control of customer
supplied product; product identification; and process control. For all these, we said that
configuration management is key.
1028
Without configuration management we cannot have control of the documents,
traceability, or the changes that have occurred over time. Therefore, one of the major
requirements of ISO 9001 is to use a configuration management tool. Inspection and
testing is another major requirement: the organization has to identify at what stages
inspections occur, what types of testing are undertaken, and also whether any test
equipment is used.
If so, how is it calibrated? There are a few more requirements. Control of nonconforming
product: when there are rejections or defects, how are these kept out of circulation to the
customers? If one of the products is found to be defective, how is it ensured that the same
product is not given to another customer? Here again this implies the use of a
configuration management tool, because with such a tool we can easily identify where a
defect has been fixed and obtain the latest software.
Corrective and preventive action: how are corrections and preventive actions taken?
Handling, storage, packaging, preservation and delivery; control of quality records. It is
not enough that the quality system just does inspections and approvals; it must also
maintain quality records. The quality system itself must be audited, and who audits the
internal quality system must be clearly identified.
1029
Another important ISO 9000 requirement is that periodic training should be given to the
developers. The developers must undergo periodic training because the required skills
change frequently and skill upgradation is needed; this has been identified by ISO 9001.
The documents must also identify at what frequency training will be given, in what
mode, and so on. Servicing: how will servicing be provided? Statistical techniques: what
statistical techniques will be used, and so on.
Now, let us look at some of these requirements more elaborately. As we mentioned, we
are not going to look at all the requirements; that would be too much for a short
discussion. We are just getting an overall idea of the ISO 9000 certification process, its
major requirements and so on, without going into the nitty-gritty of every clause that
needs to be satisfied.
One of the major requirements is that all documents produced during development must
be properly managed, authorized and controlled. That means who is authorized to change
each document should be identified and the authorization enforced; without a
configuration management tool the authorization system will not work, because
somebody else may change the document. The configuration management tool enforces
the authorization and records who makes each change.
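As a rough illustration of this idea, here is a minimal sketch (not part of ISO 9001 itself) of how a configuration management tool conceptually enforces authorization and keeps an audit trail of document changes. The document name and user names are made-up examples.

```python
# Minimal sketch (not part of ISO 9001): how a configuration management tool
# conceptually enforces authorization and records who changed a document.
# Document names and user names below are made-up examples.
from datetime import datetime

class ControlledDocument:
    def __init__(self, name, authorized_editors):
        self.name = name
        self.authorized_editors = set(authorized_editors)
        self.version = 0
        self.history = []          # audit trail: (version, editor, timestamp, note)

    def change(self, editor, note):
        # Only an authorized person may change the document.
        if editor not in self.authorized_editors:
            raise PermissionError(f"{editor} is not authorized to change {self.name}")
        self.version += 1
        self.history.append((self.version, editor, datetime.now(), note))

doc = ControlledDocument("design-spec", authorized_editors={"lead_designer"})
doc.change("lead_designer", "Revised module interfaces")
print(doc.version, doc.history[-1][1])   # 1 lead_designer
# doc.change("intern", "...") would raise PermissionError
```

In practice an actual version control or configuration management tool provides this authorization and history keeping; the sketch only shows the concept being demanded by the requirement.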
Project planning is an important requirement of ISO 9001. Proper plans should be made
and progress against these plans monitored. So, planning, monitoring and control
1030
should be practiced by the organization, without which ISO 9000 certification will not be
given. ISO 9001 mandates planning, monitoring and control, so a proper project manager
has to be there who does the planning, monitoring and control.
1031
(Refer Slide Time: 08:19)
So far, we have discussed some important points regarding the ISO 9000 certification
process. We said that the registrar looks at the documents, the process document, the
quality documents and so on, and based on that awards the certificate. We can interpret
this to mean that the organisation has a well set up document: it has its production
process and quality system properly documented.
But one shortcoming of the ISO 9000 certification is that it does not provide any
guarantee that the organization actually follows those documents. The certificate is given
on the basis of the documents; the documents that are reviewed are made to cover all the
requirements of ISO 9000, but whether the organization is actually following them across
all projects is not monitored by ISO 9001.
Also, the standard says what points are required, but how to achieve that process is not
stated, and stating it would be very difficult because the standard is applicable across
many industries; even within the software industry there are various types of projects for
which different processes may be required. That is why consultants do a roaring business
helping organizations set up guidelines and define an appropriate process on which the
certification can be obtained.
1032
(Refer Slide Time: 10:33)
We had also mentioned at the beginning of our discussion that the ISO 9001 certificate is
given by an accreditation agency and not by the ISO itself, and there are varying degrees
of rigour with which the accreditation agencies award the certificate; some may be very
flexible, and in some countries it may be very easy to obtain. No international
accreditation agency exists.
So, an ISO 9000 certificate given by different accreditation bodies may mean different
things. Therefore, if, say, an Indian company or some other company wants to bid for a
project in Europe, it may need the certification from a European accreditation agency
(Refer Time: 11:48). Since the certificates are given by accreditation bodies in different
countries, there is likely to be variation in the norms of awarding the certificates.
1033
(Refer Slide Time: 12:05)
With this brief idea of ISO 9000, which is very important for a project manager, let us
look at another quality system standard: the SEI capability maturity model, or CMM. SEI
stands for the Software Engineering Institute at Carnegie Mellon University in the USA.
It developed the capability maturity model, which is in effect a set of quality system
guidelines, at the request of the Department of Defence of the USA. The Department of
Defence is one of the largest purchasers of software; it takes the help of many contractors
or software developers to get its software developed.
But it was facing a problem: some of the developers used to deliver poor quality work,
and some of them could not even complete the work. With that motivation it gave this
project to Carnegie Mellon University, to develop quality system guidelines that the
Department of Defence could check to see if a vendor organization has a good quality
system and can deliver the software. In other words, by using the CMM model the US
Department of Defence can assess a contractor's likely performance before awarding a
contract.
Since work gets delayed and money is wasted if a contract is awarded to an organization
that cannot complete the work or does a poor job, it is very important for the Department
of Defence to assess the
1034
contractor before awarding the contract, to check whether it can be entrusted with the
software development, and this is precisely what the CMM enables.
The CMM can be used in two ways. One is capability evaluation: this is used by the
Department of Defence to evaluate the capability of a vendor, that is, whether it can
actually develop and deliver good quality software. But the CMM is also useful for
software process assessment. A software process assessment is not done by the
organization awarding the contract to check the contractor's likely performance; here a
software development organization checks its own quality system.
1035
(Refer Slide Time: 16:35)
1036
The other use of the CMM model is software process assessment. Here it is used by an
organization to assess its own process, with the objective of finding out how to improve
it. Naturally this assessment is purely for internal use, because the organization is
assessing itself to identify its shortcomings. Therefore, if it claims that it has been
assessed at SEI CMM level 5 and so on, that would mean very little, because it has
assessed itself with the objective of finding where improvements are required; it is not
very meaningful to claim level 5 capability on the basis of a self-assessment.
As the DoD, the Department of Defence of the United States, awarded the project to
Carnegie Mellon University, which developed the CMM, and then enforced capability
evaluation, all the contractors to the Department of Defence began applying the CMM to
themselves before the capability evaluation so that they could win contracts. That helped
them win contracts, but it also helped the organizations themselves.
It was found that it definitely helped them improve the quality of the software they
developed: they could meet timelines more often and develop the software within the
required cost. Because the organisations realized that using the SEI CMM was beneficial
to them, not only for winning contracts but also for developing better quality software on
time and cost
1037
effectively, they adopted the SEI CMM model, and even other organizations that were
not really contractors of the Department of Defence started using the CMM, since it
would help them produce good quality software cost effectively and on time.
So far we have been discussing how the SEI CMM originated, what its benefits are, and
its two major uses: capability evaluation and process assessment. Now, let us look at the
SEI CMM model itself.
In simple words, the CMM model can be used to appraise the process maturity of an
organization at one of several levels, and the implicit assumption is that an organization
using a good process produces good software. Organizations are classified into different
levels based on the process they follow, and it has been found that this level of the
contractor or vendor is a likely indicator of its performance when a project is awarded.
If a vendor is at level 1, it is likely to produce poor quality work, with schedule slippage
and higher cost, compared to a level 5 organization, which is likely to produce higher
quality work cost effectively and on time.
1038
(Refer Slide Time: 22:33)
The CMM classifies organisations into five levels, called the five maturity levels of the
process, that is, how mature the process is. Not only does it classify them into five levels,
it also tells what the organization needs to do to improve its process from one stage to the
next. If an organization is assessed at level 1, it clearly says what it needs to do, what
aspects it must improve, and how to get to level 2, from level 2 to level 3, and so on.
From level 1 an organization can go to level 2, but not directly to level 3, because the
requirements of level 2 are a prerequisite for level 3; without meeting the level 2
requirements it cannot directly go to level 3. Similarly, level 4 presupposes that the
organization already satisfies level 3, and level 3 requires that level 2 is already satisfied,
and so on.
If we dissect the CMM model, we will find that it is largely based on the work of Philip
Crosby, who published a book called Quality is Free. There also we have five levels of
organizational process maturity. But the CMM model has structured this and identified
the specific questions to be asked based on which an organisation can be assessed.
1039
(Refer Slide Time: 24:59)
The five levels that the CMM identifies are the following. The lowest level is the initial
level, level 1; every organization is at the initial level by default. Next is the repeatable
level, level 2. The third level is the defined level, the fourth is the managed level, and the
fifth level is called optimizing, which corresponds to a continuously improving process.
We are already at the end of this lecture, so we will stop here. In the next lecture we will
discuss the specific requirements that an organization must meet to qualify for a specific
level of the SEI CMM model.
Thank you.
1040
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 54
SEI CMM (Contd.)
Welcome to this lecture. In the last lecture, we were discussing the SEI CMM quality
model. We had said that this was developed by the Software Engineering Institute at
Carnegie Mellon University, at the request of the Department of Defense, which wanted
to assess the capability and likely performance of its vendors before awarding a contract.
But it also had significant benefits for the organizations themselves: it helped them
improve the quality of their work, meet schedules and develop in a cost effective manner.
It is therefore widely accepted worldwide; it helps organizations improve their quality in
stages. We were discussing this model, where level 1 is called the initial level. All
organizations are at the initial level by default: even if they have no documented process
and just do ad hoc build-and-fix, they are at the initial level; every organization by
default is at the initial level, or level 1.
To go to level 2, they must have a disciplined process. A disciplined process means that
they must be using some process: project plans are made,
1041
progress is monitored against the plan, there is a proper project manager who makes the
plans and monitors against them, the documents are under configuration management,
and so on. Then the organization reaches the repeatable level, level 2. The implication of
"repeatable", as we mentioned, is that an organization can repeat its success on one
project on similar projects in the future.
That is, if the organization did a job awarded to it well, it can repeat that success on
similar projects. The third level is called the defined level. In the repeatable level the
process is a disciplined one, whereas in the defined level the process must also be
documented, giving a standard, consistent process that is used by all members.
In the initial level there is no disciplined process; everybody is free to use their own
process, and even having no process at all, only build-and-fix, would qualify as initial. In
the repeatable level there must be a disciplined process, that is, a process followed by the
different developers, but there may be variation in the process, since their understanding
of the process may vary.
In level 3 there is a standard, consistent process; only then does an organization get to
level 3, and to achieve a standard consistent process the process has to be documented,
that is, it has to be defined exactly what the process is. In the repeatable level, project
management is in place and the process is disciplined; everybody has the process in
mind, but there can be variation across the different developers. In level 3 there is a
standard consistent process, and this can be achieved only when the process is
documented.
In level 4, it is necessary to have a predictable process: the process performance needs to
be measured, it is identified whether the set goals are met, and corrective actions are
taken if they are not. For example, suppose one of the process metrics is how effective
the review is: were most of the bugs identified during the review process, or did they get
identified during testing? That measurement is tracked, and suppose very few bugs are
identified during the review and most of the bugs get identified during testing.
Then it is identified why the review was not effective and the review is enforced more
thoroughly; maybe the review was not done properly and that is why most
1042
of the bugs escaped to testing. To go from the managed level to the optimizing level, the
organization must have a continuously improving process.
Now, let us take the same example: the number of bugs identified in review versus the
number of bugs identified in testing. If it is found that the review was not very effective,
then at the managed stage it would be investigated why the review was not effective.
Maybe it was not followed rigorously, maybe the people who reviewed were lax and
could not review properly, and so on.
Whereas at the optimizing level, it is also examined whether the review process itself is
improper; if the review process needs improvement, the process may be changed. So, in
the managed stage the process metrics are collected to investigate why the targets were
not met and what shortcoming there was on the part of the team members using the
process.
In the optimizing level, on the other hand, it is also determined whether the process itself
needs improvement, and the improvements are then made to the process. So, that is the
overall idea of the capability maturity model. The initial level, also called the chaotic
level, is where all developers are free to develop in their own way; chaotic activity is the
hallmark of the initial level.
In the repeatable level, processes are followed and project management is practiced, but
there can be variation in the process across different team members. The next level, level
three, is the defined level, where the process is defined and documented. At the managed
level, process performance metrics are collected and used to identify why targets were
not met and why certain activities were not satisfactory, but the process itself is not
changed based on the metrics. In the optimizing level, the process continuously improves:
the documented process keeps improving whenever necessary.
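To make the review-effectiveness example above concrete, here is a small illustrative sketch. The defect counts and the 50 percent threshold are assumed example values, not figures prescribed by the CMM; the two responses in the comments correspond to the managed and optimizing interpretations just described.

```python
# Illustrative sketch of the review-effectiveness metric discussed above.
# The counts and the 50 percent threshold are assumed example values, not CMM-mandated.
def review_effectiveness(bugs_found_in_review, bugs_found_in_testing):
    total = bugs_found_in_review + bugs_found_in_testing
    return bugs_found_in_review / total if total else 1.0

effectiveness = review_effectiveness(bugs_found_in_review=8, bugs_found_in_testing=42)
print(f"Review caught {effectiveness:.0%} of known defects")

if effectiveness < 0.5:
    # Managed (level 4) response: enforce the existing review process more rigorously.
    print("Investigate why reviews missed defects; enforce the current review practice.")
    # Optimizing (level 5) response: also ask whether the review process itself
    # needs to change (for example a better checklist), and update the documented process.
    print("Also evaluate whether the review process itself should be improved.")
```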
1043
(Refer Slide Time: 10:24)
Level 1 is the initial level. Every organization by default qualifies as level 1: there are no
process requirements and no project plans required. An organization where developers
have just got together and are developing qualifies for level 1, the initial level. The
hallmark of a level 1 organization is that projects are done ad hoc; the developers are free
to do whatever they like, and ad hoc, chaotic activity is the hallmark of a level 1
organization.
1044
The thing is that many software organizations are at level 1, where the developers are
free to develop software in whatever way they want. Success comes from smart,
hardworking people doing the right things. If a level 1 organization has success in a
project, it is largely attributed to some of the team members who worked overtime,
worked smartly and got things right.
But this kind of organization can get into big trouble if, during the project, some of those
good people leave; the project can then be a total failure. In such organizations there are
frequent crises during the project, and the good people calm things down in a firefighting
mode: say some module is not working, and the good, smart people in the team take up
those activities, help out and bring the project to success.
No specific process is followed here. Different engineers have their own understanding
of the process, or may not follow any process at all, and the success of the project
depends on individual effort and heroics.
1045
(Refer Slide Time: 13:08)
Schedules and budgets are missed. Progress is not measured: no plans are made and there
is no progress measurement. That is one of the major reasons why schedules are missed,
because no plan was made to start with about what work will be completed by what date.
Also, the product configuration is not controlled.
It is likely that even after a bug has been fixed, the buggy version is still forwarded to a
customer. The activities that the developers undertake are nonstandard and inconsistent,
and there is a large number of defects in software developed by a level 1 organization.
1046
(Refer Slide Time: 14:10)
Level 2 is called the repeatable level because the basic process framework and some of
the basic requirements are in place; therefore, if an organization at level 2 has success on
a project, it can repeat that success on a similar project. If it developed a payroll software
successfully for some organization, it can successfully develop another payroll software
for some other organization.
One of the major requirements of level 2 is project management practices, that is, making
basic plans and then monitoring against the plan, tracking cost and schedule. Size and
cost estimation is done by the project manager using techniques such as function point
analysis, COCOMO and so on. Here the developers do have a process and do not simply
use build-and-fix, but their processes are not formally defined: there may be variation in
the processes used by different developers, and they are not documented.
1047
(Refer Slide Time: 15:41)
Also, the process used for different projects may differ. Success on a project can be
repeated on a similar project, and if an organization produces a family of products, it is
possible for a level 2 organization to repeat its success on one product across the family
of products.
In level 3, the process is defined and documented, and therefore there is a consistent
process followed by all developers. There is a common organization wide
1048
understanding of which activities are to be done at what time, what the roles are and what
the responsibilities are.
However, even though the process is defined, no process measurements are done at this
level. Roughly, ISO 9001 corresponds to level 3; the 2000 version of ISO 9001 targets
level 5, but the original 1987 ISO 9000 is roughly at level 3 of the SEI CMM model.
1049
In level 4, quantitative quality goals for the product are set: for example, how many bugs
per 100 or 1000 lines may remain in the final product after all development and testing is
complete, or what percentage of bugs should be identified during the inspection process
and what percentage by the testing process.
These kinds of process and product quality measures are collected, and if they fall short,
they are used to improve the project performance, on the understanding that some things
were not done properly. The feedback given to the team is that the review or the testing
was not done properly and has to be done more rigorously. So it is the project
performance that is enhanced, not the process itself: no change is made to the process,
but the existing process is applied more rigorously to improve the performance.
At level 4, process and product quality measurements are collected, the software process
and product are quantitatively controlled, and the variation in process performance is
narrowed to fall within acceptable quantitative bounds. For example, if during inspection
50 percent of the bugs must be detected and should not reach the testing stage, then more
or less the same thing happens on all projects, that is, all projects do the inspection
1050
rigorously. Quantitative quality goals are set, and therefore the quality is quantifiable and
predictable.
In level 5, the measurements are used not only to enforce that the process is followed
rigorously, but also to identify whether the process itself needs improvement. It may
happen that the review is done rigorously and yet the number of defects detected during
review is not satisfactory; then the review process may need to be improved. The process
thus keeps improving continuously based on the measurements.
If there are known problems with the process, the process is fine-tuned, and lessons
learned from specific projects are incorporated into the process so that good practices
that produced very positive results on some project can be used organization wide; thus
the process itself is changed.
1051
(Refer Slide Time: 21:32)
Not only that, many innovations and improved practices are reported across different
organizations in journals, magazines and so on. In a level 5 organization there is a special
team whose work is to find out whether improved methods, tools and processes have
been reported; these are then incorporated into the process itself and transferred
throughout the organization.
1052
Now, let us discuss another important aspect of the SEI CMM: the key process areas. As
you might have already noted, ISO 9000 is a go/no-go scenario; either an organization
qualifies for ISO 9001 or it does not. In the SEI CMM, on the other hand, an organization
is assessed at a certain level and then tries to go to the next level, and the set of things
identified for an organization at one level to reach the next level is called the key process
areas, the parts of the process that need to be improved.
The KPAs are an important aspect of the CMM model. An organization assessed at a
certain level can go to the next level if it satisfies certain key process areas. These are the
process areas that an organization at some level must focus on to reach the next level; for
example, the level 3 KPAs identify the process areas on which a level 2 organization
must focus to reach the third level.
Now, let us see the KPAs for the different levels. For level 1 there are no KPAs, because
every organization is by default at level 1 and the KPAs are the process areas on which
an organization must focus to come to that level. The level 2 KPAs are the process areas
on which a level 1 organization must focus to come to level 2. The first of these is
requirements management: the requirements document must be written and managed.
The second is software project planning: there has to be a project manager, estimates
have to be made, plans have to be made based on them, and based on the plan
1053
project monitoring has to be done. The third requirement is configuration management.
Configuration management allows a document to be controlled so that only authorized
persons can change it; it also makes it possible to easily identify the latest version of a
document and to go back to a previous version. The fourth is subcontract management.
So, these are some of the process areas on which a level 1 organization must focus to
come to level 2. It must have a proper requirements document, because the requirements
are the basis for planning and managing, and there has to be a project manager who does
project planning, estimation, and monitoring and control based on the plan.
We can pictorially describe the KPAs for level 2. One is requirements management: the
requirements must be obtained and put under configuration management, because
requirements keep changing. Software project planning, monitoring and control is
another KPA, and software configuration
1054
management is also an important KPA, as is subcontract management.
We are almost at the end of this lecture. We will look at the KPAs for the other levels of
the SEI CMM in the next lecture and we will move on from there.
Thank you.
1055
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 55
SEI CMM (Contd.)
Welcome to this lecture. Over the last couple of lectures, we have been looking at
software quality management issues, and if you remember we had said that for effective
quality management an organization must have a quality system. There are several
quality system standards available and used throughout the world. We had discussed ISO
9001 and then had been looking at the Software Engineering Institute capability maturity
model.
We had looked at the 5 levels of the SEI CMM and the quality system requirements at
each level. We had also said that the CMM gives a stepwise approach to improving
quality at an organization, through what are known as the KPAs or Key Process Areas.
An organization at a certain level that wants to improve its quality standard to the next
level will have to incorporate certain key process areas; it has to focus on improving key
process areas that are clearly defined by the SEI CMM. In the last lecture we had just
started looking at the key process areas; now let us continue from there.
1056
We will first look at the key process areas and then we will compare the CMM with ISO
9001 and after that we will look at the personal software process.
Key process areas are defined for every maturity level of the SEI CMM. The KPAs of a
level identify the process areas on which an organization at a lower level must focus to
reach that level. That is, suppose an organization is assessed at level 2 and needs to
improve its process and quality system to level 3; then it must look at the KPAs
identified for level 3 and focus on those.
In that sense, the KPA is a very important concept in the CMM, giving a step-by-step
approach to improving quality at an organization.
1057
(Refer Slide Time: 03:43)
For level 1 there are no KPAs, because every organization is at level 1 by default; it does
not even need a process, so no key process areas are identified for level 1. Now, let us
look at the level 2 KPAs, that is, if an organization is assessed at level 1, the process
areas on which it must focus to improve its quality system to level 2.
The first KPA identified is requirements management: the organization must have a
proper requirements specification, and it must manage the requirements, that is, control
changes to them. This is important because the requirements are the basis for planning
and managing a software project; the CMM identifies the requirements document as a
fundamental document required for many purposes, including planning and managing a
software project.
Another process area highlighted is project planning and monitoring, which is basically
software project management. Unless an organization has a project manager who
estimates size and cost, makes plans accordingly, develops a schedule and then monitors
progress, the organization cannot be assessed at level 2; one of the emphases of the level
2 process is project management. It is recognized that project management is one of the
basic aspects of a project, without which the project is unlikely to succeed.
1058
Another process area identified is configuration management. A configuration
management tool must be used so that various documents, the requirements document,
the design document and so on, can be put under configuration control. Changes to these
documents can then be authorized: it is not that anybody can change them, only an
authorized person can. It should also be possible to identify the latest version of a
document and to find out what changes have occurred so far.
The process areas identified for level 2 are some of the basic aspects of a software project
without which it is very difficult to go to the higher levels; going to level 3, level 4 and
so on without incorporating these basic process areas would be counterproductive. The
other process area is subcontract management.
Every project at some point or other needs to give out well defined pieces of work to
other vendors. How are these specified? Does the contract clearly state what work is to be
done and by when, and on delivery is the work checked against those specifications?
Those concerns are subcontract management. So, there are 4 important level 2 KPAs.
There are other KPAs, but we have highlighted the most important ones here:
requirements management, software project planning, configuration management and
subcontract management.
1059
This is just represented in diagrammatic form here. The level 2 KPAs are 4 process areas:
requirements management, software project management (that is, software project
planning, monitoring and so on), software configuration management and software
subcontract management.
The level 3 KPAs are the process areas on which a level 2 organization must focus, and
which it must incorporate, in order to improve its process to level 3. The first is process
definition and documentation. In level 2, different developers might be using their own
processes; there is no standard understanding of which process is being followed and no
documented process available in the organization, whereas in a level 3 organization the
process to be used by all developers is clearly defined and documented in the form of a
process handbook.
The second is peer reviews: reviews are carried out at various stages; the requirements
are reviewed, the design is reviewed, the code is reviewed, and so on. The third process
area is a training program. It is recognized that the skills required for software
development are continually changing, and unless the developers keep up with the latest
developments it would be hard for a project to succeed; therefore, a periodic training
program is another issue highlighted at level 3.
1060
(Refer Slide Time: 11:18)
This is represented in the form of a diagram: the 3 important level 3 KPAs are a training
program, peer reviews of the various documents, and process definition and
documentation.
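A compact way to record the stepwise structure discussed so far is a simple table of KPAs per level; the data structure below is just an illustration covering the levels described up to this point, not an official SEI artifact, and it lists only the KPAs highlighted in these lectures.

```python
# A compact illustration (not an official SEI artifact) of the KPAs highlighted so far.
# Reaching level n means focusing on the KPAs listed for level n.
KPAS = {
    1: [],  # no KPAs: every organization is at the initial level by default
    2: ["requirements management", "software project planning, monitoring and control",
        "software configuration management", "software subcontract management"],
    3: ["process definition and documentation", "peer reviews", "training program"],
}

def focus_areas(current_level):
    """KPAs an organization at current_level should focus on to reach the next level."""
    return KPAS.get(current_level + 1, [])

print(focus_areas(2))   # the level 3 KPAs a level 2 organization must address
```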
Level 4 requires measurement of process performance: both process and product metrics
are obtained and quantitative process management is practiced, that is, based on the
measurements of the process and the product, if there is a deficiency somewhere it is
identified why it is occurring.
1061
Maybe the development team is not strictly practicing some part of the process, so level 4
focuses on identifying and correcting the causes of variation. If the defect rate identified
after testing is high, then possibly the testing was not proper or not done rigorously; the
project manager identifies what caused the metrics to miss the specified values and then
takes countermeasures so that the process variation is reduced.
In summary, the level 4 KPAs are process and product measurement and, using these
measurements, quantitative process management. If there is a deficiency in any of the
measured attributes, it is identified and corrective action is taken, but one thing must be
pointed out: the corrective action here does not imply changing the process. For example,
that the testing is not effective because certain types of tests were not documented in the
process document is not something identified at level 4.
Level 4 only tries to find out why there is variation in the process, for example why there
are sometimes more defects, and then identifies what action the project manager needs to
take to correct that deficiency.
One of the level 5 KPAs, on the other hand, is defect prevention: identifying what is
causing the defects and, if necessary, changing the process. Level 4 assumes that the
process is all right and no changes are required; only the way it is enforced or practiced is
causing the process variation and hence the defects. Whereas at
1062
level 5 the defects and their causes are analyzed, and if necessary the process is changed.
Another process area identified as a level 5 KPA is technology change management. In
software development the technology changes rapidly, so there must be a group in the
organization whose sole work is to identify the latest beneficial new technologies and
transfer them into the process, organization wide. The different tools that become
available, new methods, new processes: these are all identified and incorporated into the
process.
The other process area identified is process change management. The process continually
changes in a level 5 organization, so the process itself must be put under configuration
management so that the latest process can be identified. There can be different versions
of the process applicable to different teams, and therefore the process changes need to be
managed; otherwise, determining the latest state of the process would be difficult. The
process itself is put under configuration control.
In the level 4 KPAs, we had identified quantitative measurement of the process and
product; based on this the project manager identifies deficiencies, determines what is
causing the variation and takes corrective action. Here are a few examples of the metrics
that are deployed. One is the estimated source lines of code versus the actual source lines
of code.
1063
Across projects, the variation between the estimated lines of code and the actual lines of
code must be bounded. If for some project the estimated source lines of code are very
different from the actual source lines of code, the project manager needs to introspect and
identify what caused the variability, what aspects were not considered, what went wrong
and so on.
The number of issues raised during code inspection, the number of defects detected
during unit testing and the number of defects detected during system testing are also
important metrics. If the number of issues raised during code inspection is much smaller
than the number of defects detected during unit and system testing, it means the code
inspection is not working properly.
Similarly, if the number of defects detected during system testing is too high, then
possibly the unit testing and code inspection did not work properly.
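As a small illustration of the first metric mentioned above, the sketch below computes the relative deviation between estimated and actual source lines of code. The 20 percent bound and the sample figures are assumed for illustration only; the CMM does not prescribe particular numbers.

```python
# Sketch of the estimation-accuracy metric mentioned above. The 20 percent bound
# and the sample figures are assumed for illustration, not prescribed by the CMM.
def estimation_error(estimated_loc, actual_loc):
    """Relative deviation of the size estimate from the actual size."""
    return abs(actual_loc - estimated_loc) / actual_loc

error = estimation_error(estimated_loc=12000, actual_loc=15500)
print(f"Estimation error: {error:.0%}")

if error > 0.20:
    # Level 4 response: introspect on what the estimate failed to consider
    # (scope growth, reuse assumptions, productivity figures) and correct it.
    print("Variation exceeds the acceptable bound; investigate the estimation basis.")
```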
So far we have looked at two quality standards, ISO 9001 and the SEI CMM. Let us
quickly compare the two. The first thing to note is that ISO 9001 is awarded by an
international standards body, but the Software Engineering Institute is not a standards
body; it has just made some recommendations to the Department of Defense which are
now widely used. Therefore, ISO 9001 can be quoted in official documents and
communications and will be accepted.
1064
On the other hand, an SEI CMM assessment is for the assessor's own use: an organization
wanting to improve its process uses the SEI CMM assessment on itself, or an
organization trying to select a vendor uses its own CMM assessment of that vendor.
The second issue is that the SEI CMM is focused on the software industry, whereas ISO
9001 is used across various industries; therefore, the SEI CMM addresses many issues
specific to the software industry which are not really taken care of by ISO 9001. Also,
the SEI CMM targets total quality management,
unlike ISO 9001: the original ISO 9001 document, proposed in 1987, targets only quality
assurance, which is roughly level 3 of the SEI CMM. So the ISO 9001 that was brought
out in 1987 corresponds roughly to level 3 of the SEI CMM, but of course the 2000
version of ISO 9001 and later versions also target TQM.
1065
(Refer Slide Time: 22:16)
Another major difference is that ISO 9001 provides a go/no-go verdict: an organization
either qualifies for ISO 9001 or it does not. The SEI CMM, on the other hand, provides a
step-by-step approach by which an organization can improve its quality: starting at level
1, with a very poor quality system, it can reach level 5 through a step-by-step approach.
It provides a way for gradual quality improvement over 5 stages and clearly identifies, at
each level, the most pertinent process areas on which the organization needs to focus; if a
level 2 organization tries to focus on level 4 KPAs without really addressing the level 3
KPAs, it will be counterproductive.
Suppose a level 1 organization, without really practicing project management,
configuration management, requirements management and so on, tries to have a defined
and documented process. It would be counterproductive, because the managers would be
overwhelmed by schedule and budget pressure: the basic project management aspects are
not there, configuration management is not there, and merely having a process and using
it will not bring much benefit.
1066
(Refer Slide Time: 24:23)
Also, for ISO 9001 it is the accreditation agencies in each specific country that provide
the certification, whereas for the CMM there is actually no certification.
1067
(Refer Slide Time: 25:17)
But over the years a lot of CMMs came into existence: CMM for software development,
CMM for systems engineering (that is, combined hardware and software development),
CMM for software acquisition, the people CMM or CMM for people management, the
security engineering CMM and so on. These are all relevant to an organization, but each
had a different structure, used different terms and had different ways of measuring
maturity, which caused confusion.
When an organization tried to use the CMMs together for all its activities, including
acquisition and so on, it was very hard, because they had different terminologies and
different ways of measuring maturity. Also, if an organization had to select a supplier
across multiple areas, it was very difficult.
1068
(Refer Slide Time: 26:40)
With this, the CMMI was proposed in 2002 and has since been revised several times. The
main idea is to integrate the large number of CMMs that an organization would otherwise
have to deal with into a single model.
There are 3 flavours of CMMI. One is CMMI for development, which is applicable to
software engineering, systems engineering, collaborative teams and acquisition from
suppliers. There is CMMI for services, used across different service
1069
industries, from taxi aggregators to call centers. And then there is CMMI for acquisition,
which is used for acquiring products and services.
We are already at the end of the lecture time, so we will stop here. In the next lecture we
will discuss the personal software process, or PSP.
Thank you.
1070
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 56
Personal Software Process (PSP)
Welcome to this lecture. Over the last couple of lectures we discussed some basic
concepts of Software Quality Management and looked at two quality standards, ISO
9001 and the SEI CMM. We also compared these two standards, which gives us an
insight into which one is more beneficial for a specific organization.
In this lecture we look at the Personal Software Process. ISO 9001 and the SEI CMM are
applicable to organizations; they look at the process being followed in the organization
and make recommendations on the quality issues of that process. But the need for a
personal software process was identified because every individual uses his own personal
process. The PSP is actually a scaled down version of the industrial software process and
is used by individuals, unlike the SEI CMM and ISO 9000, which are used across all
projects in an organization.
The PSP focuses on the individual's working style or process; ISO 9000 and the CMM
simply assume that individuals have effective personal practices. But how do individuals
improve their personal practices? Starting with poor
1071
practices, how can they improve them in stages? That is the focus of the personal
software process, or PSP.
As we know, a process is the set of steps that somebody uses for doing a job. For
example, when a developer is given the task of coding a module, how does he go about
it? Does he just understand the module requirements and start coding? Does he look at
the module, make an estimate of the code size and the coding time, keep track of the
time, and check whether he is proceeding according to the plan he has made? Does he
have a coding process, a way of going about coding?
These are the aspects targeted by the PSP. The quality and productivity of an engineer
are largely determined by his process, and the PSP framework helps a software engineer
measure and improve the way he works.
1072
(Refer Slide Time: 04:08)
Individuals have different styles of working, and the PSP provides a defined process for
them; of course, individuals can fine-tune and continually improve their own process. It
helps individuals incorporate certain personal skills and methods. For example, they can
do estimation and planning for their own work, just as the project manager does
estimation and planning for the project.
But what about the developers: do they do any estimation of the work assigned to them?
Do they plan out how to go about doing that work, and do they really track their
performance against the plan they made?
1073
(Refer Slide Time: 05:13)
One of the most basic issues pointed out by the personal software process is time
management. Unless time is measured, estimates become very erroneous: if somebody
does a boring activity which he does not really enjoy, it will appear to take much longer
than it actually does. For example, suppose somebody does not like to design but likes to
code, and he is asked to design; even if he spends just a few hours, he may say that he has
been doing the design activity for a long time. Therefore the time needs to be measured.
Similarly, interesting activities seem very short: somebody who enjoys coding may have
coded for, say, eight straight hours, but if you ask him how long he coded he will say he
spent very little time on it. For this reason it is necessary to record the time spent on
designing, writing code, compiling and removing syntax errors, and testing and removing
the bugs.
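A personal time-recording log can be kept very simply; the sketch below is a minimal illustration in the spirit of the PSP. The actual PSP forms are more detailed, and the phase names and minute figures here are made-up examples.

```python
# Minimal sketch of a personal time-recording log in the spirit of the PSP.
# The PSP's actual forms are more detailed; phase names and entries here are examples.
from collections import defaultdict

time_log = defaultdict(float)   # minutes spent per phase

def record(phase, minutes):
    time_log[phase] += minutes

record("design", 95)
record("coding", 210)
record("compile", 25)
record("test", 140)

for phase, minutes in time_log.items():
    print(f"{phase:8s} {minutes:6.0f} min")
print(f"{'total':8s} {sum(time_log.values()):6.0f} min")
```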
1074
(Refer Slide Time: 06:56)
Plans have to be made by the individual. If he is given a unit to code, he has to plan how
he is going to do it and write the plan down. For the design he must record the time taken
in the log, and the plan is later compared with the log; similarly for coding, for
compilation to remove syntax errors and for testing to remove bugs, the time actually
taken is recorded and compared with the initial plan, and a project plan summary is
developed.
Then a postmortem is used to identify what went right, what did not go well, and how
things need to be improved. That is the basic idea of the PSP.
1075
(Refer Slide Time: 08:09)
Other than time management and keeping records or logs, the other aspect highlighted by
the PSP is planning. Starting with the problem definition, one needs to estimate the size
of the work: the maximum, minimum and expected lines of code, and from this determine
how many minutes it will take.
If an individual knows his coding speed, that is, how many minutes per line of code, then
he can determine the maximum, minimum and expected development time. These must
be entered in a project plan summary form, and once he starts working, the actual time
must be recorded alongside the planned time.
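The planning step just described can be illustrated with a small sketch: estimated lines of code multiplied by a personal coding speed gives an estimated development time, which is later compared with the actual time from the log. All figures below, including the minutes-per-LOC rate, are assumed example values, not PSP-prescribed numbers.

```python
# Sketch of the PSP planning step described above: estimated LOC times a personal
# coding speed gives the estimated development time. All figures are assumed examples.
def estimate_times(loc_min, loc_expected, loc_max, minutes_per_loc):
    return {label: loc * minutes_per_loc
            for label, loc in (("min", loc_min), ("expected", loc_expected), ("max", loc_max))}

plan = estimate_times(loc_min=150, loc_expected=220, loc_max=300, minutes_per_loc=2.5)
print(plan)   # e.g. {'min': 375.0, 'expected': 550.0, 'max': 750.0} minutes

# After the work, the actual time from the time log is compared against the plan
# in the project plan summary, and the difference is examined in the postmortem.
actual_minutes = 610
deviation = (actual_minutes - plan["expected"]) / plan["expected"]
print(f"Actual vs expected: {deviation:+.0%}")
```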
1076
(Refer Slide Time: 09:32)
For the design, the specific format in which the design is to be done: the developer must
have his own standard, although the final output must of course comply with the
required standard of the team. He must identify the way he goes about designing and the
specific format, and then record the design time in the time recording log.
For the coding, he must implement the design and should have his own way of going
about coding, his own process. Some developers first write the function prototypes, then
identify the variables and define them, and then write the code. Other developers
1077
may feel comfortable first writing a simple version of the function and slowly enhancing
it, adding more variables and parameters. Finally, the way the code text is laid out must
also follow a standard, and the coding time must be recorded in the time recording log.
Next comes compiling to remove all syntax errors: the program is compiled and the
syntax errors identified are fixed, repeatedly, until the program compiles satisfactorily,
and this time is recorded in the log.
1078
Then comes testing, where all the bugs have to be identified and fixed, and the time is
recorded in the log. Finally comes the postmortem, where the project summary data, that
is, the planned and actual times, are analyzed and the observations are written down.
Represented schematically, the PSP has four levels: PSP 0, PSP 1, PSP 2 and PSP 3. At
level 0, only measurements are done, that is, personal measurement of how much time
somebody takes per line of code, along with the basic size measures: given a piece of
work, estimating how much time it will take, how many lines of code, and so on.
At the PSP 1 level, basic personal planning is done: how to go about doing the work,
when to do what, and a schedule of the form "by this time I must have done this". At
PSP level 2 come personal quality management and design and code review: the
developer himself reviews his design and code and records this in the form of a log. At
PSP 3 comes personal process evolution: not only has the personal process been defined,
but opportunities to improve the process are identified and the process is continually
changed.
1079
(Refer Slide Time: 14:06)
Now, let us look very briefly at Six Sigma. The six sigma initiative is a quantitative
approach to eliminating defects. Again, it is applicable to all types of industries:
manufacturing, product development, service and so on. It incorporates a statistical
representation of the defects, based on which one can conclude how a process is
performing.
A defect in six sigma is defined as anything that does not fall within the customer's
tolerance limits. If something looks like a defect but the customer is OK with it, it is not
really a defect.
1080
Six sigma argues for continuous process improvement so that the variability in the
process falls within the upper and lower limits set at six standard deviations above and
below the mean. We will not go into the detailed mathematical computation of the six
standard deviations and so on.
We will just say that to achieve six sigma, a process must not produce more than 3.4
defects per million opportunities. If we look at the definition of 5 sigma or 4 sigma, we
can see that the number of acceptable defects is much higher there; a six sigma process
must not produce more than 3.4 defects per million opportunities. The defects are
represented using a Pareto chart, and the defects that occur with the greatest frequency or
that incur the highest cost are identified for corrective action.
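The defects-per-million-opportunities figure mentioned above can be computed very simply; the sketch below checks it against the 3.4 threshold. The defect and opportunity counts are made-up examples.

```python
# Sketch of the defects-per-million-opportunities (DPMO) figure mentioned above.
# The defect and opportunity counts are made-up examples.
def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

value = dpmo(defects=7, units=4000, opportunities_per_unit=50)
print(f"DPMO = {value:.1f}")
print("Meets six sigma target" if value <= 3.4
      else "Does not meet the 3.4 DPMO six sigma target")
```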
1081
(Refer Slide Time: 16:32)
Several six sigma methodologies are defined, such as define-measure-analyze-improve-
control (DMAIC) and define-measure-analyze-design-verify (DMADV), and these are
implemented by green belt and black belt practitioners supervised by a master black belt.
We will not go into the nitty-gritty of the six sigma methodologies, as we do not have
that much time; in this course we have just looked at a very general idea of six sigma.
We will now start looking at software reliability issues. Reliability is a very important
issue in software: customers look for highly reliable software and will not buy software
that is not reliable.
1082
(Refer Slide Time: 17:31)
Therefore, it becomes very important for the project manager to understand the nitty-
gritty of reliability and ensure that the final developed software has high reliability.
Reliable software is a concern for all users, and it is especially crucial for industrial
applications. Reliability is also an important determinant of the quality of the software:
software having poor reliability cannot be said to be quality software.
The users not only want highly reliable software, but they need a quantitative estimation
to be given to them about the reliability before they can make any buying decision.
1083
(Refer Slide Time: 18:50)
But how do we define software reliability? Informally we can say that reliability is basically
the trustworthiness or the dependability of the software. These are intuitive definitions:
trustworthiness, dependability. But can we give a more formal definition of software
reliability? An IEEE document of 1991 defines reliability as the probability of the software
working correctly over a given period of time under specified operating conditions. So, here it
is defined in terms of the probability of the software working correctly over a given period of
time, under specified operating conditions.
1084
To get some intuitive idea about software reliability, a software that has a large number of
defects is clearly unreliable. And we also know that if we remove some bugs, that is, as the
bugs are identified and fixed, the reliability improves. When there are a large number of bugs
the reliability is low, but as the number of bugs reduces the reliability improves.
For reliability measurement we need to give a number to the software that will indicate its
likelihood of failing. For hardware it is a relatively simple issue: to compute the reliability
of hardware you keep operating the hardware, observe the failures, and that gives the
reliability of the hardware. For hardware you just put it to use for a long period of time and
then give the reliability measurement.
But for software it is a difficult problem. Let us first understand what makes software
reliability evaluation a much harder problem than hardware reliability evaluation. There are
actually several important factors; we will look at four or five of them which make software
reliability measurement much harder than hardware reliability measurement.
1085
(Refer Slide Time: 22:07)
One issue is that if there are a hundred bugs in a program, all bugs do not cause failures with
the same frequency and the same severity. Some of the bugs keep making the software fail every
now and then, whereas some other bugs rarely make the system fail. We can think of it this way:
as a software operates, there are some areas of the program code which get executed again and
again, very frequently. Therefore, if that part of the code has a bug it will normally show up
very frequently, but if some other part of the code which is rarely used has a bug, it will not
appear so frequently. The part of the code that gets executed very frequently is called the core
of the program. For every program we can identify the core; there are tools available called
profilers which can identify the core, and therefore if bugs are present in the core they will
cause the software to fail very frequently.
So, if there are ten bugs present in the core, or let us say just one bug present in the core,
the software may exhibit a much higher failure rate than another software which has, let us
say, a hundred bugs in the noncore part. For this reason, just estimating the number of errors
remaining in the software is not good enough to predict how frequently it will fail; we must
also identify whether the bugs are in the core part or in the noncore part.
1086
(Refer Slide Time: 24:39)
The 90-10 rule says that 90 percent of the execution time of a program is spent in executing
just 10 percent of the code. For any program this 10 percent can easily be identified by a
profiler tool (on Unix the tool is named prof), and we can thereby identify the core of the
program. If the bugs are in the core of the program the reliability will be very low and it
will fail very frequently, but even if a much larger number of bugs are present in the noncore
part, the program may not fail so frequently.
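As an illustration of how a profiler exposes the core, here is a small Python sketch using the standard cProfile module; the functions are hypothetical, and for C programs on Unix one would use prof (or gprof) instead.

import cProfile
import pstats

def hot_path(n):
    # Hypothetical "core" routine: called very frequently.
    return sum(i * i for i in range(n))

def rarely_used():
    # Hypothetical noncore routine: called once.
    return "summary report"

def main():
    for _ in range(1000):
        hot_path(5_000)
    rarely_used()

# The functions dominating cumulative time approximate the core of the program.
cProfile.run("main()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)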
1087
Studies show that removing 60 percent of the defects from the least used parts leads to only
about a 3 percent improvement in the reliability. But if the defects are removed from the most
used parts, the core of the program, then the reliability will improve by a significant factor.
Just imagine: if there are a hundred bugs in the least used, noncore parts and you remove 60 of
them, the reliability improves by only 3 percent.
So, to estimate the reliability of a software, just knowing the number of bugs is not good
enough; there are several other factors which make reliability evaluation much more
complicated. We will look at these issues in the next lecture. We are almost at the end of the
lecture hour, so we will continue from this point in the next lecture.
Thank you.
1088
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 57
Software Reliability – I
Welcome to this lecture. In the last lecture we had looked at some very basic aspects of
software reliability. We had said that quality and reliability are closely related and that
software quality, to a large extent, is determined by its reliability.
If a software has poor reliability we cannot say that it is of good quality, and with that
intention we started discussing some very basic aspects of software reliability: what do we
mean by reliability, how do we formally define reliability, why software reliability is so
difficult to measure, why the study of software reliability is very different from the study of
hardware reliability, and so on.
Now, let us continue from that point onwards. We will first briefly look at why software
reliability measurement is a much harder problem than hardware reliability measurement, or the
measurement of the reliability of any other equipment, structure and so on. Then we will define
some metrics for software reliability measurement, and we will see that to meaningfully
estimate software reliability we have to use two approaches,
1089
namely reliability growth modeling and statistical testing, and we will see the pros and cons
of both these approaches.
We have been saying that the measurement of software reliability is a hard problem. There are
many factors which contribute to making this a hard problem. The first issue is that a software
will have many errors, and one would expect that if we can estimate the number of errors we can
derive the reliability from that. But unfortunately all errors do not cause failures at the
same frequency and severity. Actually, there is a very wide difference in the frequency with
which different errors cause system failure and also in the severity of the resulting failures.
There are some errors which rarely show up; they may show up once after a million hours of use,
whereas another error may show up even in the first minute of use. If we remove the error which
occurs very rarely, it will make very little difference to the reliability of the software, the
perceived reliability. But for the one which appears again and again in the form of failures,
if we can eliminate that error it will result in a significant improvement of the reliability
of the software.
From this argument it is clear that even if we have somehow been able to estimate the number of
errors remaining in the software, that is not enough to estimate its reliability. If this is
the software, then as the code executes, some part of the code executes almost all the time
while the other part does not execute as frequently, and
1090
the part marked in red is called the core of the program; those statements are executed again
and again.
The 90-10 rule says that 90 percent of the execution time is spent in only 10 percent of the
instructions of the program; the remaining 90 percent of the instructions execute only 10
percent of the time, while 10 percent of the instructions execute 90 percent of the time.
And therefore, if we remove an error appearing in the core part, that will result in a
significant improvement of reliability, whereas removing an error in a part which is very
rarely used would make hardly any difference to the reliability.
1091
(Refer Slide Time: 05:46)
Some studies suggest that if a software has, let us say, a thousand bugs and we remove six
hundred of those bugs which are in the noncore part, it results in only a 3 percent improvement
in the reliability. This is a very surprising result: removing more than half the bugs results
in only a 3 percent improvement, because those bugs happen to be in the noncore part. This
issue makes estimating reliability from the number of latent errors not very feasible.
1092
The reliability improvement from the correction of a single error depends to a large extent on
whether the error belongs to the core part or the noncore part of the program; this is one
issue which we need to keep in mind when we think of software reliability.
This is still the first issue: because the frequency with which an error causes failure depends
on the part of the program in which the error lies, there is no simple relationship between the
observed system reliability and the number of latent software defects. Otherwise we could
derive or empirically propose an expression into which we could input the number of latent
software defects and get the reliability; this is the main reason why we cannot do that.
Now, let us look at the second issue, which is that the failure rate is observer dependent: a
software which appears to be very reliable to one person may appear to be very unreliable to
another person. Let us investigate the reason behind this.
1093
(Refer Slide Time: 08:33)
Let us say one user selects inputs or executes the software so that only correctly implemented
functionalities are executed. A software has hundreds of functionalities, and let us say each
user is interested in executing only part of the functionality. Take a library software: the
students or members are interested in issue book, return book, query book and so on. The
librarian would be executing the functions create member, delete member, create book, collect
statistics and so on.
The accounts department would again be using the same software, but executing very different
functionality: what is the income from fee collection and membership collection, how much is
from fine collection, how much grant has been received, how many books have been procured this
year, and so on.
So, every type of user uses different functionalities of the same software. Now, let us say one
type of user executes functionalities which are correctly implemented; they find that all the
functions they execute work perfectly all right, and they would form the opinion that the
software is really good, highly reliable. On the other hand, for another category of users
almost every function they execute displays a failure, an error and so on, and they would form
the opinion that the software is of poor quality, that it is unreliable.
1094
(Refer Slide Time: 10:36)
And therefore, for the same software two different users can have two different opinions about
its reliability. The opinion of a user about the reliability of a software is what we call the
perceived reliability. So, the perceived reliability can be low for one group of users and high
for another group of users.
We are just mentioning this point: the different users of a software application use the
software in different ways, they are interested in different functionalities, and therefore a
failure which shows up for one user may not show up for another user.
1095
So, different users can come up with different reliability numbers for the same software, and
therefore reliability is clearly observer dependent. But again, if we have to give one absolute
value, how do we do it? Which user do we take into account, since we will get different numbers
for different users? We will see how to handle that problem, but it makes reliability
estimation complicated, because different observers give different reliability numbers to the
same software, and that makes it difficult to give an absolute, single reliability number to
the software.
The way it is done, that is, to give a single reliability number, is to consider the
operational profile of the software. If we have to give a single number, we will have to give
it based on the operational profile, since the perceived reliability depends to a large extent
on how the software is used, in other words, on its operational profile. Let us first define
what an operational profile is, and then we will see how to give a single reliability number to
a software.
1096
(Refer Slide Time: 13:33)
Let us say a software has many functionalities F1, F2, F3 etcetera, and each of these
functionalities will have different scenarios of operation. Let us say this is an ATM software,
a bank ATM, and one function is withdraw cash. One scenario is the mainline scenario: go there,
do everything correctly, give the password, enter the amount to be withdrawn, cash is dispensed
and you come out.
A second scenario is that the password may be incorrect, or the card may be rejected, or maybe
the account does not have enough balance. One more scenario can be that the ATM does not have
enough money to dispense, and so on.
So, the same functionality can have different scenarios of operation. We can observe the
execution of these different functions by the users: as the different users use the software a
log gets collected, and we analyze which function and which scenario gets executed how many
times. So, for each function and scenario we get a probability of occurrence. We might have
these three functions F1, F2, F3 and we may determine that the first scenario F11 gets executed
40 percent of the time, F12, the second scenario of the F1 function, gets executed 10 percent
of the time, and F13, the third scenario of F1, gets executed 10 percent of the time.
The first scenario of F2 gets executed 5 percent of the time and the second scenario 15 percent
of the time. The first scenario of the third function gets executed 15 percent of the time and
the second scenario of the third function gets executed 5 percent of
1097
time. And this we call the operational profile of the software. Let me just repeat here that
the operational profile of a software is constructed by observing the way the software is used
by various users over a long enough period of time and then finding the rate at which the
different functions get executed. The probability that the first scenario of F1 gets executed
is 40 percent and so on; this we call the operational profile of the software.
Once we have the operational profile, which represents the average system usage across all
types of users, we can give a single reliability number to the software based on it. We will
discuss a little later how to go about giving a single reliability number to a software based
on its operational profile; we call that statistical testing.
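To make the idea concrete, the operational profile can be recorded as a simple table of scenario probabilities; here is a small Python sketch using the percentages quoted above (the scenario names F11, F12, and so on are the ones from the example).

# Probability of each function/scenario being invoked, taken from the example.
operational_profile = {
    "F11": 0.40, "F12": 0.10, "F13": 0.10,   # three scenarios of function F1
    "F21": 0.05, "F22": 0.15,                # two scenarios of function F2
    "F31": 0.15, "F32": 0.05,                # two scenarios of function F3
}

# Over all observed scenarios the probabilities should sum to 1.
assert abs(sum(operational_profile.values()) - 1.0) < 1e-9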
Now, let us look at the third issue which makes reliability measurement difficult. This problem
is that the reliability of a software keeps changing throughout its life cycle, because each
time there is a failure the software is debugged, the error is detected and corrected, and
therefore the same type of failure will not occur again.
But the improvement in reliability is not fixed: some error fixes may cause a large improvement
in reliability, some may cause less improvement, and so on. Typically, as the software gets
used, the errors express themselves as failures, these are corrected, and the reliability keeps
improving. Unlike hardware, where the
1098
reliability more or less remains constant, in software the reliability changes and typically
improves.
And therefore, it becomes very difficult to give a reliability number to a software, because if
we give it today, then after a year, or maybe even after a few days, the same number will not
hold. So, it becomes difficult to give a reliability number, a reliability estimate, which can
remain valid for some time.
If we compare hardware and software reliability, there are some issues which are very different
between the two. The first is that the type of failure that occurs in hardware is very
different from the failure that occurs in software. In hardware, failures occur due to
component wear and tear. If it is an automobile, the failures that typically occur are, let us
say, the tyre gets punctured or worn out and has to be replaced, the battery becomes old and
has to be replaced, and so on.
So, if we analyze hardware failures, like in a car or a mixer grinder, we find that the
failures are due to use of the system: some components wear and break, and the failures are
largely due to components breaking through use or aging. But in software there is no such thing
as wear and tear. In a car the wheel may have worn out or got punctured and it can be replaced;
the failure there is due to wear and tear. Whereas in software there is no wear and tear, and
this is one major difference between software failures and hardware failures.
1099
(Refer Slide Time: 21:36)
If it is a logic circuit, a logic gate can become short-circuited or open-circuited and
therefore may get stuck at 1 or 0. In all types of hardware, if there is a failure we typically
find out which component is failing, replace that part, and then the reliability is back to
what it was. A reliability estimate can be given for a hardware system; there can be failures,
but on failure we replace a part and the system is back to its previous reliability.
1100
But in software the failures are not due to wear and tear; the failures are due to bugs, and
here the only way to correct the failure is to remove the bug. The system continues to fail
unless changes are made to the design and code, and once we change the code and remove the bug
the reliability changes, typically improving.
Now, the metrics that are used to measure hardware reliability are basically based on observing
the number of failures over a long enough period of time and giving a reliability value from
that. But for software these metrics will not be very meaningful, because the failure types are
very different: in software each failure results in a reliability improvement, whereas in
hardware the reliability is maintained.
1101
(Refer Slide Time: 23:59)
Whenever a software fails on a test case, it is debugged and the fault is rectified, and
therefore the traditional notion of measuring reliability by observing the number of failures
per unit time does not appear to be meaningful in the software context.
In a hardware system, once there is a failure the component is replaced and the reliability is
maintained. On the other hand, when software is repaired its reliability may increase, in that
we have removed a bug, or it may also decrease, because an error fix may create new bugs. Let
us say we fixed one bug and four or five new bugs got
1102
introduced because that error fix was not proper; it fixed that bug, but some other bugs
resulted from that bug fix.
So, we can say that the goal of hardware reliability study is stability: the reliability is
maintained and the inter-failure times remain constant once the failure conditions are removed.
On the other hand, the goal of software reliability study is reliability growth: typically the
inter-failure times increase, that is, the software does not fail as often because bugs are
being removed.
So, the goal of hardware reliability study is reliability stability, whereas the goal of
software reliability study is reliability growth, where the inter-failure times increase, while
in hardware the inter-failure times remain constant.
1103
(Refer Slide Time: 26:24)
So, we can see the failure behavior of hardware and software in a diagram called the bathtub
curve. We are almost at the end of this lecture; we will continue from this point in the next
lecture.
Thank you.
1104
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 58
Software Reliability – II
Welcome to this lecture. In the last lecture we were discussing the difference between
hardware reliability and Software Reliability. We said that hardware reliability
measurement is a relatively simple problem, because the failures there are due to wear
and tear of components. And, once we replace the broken component, the reliability
comes back to its previous reliability or the reliability is maintained.
Whereas in software the failures are not due to wear and tear; they are due to bugs, and to
correct a failure the software is debugged and the error is corrected. Therefore the
reliability increases, the inter-failure times increase, and so the reliability keeps changing
in the case of software, whereas in the case of hardware the reliability is maintained.
Now, let us look at this issue a little deeper. This is the typical behavior of a hardware
system: here we have plotted the failure rate, that is, the number of failures per unit time,
against the time of use. For an automobile it may be over a period of 30 years or so, and for a
simpler system it may be 1 year and so on. Now, every
1105
hardware system shows this kind of behavior: initially the failure rate is very high and then
with time it decreases. It may be an automobile, it may be a mixer grinder, it may be a
refrigerator, it may be a television set.
Any hardware system initially shows a very high failure rate; the failure rate keeps decreasing
and then for a long time it is maintained at a constant level, so the inter-failure times
remain constant over a long period of time, as indicated in the figure. The point at which the
failure rate starts to increase again marks the end of the lifetime of the system. This curve
is called the bathtub curve; it is the typical model of any hardware system's reliability. Each
hardware system is made up of many components, and initially the weaker components fail, and
maybe the interfaces among components fail as well.
This initial phase is called burn-in, where the weaker components and weaker interfaces among
components fail; they are then repaired, and the reliability becomes high, stable, and is
maintained at this rate for a long time. Typically, the initial part is covered by the
manufacturer's warranty; otherwise people would not buy the hardware system, because it shows a
very high failure rate.
Typically, all good manufacturers cover this burn-in period by a warranty. The reliability is
then maintained, and at the end of the lifetime, at this point, the failure rate starts
increasing; here most of the components are worn out and the major components start failing.
1106
(Refer Slide Time: 05:08)
In contrast to the hardware failure curve, the software failure curve appears like this.
Ideally it should appear like this violet curve: as time passes, the different problems or
failures are reported and corrected. The failure rate is typically high for any software once
it is released; as the failures are fixed, the reliability improves, the failure rate comes
down, and then it slowly becomes better and better. Initially there is a steep fall because the
frequently occurring failures, the failures in the core part of the software, get noticed and
are repaired.
Each of those repairs causes a large improvement in the reliability; later, slowly, errors in
the more esoteric or rarely used parts of the software are found and fixed, and these improve
the reliability only a little. But actually the curve is not like this. In the actual software
failure curve, initially there is a rapid increase in reliability, a decrease in failure rate,
but the curve is not smooth throughout: I have shown only a few of them here, but throughout
there are glitches like this. Each time a failure is reported and corrected, there can be a
jump here because the fix causes other bugs to appear.
Of course, sometimes the fix may be done properly, but sometimes a bug fix may cause new bugs
to arise, and therefore the reliability decreases after the fix. Initially the reliability
increases, or the failure rate
1107
comes down, and then there are temporary glitches, which are due to bug fixes creating new
bugs. But see the trend here: in the long run the unmistakable trend is that the reliability is
decreasing over time. Why is that? The reason is that because of many bug fixes, slowly the
structure of the software becomes poor; many ad hoc fixes degrade the software structure and
therefore it ends up with poorer reliability.
We can see that the failure curve for software is very different from the bathtub curve for
hardware systems. In other words, the reliability behavior of hardware and of software are very
different.
Now, let us see how we measure reliability, because the users are interested in it. They may
specify the required level of reliability for a software; for safety-critical software, in
particular, they would definitely give a reliability figure. Even other users are concerned
about the downtime, the frequency with which failures appear and so on, and they may specify a
failure rate for the software they would like to install on their systems. But using what
metrics? Let us look at that issue: what are the metrics for measuring reliability?
1108
(Refer Slide Time: 10:12)
Of course, one very basic issue with software is that different types of users use the system
differently. But we cannot give different ratings to the same software; we need to give one
reliability rating, and that should be observer independent so that all users can agree on that
rating.
Now, let us see the reliability metrics for hardware and whether they would be useful for
software. The mean time to failure (MTTF) is a very popular hardware reliability metric; it is
the average time between two successive failures, the mean inter-failure
1109
time. This is a hardware metric, and we are trying to discuss whether we can use it for
software. A hardware system is put to use for a long time, the failures are observed over this
time, and the average time to failure is computed from this.
For example, let us say this is the time of use: there is correct operation, and after quite
some time there is a failure. For a car, maybe the wheel has worn out; then again correct
operation, and then there is a failure, maybe the battery has run down, so we replace the
battery. Then again there is correct operation and then a failure, maybe a wire has heated up
and shorted, so we replace the wire, and again correct operation. The time from one failure to
the next failure is the inter-failure time; we can measure the inter-failure times for
different failures and then compute their mean, which we call the mean time to failure.
But mean time to failure would not be as appropriate for software as it is for hardware,
because each time there is a failure the bug is fixed and the rate of failure changes. For
hardware it is a very reliable metric, because the reliability is maintained and it is
meaningful to observe the system for a long period and measure its reliability. But for
software the reliability keeps changing; it typically improves, but there can be glitches, or
the software may degrade due to a bug fix, and therefore it either
1110
increases or decreases. So, mean time to failure, where we observe the inter-failure times, may
not be a really appropriate metric.
For a hardware system, typically the failure times are recorded. Let us say the failures occur
at t1, t2, ..., tn; the inter-failure times are then computed, summed, and divided by the
number of inter-failure intervals, which is n - 1. So, computing the MTTF is straightforward:
MTTF = Σ (t_{i+1} - t_i) / (n - 1)
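As a small illustrative Python sketch (the failure times are hypothetical):

# Failure times recorded at t1 < t2 < ... < tn, in hours of operation.
failure_times = [100.0, 340.0, 610.0, 1000.0]

inter_failure_times = [
    later - earlier
    for earlier, later in zip(failure_times, failure_times[1:])
]
mttf = sum(inter_failure_times) / len(inter_failure_times)
print(mttf)   # (240 + 270 + 390) / 3 = 300.0 hours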
1111
But what about the rate of occurrence of failure (ROCOF)? This is another metric. Here the
frequency of occurrence of failure is obtained: we observe a long enough time period, find how
many failures have occurred, and from that we find the number of failures per unit time. That
is called the rate of occurrence of failure. We just count how many failures occurred over the
total time of use and then find the number of failures per unit time, the rate of occurrence of
failure.
But another issue is that once a failure occurs, it takes some time to fix it: each time there
is a failure, there is a repair time. For hardware we may need to identify which component has
failed and then replace the component; for software we debug, find the bug, and correct the
code. In both cases, hardware and software, there is a time to repair, which is the time to fix
the failure. The average time needed to fix a failure is what we call the mean time to repair
(MTTR).
1112
(Refer Slide Time: 16:33)
In the previous metrics we effectively assumed the mean time to repair to be 0. But if we
really consider the time to repair, then the mean time between failures is the mean time to
failure plus the mean time to repair. The mean time to repair is the time the system is not
usable, and the mean time to failure is the time the system is in use. The mean time between
failures tells us, once a failure occurs, when the next failure is expected:
MTBF = MTTF + MTTR
If we say that the MTBF is 100 hours, that means that once a failure occurs the next failure is
expected after 100 hours, and this includes the downtime of the software. That is, the next
failure is expected after 100 hours of clock time, including the downtime; it is not the run
time of the software.
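A small illustration of the relation, with hypothetical figures chosen to give the 100-hour MTBF mentioned above:

mttf = 95.0     # hours of operation between a repair and the next failure
mttr = 5.0      # hours of downtime spent fixing each failure
mtbf = mttf + mttr
print(mtbf)     # 100.0 hours of clock time between successive failures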
1113
(Refer Slide Time: 17:57)
There is another metric, the probability of failure on demand (POFOD). Unlike the MTTF and the
rate of occurrence of failure, the probability of failure on demand does not involve time; we
do not measure it over an interval. Here we just keep executing the software's functionalities
and find out, if we invoke a functionality with different test inputs 1000 times, how many
times did the software fail?
If the probability of failure on demand is 0.001, that means that when we executed the
functionalities of the software 1000 times, only one failure was observed. This is a more
meaningful metric for software because, unlike hardware which is continuously used, a software
does not run unless a function is invoked. When the user invokes a function, it runs for a
fraction of a second or so and then waits for the next input. So, the system is not running
continuously, unlike hardware; if a car is running for 10 hours, the hardware is being used
continuously.
Whereas here, each time the user invokes a functionality, the software runs for a small time
and then keeps waiting for the next input. Therefore, observing the software reliability over a
period of time is less meaningful, because the question becomes how frequently the functions
were executed; the software is not really running all the time, since much of the time it is
waiting for user input. So, the probability of failure on demand is a more appropriate metric
for software.
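A minimal Python sketch of the computation, using the figures from the example above:

demands = 1000     # number of times a functionality was invoked with test inputs
failures = 1       # number of those invocations that ended in failure

pofod = failures / demands
print(pofod)       # 0.001, i.e., one failure per thousand demands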
1114
(Refer Slide Time: 20:35)
We can also define another metric called availability, which basically characterizes the
likelihood of the system being available for use over a period of time. Of course, it will
consider the failure time and the time to repair, which we call the downtime. Based on this we
can give an availability number: when a user tries to use the software, how likely is it that
the software is available for use? Because it might have failed, it might be undergoing
debugging and so on.
1115
The availability metric is important for systems which are not supposed to be down, systems
that run continuously, for example an operating system or a telecommunication software. We have
two notions of availability: one is the operational availability and the other is the inherent
availability. In the operational availability we consider the mean time between maintenance
(MTBM) and the mean downtime (MDT), whereas the inherent availability is based on the mean time
between failures (MTBF) and the mean time to repair (MTTR).
Operational availability = MTBM / (MTBM + MDT)
Inherent availability = MTBF / (MTBF + MTTR)
So, in the inherent availability we consider only the time between failures and the repair
time, whereas in the operational availability we consider the mean downtime. The mean downtime
is different from the mean time to repair: the mean time to repair covers only the repair, but
the system may also be down for preventive maintenance, routine initialization and so on. So,
the mean downtime is a more general measure of the downtime, covering not only the repair but
also the various other reasons the system may be down, including preventive maintenance,
routine initialization etcetera. So, the operational availability is the actual availability to
the user, whereas the inherent
1116
availability is an ideal notion of availability, where we do not have preventive maintenance
etcetera and the only downtime is due to the repair time.
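The two measures can be contrasted with a small Python sketch (all the numbers are hypothetical):

mtbf, mttr = 95.0, 5.0    # hours between failures, hours to repair a failure
mtbm, mdt = 80.0, 8.0     # hours between maintenance actions, mean downtime

inherent_availability = mtbf / (mtbf + mttr)      # repair is the only downtime
operational_availability = mtbm / (mtbm + mdt)    # includes preventive maintenance etc.

print(round(inherent_availability, 3))      # 0.95
print(round(operational_availability, 3))   # 0.909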
But if we look back at all the metrics we have discussed, they are all centered around the
probability of system failure and do not take any account of the consequence of failure. In
reality some failures are very severe: the system hangs, we need to reboot, work is lost and so
on. Whereas another failure may just mean that the current function invocation did not work but
the system did not hang; we just need to try a different function, or the same function may
work on the second attempt, and so on.
So, the severity of different failures may be different. If we just count all failures
occurring over a period of use, that may not be very proper: we cannot lump all types of
failure together and come up with a reliability metric, because some may be very insignificant
failures. And if we count only the severe failures, like crashes, that also will not be proper.
So, how do we go about handling this problem that different failures have different severities?
1117
(Refer Slide Time: 25:10)
Some failures are transient, their consequences are not serious and they are of little
practical importance; at best they may be just minor irritants, maybe the mouse did not work
once but worked the next time, and so on.
How do we handle this problem? We handle it by defining failure classes, different classes of
failure.
1118
(Refer Slide Time: 25:53)
We classify the different types of failure. Transient: this class of failures occurs only for
certain inputs; if a function fails only for one input value, say when we give the value 100,
and works satisfactorily for all other values, we call the failure transient. Permanent: the
function fails for all input values. Recoverable: the system does not hang; either the user
himself or the operator can recover, and the system can be used again without rebooting.
1119
Unrecoverable: the system might have hung and we might have to restart it; those failures are
unrecoverable. And cosmetic failures are the ones which are minor irritants; they do not really
cause incorrect results, but are just minor irritants, maybe the user sometimes needs to press
the mouse button twice. So, we can measure the reliability for each class of failures, and that
will give a better idea of the reliability: maybe we measure the reliability considering only
the unrecoverable and permanent types of failure, or ignoring the cosmetic failures, and so on.
So far we have looked at the reliability metrics and some very basic issues; in the next
lecture we will discuss how to go about measuring the reliability of a software. We will stop
at this point.
Thank you.
1120
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 59
Software Reliability – III
Welcome to this lecture. In the last lecture we had discussed some issues in software
reliability measurement.
In this lecture let us first look at software reliability estimation techniques. We will
basically discuss two main techniques for software reliability estimation: one is reliability
growth modelling and the other is statistical testing. Let us get started with that.
1121
(Refer Slide Time: 01:31)
So, the problem we are trying to address is: given a software application, how do we go about
estimating its reliability? The application has been developed, installed and is running; our
task is to estimate its reliability, and we need a quantitative value for it.
One of the options is reliability growth modelling. We try to fit the observed failure rate to
a reliability growth model curve, and based on that we can say what the reliability of the
software is at the moment or will be after some time has elapsed. We will look at this
technique, reliability growth modelling, as a way to estimate the reliability of a software
application. We will also look at statistical testing, which is another way to estimate the
reliability of the application.
The third technique is the indirect way of measuring reliability. Here we do not really look at
the failure rate to measure the reliability, but instead predict it from the characteristics of
the software, for example various metrics of the software such as its complexity metrics, the
number of calls per use case, and so on. So, based on a set of metrics collected for the
application, we can predict the reliability. We will not look at the indirect means; we will
look only at reliability growth modelling and statistical testing.
1122
(Refer Slide Time: 03:55)
Let us look at reliability growth modeling to start with. A reliability growth model is a
graphical model of the reliability growth of the software with time. As different errors
are detected and repaired the reliability of the software changes and we model over time
how the reliability would grow.
The use of such a reliability growth model is that we can say what the reliability of the
software will be after some time. We can also use the model to predict, given a quantitative
reliability requirement for the software, how long the testing will need to go on. As testing
goes on, more and more bugs are detected and fixed, so given that a certain level of
reliability is required for the application, we can predict how much longer to test: is it 2
weeks, a month, 2 months? This is another use of a reliability growth model.
1123
(Refer Slide Time: 05:31)
The simplest growth model is the step function model. It is quite an old and very simple model,
but it is also intuitive and forms the background for other, more sophisticated reliability
growth models. Here the basic assumption is that each time a failure occurs the corresponding
error is detected and fixed, so the number of errors in the software decreases and therefore
the reliability increases.
The simple assumptions made by this model are that all error fixes are ideal, that is, an error
fix permanently fixes the error with no side effects and no other bugs are introduced by it,
and also that each time an error is fixed the reliability increases by a constant amount. This
is a very simplistic model; we already know, based on our intuitive understanding of software
reliability, that the reliability growth on an error fix is not constant.
Initially the bugs that are in the core of the software get exposed; they cause frequent
failures and are therefore the ones to get detected first. And fixing the bugs in the core part
increases the reliability by a rather larger amount compared to fixing a bug that is in a
remote part of the software. So, the assumptions are too simple, but the model can still be
used to predict reliability with some approximation.
1124
(Refer Slide Time: 07:51)
The graphical representation of the model is this step function, with the rate of occurrence of
failure on the y axis and time on the x axis; each time a failure occurs and is fixed, the rate
of occurrence of failure comes down and the reliability improves. Based on how frequently
errors are discovered and so on, we can fit the failure data onto this model and predict the
reliability of a given application. This is the simplest model.
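A tiny Python sketch of the step-function assumption (the numbers are hypothetical):

# Each error fix lowers the rate of occurrence of failure (ROCOF) by the same
# constant amount, which is exactly the simplifying assumption of this model.
initial_rocof = 1.0    # failures per unit time before any fix
step = 0.1             # constant improvement per error fixed

for fixes in range(6):
    print(fixes, round(initial_rocof - fixes * step, 2))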
1125
Here one of the implicit assumptions is that all errors contribute equally to the reliability
growth, whereas we know from our basic understanding that different errors contribute
differently to reliability growth. Therefore, the results of this model will not be very
accurate.
Another basic model is the Jelinski-Moranda model, again proposed a long time back. Here the
basic assumption is that the failure rate is proportional to the number of faults remaining in
the software, the implicit assumption again being that each error causes failure at the same
rate.
So, the failure rate is proportional to the number of faults remaining in the software, and
each time an error is repaired the reliability does not increase by a constant amount. We can
see this because the failure rate is proportional to the number of faults remaining: if n is
the number of faults, the failure rate is proportional to n, let us say k*n, and after a bug
has been detected and corrected the number of faults is n - 1, so the failure rate is k*(n - 1).
We can see that the improvement in reliability is not constant here and is a function of the
number of bugs. Initially the relative improvement in reliability on a bug fix will be low,
because out of n bugs we fixed just one, so it is basically proportional to 1/n; but towards
the end, if there are two bugs and we fix one of them, the relative reliability growth will be
higher. This is the assumption here.
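A minimal Python sketch of this assumption (the constant k and the fault count are hypothetical):

k = 0.02                # failure-rate contribution of each remaining fault
initial_faults = 10

# Failure rate after each successive (ideal) fix: k*n, k*(n-1), ..., k*1, 0.
for fixed in range(initial_faults + 1):
    remaining = initial_faults - fixed
    print(fixed, remaining, round(k * remaining, 3))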
1126
(Refer Slide Time: 11:27)
It is better than the step function model in that each bug fix contributes differently to the
reliability growth, but there are still several shortcomings. As we have been saying,
intuitively we know that the bugs in the core part get discovered early and contribute a larger
growth in reliability, while as time passes the bugs in the noncore part of the software get
detected and contribute a smaller increase in reliability. Therefore, the reliability growth
should be high initially and should slow down later, but this model does not capture that.
1127
(Refer Slide Time: 12:49)
The initial faults that are detected and corrected should contribute the most to the
reliability growth, and the rate of reliability growth should be large initially and slow down
later on; this is contrary to the assumption of this model, and therefore this model will not
really give very accurate results.
Now, let us look at Littlewood and Verrall's model. In both the models we have seen so far, the
step function model and the Jelinski-Moranda model, it was assumed that bug fixes are ideal,
that is, when a bug is fixed the number of errors is reduced. But in reality, on a
1128
bug fix more errors may get introduced: the bug may get fixed, but the fix may cause more
errors to be introduced, and therefore the number of bugs may actually increase. So on a bug
fix the reliability may actually decrease; there is a negative reliability growth. This really
happens in practice, and it is taken care of in Littlewood and Verrall's model.
Negative reliability growth may occur sometimes, when the repair introduces further errors.
Here, as we can see, the growth in reliability is not constant, and the average improvement in
reliability per repair decreases, because sometimes there are negative reliability growths.
This matches the intuitive understanding of reliability growth: it represents diminishing
returns as testing continues, and this model would therefore give a more accurate result than
both the step function model and the Jelinski-Moranda model.
1129
(Refer Slide Time: 16:01)
Several dozen reliability growth models have been proposed based on these simple models. They
provide more accurate approximations of the reliability growth, but ours being a project
management course, we are more interested in getting the basic ideas of reliability growth and
the basic models rather than discussing very sophisticated models and spending time on them.
So, these more sophisticated models are out of the scope of our discussion.
1130
But given that there are several dozen models, each making some simplifying assumptions and
some realistic assumptions, how do we determine which model to use? Is there a single best
model which can be used for all applications and will give the best result? No, not really,
because the reliability growth depends on the specific application being considered: the bug
distribution, the number of functions supported, the complexity of the software and so on.
Therefore, we cannot say straight away that a particular model is suitable for a particular
type of software. What we can do is, given the failure data over a considerable time period,
plot it and match it with these growth models.
The one it matches most accurately is the one to be used; the best-fit model is taken, and
based on that we can say what the reliability of the software will be after a certain time has
elapsed.
Now, let us look at another technique, statistical testing. This is a testing process, no
doubt, but here the objective is not to detect bugs; the objective is to determine reliability.
If some bugs get detected that is good, but it is not the focus. Naturally, the test case
design here is different from that in defect testing: the test data are designed based on the
operational profile of the software.
1131
Since the test data are defined based on the operational profile of the software, this gives a
numerical value for the reliability which is agreeable to most users. We had said that the
reliability of a software is observer dependent; different observers may have different
estimations of the reliability. And we had said that it would be confusing to give different
reliability values for the same software for different classes of users; we need to give only
one value.
The value we can give is the average over all the users, and that is captured in the
operational profile: the average rate at which the different functions of the software are
invoked, considering all the users.
We had discussed the operational profile in one of our earlier lectures. Here we look at the
different functionalities; for example, a word processing software may have the functionalities
create document, edit document, print, file operations etcetera. We then find test data which
correspond to the create, edit, print and file operations etcetera, and we observe how the
software gets used by different users.
Maybe we can write a small log-collecting program which keeps collecting the log of use by
various users of the software, and from there we can find out the rate at which these different
functions are invoked by different users. Based on this log we can assign a probability value
to each input class; each input class basically belongs to some operation. Then we will have
all the functionalities of the software and the
1132
probability of an input value being selected from each class, and that will form the
operational profile.
So, in statistical testing the first step is to determine the operational profile of the
software. The usage pattern can be captured by a log, and we can then analyze it to assign
probabilities to each of the functions supported by the software.
Typically we will have the various functions supported by the software and also the different
scenarios for each functionality, and we assign a probability to each function or scenario
being invoked. This forms the operational profile definition of the software.
1133
(Refer Slide Time: 22:59)
The second step is to generate the test data; here the test data should correspond to the
operational profile. What we mean is that if 0.4, that is 40 percent, is the probability of the
first scenario of the F1 functionality, then in our test suite 40 percent of the test cases
should invoke F1 on the F11 scenario.
Similarly, 0.1 is the probability of the F12 scenario of functionality F1, and therefore 10
percent of the test cases should correspond to the F12 scenario of the F1 functionality. As you
can see, if we analyze the test suite we should find a correspondence with the actual use of
the software. In defect testing we do not do this; we just anticipate where defects might be
and design test cases accordingly. Here, instead, we generate test data based on the usage
pattern: if the F11 scenario of the F1 functionality has a probability of 0.4, we design 40
percent of the test cases with test values that correspond to the F11 scenario of the F1
functionality.
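A small Python sketch of this step; the scenario names follow the example above, and the random draw itself is only illustrative.

import random

operational_profile = {
    "F11": 0.40, "F12": 0.10, "F13": 0.10,
    "F21": 0.05, "F22": 0.15,
    "F31": 0.15, "F32": 0.05,
}

scenarios = list(operational_profile)
weights = [operational_profile[s] for s in scenarios]

# Draw 10,000 test cases so that scenario frequencies in the test suite
# mirror the operational profile (about 40% of them exercise F11, and so on).
test_suite = random.choices(scenarios, weights=weights, k=10_000)
print(test_suite.count("F11") / len(test_suite))   # close to 0.40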
1134
(Refer Slide Time: 24:49)
In step 3 we actually carry out the execution of the software using the test cases and record
the failures. But measuring the clock time may not be the right thing to do; we should actually
measure the execution time. As we discussed earlier, the software runs very fast: each
functionality typically completes in milliseconds, but the human input may take several seconds
or minutes.
Therefore, most of the time the software idles; on each input it runs for a very short time and
then idles waiting for the next input. So we should have some instrumentation, some means by
which we collect the actual run-time information of the software, and we plot our failure data
against that; we consider that as the time of failure, not just the clock time. Using clock
time would result in a very inaccurate reliability estimate.
1135
(Refer Slide Time: 26:31)
And finally, after a statistically significant number of failures have been observed, the
reliability can be computed, that is, from how many functionalities were invoked, how many
failures were observed, and so on.
Here the result is likely to be agreeable to most of the users because the test data have been
defined based on the operational profile. We are at the end of this lecture; we will stop here
and continue in the next lecture.
Thank you.
1136
Software Project Management
Prof. Rajib Mall
Department of Computer Science and Engineering
Indian Institute of Technology, Kharagpur
Lecture – 60
Software Testing
Welcome to this lecture. In the last lecture we had discussed how to estimate the reliability
of a given software application. One way, we had said, is reliability growth modeling. We can
consider a model; several models have been proposed, and we discussed only three of them. These
models are basically graphical representations of the failure rate over time.
For the specific software application we are interested in, we collect its failure data over
time, plot it, and see which model fits our failure data most accurately. Based on that, we can
predict the reliability of the software at the present time, after a certain time, and so on.
The second approach we discussed was statistical testing. In statistical testing, we use the
operational profile to design test cases. Even though the name says testing, here the objective
is not to detect bugs; the objective of statistical testing is to estimate the reliability of
the software. We design the test cases such that the test data correspond to the frequency with
which the users invoke the functions. If the users on average invoke
1137
some functionality 40 percent of the time, then 40 percent of our test data should correspond
to that functionality.
The other important thing is that it relies on a large amount of test data because, as the name
says, it is a statistical technique: we should have a significantly large data set based on
which we can infer and predict the reliability after a certain time. If we use only a handful
of data, it will give a very wrong result. We also assume that only a small percentage of test
inputs is likely to cause system failure; if system failures occur on almost every test case,
then of course the prediction of reliability will not only be poor, it will be very gross and
inaccurate.
But the difficulty in using statistical testing is how to generate the test data, because we
need a statistically significant, large number of data points. And not only that: giving data
that are most likely to be input by the user is fine, but a significant percentage of unlikely
input data must also be included.
These basically correspond to the corner cases. The input values that the users give during
normal operation are fine, but we should also include the input values, or combinations of
input values, which the users normally do not give but may sometimes give, and those are the
ones which actually cause failures. Those are called the corner test cases.
1138
But creating those test cases is actually the challenge. The normal values we can easily
generate, but the unlikely values become a big challenge, and in statistical testing the
accuracy of the result depends largely on whether we have taken care to include enough corner
cases, a significant percentage of these corner cases.
Here we give a large number of inputs for the functions which are frequently used by the
average user, and therefore the reliability figure obtained matches the average user's
understanding of the reliability of the software. The idea is to test the parts that are
frequently used with more test cases, and therefore the bugs in those areas are removed.
The frequently used functions come out fine because a lot of test cases have been designed for
them; therefore, not only do we get a reliability estimate, but a significant number of bugs
are also removed, and that too from the frequently used functions. Consequently, the
reliability of the software appears higher compared to when bugs are removed uniformly from all
functions, because we are concentrating on testing the functions which are most frequently
used.
1139
(Refer Slide Time: 07:03)
Here we define the test data based on the operational profile, and therefore it gives a more
accurate estimate of the reliability as perceived by the average user compared to other types
of measurement, because the test data are generated based on the operational profile.
But if we think of the disadvantages of statistical testing, one is that the results depend to a large extent on how we define the operational profile and how we design the test cases based on it. The first challenge is defining the operational profile itself: it has to be collected from the actual usage situation of the software, through a laborious study in which logs are collected across a large number of users. The second problem is that, even if we have the operational profile correct, we still have to design the test cases. Designing test cases with the usual test values is not difficult, but selecting a proportionate set of unlikely inputs, the corner cases, is a big challenge, and if we do not do this the results will not be accurate.
To summarise our reliability study, the reliability of a software can be defined informally as its trustworthiness or dependability; that is the intuitive idea of reliability. More accurately, it is the probability of the product working correctly over a given period of time.
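As a simple illustration of this definition (not from the lecture; it assumes the commonly used exponential failure model with a constant failure rate), the reliability over a period t can be written as R(t) = e^(-lambda * t), where lambda is the observed failure rate:

    import math

    def reliability(failure_rate_per_hour, hours):
        """Probability of failure-free operation for 'hours', assuming an
        exponential failure model with constant failure rate (an assumption
        made for this sketch, not a result stated in the lecture)."""
        return math.exp(-failure_rate_per_hour * hours)

    # Example: 0.002 failures per hour of operation, over a 100-hour period.
    print(round(reliability(0.002, 100), 3))  # about 0.819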
(Refer Slide Time: 09:41)
We had also seen the problems in software reliability measurement. Unlike hardware, the reliability changes as bugs are detected, and it is also observer dependent: the reliability of a software depends on who is using it. Therefore, we said that we must use the operational profile to estimate the reliability.
Our test cases should be designed based on the operational profile of the software; then we can get a reliability result that is agreeable to most of the users. The operational profile is based on the different classes of input and the probability of their occurrence.
(Refer Slide Time: 10:39)
Statistical testing is based on testing with a large data set: tens of thousands or millions of test data items, designed based on the operational profile, to get a very realistic figure. We will now discuss software testing. The project manager also needs to understand software testing issues because, finally, testing is a major contributor to getting a good quality product.
To manage the development so that the software is of good quality, the project manager needs to understand some of the basic issues in software testing. With that motivation, we will discuss some issues in software testing.
(Refer Slide Time: 11:41)
The first thing we will discuss is the difference between a fault and a failure. When a program is being tested, it may fail. For example, on a given input it may display "fatal error, the program will now terminate". What we are observing here is not a bug: if we gave the test input and observed this message, we are observing the failure, not the error.
So, what we observe during testing is a failure, but the failure is caused by a defect, bug or fault. In other words, we say that a failure is a manifestation of a fault: there is a bug somewhere in the code and that causes the failure, but what the tester observes is the failure. But then, does every bug in the software cause a failure? No, there may be bugs that never cause a failure. We will discuss that a little later; you can also think about how a bug can be present and yet not cause a failure.
(Refer Slide Time: 13:25)
Let us distinguish between error, fault and failure. Even though a lot of automated tools are now available, programming is still largely effort intensive. In the 1993 IEEE Standard 1044, errors and faults were considered synonyms, but in the 2010 revision finer distinctions were proposed because there was a need to be more expressive in communication. We need to distinguish between error and fault because coding, designing and so on are laborious manual efforts and people make mistakes. Therefore, we must know what an error, a mistake, a fault and a failure are, because these terms are frequently used in this area.
To understand this, let us look at the picture here. The programmer is typing on his laptop or desktop, and first does the specification, then the design and then the code. During the specification he may make several mistakes, represented here by the blue rectangles, but many mistakes do not create any problem; they are not bugs. Some of them are bugs, but then not all bugs create failures; only some do.
The same applies to the design and the code. While coding, the programmer may make mistakes or errors, but not all errors are faults. Suppose the programmer, instead of writing a = b, made a mistake and wrote a = c, but it so happened that b and c always had the same value; then it is not a bug, it is not a fault. It was still a mistake on the part of the programmer: he should have written a = b, but he wrote a = c.
But then, some of the faults may cause failures. Suppose there is a fault where, on some condition, the programmer erroneously wrote b = b * 2 instead of b = b * 3, but that condition never occurs for the normal inputs. A mistake was committed and it is a fault, but it does not cause a failure because those inputs are never given during actual use. The programmer had written that code anticipating that such a condition might occur, but it never does, and therefore that bug does not manifest as a failure.
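A minimal, hypothetical illustration of such a latent fault (the function, variable names and condition are invented for this example, not taken from the lecture): the faulty branch is only reached for inputs that never occur in normal use, so the bug never manifests as a failure.

    def compute_bonus(salary, years_of_service):
        # Fault: the programmer meant to multiply by 3 here, but wrote 2.
        # The branch is only reached for years_of_service > 40, which never
        # occurs for the inputs given during actual use, so the fault stays hidden.
        if years_of_service > 40:
            return salary * 2    # faulty statement (should have been salary * 3)
        return salary * 1.1      # normal path, exercised by all realistic inputs

    print(compute_bonus(50_000, 12))  # normal input: the latent fault never shows up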
All programmers, however proficient, make mistakes, and many of these mistakes remain in the code as bugs. Even with the most experienced programmers, the data collected shows that on average there are 50 bugs per 1000 lines of source code, which is a large number. Even after thorough testing by professional testers, we do not really eliminate all the bugs.
Of the 50 bugs per 1000 lines that are present after the programmer completes programming, most are detected by the time testing is over, but typically 1 bug per 1000 lines of source code is still present; studies show that even after thorough testing, 1 bug per 1000 lines of source code remains.
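To put these figures in perspective (a hypothetical calculation based on the numbers just quoted, not an example from the lecture): a 100,000-line product would start with roughly 50 x 100 = 5,000 bugs after coding, and even after thorough professional testing about 100 residual bugs, 1 per 1000 lines, could still be expected to remain.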
For some applications this may be acceptable, but consider a banking application, where a bug that gets triggered into a failure may cause significant financial loss, or a nuclear power plant; there, 1 bug per 1000 lines of source code is not acceptable, because these are safety-critical systems that can cause huge damage if there is a failure.
But what is the distribution of these bugs? Typically, the majority of the bugs that remain after development pertain to specification and design, which contribute the larger share of about 60 percent; the remaining 40 percent of the bugs are due to coding mistakes.
Given that program development is a manual activity and people make mistakes, bugs are inevitable. How does one go about reducing the bugs to a minimum? One technique is review: reviewing the work at various stages is a very well accepted technique for reducing bugs. Testing is possibly the most important and most effective technique for reducing the number of bugs.
Formal specification and verification can also be used to reduce bugs, and even the use of a proper development process for developing good quality software reduces the number of bugs. So, as we can see, there are various ways in which we can remove bugs, and testing remains one of the most promising.
But, given a software, how does one go about testing it? To test any software we need to give it input data, observe the output, and see whether the generated output matches the expectation we have for the given input. That is, we observe the output and check whether the program behaved as expected; we give input to the system under test and, if the output is not as expected, we say that there is a failure.
(Refer Slide Time: 22:21)
If the program does not behave as expected, we note what test data we gave and under what condition, that is, in what state the software was when we gave the test data. We note down the condition and the test data, and this is called the test report; it contains all the conditions under which the different data were given, the output produced, and whether it was a failure or not.
These are taken up later by the developers, who reproduce the failure, debug, and then correct the software.
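A minimal sketch of what one entry in such a test report might record (the field names and the sample values are illustrative assumptions, not a prescribed format from the lecture):

    from dataclasses import dataclass

    @dataclass
    class TestReportEntry:
        test_id: str          # identifier of the test case
        system_state: str     # condition / state of the software when input was given
        test_data: str        # the input data supplied
        expected_output: str  # what we expected for this input
        observed_output: str  # what the program actually produced
        failed: bool          # True if observed output differs from expected

    entry = TestReportEntry(
        test_id="TC-042",
        system_state="user logged in, empty cart",
        test_data="add item with quantity = -1",
        expected_output="error message: invalid quantity",
        observed_output="fatal error, the program will now terminate",
        failed=True,
    )
    print(entry)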
Of all development activities, testing consumes the largest effort and the largest manpower among all roles. Companies invest heavily in testing, giving much more effort to it than to design, coding or specification. Of course, that means that if you walk into a company you are more likely to meet a tester, because testers far outnumber the other roles, which also implies more job opportunities in testing, since companies need a large number of testers.
For a typical software, 50 percent of the development effort is spent on testing, but if we look at the time for which testing is carried out, it is only about 10 percent of the development time. If it is a 1-month project, then perhaps hardly 3 to 4 days are given to testing; the rest go to specification, design, coding and so on.
But then, how is it possible that 50 percent of the development effort is spent in 10 percent of the time? It is possible because testers can work in parallel; there is a lot of parallelism in testing activities. We can deploy a large number of testers to carry out testing at the same time, testing different functionalities with different test data and so on.
On the other hand, in design we can have only a few designers; we cannot really put hundreds of designers on it, there are typically only a couple. Similarly, in coding we cannot deploy thousands of coders; we need to assign at least one module to each developer. Therefore, the parallelism in the other development activities is lower, and that is why they take more time even though less effort is spent on them. During testing, a significantly larger effort is expended in a shorter time.
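As a hypothetical illustration of these figures (the numbers are invented for this example): if a project consumes 200 person-days of effort over a 40-working-day schedule, testing accounts for about 100 person-days but only about 4 calendar days, which is feasible only if roughly 25 testers work in parallel, whereas the other 100 person-days of specification, design and coding are spread over the remaining 36 days among a handful of people.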
(Refer Slide Time: 25:59)
Testing techniques are evolving very rapidly and becoming more complex and sophisticated. The main reasons are larger and more complex programs, newer programming paradigms, test automation and new results in testing. All of these contribute to making testing very challenging work.
Testing is actually very challenging. The old view was that testing is only routine work, but that has changed. About 50 years ago, testing was largely monkey testing, where only random values were given as input, but since then a large number of innovations have taken place, many testing techniques, testing tools and so on, and the tester needs a good knowledge of these techniques.
In reality, how long does testing go on? As we can see in the unified process representation here, testing is actually carried out across the life cycle. At different times in the life cycle the test cases are defined and unit testing is conducted; integration testing is defined and conducted; usability testing and system testing are defined and conducted.
But then, one important question is how long to test, because as more and more test data is defined, more bugs get detected. One way is to observe the number of bugs detected per unit time; as testing progresses, we find that the number of bugs detected per day decreases, and eventually we reach a situation where testing goes on for a day or a week and no bug is detected. That is when we may stop testing.
The other way is to seed bugs. The manager introduces bugs into the program without telling the developers and testers, and then observes how many of the bugs he introduced are detected. If he finds that most of the seeded bugs have been detected, then the manager knows it is time to stop testing.
Similarly, for coding, checking whether all the modules that were designed have been handled properly is called verification, whereas validation is the process of determining whether a fully developed system conforms to its specification. That means validation is carried out on the completed system, whereas verification is done as development proceeds, to check whether development is proceeding correctly, that is, whether each activity produces results consistent with the output of the previous activity. We are running out of time for this lecture and we will stop here.
Thank you.