
Subject Code: PEC-CSDS501 (A)

Subject Name: Software Engineering & Project Management

Module 1: (08 hrs.) The Software Product and Software Process: Software Product
and Process Characteristics, Software Process Models: Linear Sequential Model,
Prototyping Model, RAD Model, Evolutionary Process Models like Incremental
Model, Spiral Model, Component Assembly Model, RUP and Agile processes.
Software Process customization and improvement, CMM, Product and Process
Metrics, Feasibility Analysis, Cost Estimation Model
MODULE 1 LECTURE NOTE 1

INTRODUCTION TO SOFTWARE ENGINEERING

The term software engineering is composed of two words, software and engineering.

Software is more than just program code. A program is executable code that serves some computational purpose. Software is a collection of executable programming code, associated libraries, and documentation. Software made for a specific requirement is called a software product.

Engineering, on the other hand, is all about developing products using well-defined scientific principles and methods.

So, we can define software engineering as an engineering branch associated with the development of software products using well-defined scientific principles, methods and procedures. The outcome of software engineering is an efficient and reliable software product.

IEEE defines software engineering as:

The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software.

We can alternatively view it as a systematic collection of past experience. The experience is arranged in the form of methodologies and guidelines.

A small program can be written without using software engineering principles. But if one wants to develop a large software product, then software engineering principles are absolutely necessary to achieve good quality software cost-effectively.

Without using software engineering principles it would be difficult to develop large programs. In industry it is usually necessary to develop large programs to accommodate multiple functions. A problem with developing such large commercial programs is that the complexity and difficulty levels of the programs increase exponentially with their sizes. Software engineering helps to reduce this programming complexity. Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition.

The principle of abstraction implies that a problem can be simplified by omitting irrelevant details. In other words, the main purpose of abstraction is to consider only those aspects of the problem that are relevant for a certain purpose and suppress the other aspects that are not relevant for that purpose. Once the simpler problem is solved, the omitted details can be taken into consideration to solve the next lower-level abstraction, and so on. Abstraction is a powerful way of reducing the complexity of a problem.

The other approach to tackling problem complexity is decomposition. In this technique, a complex problem is divided into several smaller problems, and then the smaller problems are solved one by one. However, any random decomposition of a problem into smaller parts will not help. The problem has to be decomposed such that each component of the decomposed problem can be solved independently, and the solutions of the different components can then be combined to get the full solution. A good decomposition of a problem should minimize interactions among its components. If the different subcomponents are interrelated, then the components cannot be solved separately and the desired reduction in complexity will not be realized.
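To make the idea concrete, here is a minimal Python sketch (an illustration added to these notes; the payroll example and all names are hypothetical). The problem is decomposed into sub-problems that can be solved independently, and each function signature acts as an abstraction that hides the component's internal details:

```python
# Hypothetical sketch of decomposition: each sub-problem is solved by an
# independent function, and the full solution combines their results.
# The function signatures are the abstraction: callers need only the
# relevant inputs and outputs, not the internal details.

def compute_gross_pay(hours_worked: float, hourly_rate: float) -> float:
    """Sub-problem 1: gross pay, solvable in isolation."""
    return hours_worked * hourly_rate

def compute_tax(gross_pay: float, tax_rate: float = 0.2) -> float:
    """Sub-problem 2: tax, depends only on its own inputs."""
    return gross_pay * tax_rate

def compute_net_pay(hours_worked: float, hourly_rate: float) -> float:
    """Combine the independently solved components into the full solution."""
    gross = compute_gross_pay(hours_worked, hourly_rate)
    return gross - compute_tax(gross)

if __name__ == "__main__":
    # 160 hours at 25.0 per hour, 20% tax -> 3200.0 net
    print(compute_net_pay(160, 25.0))
```

Because the two sub-problems interact only through the single value gross, they can be developed and tested separately, which is exactly the property a good decomposition aims for.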

NEED FOR SOFTWARE ENGINEERING

The need for software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.

Large software - It is easier to build a wall than a house or building; likewise, as software grows large, engineering has to step in to give its construction a scientific process.

Scalability - If the software process were not based on scientific and engineering concepts, it would be easier to re-create new software than to scale up an existing one.

Cost - The hardware industry has shown its skill, and large-scale manufacturing has lowered the price of computer and electronic hardware. But the cost of software remains high if a proper process is not adopted.

Dynamic Nature - The ever growing and adapting nature of software hugely depends upon the environment in which the user works. If the nature of the software keeps changing, new enhancements need to be made to the existing version. This is where software engineering plays a good role.

Quality Management - A better process of software development provides a better, higher-quality software product.

CHARACTERISTICS OF GOOD SOFTWARE

A software product can be judged by what it offers and how well it can be used. The software must satisfy requirements on the following grounds:

Operational

Transitional

Maintenance
Well-engineered and crafted software is expected to have the following characteristics:

Operational: This tells us how well the software works in operation.

It can be measured on:

Budget

Usability

Efficiency

Correctness

Functionality

Dependability

Security

Safety

Transitional: This aspect is important when the software is moved from one platform to another:

Portability

Interoperability

Reusability

Adaptability

Maintenance: This aspect describes how well the software can maintain itself in an ever-changing environment:

Modularity

Maintainability

Flexibility

Scalability
In short, software engineering is a branch of computer science that uses well-defined engineering concepts to produce efficient, durable, scalable, in-budget and on-time software products.

LECTURE NOTE 2

SOFTWARE DEVELOPMENT LIFE CYCLE

LIFE CYCLE MODEL

A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle. A life cycle model represents all the activities required
to make a software product transit through its life cycle phases. It also captures the order in
which these activities are to be undertaken. In other words, a life cycle model maps the different
activities performed on a software product from its inception to retirement. Different life cycle
models may map the basic development activities to phases in different ways. Thus, no matter
which life cycle model is followed, the basic activities are included in all life cycle models
though the activities may be carried out in different orders in different life cycle models. During
any life cycle phase, more than one activity may also be carried out.

THE NEED FOR A SOFTWARE LIFE CYCLE MODEL

The development team must identify a suitable life cycle model for the particular project and then adhere to it. Without a particular life cycle model, the development of a software product would not be carried out in a systematic and disciplined manner. When a software product is being developed by a team, there must be a clear understanding among team members about when and what to do. Otherwise, it would lead to chaos and project failure. This problem can be illustrated with an example. Suppose a software development problem is divided into several parts and the parts are assigned to the team members. From then on, suppose the team members are allowed the freedom to develop the parts assigned to them in whatever way they like. It is possible that one member might start writing the code for his part, another might decide to prepare the test documents first, and some other engineer might begin with the design phase of the parts assigned to him. This would be one of the perfect recipes for project failure. A software life cycle model defines entry and exit criteria for every phase. A phase can start only if its phase-entry criteria have been satisfied. Without a software life cycle model, the entry and exit criteria for a phase cannot be recognized, and it becomes difficult for software project managers to monitor the progress of the project.

Different software life cycle models

Many life cycle models have been proposed so far. Each of them has some advantages as well as some disadvantages. A few important and commonly used life cycle models are as follows:

Classical Waterfall Model

Iterative Waterfall Model

Prototyping Model

Evolutionary Model

Spiral Model

1. CLASSICAL WATERFALL MODEL :

The classical waterfall model is intuitively the most obvious way to develop software.
Though the classical waterfall model is elegant and intuitively obvious, it is not a practical
model in the sense that it cannot be used in actual software development projects. Thus, this
model can be considered to be a theoretical way of developing software. But all other life
cycle models are essentially derived from the classical waterfall model. So, in order to be
able to appreciate other life cycle models it is necessary to learn the classical waterfall
model. Classical waterfall model divides the life cycle into the following phases as shown in
fig.2.1:

Fig 2.1: Classical Waterfall Model


Feasibility study - The main aim of feasibility study is to determine whether it would be
financially and technically feasible to develop the product.

 At first project managers or team leaders try to have a rough understanding of what
is required to be done by visiting the client side. They study different input data to
the system and output data to be produced by the system. They study what kind of
processing is needed to be done on these data and they look at the various
constraints on the behavior of the system.
 After they have an overall understanding of the problem, they investigate the
different solutions that are possible. Then they examine each of the solutions in
terms of the kinds of resources required, the cost of development and the
development time for each solution.
 Based on this analysis they pick the best solution and determine whether the solution
is feasible financially and technically. They check whether the customer budget
would meet the cost of the product and whether they have sufficient technical
expertise in the area of development.

Requirements analysis and specification: - The aim of the requirements analysis and
specification phase is to understand the exact requirements of the customer and to document
them properly. This phase consists of two distinct activities, namely

 Requirements gathering and analysis


 Requirements specification

The goal of the requirements gathering activity is to collect all relevant information
from the customer regarding the product to be developed. This is done to clearly
understand the customer requirements so that incompleteness and inconsistencies are
removed.

The requirements analysis activity is begun by collecting all relevant data regarding the
product to be developed from the users of the product and from the customer through
interviews and discussions. For example, to perform the requirements analysis of a
business accounting software required by an organization, the analyst might interview
all the accountants of the organization to ascertain their requirements. The data collected
from such a group of users usually contain several contradictions and ambiguities, since
each user typically has only a partial and incomplete view of the system. Therefore it is
necessary to identify all ambiguities and contradictions in the requirements and resolve
them through further discussions with the customer. After all ambiguities,
inconsistencies, and incompleteness have been resolved and all the requirements
properly understood, the requirements specification activity can start. During this
activity, the user requirements are systematically organized into a Software
Requirements Specification (SRS) document. The customer requirements identified
during the requirements gathering and analysis activity are organized into a SRS
document. The important components of this document are functional requirements, the
nonfunctional requirements, and the goals of implementation.

Design: The goal of the design phase is to transform the requirements specified in the
SRS document into a structure that is suitable for implementation in some programming
language. In technical terms, during the design phase the software architecture is
derived from the SRS document. Two distinctly different approaches are available: the
traditional design approach and the object-oriented design approach. Traditional design
approach –

 Traditional design consists of two different activities; first, a structured analysis of the requirements specification is carried out, where the detailed structure of the problem is examined. This is followed by a structured design activity. During structured design, the results of structured analysis are transformed into the software design.
 Object-oriented design approach - In this technique, various objects that occur in the problem domain and the solution domain are first identified, and the different relationships that exist among these objects are identified. The object structure is further refined to obtain the detailed design.

Coding and unit testing:-The purpose of the coding phase (sometimes called the
implementation phase) of software development is to translate the software design
into source code. Each component of the design is implemented as a program
module. The end-product of this phase is a set of program modules that have been
individually tested. During this phase, each module is unit tested to determine the
correct working of all the individual modules. It involves testing each module in
isolation as this is the most efficient way to debug the errors identified at this stage.

Integration and system testing: - Integration of different modules is undertaken once they have been coded and unit tested. During the integration and system testing phase, the modules are integrated in a planned manner. The different modules making up a software product are almost never integrated in one shot. Integration is normally carried out incrementally over a number of steps. During each integration step, the partially integrated system is tested and a set of previously planned modules are added to it. Finally, when all the modules have been successfully integrated and tested, system testing is carried out. The goal of system testing is to ensure that the developed system conforms to the requirements laid out in the SRS document. System testing usually consists of three different kinds of testing activities:

 α – testing: It is the system testing performed by the development team.


 β –testing: It is the system testing performed by a friendly set of customers.
 Acceptance testing: It is the system testing performed by the customer himself after product delivery to determine whether to accept or reject the delivered product.

System testing is normally carried out in a planned manner according to the system test plan document. The system test plan identifies all testing-related activities that must be performed, specifies the schedule of testing, and allocates resources. It also lists all the test cases and the expected outputs for each test case.

Maintenance: - Maintenance of a typical software product requires much more effort than the effort needed to develop the product itself. Many studies carried out in the past confirm this and indicate that the ratio of the development effort of a typical software product to its maintenance effort is roughly 40:60. Maintenance involves performing any one or more of the following three kinds of activities:

 Correcting errors that were not discovered during the product development phase. This is
called corrective maintenance.
 Improving the implementation of the system, and enhancing the functionalities of the
system according to the customer’s requirements. This is called perfective maintenance.
 Porting the software to work in a new environment. For example, porting may be
required to get the software to work on a new computer platform or with a new operating
system. This is called adaptive maintenance.

Shortcomings of the classical waterfall model

The classical waterfall model is an idealistic one since it assumes that no development error is
ever committed by the engineers during any of the life cycle phases. However, in practical
development environments, the engineers do commit a large number of errors in almost every
phase of the life cycle. The source of the defects can be many: oversight, wrong assumptions, use
of inappropriate technology, communication gap among the project engineers, etc. These defects
usually get detected much later in the life cycle. For example, a design defect might go unnoticed
till we reach the coding or testing phase. Once a defect is detected, the engineers need to go back
to the phase where the defect had occurred and redo some of the work done during that phase
and the subsequent phases to correct the defect and its effect on the later phases. Therefore, in
any practical software development work, it is not possible to strictly follow the classical
waterfall model.

LECTURE NOTE 3

2. ITERATIVE WATERFALL MODEL

To overcome the major shortcomings of the classical waterfall model, we come up with the
iterative waterfall model.

Fig 3.1 : Iterative Waterfall Model

Here, feedback paths are provided for error correction as and when an error is detected in a later phase. Though errors are inevitable, it is desirable to detect them in the same phase in which they occur; this reduces the effort needed to correct the bug.

The advantage of this model is that there is a working model of the system at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development enables the team to take corrective measures within a limited budget.

The disadvantage of this SDLC model is that it is applicable only to large and bulky software development projects. This is because it is hard to break a small software system into smaller serviceable increments/modules.
3. PROTOTYPING MODEL

Prototype

A prototype is a toy implementation of the system. A prototype usually exhibits limited functional capabilities, low reliability, and inefficient performance compared to the actual software. A prototype is usually built using several shortcuts. The shortcuts might involve using inefficient, inaccurate, or dummy functions. The shortcut implementation of a function, for example, may produce the desired results by using a table look-up instead of performing the actual computations. A prototype usually turns out to be a very crude version of the actual system.

Need for a prototype in software development

There are several uses of a prototype. An important purpose is to illustrate the input data formats, messages, reports, and the interactive dialogues to the customer. This is a valuable mechanism for gaining a better understanding of the customer’s needs:
 how the screens might look
 how the user interface would behave
 how the system would produce outputs

Another reason for developing a prototype is that it is impossible to get the perfect product in
the first attempt. Many researchers and engineers advocate that if you want to develop a good
product you must plan to throw away the first version. The experience gained in developing the
prototype can be used to develop the final product.

A prototyping model can be used when technical solutions are unclear to the development team.
A developed prototype can help engineers to critically examine the technical issues associated
with the product development. Often, major design decisions depend on issues like the response
time of a hardware controller, or the efficiency of a sorting algorithm, etc. In such circumstances,
a prototype may be the best or the only way to resolve the technical issues.

A prototype of the actual product is preferred in situations such as:

 User requirements are not complete


 Technical issues are not clear

Fig 3.2: Prototype Model

4. EVOLUTIONARY MODEL

It is also called the successive versions model or incremental model. At first, a simple working model is built. Subsequently, it undergoes functional improvements, and new functions are added until the desired system is built.

Applications:

 Large projects where modules for incremental implementation can easily be identified. Often used when the customer wants to start using the core features rather than waiting for the full software.
 Also used in object-oriented software development, because the system can be easily partitioned into units in terms of objects.

Advantages:

 The user gets a chance to experiment with a partially developed system.

 Errors are reduced because the core modules get tested thoroughly.

Disadvantages:

It is difficult to divide the problem into several versions that would be acceptable to the customer and that can be incrementally implemented and delivered.

Fig 3.3: Evolutionary Model

5. SPIRAL MODEL

The Spiral model of software development is shown in fig. 4.1. The diagrammatic
representation of this model appears like a spiral with many loops. The exact number of
loops in the spiral is not fixed. Each loop of the spiral represents a phase of the software
process. For example, the innermost loop might be concerned with feasibility study, the next
loop with requirements specification, the next one with design, and so on. Each phase in this
model is split into four sectors (or quadrants) as shown in fig. 4.1. The following activities
are carried out during each phase of a spiral model.
Fig 4.1: Spiral Model

First quadrant (Objective setting)

 During the first quadrant, the objectives of the phase are identified.
 The risks associated with these objectives are examined.

Second Quadrant (Risk Assessment and Reduction)

• A detailed analysis is carried out for each identified project risk.


• Steps are taken to reduce the risks. For example, if there is a risk that the requirements
are inappropriate, a prototype system may be developed.

Third Quadrant (Development and Validation)

• Develop and validate the next level of the product after resolving the identified
risks.
Fourth Quadrant (Review and Planning)

• Review the results achieved so far with the customer and plan the next iteration
around the spiral.
• A progressively more complete version of the software gets built with each iteration
around the spiral.

Circumstances to use spiral model

The spiral model is called a meta model since it encompasses all other life cycle models. Risk
handling is inherently built into this model. The spiral model is suitable for development of
technically challenging software products that are prone to several kinds of risks. However, this
model is much more complex than the other models – this is probably a factor deterring its use in
ordinary projects.

RAD (Rapid Application Development) Model

RAD is a linear sequential software development process model that emphasizes a concise development cycle using a component-based construction approach. If the requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a concise time period.

RAD (Rapid Application Development) is a concept holding that products can be developed faster and with higher quality through:

o Gathering requirements using workshops or focus groups

o Prototyping and early, iterative user testing of designs

o The re-use of software components

o A rigidly paced schedule that defers design improvements to the next product version

o Less formality in reviews and other team communication


The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is defined by answering questions like: what data drives the business process, what data is generated, who generates it, where does the information go, and who processes it?

2. Data Modelling: The data collected from business modeling is refined into a set of data
objects (entities) that are needed to support the business. The attributes (character of each entity)
are identified, and the relation between these data objects (entities) is defined.

3. Process Modelling: The information objects defined in the data modeling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

4. Application Generation: Automated tools are used to facilitate construction of the software; these often employ fourth-generation language (4GL) techniques.

5. Testing & Turnover: Many of the program components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new components must be tested, and all interfaces must be fully exercised.
When to use RAD Model?

o When the system needs to be created as a project that can be modularized within a short span of time (2-3 months).

o When the requirements are well-known.

o When the technical risk is limited.

o When the budget allows the use of automated code-generating tools.

Advantage of RAD Model

o This model is flexible to change.

o In this model, changes are easily accommodated.

o Each phase in RAD delivers the highest-priority functionality to the customer.

o It reduces development time.

o It increases the reusability of features.

Disadvantage of RAD Model

o It requires highly skilled designers.

o Not all applications are compatible with RAD.

o The RAD model cannot be used for smaller projects.

o It is not suitable where technical risk is high.

o It requires continuous user involvement.

Component Assembly Model

The Component Assembly Model is just like the Prototype model: first a prototype is created according to the requirements of the customer and sent to the user for evaluation to get feedback on the modifications to be made, and the same procedure is repeated until the software caters to the needs of businesses and consumers. Thus, it is an iterative development model.

The Component Assembly model was developed to address problems faced during the Software Development Life Cycle (SDLC). Instead of writing fresh code in different languages, developers using this model opt for already available components and use them to assemble an efficient program, refining the prototype on the basis of user feedback until it matches the specifications provided by the customer and the business.

Moreover, the Component Assembly model resembles the Rapid Application Development (RAD) model and uses available resources and GUIs to create a software program. Today, a number of SDKs are available that make it easier for developers to design a program with fewer lines of code, leaving ample time to concentrate on components of the program other than the coding language, such as user input and the graphical interaction between user and program.

In addition, a Component Assembly model can use a number of previously made components without the use of an SDK, putting the powerful components together to develop an effective and efficient program. This is one of the most beneficial advantages of the Component Assembly model, as it saves a lot of time during software development.

Comparison of different life-cycle models

The classical waterfall model can be considered as the basic model and all other life cycle
models as embellishments of this model. However, the classical waterfall model cannot be used
in practical development projects, since this model supports no mechanism to handle the errors
committed during any of the phases.

This problem is overcome in the iterative waterfall model. The iterative waterfall model is
probably the most widely used software development model evolved so far. This model is simple
to understand and use. However this model is suitable only for well-understood problems; it is
not suitable for very large projects and for projects that are subject to many risks.

The prototyping model is suitable for projects for which either the user requirements or the
underlying technical aspects are not well understood. This model is especially popular for
development of the user-interface part of the projects.
The evolutionary approach is suitable for large problems which can be decomposed into a set of
modules for incremental development and delivery. This model is also widely used for object
oriented development projects. Of course, this model can only be used if the incremental delivery
of the system is acceptable to the customer.

The spiral model is called a meta model since it encompasses all other life cycle models. Risk
handling is inherently built into this model. The spiral model is suitable for development of
technically challenging software products that are prone to several kinds of risks. However, this
model is much more complex than the other models – this is probably a factor deterring its use in
ordinary projects.

The different software life cycle models can be compared from the viewpoint of the customer.
Initially, customer confidence in the development team is usually high irrespective of the
development model followed. During the lengthy development process, customer confidence
normally drops off, as no working product is immediately visible. Developers answer customer
queries using technical slang, and delays are announced. This gives rise to customer resentment.
On the other hand, an evolutionary approach lets the customer experiment with a working
product much earlier than the monolithic approaches. Another important advantage of the
incremental model is that it reduces the customer’s trauma of getting used to an entirely new
system. The gradual introduction of the product via incremental phases provides time to the
customer to adjust to the new product. Also, from the customer’s financial viewpoint,
incremental development does not require a large upfront capital outlay. The customer can order
the incremental versions as and when he can afford them.

Rational Unified Process (RUP)

The Rational Unified Process (RUP) is a software engineering and development process focused on using the Unified Modeling Language (UML) to design and build software. Using the RUP process allows you to carry out business analysis, design, testing and implementation throughout the software development process and its distinct stages, helping you create a customized product. You can use beta test models and prototypes of various software components in all phases of RUP to:

 Better achieve milestones

 Calibrate elements of design

 Troubleshoot concerns

 Present the best possible software solutions


What are the phases of RUP?

There are five phases of RUP that can help you decrease development costs, wasted resources
and total project management time. Here's a detailed explanation of each phase:

Inception

In the inception stage of RUP, you communicate and plan the software concept or idea, evaluating what resources you need for the project and determining if it's viable. You use use-case modeling to identify the project scope, costs and time required to build it. If there are specific customer needs or requests for the software, you consider how to incorporate them effectively within the design plan.

Elements often included in the inception stage are:

 Risk assessments and project plans

 Vision or mission statements

 Financial projections and business models

 Prototype development

Elaboration

During the elaboration phase, you further evaluate the resources and costs needed for the
project's full development, creating actionable and executable baseline architecture of the
software. This detailed stage aims to diminish cost totals and risk and produce a revised use case
model. You compare the software projections against the established milestones and project
criteria. If there are discrepancies, you redesign, adjust or cancel the project. Elements often
included in the elaboration stage are:

 Use case model

 Viable software architecture

 Risk reduction plans

 User manual

You often collaborate with IT colleagues in this phase to make sure software architecture
provides stability and addresses risks. The use case model created during the elaboration stage
serves as a blueprint for the rest of the project's phases. If the current design and costs get
approved, you move on to software construction.
Construction

This phase of RUP often takes the longest because you create, write, collaborate and test your
software and applications, focusing on the features and components of the system and how well
they function. You typically start by incrementally expanding upon the baseline architecture,
building code and software until it's complete. You manage costs and quality in this phase,
intending to produce a completed software system and user manual. Review the software usability and transition plan before ending the RUP construction phase.

Transition

The transition stage releases the project to the user, whether that's the public or internal users like
employees. A transition phase is rarely perfect and often includes making system adjustments
based on practical and daily usage. Ensuring a smooth transition and rectifying software issues in a timely manner can help make this stage a success.

Elements often involved in the transition period include:

 Beta testing

 Education and training

 Deployment and data analytics

 Collection of user feedback

Production

This last phase of the RUP process includes software deployment, intending to gain user
acceptance. You maintain and update the software accordingly, often based on feedback from
people who use the software, app, program or platform.

This last stage usually includes:

 Packaging, distribution and installation

 User help and assistance platform availability

 Data migration

 Continued user acceptance initiatives


Advantages of RUP

RUP can provide software development or design teams an array of advantages, including:

 Offering thorough documentation: The RUP process involves carefully documenting each step, which can be highly beneficial for collaborative projects.

 Enhancing risk management practices: RUP can help software individuals proactively
respond to potential software challenges. This can improve risk management and
troubleshooting efforts.

 Giving regular feedback to stakeholders: A vital part of the RUP process is giving
consistent updates to project stakeholders. These stakeholders may range from other
software individuals involved in the project to company leaders or vendors.

 Reducing total project time: RUP may allow the software development team to lower
their time in both the development and integration stages.

 Determining working elements early on in the project: With RUP, project stakeholders may notice potential software issues earlier during the design or development processes. This can make mitigating or solving challenges easier before they become more complex.

Potential drawbacks of RUP

Using RUP can come with some possible disadvantages. If you or your team plans to use RUP,
it's important to prepare for these potential issues so you can proactively navigate challenges.

Following are some of the potential drawbacks of RUP, plus ideas about how to overcome them:

 Complexity of process: Since RUP is a complicated procedure, successfully performing it requires software team members with great expertise. If some of the individuals on your software team are new to the field, it might be easier to choose a different software development process.

 Cost and time: The amount of documentation required for RUP can be time-consuming
and expensive. Software teams with smaller budgets might benefit from choosing a more
cost-efficient approach for their project.

 Challenge of using it for projects with multiple development streams: RUP may
cause confusion during the testing stage for larger projects involving multiple
components and software teams. Because of its emphasis on ongoing integration, those
working on projects with multiple development streams may want to either slow down
the RUP process or look for another development procedure.
Agile Model

The meaning of Agile is swift or versatile. "Agile process model" refers to a software development approach based on iterative development. Agile methods break tasks into smaller iterations, or parts, and do not directly involve long-term planning. The project scope and requirements are laid down at the beginning of the development process. Plans regarding the number of iterations, and the duration and scope of each iteration, are clearly defined in advance.

Each iteration is considered a short time "frame" in the Agile process model, which typically lasts from one to four weeks. The division of the entire project into smaller parts helps to minimize the project risk and to reduce the overall project delivery time. Each iteration involves a team working through a full software development life cycle including planning, requirements analysis, design, coding, and testing before a working product is demonstrated to the client.

Phases of Agile Model:

The phases in the Agile model are as follows:

1. Requirements gathering

2. Design the requirements


3. Construction/ iteration

4. Testing/ Quality assurance

5. Deployment

6. Feedback

1. Requirements gathering: In this phase, you must define the requirements. You should
explain business opportunities and plan the time and effort needed to build the project. Based on
this information, you can evaluate technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to show
the work of new features and show how it will apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work begins. Designers and developers start working on their project, which aims to deploy a working product. The product will undergo various stages of improvement, so it starts with simple, minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance and looks for bugs.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.

Agile Testing Methods:

o Scrum

o Crystal

o Dynamic Software Development Method(DSDM)

o Feature Driven Development(FDD)

o Lean Software Development

o eXtreme Programming(XP)

Scrum
SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions.

There are three roles in it, and their responsibilities are:

o Scrum Master: The Scrum Master sets up the team, arranges the meetings and removes obstacles from the process.

o Product owner: The product owner creates the product backlog, prioritizes the backlog and is responsible for the delivery of functionality at each iteration.

o Scrum Team: The team manages its own work and organizes the work to complete the sprint or cycle.

eXtreme Programming(XP)

This type of methodology is used when customers' demands or requirements are constantly changing, or when they are not sure about the system's performance.

Crystal:

There are three concepts in this method:

1. Chartering: Multiple activities are involved in this phase, such as making a development team, performing feasibility analysis, developing plans, etc.

2. Cyclic delivery: This consists of two further cycles:

A. The team updates the release plan.

B. The integrated product is delivered to the users.

3. Wrap up: According to the user environment, this phase performs deployment and post-deployment activities.

Dynamic Software Development Method(DSDM):

DSDM is a rapid application development strategy for software development that provides an agile project delivery framework. The essential features of DSDM are that users must be actively involved, and teams are given the authority to make decisions. The techniques used in DSDM are:

1. Time Boxing

2. MoSCoW Rules

3. Prototyping
The DSDM project contains seven stages:

1. Pre-project

2. Feasibility Study

3. Business Study

4. Functional Model Iteration

5. Design and build Iteration

6. Implementation

7. Post-project

Feature Driven Development(FDD):

This method focuses on "Designing and Building" features. In contrast to other agile methods, FDD describes small steps of the work that should be carried out separately for each feature.

Lean Software Development:

Lean software development methodology follows the principle of "just-in-time production." The lean method aims to increase the speed of software development and reduce costs. Lean development can be summarized by seven principles.

1. Eliminating Waste

2. Amplifying learning

3. Defer commitment (deciding as late as possible)

4. Early delivery

5. Empowering the team

6. Building Integrity

7. Optimize the whole

When to use the Agile Model?

o When frequent changes are required.

o When a highly qualified and experienced team is available.

o When the customer is ready to interact with the software team continuously.
o When project size is small.

Advantage(Pros) of Agile Method:

1. Frequent Delivery

2. Face-to-Face Communication with clients.

3. Efficient design that fulfils the business requirement.

4. Anytime changes are acceptable.

5. It reduces total development time.

Disadvantages(Cons) of Agile Model:

1. Due to the shortage of formal documentation, confusion can arise, and crucial decisions taken in various phases can be misinterpreted at any time by different team members.

2. Due to the lack of proper documentation, once the project is completed and the developers are allotted to another project, maintenance of the finished product can become difficult.

Software Engineering Institute Capability Maturity Model (SEICMM)

The Capability Maturity Model (CMM) is a methodology used to develop and refine an organization's software development process.

The model describes a five-level evolutionary path of increasingly organized and consistently more mature processes.

CMM was developed and is promoted by the Software Engineering Institute (SEI), a research and development center sponsored by the U.S. Department of Defense (DoD).

The Capability Maturity Model is used as a benchmark to measure the maturity of an organization's software process.

Methods of SEICMM
There are two methods of SEICMM:

Capability Evaluation: Capability evaluation provides a way to assess the software process capability of an organization. The results of a capability evaluation indicate the likely contractor performance if the contractor is awarded a contract. Therefore, the results of the software process capability assessment can be used to select a contractor.

Software Process Assessment: Software process assessment is used by an organization to improve its process capability. Thus, this type of evaluation is for purely internal use.

SEI CMM categorizes software development organizations into the following five maturity levels. The various levels of SEI CMM have been designed so that it is easy for an organization to slowly build up its quality system starting from scratch.
Level 1: Initial

Ad hoc activities characterize a software development organization at this level. Very few or no processes are defined and followed. Since software production processes are not defined, different engineers follow their own processes and, as a result, development efforts become chaotic. Therefore, it is also called the chaotic level.

Level 2: Repeatable

At this level, the fundamental project management practices like tracking cost and schedule are
established. Size and cost estimation methods, like function point analysis, COCOMO, etc. are
used.

Level 3: Defined

At this level, the methods for both management and development activities are defined and documented. There is a common organization-wide understanding of activities, roles, and responsibilities. Although the processes are defined, the process and product qualities are not measured. ISO 9000 aims at achieving this level.

Level 4: Managed

At this level, the focus is on software metrics. Two kinds of metrics are collected.
Product metrics measure the features of the product being developed, such as its size,
reliability, time complexity, understandability, etc.

Process metrics follow the effectiveness of the process being used, such as average defect
correction time, productivity, the average number of defects found per hour inspection, the
average number of failures detected during testing per LOC, etc. The software process and
product quality are measured, and quantitative quality requirements for the product are met.
Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the product and
process quality. The process metrics are used to analyze whether a project performed satisfactorily. Thus, the outcome of process measurements is used to evaluate project performance rather than to improve the process.

Level 5: Optimizing

At this phase, process and product metrics are collected. Process and product measurement data
are evaluated for continuous process improvement.

Key Process Areas (KPA) of a software organization

Except for SEI CMM level 1, each maturity level is characterized by several Key Process Areas (KPAs) that identify the areas an organization should focus on to improve its software process to the next level. The focus of each level and the corresponding key process areas are shown in the fig.
SEI CMM provides a series of key areas on which to focus to take an organization from one
level of maturity to the next. Thus, it provides a method for gradual quality improvement over
various stages. Each step has been carefully designed such that one step enhances the capability
already built up.

METRICS FOR SOFTWARE PROJECT SIZE ESTIMATION

Accurate estimation of the problem size is fundamental to satisfactory estimation of effort, time duration and cost of a software project. In order to be able to accurately estimate the project size, some important metrics should be defined in terms of which the project size can be expressed. The size of a problem is obviously not the number of bytes that the source code occupies, nor is it the byte size of the executable code. The project size is a measure of the problem complexity in terms of the effort and time required to develop the product.

Currently, two metrics are widely used to estimate size: lines of code (LOC) and function point (FP). The usage of each of these metrics in project size estimation has its own advantages and disadvantages.

Lines of Code (LOC)

LOC is the simplest among all metrics available to estimate project size. This metric is very
popular because it is the simplest to use. Using this metric, the project size is estimated by
counting the number of source instructions in the developed program. Obviously, while counting
the number of source instructions, lines used for commenting the code and the header lines
should be ignored.

Determining the LOC count at the end of a project is a very simple job. However, accurate
estimation of the LOC count at the beginning of a project is very difficult. In order to estimate
the LOC count at the beginning of a project, project managers usually divide the problem into
modules, and each module into submodules and so on, until the sizes of the different leaf-level
modules can be approximately predicted. To be able to do this, past experience in developing
similar products is helpful. By using the estimation of the lowest level modules, project
managers arrive at the total size estimation.
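As a minimal sketch of the counting rule described above, the following Python function counts non-blank source lines while ignoring comment lines. It assumes Python-style '#' comments; a real LOC counter must know the comment syntax of the language being measured:

```python
def count_loc(path: str) -> int:
    """Count source lines, ignoring blank lines and comment lines.

    Assumes Python-style '#' comments; a real LOC counter must handle
    the comment syntax (including block comments) of the target language.
    """
    loc = 0
    with open(path) as source:
        for line in source:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

# Example usage (hypothetical file name):
# print(count_loc("payroll.py"))
```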

Function point (FP)

Function point metric was proposed by Albrecht [1983]. This metric overcomes many of the shortcomings of the LOC metric. Since its inception in the late 1970s, function point metric has been slowly gaining popularity. One of the important advantages of
using the function point metric is that it can be used to easily estimate the size of a software
product directly from the problem specification. This is in contrast to the LOC metric, where the
size can be accurately determined only after the product has fully been developed. The
conceptual idea behind the function point metric is that the size of a software product is directly
dependent on the number of different functions or features it supports. A software product
supporting many features would certainly be of larger size than a product with fewer
features. Each function when invoked reads some input data and transforms it to the
corresponding output data. For example, the issue book feature (as shown in fig. 31.1) of a
Library Automation Software takes the name of the book as input and displays its location and
the number of copies available. Thus, a computation of the number of input and the output data
values to a system gives some indication of the number of functions supported by the system.
Albrecht postulated that in addition to the number of basic functions that a software performs,
the size is also dependent on the number of files and the number of interfaces.

Fig. 31.1: System function as a map of input data to output data

Besides using the number of input and output data values, function point metric computes the size of a software product (in units of function points or FPs) using three other characteristics of the product, as shown in the following expression. The size of a product in function points (FP) can be expressed as the weighted sum of these five problem characteristics. The weights associated with the five characteristics were proposed empirically and validated by observations over many projects. Function point is computed in two steps. The first step is to compute the unadjusted function point (UFP).

UFP = (Number of inputs)*4 + (Number of outputs)*5 + (Number of inquiries)*4 + (Number of files)*10 + (Number of interfaces)*10
Number of inputs: Each data item input by the user is counted. Data inputs should be
distinguished from user inquiries. Inquiries are user commands such as print-account-balance.
Inquiries are counted separately. It must be noted that individual data items input by the user are
not considered in the calculation of the number of inputs, but a group of related inputs are
considered as a single input. For example, while entering the data concerning an employee to an
employee pay roll software; the data items name, age, sex, address, phone number, etc. are
together considered as a single input. All these data items can be considered to be related, since
they pertain to a single employee.

Number of outputs: The outputs considered refer to reports printed, screen outputs, error messages produced, etc. While counting the number of outputs, the individual data items within a report are not considered; a set of related data items is counted as one output.

Number of inquiries: Number of inquiries is the number of distinct interactive queries which
can be made by the users. These inquiries are the user commands which require specific action
by the system.

Number of files: Each logical file is counted. A logical file means groups of logically related
data. Thus, logical files can be data structures or physical files.

Number of interfaces: Here the interfaces considered are the interfaces used to exchange
information with other systems. Examples of such interfaces are data files on tapes, disks,
communication links with other systems etc.

Once the unadjusted function point (UFP) is computed, the technical complexity factor (TCF) is computed next. TCF refines the UFP measure by considering fourteen other factors such as high transaction rates, throughput, response time requirements, etc. Each of these 14 factors is assigned a value from 0 (not present or no influence) to 5 (strong influence). The resulting numbers are summed, yielding the total degree of influence (DI). Now, TCF is computed as (0.65 + 0.01*DI). As DI can vary from 0 to 70, TCF can vary from 0.65 to 1.35. Finally, FP = UFP*TCF.
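The two-step computation above translates directly into a short function. The sketch below uses exactly the weights and the TCF expression given in the text; the example counts are hypothetical:

```python
def function_points(inputs: int, outputs: int, inquiries: int,
                    files: int, interfaces: int,
                    degree_of_influence: int) -> float:
    """Compute FP using the weights and TCF expression given above.

    degree_of_influence (DI) is the sum of the ratings of the 14
    technical factors and must lie between 0 and 70.
    """
    ufp = (inputs * 4 + outputs * 5 + inquiries * 4
           + files * 10 + interfaces * 10)
    tcf = 0.65 + 0.01 * degree_of_influence  # varies from 0.65 to 1.35
    return ufp * tcf

# Example: 10 inputs, 8 outputs, 5 inquiries, 4 files, 2 interfaces, DI = 35
# UFP = 40 + 40 + 20 + 40 + 20 = 160, TCF = 1.00, so FP = 160.0
print(function_points(10, 8, 5, 4, 2, 35))
```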

Shortcomings of the LOC metric

LOC as a measure of problem size has several shortcomings:

LOC gives a numerical value of problem size that can vary widely with individual coding style –
different programmers lay out their code in different ways. For example, one programmer might
write several source instructions on a single line whereas another might split a single instruction
across several lines. Of course, this problem can be easily overcome by counting the language
tokens in the program rather than the lines of code. However, a more intricate problem arises
because the length of a program depends on the choice of instructions used in writing the
program. Therefore, even for the same problem, different programmers might come up with
programs having different LOC counts. This situation does not improve even if language tokens
are counted instead of lines of code.

A good problem size measure should consider the overall complexity of the problem and the
effort needed to solve it. That is, it should consider the total effort needed to specify, design,
code, test, etc. and not just the coding effort. LOC, however, focuses on the coding activity
alone; it merely computes the number of source lines in the final program. We have already seen
that coding is only a small part of the overall software development activities. It is also wrong to
argue that the overall product development effort is proportional to the effort required in writing
the program code. This is because even though the design might be very complex, the code
might be straightforward and vice versa. In such cases, code size is a grossly improper indicator
of the problem size.

LOC measure correlates poorly with the quality and efficiency of the code. Larger code size does
not necessarily imply better quality or higher efficiency. Some programmers produce lengthy
and complicated code as they do not make effective use of the available instruction set. In fact, it
is very likely that a poor and sloppily written piece of code might have larger number of source
instructions than a piece that is neat and efficient.

The LOC metric penalizes the use of higher-level programming languages, code reuse, etc. The
paradox is that if a programmer consciously uses several library routines, then the LOC count
will be lower. This would show up as a smaller program size. Thus, if managers use the LOC
count as a measure of the effort put in by different engineers (that is, productivity), they would
be discouraging code reuse by engineers.

The LOC metric measures the lexical complexity of a program and does not address the more
important but subtle issues of logical or structural complexity. Between two programs with an
equal LOC count, a program having complex logic would require much more effort to develop
than a program with very simple logic. To realize why this is so, compare the effort required to
develop a program having multiple nested loop and decision constructs with that required for a
program having only sequential control flow.

It is very difficult to accurately estimate LOC in the final product from the problem
specification. The LOC count can be accurately computed only after the code has been fully
developed. Therefore, the LOC metric is of little use to project managers during project
planning, since project planning is carried out even before any development activity has started.
This is possibly the biggest shortcoming of the LOC metric from the project manager's
perspective.

Feature Point Metric

A major shortcoming of the function point measure is that it does not take into account the
algorithmic complexity of the software. That is, the function point metric implicitly assumes that
the effort required to design and develop any two functionalities of the system is the same. We
know that this is normally not true: the effort required to develop any two functionalities may
vary widely. FP only takes the number of functions that the system supports into consideration,
without distinguishing the difficulty levels of developing the various functionalities. To
overcome this problem, an extension of the function point metric, called the feature point metric,
has been proposed. The feature point metric incorporates an extra parameter: algorithm
complexity. This parameter ensures that the computed size reflects the fact that the greater the
complexity of a function, the greater the effort required to develop it; its size should therefore
be larger compared to simpler functions.
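
The notes do not give an exact formula for feature points, so the following is only a hedged illustration of the idea: a count of the algorithms, weighted by an assumed complexity weight, is added on top of the function point size:

```python
# Hedged feature point sketch: the FP size is augmented with an
# algorithm-count term. The weight of 3 per algorithm is an assumption
# (Capers Jones' proposal), not something these notes specify.

def feature_points(ufp, algorithm_count, algorithm_weight=3):
    return ufp + algorithm_count * algorithm_weight

print(feature_points(ufp=249, algorithm_count=12))  # 285
```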

Project Estimation Techniques

Estimation of various project parameters is a basic project planning activity. The important
project parameters that are estimated include: project size, effort required to develop the
software, project duration, and cost. These estimates not only help in quoting the project cost to
the customer, but are also useful in resource planning and scheduling. There are three broad
categories of estimation techniques:

• Empirical estimation techniques
• Heuristic techniques
• Analytical estimation techniques

Empirical Estimation Techniques

Empirical estimation techniques are based on making an educated guess of the project
parameters. While using this technique, prior experience with development of similar products is
helpful. Although empirical estimation techniques are based on common sense, different
activities involved in estimation have been formalized over the years. Two popular empirical
estimation techniques are: Expert judgment technique and Delphi cost estimation.

Expert Judgment Technique

Expert judgment is one of the most widely used estimation techniques. In this approach, an
expert makes an educated guess of the problem size after analyzing the problem thoroughly.
Usually, the expert estimates the cost of the different components (i.e. modules or subsystems) of
the system and then combines them to arrive at the overall estimate. However, this technique is
subject to human errors and individual bias. Also, it is possible that the expert may overlook
some factors inadvertently. Further, an expert making an estimate may not have experience and
knowledge of all aspects of a project. For example, he may be conversant with the database and
user interface parts but may not be very knowledgeable about the computer communication part.
A more refined form of expert judgment is estimation made by a group of experts. Estimation
by a group of experts minimizes factors such as individual oversight, lack of familiarity with a
particular aspect of a project, personal bias, and the desire to win a contract through overly
optimistic estimates. However, the estimate made by a group of experts may still exhibit bias on
issues where the entire group is biased due to reasons such as political considerations. Also, the
decision made by the group may be dominated by overly assertive members.

Delphi Cost Estimation

The Delphi cost estimation approach tries to overcome some of the shortcomings of the expert
judgment approach. Delphi estimation is carried out by a team comprising a group of experts
and a coordinator. In this approach, the coordinator provides each estimator with a copy of the
software requirements specification (SRS) document and a form for recording his cost estimate.
Estimators complete their individual estimates anonymously and submit them to the coordinator.
In their estimates, the estimators mention any unusual characteristics of the product which have
influenced their estimates. The coordinator prepares and distributes a summary of the responses
of all the estimators, including any unusual rationale noted by any of the estimators. Based on
this summary, the estimators re-estimate. This process is iterated for several rounds. However,
no discussion among the estimators is allowed during the entire estimation process, because
otherwise many estimators could easily be influenced by the rationale of an estimator who is
more experienced or senior. After the completion of several iterations of estimation, the
coordinator takes the responsibility of compiling the results and preparing the final estimate.
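
The round structure of Delphi estimation can be mimicked in a short simulation. This is purely illustrative: real estimates come from human experts reading the SRS and the circulated rationale, not from a formula; here each estimator is modeled as simply moving partway toward the circulated median:

```python
# Illustrative simulation of Delphi rounds: the coordinator circulates an
# anonymous summary after each round; estimators revise without discussion.
import statistics

def delphi_rounds(initial_estimates, rounds=3, pull=0.5):
    estimates = list(initial_estimates)
    for r in range(rounds):
        lo, med, hi = min(estimates), statistics.median(estimates), max(estimates)
        print(f"Round {r + 1} summary: min={lo:.0f}, median={med:.0f}, max={hi:.0f}")
        # Model each re-estimate as moving partway toward the circulated
        # median (a stand-in for reading the summarized rationale).
        estimates = [e + pull * (med - e) for e in estimates]
    return statistics.median(estimates)

final = delphi_rounds([120, 200, 150, 90, 300])  # estimates in person-months
print(f"Final compiled estimate: {final:.0f} PM")
```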

COCOMO Model

Boehm proposed COCOMO (COnstructive COst MOdel) in 1981. COCOMO is one of the most
widely used software estimation models in the world. COCOMO predicts the effort and schedule
of a software product based on the size of the software.

The necessary steps in this model are:

1. Get an initial estimate of the development effort from an evaluation of the number of
thousands of delivered lines of source code (KDLOC).

2. Determine a set of 15 multiplying factors from various attributes of the project.

3. Obtain the effort estimate by multiplying the initial estimate by all the multiplying
factors, i.e., multiply the values obtained in steps 1 and 2.

The initial estimate (also called the nominal estimate) is determined by an equation of the form
used in static single-variable models, using KDLOC as the measure of size. To determine the
initial effort Ei in person-months, an equation of the following type is used:

Ei = a * (KDLOC)^b

The values of the constants a and b depend on the project type.

In COCOMO, projects are categorized into three types:

1. Organic

2. Semidetached

3. Embedded

1. Organic: A development project can be considered to be of the organic type if the project
deals with developing a well-understood application program, the size of the development team
is reasonably small, and the team members are experienced in developing similar types of
projects. Examples of this type of project are simple business systems, simple inventory
management systems, and data processing systems.

2. Semidetached: A development project can be considered to be of the semidetached type if the
development team consists of a mixture of experienced and inexperienced staff. Team members
may have limited experience with related systems and may be unfamiliar with some aspects of
the system being developed. Examples of semidetached systems include a new operating
system (OS), a database management system (DBMS), and a complex inventory
management system.

3. Embedded: A development project is considered to be of the embedded type if the software
being developed is strongly coupled to complex hardware, or if stringent constraints on the
operational procedures exist. Examples: ATM software, air traffic control.

For these three product categories, Boehm provides different sets of expressions to predict the
effort (in units of person-months) and development time from the size estimate in KLOC (kilo
lines of code). The effort estimation takes into account the productivity loss due to holidays,
weekly offs, coffee breaks, etc.

According to Boehm, software cost estimation should be done through three stages:

1. Basic Model

2. Intermediate Model

3. Detailed Model

1. Basic COCOMO Model: The basic COCOMO model gives an approximate estimate of the
project parameters. The following expressions give the basic COCOMO estimation model:

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Where

KLOC is the estimated size of the software product expressed in Kilo Lines of Code,

a1,a2,b1,b2 are constants for each group of software products,

Tdev is the estimated time to develop the software, expressed in months,

Effort is the total effort required to develop the software product, expressed in person months
(PMs).

Estimation of development effort

For the three classes of software products, the formulas for estimating the effort based on the
code size are shown below:

Organic: Effort = 2.4 * (KLOC)^1.05 PM

Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM

Embedded: Effort = 3.6 * (KLOC)^1.20 PM

Estimation of development time

For the three classes of software products, the formulas for estimating the development time
based on the effort are given below:

Organic: Tdev = 2.5 * (Effort)^0.38 months

Semi-detached: Tdev = 2.5 * (Effort)^0.35 months

Embedded: Tdev = 2.5 * (Effort)^0.32 months

Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes. A plot of estimated effort versus product size shows
that the effort is somewhat superlinear in the size of the software product; thus, the effort
required to develop a product increases very rapidly with project size. A plot of development
time versus product size in KLOC shows that the development time is a sublinear function of
the size of the product, i.e., when the size of the product doubles, the time to develop the product
does not double but rises only moderately. This can be explained by the fact that for larger
products, a larger number of activities that can be carried out concurrently can be identified.
These parallel activities can be carried out simultaneously by the engineers, which reduces the
time to complete the project. Further, such a plot shows that the development time is roughly
the same for all three categories of products. For example, a 60 KLOC program can be developed
in approximately 18 months, regardless of whether it is of the organic, semidetached, or
embedded type.
From the effort estimation, the project cost can be obtained by multiplying the required effort by
the manpower cost per month. But, implicit in this project cost computation is the assumption
that the entire project cost is incurred on account of the manpower cost alone. In addition to
manpower cost, a project would incur costs due to hardware and software required for the project
and the company overheads for administration, office space, etc.
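
The formulas and constants above are easy to package into a small calculator. The following Python sketch is one way to do it (the 60 KLOC figures it prints agree with the roughly 18-month observation made above):

```python
# Basic COCOMO calculator: effort (person-months) and development time
# (months) from the estimated size in KLOC, using the constants above.

COCOMO_CONSTANTS = {
    # mode: (a1, a2, b1, b2)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[mode]
    effort = a1 * kloc ** a2   # person-months
    tdev = b1 * effort ** b2   # months
    return effort, tdev

for mode in COCOMO_CONSTANTS:
    effort, tdev = basic_cocomo(60, mode)
    print(f"{mode:12s}: effort = {effort:6.1f} PM, Tdev = {tdev:4.1f} months")
# All three modes give Tdev close to 18 months at 60 KLOC.
```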

It is important to note that the effort and duration estimates obtained using the COCOMO
model are called the nominal effort estimate and nominal duration estimate. The term nominal
implies that if one tries to complete the project in a time shorter than the estimated duration,
then the cost increases drastically; but if one completes the project over a period longer than
estimated, there is almost no decrease in the estimated cost value.

Example 1: Suppose a project is estimated at 400 KLOC. Calculate the effort and
development time for each of the three modes, i.e., organic, semidetached, and embedded.

Solution: The basic COCOMO equations take the form:

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months

Estimated size of project = 400 KLOC

(i) Organic mode

E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 months

(ii) Semidetached mode

E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 months

(iii) Embedded mode

E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 ≈ 38 months
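
These three results can also be reproduced with the basic_cocomo sketch given earlier:

```python
# Reproducing Example 1 with the basic_cocomo() sketch defined earlier.
for mode in ("organic", "semidetached", "embedded"):
    effort, tdev = basic_cocomo(400, mode)
    print(f"{mode:12s}: E = {effort:7.2f} PM, D = {tdev:5.2f} months")
# Prints E of about 1295, 2463 and 4773 PM, with D of about 38 months each.
```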

Example 2: A project of size 200 KLOC is to be developed. The software development team has
average experience on similar types of projects. The project schedule is not very tight. Calculate
the effort, development time, average staff size, and productivity of the project.

Solution: The semidetached mode is the most appropriate mode, keeping in view the size,
schedule and experience of the development team.

Hence E = 3.0 * (200)^1.12 = 1133.12 PM
D = 2.5 * (1133.12)^0.35 = 29.3 months

Average staff size = E / D = 1133.12 / 29.3 ≈ 38.67 persons

Productivity = 200,000 LOC / 1133.12 PM ≈ 176 LOC/PM

2. Intermediate Model: The basic COCOMO model assumes that effort is only a function of the
number of lines of code and some constants determined by the project type, whereas in practice
many other attributes of a project influence the effort. The intermediate COCOMO model
therefore refines the initial estimate obtained through the basic COCOMO model by using a set
of 15 cost drivers based on various attributes of software development, grouped as follows
(a sketch following the constants table below illustrates this refinement):
Product attributes -

o Required software reliability extent

o Size of the application database

o The complexity of the product

Hardware attributes -

o Run-time performance constraints

o Memory constraints

o The volatility of the virtual machine environment

o Required turnaround time

Personnel attributes -

o Analyst capability

o Software engineering capability

o Applications experience

o Virtual machine experience

o Programming language experience

Project attributes -

o Use of software tools

o Application of software engineering methods

o Required development schedule

Project        a1    a2    b1    b2

Organic        2.4   1.05  2.5   0.38

Semidetached   3.0   1.12  2.5   0.35

Embedded       3.6   1.20  2.5   0.32
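
A hedged sketch of the refinement step: the nominal effort from the basic model is multiplied by the product of the 15 cost-driver multipliers (often called the effort adjustment factor, EAF). The multiplier values shown here are illustrative placeholders, not Boehm's published tables:

```python
# Intermediate COCOMO refinement (sketch): the nominal effort from the
# basic model is scaled by the product of the cost-driver multipliers.
# Driver values below are illustrative placeholders, not Boehm's tables.
from math import prod

def intermediate_cocomo(kloc, mode, cost_drivers):
    nominal_effort, _ = basic_cocomo(kloc, mode)  # from the earlier sketch
    eaf = prod(cost_drivers.values())             # effort adjustment factor
    return nominal_effort * eaf

drivers = {
    "required_reliability": 1.15,  # high reliability pushes effort up
    "product_complexity":   1.30,
    "analyst_capability":   0.86,  # a capable team pulls effort down
    # the remaining 12 drivers are taken as 1.0 (nominal) in this sketch
}
print(f"Adjusted effort: {intermediate_cocomo(100, 'embedded', drivers):.1f} PM")
```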

3. Detailed COCOMO Model: Detailed COCOMO incorporates all characteristics of the
intermediate version with an assessment of the cost drivers' effect on each step (phase) of the
software engineering process. The detailed model uses different effort multipliers for each cost
driver attribute. In detailed COCOMO, the whole software is divided into multiple modules;
COCOMO is then applied to the individual modules to estimate their effort, and the module
efforts are summed (a simplified sketch follows at the end of this section).

The Six phases of detailed COCOMO are:

1. Planning and requirements

2. System structure

3. Complete structure

4. Module code and test

5. Integration and test

6. Cost Constructive model

The effort is determined as a function of program size, and a set of cost drivers is applied for
every phase of the software lifecycle.
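
The module-wise idea mentioned above can be sketched as follows. For simplicity the basic model is used per module, whereas detailed COCOMO would also apply phase-sensitive multipliers; the module names and sizes are hypothetical:

```python
# Simplified module-wise estimation in the spirit of detailed COCOMO:
# estimate each module separately, then sum the efforts. Real detailed
# COCOMO would also apply phase-sensitive multipliers per module.

modules = {
    # module name: (size in KLOC, mode) -- hypothetical figures
    "user_interface": (8, "organic"),
    "database_layer": (15, "semidetached"),
    "device_control": (5, "embedded"),
}

total_effort = sum(basic_cocomo(kloc, mode)[0]
                   for kloc, mode in modules.values())
print(f"Total estimated effort: {total_effort:.1f} PM")
```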
