2.CS4116PE-Software Project Management-Compressed
Unit 1:
The mere act of measuring human processes changes them because of people’s
fears, and so forth. Measurements are both expensive and disruptive; overzealous
measurements can disrupt the process under study.
Principles of Software Process Change:
People:
•The best people are always in short supply
•You probably have about the best team you can get right now.
•With proper leadership and support, most people can do much better than they
are currently doing.
Design:
•Superior products have superior design. Successful products are designed by
people who understand the application (domain engineer).
•A program should be viewed as executable knowledge. Program designers should
have application knowledge.
The Six Basic Principles of Software Process Change:
•Major changes to the process must start at the top.
•Ultimately, everyone must be involved.
•Effective change requires great knowledge of the current process.
•Change is continuous.
•Software process changes will not be retained without conscious effort and
periodic reinforcement
•Software process improvement requires investment.
Continuous Change:
•Reactive changes generally make things worse
•Every defect is an improvement opportunity
•Crisis prevention is more important than crisis recovery
SOFTWARE PROCESS CHANGES WON'T STICK BY THEMSELVES
The tendency for improvements to deteriorate is characterized by the term
entropy. The first steps in launching a software process change effort are:
•Learn how the organization works
•Identify its major problems
•Enroll its opinion leaders in the change process
The essential approach is to conduct a series of structured interviews with key
people in the
organization to learn their problems, concerns, and creative ideas.
ASSESSMENT OVERVIEW:
A software assessment is not an audit. Audits are conducted for senior managers
who suspect problems and send in experts to uncover them. A software process
assessment is a review of a software organization to advise its management and
professionals on how they can improve their operation.
The phases of assessment are:
•Preparation - Senior management agrees to participate in the process and to take
actions on the resulting recommendations or explain why not. Concludes with a
training program for the assessment team
•Assessment - The on-site assessment period. It lasts from several days to two or
more weeks and concludes with a preliminary report to local management.
•Recommendations - Final recommendations are presented to local managers. A
local action team is then formed to plan and implement the recommendations.
Five Assessment Principles:
•The need for a process model as a basis for assessment
•The requirement for confidentiality
•Senior management involvement
•An attitude of respect for the views of the people in the organization being assessed
•An action orientation
Start with a process model - Without a model, there is no standard and therefore
no measure of change.
Observe strict confidentiality - Otherwise, people will learn that they cannot speak
in confidence. This means managers cannot sit in on interviews with their
subordinates.
Involve senior management - The senior manager (called the site manager here)
sets the organization's priorities. The site manager must be personally involved in
the assessment and its follow-up actions. Without this support, the assessment is
a waste of time because lasting improvement must survive periodic crises.
Respect the people in the assessed organization - They probably work hard and
are trying to improve. Do not appear arrogant; otherwise, they will not cooperate
and may try to prove the team is ineffective. The only source of real information is
the workers.
Assessment recommendations should highlight the three or four items of highest
priority. Don’t overwhelm the organization. The report must always be in writing.
Implementation Considerations - The greatest risk is that no significant
improvement actions will be taken (the “disappearing problem” syndrome).
Superficial changes won’t help. A small, full-time group should guide the
implementation effort, with participation from other action plan working groups.
Don’t forget that site managers can change or be otherwise distracted, so don’t
rely on that person solely, no matter how committed.
THE INITIAL PROCESS (LEVEL 1)
Usually ad hoc and chaotic - Organization operates without formalized
procedures, cost estimates, and project plans. Tools are neither well integrated
with the process nor uniformly applied. Change control is lax, and there is little
senior management exposure or understanding of the problems and issues. Since
many problems are deferred or even forgotten, software installation and
maintenance often present serious problems. While organizations at this level may
have formal procedures for planning and tracking work, there is no management
mechanism to ensure they are used. Procedures are often abandoned in a crisis in
favor of coding and testing. Level 1 organizations don’t use design and code
inspections and other techniques not directly related to shipping a product.
Organizations at Level 1 can improve their performance by instituting basic
project controls.
THE REPEATABLE PROCESS (LEVEL 2)
The key steps to advance from the Repeatable Process to the Defined Process are:
•Establish a software development process architecture: a decomposition of the
development cycle into tasks, each of which has a defined set of prerequisites,
functional descriptions, verification procedures, and task completion
specifications.
•Introduce a family of software engineering methods and technologies. These
include design and code inspections, formal design methods, library control
systems, and comprehensive testing methods. Prototyping and modern languages
should be considered.
THE DEFINED PROCESS (LEVEL 3)
The organization has the foundation for major and continuing change. When faced
with a crisis, the software teams will continue to use the same process that has
been defined.
However, the process is still only qualitative; there is little data to indicate how
much is accomplished or how effective the process is. There is considerable debate
about the value of software process measurements and the best ones to use.
The key steps required to advance from the Defined Process to the next level are:
•Establish a minimum set of basic process measurements to identify the quality
and cost parameters of each process step. The objective is to quantify the relative
costs and benefits of each major process activity, such as the cost and yield of
error detection and correction methods.
•Establish a process database and the resources to manage and maintain it. Cost
and yield data should be maintained centrally to guard against loss, to make it
available for all projects, and to facilitate process quality and productivity
analysis. Provide sufficient process resources to gather and maintain the process
data and to advise project members on its use. Assign skilled professionals to
monitor the quality of the data before entry into the database and to provide
guidance on the analysis methods and interpretation.
•Assess the relative quality of each product and inform management where quality
targets are not being met. This should be done by an independent quality
assurance group.
Quality products are a result of quality processes. CMMI has a strong focus on
quality-related activities including requirements management, quality assurance,
verification, and validation.
Create value for the stockholders: Mature organizations are more likely to make
better cost and revenue estimates than those with less maturity, and then perform
in line with those estimates. CMMI supports quality products, predictable
schedules, and effective measurement to support management in making accurate
and defensible forecasts. This process maturity can guard against project
performance problems that could weaken the value of the organization in the eyes
of investors.
Enhance customer satisfaction: Meeting cost and schedule targets with high-
quality products that are validated against customer needs is a good formula for
customer satisfaction. CMMI addresses all of these ingredients through its
emphasis on planning, monitoring, and measuring, and the improved
predictability that comes with more capable processes.
• If you are improving your integrated product and process development
processes like Organizational Environment for Integration, then you should select
IPPD. The discipline amplifications for IPPD receive special emphasis.
• If you are improving your source selection processes like Integrated Supplier
Management, then you should select Supplier Sourcing (SS). The discipline
amplifications for supplier sourcing receive special emphasis.
• If you are improving multiple disciplines, then you need to work on all the
areas related to those disciplines and pay attention to all of the discipline
amplifications for those disciplines.
Process Areas
Common Features
This chapter discusses the two CMMI representations; the remaining subjects are
covered in the sections that follow.
The People Capability Maturity Model (People CMM, P-CMM) is part of the CMMI product
family of process maturity models. It is a framework to guide organisations in improving
their processes for managing and developing human workforces. It helps organisations to
characterize the maturity of their workforce practices, establish a program of continuous
workforce development, set priorities for improvement actions, integrate workforce
development with Process Improvement, and establish a culture of excellence. PCMM is
based on proven practices in fields of human resources, knowledge management, and
organisational development. P-CMM describes a progression for the continuous
improvement of the HR processes for managing and developing human workforces. The P-
CMM framework enables organisations to incrementally focus on key process areas and to
lay foundations for improvement in workforce practices. Unlike other HR models, P-CMM
requires that key process areas, improvements, interventions, policies, and procedures
are institutionalised across the organisation — irrespective of function or level. Therefore,
all improvements have to percolate throughout the organisation, to ensure consistency of
focus, to place emphasis on a participatory culture, embodied in a team-based
environment, and encouraging individual innovation and creativity.
Process Maturity Rating
The process maturity rating ranges from ad hoc and inconsistently performed
practices to a mature and disciplined development of the knowledge, skills, and
motivation of the workforce.
1. People
2. Process
3. Products, Technology
PSP
The PSP aims to provide software engineers with disciplined methods for
improving their personal software development processes.
TSP
The team software process (TSP) provides a defined operational process framework
that is designed to help teams of managers and engineers organize projects and
produce software products that range in size from small projects of
several thousand lines of code (KLOC) to very large projects greater than half a
million lines of code. The TSP is intended to improve the levels of quality and
productivity of a team's software development project, in order to help them better
meet the cost and schedule commitments of developing a software system. The
initial version of the TSP was developed and piloted by Watts Humphrey in the late
1990s and the Technical Report for TSP sponsored by the U.S. Department of
Defense was published in November 2000. The book by Watts Humphrey,
Introduction to the Team Software Process, presents a view of the TSP intended for
use in academic settings, that focuses on the process of building a software
production team, establishing team goals, distributing team roles, and other
teamwork-related activities. The primary goal of TSP is to create a team
environment for establishing and maintaining a self-directed team, and supporting
disciplined individual work based on the PSP framework. A self-directed team means
that the team manages itself, plans and tracks their work, manages the quality of
their work, and works proactively to meet team goals. TSP has two principal
components: team-building and team-working. Team-building is a process that
defines roles for each team member and sets up teamwork through TSP launch
and periodical relaunch. Team-working is a process that deals with engineering
processes and practices utilized by the team. TSP, in short, provides engineers
and managers with a way to establish and manage their team to produce
high-quality software on schedule and within budget.
Before engineers can participate in the TSP, it is required that they have already
learned about the PSP, so that the TSP can work effectively. Training is also
required for other team members, the team lead and management. The TSP
software development cycle begins with a planning process called the launch, led
by a coach who has been specially trained, and is either certified or provisional.
The launch is designed to begin the team building process, and during this time
teams and managers establish goals, define team roles, assess risks, estimate
effort, allocate tasks, and produce a team plan. During an execution phase,
developers track planned and actual effort, schedule, and defects, meeting
regularly (usually weekly) to report status and revise plans. A development cycle
ends with a Post Mortem to assess performance, revise planning parameters, and
capture lessons learned for process improvement. The coach role focuses on
supporting the team and the individuals on the team as the process expert while
being independent of direct project management responsibility. The team leader
role is different from the coach role in that team leaders are responsible to
management for products and project outcomes, while the coach is responsible for
developing individual and team performance.
Unit 2:
Software Project Management Renaissance
Conventional Software Management, Evolution of Software Economics, Improving
Software
Economics, The old way and the new way.
Life-Cycle Phases and Process artifacts
Engineering and Production stages, inception phase, elaboration phase,
construction phase, transition
phase, artifact sets, management artifacts, engineering artifacts and pragmatic
artifacts, model-based
software architectures.
IN THEORY
1.There are two essential steps common to the development of computer programs:
analysis and coding.
Waterfall Model part 1: The two basic steps to building a program.
2. In order to manage and control all of the intellectual freedom associated with
software development, one must introduce several other "overhead" steps,
including system requirements definition, software requirements definition,
program design, and testing. These steps supplement the analysis and coding
steps. Below Figure illustrates the resulting project profile and the basic steps in
developing a large-scale program.
(Figure: steps in developing a large-scale program - Requirements, Analysis,
Design, Coding, Testing, Operation)
3. The basic framework described in the waterfall model is risky and invites
failure. The testing phase that occurs at the end of the development cycle is the
first event for which timing, storage, input/output transfers, etc., are experienced
as distinguished from analyzed. The resulting design changes are likely to be so
disruptive that the software requirements upon which the design is based are
likely violated. Either the requirements must be modified or a substantial design
change is warranted.
Five necessary improvements for the waterfall model are:
1. Program design comes first. Insert a preliminary program design phase
between the software requirements generation phase and the analysis phase.
By this technique, the program designer assures that the software will
not fail because of storage, timing, and data flux (continuous change). As
analysis proceeds in the succeeding phase, the program designer must impose
on the analyst the storage, timing, and operational constraints in such a way
that he senses the consequences. If the total resources to be applied are
insufficient or if the embryonic (in an early stage of development) operational
design is wrong, it will be recognized at this early stage and the iteration with
requirements and preliminary design can be redone before final design.
2. Document the design. Maintain current and complete documentation of the
design as it evolves.
3. Do it twice. If possible, build a pilot version first, so that critical design
issues that aren't worth studying exhaustively at this early point are rehearsed
in practice, helping the team, finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of
project resources (manpower, computer time, and/or management judgment)
is the test phase. This is the phase of greatest risk in terms of cost and
schedule. It occurs at the latest point in the schedule, when backup
alternatives are least available, if at all. The previous three recommendations
were all aimed at uncovering and solving problems before entering the test
phase. However, even after doing these things, there is still a test phase and
there are still important things to be done, including: (1) employ a team of
test specialists who were not responsible for the original design; (2) employ
visual inspections to spot the obvious errors like dropped minus signs,
missing factors of two, jumps to wrong addresses (do not use the computer to
detect this kind of thing, it is too expensive); (3) test every logic path; (4)
employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal
way so that he has committed himself at earlier points before final delivery.
There are three points following requirements definition where the insight,
judgment, and commitment of the customer can bolster the development
effort. These include a "preliminary software review" following the preliminary
program design step, a sequence of "critical software design reviews" during
program design, and a "final software acceptance review".
IN PRACTICE
Some software projects still practice the conventional software management
approach.
It is useful to summarize the characteristics of the conventional process as it has
typically been applied, which is not necessarily as it was intended. Projects
destined for trouble frequently exhibit the following symptoms:
•Protracted integration and late design breakage
•Late risk resolution
•Requirements-driven functional decomposition
•Adversarial stakeholder relationships
•Focus on documents and review meetings
Protracted Integration and Late Design Breakage:
In the conventional model, the entire system was designed on paper, then
implemented all at once, then integrated. Table 1-1 provides a typical profile of
cost expenditures across the spectrum of software activities.
Late Risk Resolution:
A serious issue associated with the waterfall life cycle was the lack of early risk
resolution. Figure 1.3 illustrates a typical risk profile for
conventional waterfall model projects. It includes four distinct periods of risk
exposure, where risk is defined as the probability of missing a cost, schedule,
feature, or quality goal. Early in the life cycle, as the requirements were being
specified, the actual risk exposure was highly unpredictable.
The following sequence of events was typical for most contractual software
efforts:
1. The contractor prepared a draft contract-deliverable document that captured
an intermediate artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30
days).
3. The contractor incorporated these comments and submitted (typically within
15 to 30 days) a final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of
customers and contractors.
Focus on Documents and Review Meetings:
The conventional process focused on producing various documents that
attempted to describe the software product, with insufficient focus on producing
tangible increments of the products themselves. Contractors were driven to
produce literally tons of paper to meet milestones and demonstrate progress to
stakeholders, rather than spend their energy on tasks that would reduce risk
and produce quality software. Typically, presenters and the audience reviewed
the simple things that they understood
rather than the complex and important issues. Most design reviews therefore
resulted in low engineering value and high cost in terms of the effort and
schedule involved in their preparation and conduct. They presented merely a
facade of progress.
SOFTWARE ECONOMICS:
Most software cost models can be abstracted into a function of five basic
parameters: size, process, personnel, environment, and required quality.
1.The size of the end product (in human-generated components), which is
typically quantified in terms of the number of source instructions or the
number of function points required to develop the required functionality
2.The process used to produce the end product, in particular the ability of
the process to avoid non-value-adding activities (rework, bureaucratic
delays, communications overhead)
3.The capabilities of software engineering personnel, and particularly their
experience with the computer science issues and the applications domain
issues of the project
4.The environment, which is made up of the tools and techniques available
to support efficient software development and to automate the process
5.The required quality of the product, including its features, performance,
reliability, and adaptability
The relationships among these parameters and the estimated cost can be written
as follows:
Effort = (Personnel)(Environment)(Quality)(Size^Process)
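To make the shape of this relationship concrete, here is a minimal Python sketch of the cost model; the multiplier and exponent values below are hypothetical illustrations, not calibrated data:

# Sketch of the five-parameter cost model described above. All values are
# hypothetical; real models calibrate these multipliers from project data.
def estimate_effort(size_ksloc, process_exponent,
                    personnel=1.0, environment=1.0, quality=1.0):
    # Effort = (Personnel)(Environment)(Quality)(Size^Process)
    return personnel * environment * quality * (size_ksloc ** process_exponent)

# An immature process (exponent well above 1) shows a diseconomy of scale:
print(estimate_effort(100, 1.2))   # ~251 units of effort
# A mature process (exponent near 1) scales almost linearly:
print(estimate_effort(100, 1.01))  # ~105 units of effort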
Table 3-2. Language expressiveness (SLOC per function point)
Language        SLOC per function point
Assembly        320
C               128
FORTRAN 77      105
COBOL 85        91
Ada 83          71
C++             56
Ada 95          55
Java            55
Visual Basic    35
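As a quick illustration of how language expressiveness feeds the size parameter, the following sketch converts a function-point count into approximate SLOC using the Table 3-2 ratios; the 120-FP application size is a made-up example:

# Convert a function-point estimate into approximate SLOC per language,
# using the expressiveness ratios from Table 3-2.
SLOC_PER_FP = {
    "Assembly": 320, "C": 128, "FORTRAN 77": 105, "COBOL 85": 91,
    "Ada 83": 71, "C++": 56, "Ada 95": 55, "Java": 55, "Visual Basic": 35,
}
function_points = 120  # hypothetical application size
for language, ratio in SLOC_PER_FP.items():
    print(f"{language:<13} ~{function_points * ratio:,} SLOC")
# The same functionality needs roughly nine times as much code in Assembly
# as in Visual Basic, which is why language choice affects the size parameter.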
REUSE
Reusing existing components and building reusable components have been
natural software engineering activities since the earliest improvements in
programming languages. The goal of reuse is to minimize development costs
while achieving all the other required attributes of performance, feature set, and
quality. Try to treat reuse as a mundane part of achieving a return on
investment.
Most truly reusable components of value are transitioned to commercial products
supported by organizations with the following characteristics:
•They have an economic motivation for continued support.
•They take ownership of improving product quality, adding new features,
and transitioning to new technologies.
•They have a sufficiently broad customer base to be profitable.
The cost of developing a reusable component is not trivial. Figure 3-1 examines
the economic trade-offs. The steep initial curve illustrates the economic obstacle
to developing reusable components.
Reuse is an important discipline that has an impact on the efficiency of all
workflows and the quality of most artifacts.
COMMERCIAL COMPONENTS
A common approach being pursued today in many domains is to maximize
integration of commercial components and off-the-shelf products. While the use
of commercial components is certainly desirable as a means of reducing custom
development, it has not proven to be straightforward in practice. Table 3-3
identifies some of the advantages and disadvantages of using commercial
components.
Although these three levels of process overlap somewhat, they have different
objectives, audiences, metrics, concerns, and time scales as shown in Table 3-4
Forward engineering is the automation of one engineering artifact from another,
more abstract representation. For example, compilers and linkers have
provided automated transition of source code into executable code.
Reverse engineering is the generation or modification of a more abstract
representation from an existing artifact (for example, creating a visual design
model from a source code representation).
Economic improvements associated with tools and environments: It is common
for tool vendors to make relatively accurate individual assessments of life-cycle
activities to support claims about the potential economic impact of their tools.
For example, it is easy to find statements such as the following from companies
marketing a particular tool:
• Requirements analysis and evolution activities consume 40% of life-cycle
costs.
• Software design activities have an impact on more than 50% of the
resources.
• Coding and unit testing activities consume about 50% of software
development effort and schedule.
• Test activities can consume as much as 50% of a project's resources.
• Configuration control and change management are critical activities that
can consume as much as 25% of resources on a large-scale project.
• Documentation activities can consume more than 30% of project
engineering resources.
• Project management, business administration, and progress assessment
can consume as much as 30% of project budgets.
ACHIEVING REQUIRED QUALITY
Software best practices are derived from the development process and
technologies. Table 3-5 summarizes some dimensions of quality improvement.
Key practices that improve overall software quality include the following:
• Focusing on driving requirements and critical use cases early in the life
cycle, focusing on requirements completeness and traceability late in the life
cycle, and focusing throughout the life cycle on a balance between
requirements evolution, design evolution, and plan evolution
• Using metrics and indicators to measure the progress and quality of an
architecture as it evolves from a high-level prototype into a fully compliant
product
• Providing integrated life-cycle environments that support early and
continuous configuration control, change management, rigorous design
methods, document automation, and regression test automation
• Using visual modeling and higher level languages that support architectural
control, abstraction, reliable programming, reuse, and self-documentation
• Early and continuous insight into performance issues through
demonstration-based evaluations
The top 10 principles of modern software management are as follows. (The first five, which are
the main themes of my definition of an iterative process, are summarized in
Figure 4-1.)
1. Base the process on an architecture-first approach. This requires that a
demonstrable balance be achieved among the driving requirements, the
architecturally significant design decisions, and the life-cycle plans before the
resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early. With
today's sophisticated software systems, it is not possible to define the entire
problem, design the entire solution, build the software, and then test the end
product in sequence. Instead, an iterative process that refines the problem
understanding, an effective solution, and an effective plan over several iterations
encourages a balanced treatment of all stakeholder objectives. Major risks must
be addressed early to increase predictability and avoid expensive downstream
scrap and rework.
3. Transition design methods to emphasize component-based development.
Moving from a line-of-code mentality to a component-based mentality is natural.
Modern software development processes have moved away from the conventional
waterfall model, in which each stage of the development process is dependent on
completion of the previous stage.
The economic benefits inherent in transitioning from the conventional
waterfall model to an iterative development process are significant but difficult to
quantify. As one benchmark of the expected economic impact of process
improvement, consider the process exponent parameters of the COCOMO II
model. (Appendix B provides more detail on the COCOMO model) This exponent
can range from 1.01 (virtually no diseconomy of scale) to 1.26 (significant
diseconomy of scale). The parameters that govern the value of the process
exponent are application precedentedness, process flexibility, architecture risk
resolution, team cohesion, and software process maturity.
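A rough comparison (a sketch, not calibrated data) shows what these exponent extremes mean for a project's relative effort as size grows:

# Relative effort growth under the two extreme COCOMO II process exponents,
# normalized to a 10-KSLOC baseline so the linear multiplier cancels out.
def relative_effort(size_ksloc, exponent, base_ksloc=10):
    return (size_ksloc / base_ksloc) ** exponent

for size in (10, 100, 1000):
    low = relative_effort(size, 1.01)    # virtually no diseconomy of scale
    high = relative_effort(size, 1.26)   # significant diseconomy of scale
    print(f"{size:>5} KSLOC: x{low:,.1f} effort at 1.01, x{high:,.1f} at 1.26")
# At 1,000 KSLOC the immature process needs roughly three times the effort
# of the mature one, which is the payoff of improving the exponent drivers.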
The following paragraphs map the process exponent parameters of COCOMO II
to my top 10 principles of a modern process.
• Application precedentedness. Domain experience is a critical factor in
understanding how to plan and execute a software development project. For
unprecedented systems, one of the key goals is to confront risks and
establish early precedents, even if they are incomplete or experimental. This
is one of the primary reasons that the software industry has moved to an
iterative life-cycle process. Early iterations in the life cycle establish
precedents from which the product, the process, and the plans can be
elaborated in evolving levels of detail.
• Process flexibility. Development of modern software is characterized by
such a broad solution space and so many interrelated concerns that there is
a paramount need for continuous incorporation of changes. These changes
may be inherent in the problem understanding, the solution space, or the
plans. Project artifacts must be supported by efficient change management.
LIFE-CYCLE PHASES
A modern software life cycle comprises two stages:
1.The engineering stage, driven by smaller teams doing design and synthesis
activities
2.The production stage, driven by more predictable but larger teams doing
construction, test, and deployment activities
The transition between engineering and production is a crucial event for the
various stakeholders. The production plan has been agreed upon, and there is a
good enough understanding of the problem and the solution that all stakeholders
can make a firm commitment to go ahead with production.
Engineering stage is decomposed into two distinct phases, inception and
elaboration, and the production stage into construction and transition. These
four phases of the life-cycle process are loosely mapped to the conceptual
framework of the spiral model as shown in Figure 5-1
INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among
stakeholders on the life-cycle objectives for the project.
PRIMARY OBJECTIVES
• Establishing the project's software scope and boundary conditions,
including an operational concept, acceptance criteria, and a clear
understanding of what is and is not intended to be in the product
• Discriminating the critical use cases of the system and the primary
scenarios of operation that will drive the major design trade-offs
• Demonstrating at least one candidate architecture against some of the
primary scenarios
• Estimating the cost and schedule for the entire project (including detailed
estimates for the elaboration phase)
• Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
• Formulating the scope of the project. The information repository should be
sufficient to define the problem space and derive the acceptance criteria for
the end product.
• Synthesizing the architecture. An information repository is created that is
sufficient to demonstrate the feasibility of at least one candidate
architecture and an initial baseline of make/buy decisions so that the cost,
schedule, and resource estimates can be derived.
• Planning and preparing a business case. Alternatives for risk management,
staffing, iteration plans, and cost/schedule/profitability trade-offs are
evaluated.
ELABORATION PHASE
ESSENTIAL ACTIVITIES
• Elaborating the vision.
• Elaborating the process and infrastructure.
• Elaborating the architecture and selecting components.
PRIMARY EVALUATION CRITERIA
• Is the vision stable?
• Is the architecture stable?
• Does the executable demonstration show that the major risk elements have
been addressed and credibly resolved?
• Is the construction phase plan of sufficient fidelity, and is it backed up with
a credible basis of estimate?
• Do all stakeholders agree that the current vision can be met if the current
plan is executed to develop the complete system in the context of the
current architecture?
• Are actual resource expenditures versus planned expenditures acceptable?
CONSTRUCTION PHASE
During the construction phase, all remaining components and application
features are integrated into the application, and all features are thoroughly
tested. Newly developed software is integrated where required. The construction
phase represents a production process, in which emphasis is placed on
managing resources and controlling operations to optimize costs, schedules, and
quality.
PRIMARY OBJECTIVES
•Minimizing development costs by optimizing resources and avoiding
unnecessary scrap and rework
•Achieving adequate quality as rapidly as practical
•Achieving useful versions (alpha, beta, and other test releases) as rapidly as
practical
ESSENTIAL ACTIVITIES
•Resource management, control, and process optimization
•Complete component development and testing against evaluation criteria
•Assessment of product releases against acceptance criteria of the vision
TRANSITION PHASE
PRIMARY OBJECTIVES
•Achieving user self-supportability
•Achieving stakeholder concurrence that deployment baselines are complete
and consistent with the evaluation criteria of the vision
•Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
•Synchronization and integration of concurrent construction increments
into consistent deployment baselines
•Deployment-specific engineering (cutover, commercial packaging and
production, sales rollout kit development, field personnel training)
•Assessment of deployment baselines against the complete vision and
acceptance criteria in the requirements set
EVALUATION CRITERIA
•Is the user satisfied?
•Are actual resource expenditures versus planned expenditures acceptable?
ARTIFACTS OF THE PROCESS
THE ARTIFACT SETS
To make the development of a complete software system manageable, distinct
collections of information are organized into artifact sets. An artifact represents
cohesive information that typically is developed and reviewed as a single entity.
Life-cycle software artifacts are organized into five distinct sets that are roughly
partitioned by the underlying language of the set: management (ad hoc textual
formats), requirements (organized text and models of the problem space), design
(models of the solution space), implementation (human-readable programming
language and associated source files), and deployment (machine-processable
languages and associated files). The artifact sets are shown in Figure 6-1.
Management set artifacts are evaluated, assessed, and measured through a
combination of the following:
•Relevant stakeholder review
•Analysis of changes between the current version of the artifact and
previous versions
•Major milestone demonstrations of the balance among all artifacts and, in
particular, the accuracy of the business case and vision artifacts
Deployment Set
The deployment set includes user deliverables and machine language notations,
executable software, and the build scripts, installation scripts, and executable
target specific data necessary to use the product in its target environment.
Deployment sets are evaluated, assessed, and measured through a combination
of the following:
•Testing against the usage scenarios and quality attributes defined in the
requirements set to evaluate the consistency and completeness and the
semantic balance between information in the two sets
•Testing the partitioning, replication, and allocation strategies in mapping
components of the implementation set to physical resources of the
deployment system (platform type, number, network topology)
•Testing against the defined usage scenarios in the user manual such as
installation, user-oriented dynamic reconfiguration, mainstream usage, and
anomaly management
•Analysis of changes between the current version of the deployment set and
previous versions (defect elimination trends, performance changes)
•Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life
cycle; the other sets take on check and balance roles. As illustrated in Figure
6-2, each phase has a predominant focus: Requirements are the focus of the
inception phase; design, the elaboration phase; implementation, the construction
phase; and deployment, the transition phase. The management artifacts also
evolve, but at a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact
sets.
1.Management: scheduling, workflow, defect tracking, change management,
documentation, spreadsheet, resource management, and presentation tools
The inception phase focuses mainly on critical requirements, usually with a
secondary focus on an initial deployment view. During the elaboration phase, there
is much greater depth in requirements, much more breadth in the design set,
and further work on implementation and deployment issues. The main focus of
the construction phase is design and implementation. The main focus of the
transition phase is on achieving consistency and completeness of the deployment
set in the context of the other sets.
TEST ARTIFACTS
•The test artifacts must be developed concurrently with the product from
inception through deployment. Thus, testing is a full-life-cycle activity, not a
late life-cycle activity.
•The test artifacts are communicated, engineered, and developed within the
same artifact sets as the developed product.
•The test artifacts are implemented in programmable and repeatable
formats (as software programs).
•The test artifacts are documented in the same way that the product is
documented.
•Developers of the test artifacts use the same tools, techniques, and training
as the software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies
the relationship between test artifacts and the other artifact sets. Consider a
project to perform seismic data processing for the purpose of oil exploration. This
system has three fundamental subsystems: (1) a sensor subsystem that captures
raw seismic data in real time and delivers these data to (2) a technical operations
subsystem that converts raw data into an organized database and manages
queries to this database from (3) a display subsystem that allows workstation
operators to examine seismic data in human-readable form. Such a system
would result in the following test artifacts:
•Management set. The release specifications and release descriptions
capture the objectives, evaluation criteria, and results of an intermediate
milestone. These artifacts are the test plans and test results negotiated among
internal project teams. The software change orders capture test results
(defects, testability changes, requirements ambiguities, enhancements) and
the closure criteria associated with making a discrete change to a baseline.
•Requirements set. The system-level use cases capture the operational
concept for the system and the acceptance test case descriptions, including the
expected behavior of the system and its quality attributes. The entire
requirement set is a test artifact because it is the basis of all assessment
activities across the life cycle.
•Design set. A test model for nondeliverable components needed to test the
product baselines is captured in the design set. These components include
such design set artifacts as a seismic event simulation for creating realistic
sensor data; a "virtual operator" that can support unattended, after-hours
test cases; specific instrumentation suites for early demonstration of
resource usage; transaction rates or response times; and use case test
drivers and component stand-alone test drivers.
Release Descriptions
Release description documents describe the results of each release, including
performance against each of the evaluation criteria in the corresponding release
specification. Release baselines should be accompanied by a release description
document that describes the evaluation criteria for that configuration baseline
and provides substantiation (through demonstration, testing, inspection, or
analysis) that each criterion has been addressed in an acceptable manner. Figure
6-7 provides a default outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status,
including the software project manager's risk assessment, quality indicators, and
management indicators. Typical status assessments should include a review of
resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, total
project or product scope, and action items.
Environment
An important emphasis of a modern approach is to define the development and
maintenance environment as a first-class artifact of the process. A robust,
integrated development environment must support automation of the
development process.
This environment should include requirements management, visual modeling,
document automation, host and target programming tools, automated regression
testing, and continuous and integrated change management, and feature and
defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could
include several document subsets for transitioning the product into operational
status.
In big contractual efforts in which the system is delivered to a separate
maintenance organization, deployment artifacts may include computer system
operations manuals, software installation manuals, plans and procedures for
cutover (from a legacy system), site surveys, and so forth. For commercial
software products, deployment artifacts may include marketing plans, sales
rollout kits, and training courses.
Management Artifact Sequences
In each phase of the life cycle, new artifacts are produced and previously
developed artifacts are updated to incorporate lessons learned and to capture
further depth and breadth of the solution. Figure 6-8 identifies a typical
sequence of artifacts across the life-cycle phases.
ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations
such as UML, programming languages, or executable machine codes. Three
engineering artifacts are explicitly intended for more general review, and they
deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under
development and supports the contract between the funding authority and the
development organization. A project vision is meant to be changeable as
understanding evolves of the requirements, architecture, plans, and technology.
A good vision document should change slowly. Figure 6-9 provides a default
outline for a vision document.
Architecture Description
The architecture description provides an organized view of the software
architecture under development. It is extracted largely from the design model
and includes views of the design, implementation, and deployment sets sufficient
to understand how the operational concept of the requirements set will be
achieved. The breadth of the architecture description will vary from project to
project depending on many factors. Figure 6-10 provides a default outline for an
architecture description.
Software User Manual
The software user manual provides the user with the reference documentation
necessary to support the delivered software. Although content is highly variable
across application domains, the user manual should include installation
procedures, usage procedures and guidance, operational constraints, and a user
interface description, at a minimum. For software products with a user interface,
this manual should be developed early in the life cycle because it is a necessary
mechanism for communicating and stabilizing an important subset of
requirements. The user manual should be written by members of the test team,
who are more likely to understand the user's perspective than the development
team.
PRAGMATIC ARTIFACTS
• People want to review information but don't understand the language of the
artifact. Many interested reviewers of a particular artifact will resist having
to learn the engineering language in which the artifact is written. It is not
uncommon to find people (such as veteran software managers, veteran
quality assurance specialists, or an auditing authority from a regulatory
agency) who react as follows: "I'm not going to learn UML, but I want to
review the design of this software, so give me a separate description such as
some flowcharts and text that I can understand."
• People want to review the information but don't have access to the tools. It
is not very common for the development organization to be fully tooled; it is
extremely rare that the other stakeholders have any capability to review the
engineering artifacts on-line. Consequently, organizations are forced to
exchange paper documents. Standardized formats (such as UML,
spreadsheets, Visual Basic, C++, and Ada 95), visualization tools, and the Web are
rapidly making it economically feasible for all stakeholders to exchange
information electronically.
• Human-readable engineering artifacts should use rigorous notations that
are complete, consistent, and used in a self-documenting manner. Properly
spelled English words should be used for all identifiers and descriptions.
Acronyms and abbreviations should be used only where they are well
accepted jargon in the context of the component's usage. Readability should
be emphasized and the use of proper English words should be required in
all engineering artifacts. This practice enables understandable
representations, browsable formats (paperless review), more-rigorous
notations, and reduced error rates.
• Useful documentation is self-defining: It is documentation that gets used.
• Paper is tangible; electronic artifacts are too easy to change. On-line and
Web-based artifacts can be changed easily and are viewed with more
skepticism because of their inherent volatility.
MODEL BASED SOFTWARE ARCHITECTURE
ARCHITECTURE: A MANAGEMENT PERSPECTIVE
The most critical technical product of a software project is its architecture: the
infrastructure, control, and data interfaces that permit software components to
cooperate as a system and software designers to cooperate efficiently as a team.
When the communications media include multiple languages and intergroup
literacy varies, the communications problem can become extremely complex and
even unsolvable. If a software development team is to be successful, the
interproject communications, as captured in the software architecture, must be
both accurate and precise.
From a management perspective, there are three different aspects of
architecture.
1.An architecture (the intangible design concept) is the design of a software
system. This includes all engineering necessary to specify a complete bill of
materials.
2.An architecture baseline (the tangible artifacts) is a slice of information
across the engineering artifact sets sufficient to satisfy all stakeholders that
the vision (function and quality) can be achieved within the parameters of
the business case (cost, profit, time, technology, and people).
3.An architecture description (a human-readable representation of an
architecture, which is one of the components of an architecture baseline) is an
organized subset of information extracted from the design set model(s). The
architecture description communicates how the intangible concept is
realized in the tangible artifacts.
The number of views and the level of detail in each view can vary widely.
The importance of software architecture and its close linkage with modern soft-
ware development processes can be summarized as follows:
•Achieving a stable software architecture represents a significant project
milestone at which the critical make/buy decisions should have been
resolved.
•Architecture representations provide a basis for balancing the trade-offs
between the problem space (requirements and constraints) and the solution
space (the operational product).
•The architecture and process encapsulate many of the important (high-
payoff or high-risk) communications among individuals, teams,
organizations, and stakeholders.
•Poor architectures and immature processes are often given as reasons for
project failures.
•A mature process, an understanding of the primary requirements, and a
demonstrable architecture are important prerequisites for predictable
planning.
•Architecture development and process definition are the intellectual steps
that map the problem to a solution without violating the constraints; they
require human innovation and cannot be automated.
UNIT - III
Workflows and Checkpoints of process, Software process workflows, Iteration
workflows, Major milestones, minor milestones, periodic status assessments.
Process Planning Work breakdown structures, Planning guidelines, cost and
schedule estimating process, iteration planning process, Pragmatic planning.
SOFTWARE MANAGEMENT PROCESS FRAMEWORK:
Software process workflows:
The term workflow is used to mean a thread of cohesive and mostly
sequential activities. Workflows are mapped to product artifacts.
There are seven top level workflows:
1. Management workflow: controlling the process and ensuring win conditions
for all stakeholders
2. Environment workflow: automating the process and evolving the maintenance
environment
3. Requirements workflow: analyzing the problem space and evolving the
requirements artifacts.
4. Design workflow: modeling the solution and evolving the architecture and
design artifacts
5. Implementation workflow: programming the components and evolving the
implementation and deployment artifacts
6. Assessment workflow: assessing the trends in process and product quality
7. Deployment workflow: transitioning the end products to the user
Four basic key principles of the modern process framework:
Architecture-first approach: implementing and testing the architecture must precede
full-scale development and testing and must precede the downstream focus on
completeness and quality of the product features.
Iterative life-cycle process: the activities and artifacts of any given workflow may
require more than one pass to achieve adequate results.
Roundtrip engineering: Raising the environment activities to a first-class workflow is
critical; the environment is the tangible embodiment of the project’s process and
notations for producing the artifacts.
Demonstration-based approach: Implementation and assessment activities are
initiated early in the life cycle, reflecting the emphasis on constructing executable
subsets of the evolving architecture.
ITERATION WORKFLOWS
An iteration consists of a sequential set of activities in various proportions,
depending on where the iteration is located in the development cycle. Each
iteration is defined in terms of a set of allocated usage scenarios. The components
needed to implement all selected scenarios are developed and integrated with the
results of previous iterations. An individual iteration's workflow is illustrated in
the following sequence:
Management: Iteration planning to determine the content of the release and develop
the detailed plan for the iteration, assignment of work packages, or tasks, to the
development team.
Environment: evolving the software change order database to reflect all new
baselines and changes to existing baselines for all product, test and environment
components
Requirements: analyzing the baseline plan, the baseline architecture, and the
baseline requirements set artifacts to elaborate fully the use cases to be
demonstrated at the end of the iteration and their evaluation criteria.
Design: Evolving the baseline architecture and the baseline design set artifacts to
elaborate fully the design model and test model components necessary to
demonstrate against the evaluation criteria allocated to this iteration.
It is important to have visible milestones in the life cycle, where various stakeholders
meet to discuss progress and plans.
The purpose of these events is to:
Synchronize stakeholder expectations and achieve concurrence on the requirements,
the design, and the plan.
Synchronize related artifacts into a consistent and balanced state.
Identify the important risks, issues, and out-of-tolerance conditions.
Perform a global assessment for the whole life cycle.
Three types of joint management reviews are conducted throughout the process:
Major milestones - provide visibility to system-wide issues, synchronize the
management and engineering perspectives, and verify that the aims of the phase
have been achieved.
Minor milestones - iteration-focused reviews conducted to assess the content of
an iteration in detail and to authorize continued work.
Status assessments - periodic reviews that provide management with frequent
and regular insight into the progress being made.
MAJOR MILESTONES
The four major milestones occur at the transition points between life-cycle phases.
They can be used in many different process models, including the conventional
waterfall model. In an iterative model, the major milestones are used to achieve
concurrence among all stakeholders on the current state of the project. Different
stakeholders have very different concerns:
Customers: schedule and budget estimates, feasibility, risk assessment,
requirements understanding, progress, product line compatibility
Users: consistency with requirements and usage scenarios, potential for
accommodating growth, quality attributes.
Architects and systems engineers: product line compatibility, requirements
change, tradeoff analyses, completeness and consistency, balance among risk,
quality, and usability.
All iterations are not created equal. An iteration can take on very different forms and
priorities, depending on where the project is in the life cycle. Early iterations focus on
analysis and design with substantial elements of discovery, experimentation, and risk
assessment. Later iterations focus much more on completeness, consistency,
usability, and change management.
Iteration readiness review: this informal milestone is conducted at the start of each
iteration to review the detailed iteration plan and the evaluation criteria that have
been allocated to this iteration.
Iteration assessment review: this informal milestone is conducted at the end of each
iteration to assess the degree to which the iteration achieved its objectives and
satisfied its evaluation criteria, to review iteration results, to review qualification
test results, to determine the amount of rework to be done, and to review the
impact of the iteration results on the plan for subsequent iterations.
PERIODIC STATUS ASSESSMENTS
Periodic status assessments are management reviews conducted at regular intervals to
address progress and quality indicators, ensure continuous attention to project
dynamics, and maintain open communications among all stakeholders.
Status assessments provide the following:
A mechanism for openly addressing, communicating, and resolving management
issues, technical issues, and project risks
Objective data directly from ongoing activities and evolving product configurations
A mechanism for disseminating process, progress, quality trends, practices, and
experience information to and from all stakeholders in an open forum.
The default content of periodic status assessments should include the topics
identified in the following table.
ITERATIVE PROCESS PLANNING
A WBS is simply a hierarchy of elements that decomposes the project plan into the
discrete work tasks. A WBS provides the following information structure:
A delineation of all significant work
A clear task decomposition for assignment of responsibilities
A framework for scheduling, budgeting, and expenditure tracking.
The development of a work breakdown structure is dependent on the project
management style, organizational culture, customer preference, financial constraints
and several other hard- to-define parameters.
Conventional WBS Issues:
Conventional WBSs frequently suffer from three fundamental flaws:
Conventional WBSs are prematurely structured around the product design:
Once this structure is ingrained in the WBS and then allocated to responsible
managers with budgets, schedules and expected deliverables, a concrete planning
foundation has been set that is difficult and expensive to change.
Conventional WBSs are prematurely decomposed, planned, and budgeted in
either too much or too little detail:
Large software projects tend to be over-planned and small projects tend to be
under-planned. The WBS shown in the above figure is overly simplistic for
most large-scale systems, where six or more levels of WBS elements are
commonplace.
Conventional WBSs are project-specific, and cross-project comparisons are
usually difficult or impossible:
Most organizations allow individual projects to define their own project-specific
structure tailored to the project manager’s style, the customer’s demands, or
other project-specific preferences.
It is extremely difficult to compare plans, financial data, schedule data,
organizational efficiencies, cost trends, productivity tends, or quality tends
across multiple projects.
Some of the following simple questions, which are critical to any organizational
process improvement program, cannot be answered by most project teams
that use conventional WBS.
What is the ratio of productive activities to overhead activities?
What is the percentage of effort expended in rework activities?
What is the percentage of cost expended in software capital equipment?
An evolutionary WBS should organize the planning elements around the process framework rather than the product framework. The basic recommendation is to organize the WBS hierarchy as follows:
First-level elements are defined for each workflow of the process (management, environment, requirements, design, implementation, assessment, and deployment).
Second-level elements are defined for each phase of the life cycle (inception, elaboration, construction, and transition).
Third-level elements are defined for the focus of activities that produce the artifacts of each phase.
A default WBS consistent with the process framework (phases, workflows, and
artifacts) is shown in the following figure
The structure shown is intended to be merely a starting point. It needs to be
tailored to the specifics of a project in many ways.
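To make the recommended structure concrete, here is a minimal sketch (purely illustrative; the single third-level activity shown is hypothetical) of a default WBS skeleton organized by workflow, phase, and activity:

```python
# A minimal sketch (illustrative, not from the text) of the recommended
# evolutionary WBS: first-level elements are workflows, second-level
# elements are life-cycle phases, and third-level elements are the
# activities that produce each phase's artifacts.

FIRST_LEVEL_WORKFLOWS = [
    "Management", "Environment", "Requirements", "Design",
    "Implementation", "Assessment", "Deployment",
]
SECOND_LEVEL_PHASES = ["Inception", "Elaboration", "Construction", "Transition"]

def default_wbs():
    """Build a default WBS skeleton: workflow -> phase -> list of activities."""
    wbs = {}
    for workflow in FIRST_LEVEL_WORKFLOWS:
        wbs[workflow] = {phase: [] for phase in SECOND_LEVEL_PHASES}
    # Third-level activities are project-specific; a hypothetical example:
    wbs["Design"]["Elaboration"].append("Architecture baseline design")
    return wbs

for workflow, phases in default_wbs().items():
    print(workflow, "->", list(phases))
```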
PLANNING GUIDELINES
• Software projects span a broad range of application domains, so it is valuable but risky to make specific planning recommendations independent of project context: the guidelines may be adopted blindly without being adapted to specific project circumstances. With that caveat, two simple planning guidelines should be considered when a project plan is being initiated or assessed. The first guideline, detailed in Table 10-1, prescribes a default allocation of costs among the first-level WBS elements. The second guideline, detailed in Table 10-2, prescribes a default allocation of effort and schedule across the life-cycle phases.
WBS budgeting defaults:
Management       10%
Environment      10%
Requirements     10%
Design           15%
Implementation   25%
Assessment       25%
Deployment        5%
Total           100%
These top-down guidelines are then reconciled with a bottom-up estimate, in three steps (a small sketch follows the list):
1. The lowest level WBS elements are elaborated into detailed tasks
2. Estimates are combined and integrated into higher level budgets and
milestones.
3. Comparisons are made with the top-down budgets and schedule
milestones.
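A minimal sketch of this top-down/bottom-up reconciliation, assuming the default percentages from the table above; the total budget, the task names, and the task costs are all hypothetical:

```python
# Top-down: allocate a total budget across first-level WBS elements
# using the default percentages from the budgeting table above.
DEFAULT_ALLOCATION = {
    "Management": 0.10, "Environment": 0.10, "Requirements": 0.10,
    "Design": 0.15, "Implementation": 0.25, "Assessment": 0.25,
    "Deployment": 0.05,
}

def top_down_budget(total):
    """Spread a total budget over the first-level WBS elements."""
    return {element: total * share for element, share in DEFAULT_ALLOCATION.items()}

# Bottom-up: lowest-level tasks are estimated in detail, rolled up into
# element-level totals, and compared with the top-down figures.
detailed_tasks = {                      # hypothetical task estimates
    "Design": [30_000, 45_000, 80_000],
    "Implementation": [120_000, 95_000, 60_000],
}

top_down = top_down_budget(1_000_000)   # hypothetical total budget
for element, tasks in detailed_tasks.items():
    bottom_up = sum(tasks)
    delta = bottom_up - top_down[element]
    print(f"{element}: top-down {top_down[element]:,.0f}, "
          f"bottom-up {bottom_up:,.0f}, delta {delta:+,.0f}")
```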
PRAGMATIC PLANNING
• Even though good planning is more dynamic in an iterative process, doing it accurately is far easier. While executing iteration N of any phase, the software project manager must be monitoring and controlling against a plan that was initiated in iteration N-1 and must be planning iteration N+1. The art of good project management is to make trade-offs in the current iteration plan and the next iteration plan based on objective results from the current and previous iterations. Aside from bad architectures and misunderstood requirements, inadequate planning (and subsequent bad management) is one of the most common reasons for project failures. Conversely, the success of every successful project can be attributed in part to good planning.
• A project's plan is a definition of how the project requirements will be transformed into a product within the business constraints. It must be realistic, it must be current, it must be a team product, it must be understood by the stakeholders, and it must be used. Plans are not just for managers. The more open and visible the planning process and results, the more ownership there is among the team members who need to execute it. Bad, closely held plans cause attrition. Good, open plans can shape cultures and encourage teamwork.
UNIT 4:
PROJECT ORGANIZATIONS
Line-of-business organizations, project organizations, evolution of organizations, process automation. Project control and process instrumentation, the seven core metrics, management indicators, quality indicators, life-cycle expectations, pragmatic software metrics, metrics automation.
PROJECT ORGANIZATION AND RESPONSIBILITIES:
INTRODUCTION: Software lines of business and project teams have different motivations. Software lines of business are motivated by return on investment, new business discriminators, market diversification, and profitability. Project teams are motivated by the cost, schedule, and quality of specific deliverables. Software professionals in both types of organizations are motivated by career growth, job satisfaction, and the opportunity to make a difference.
LINES-OF-BUSINESS ORGANIZATIONS: Figure 11-1 maps roles and responsibilities to a default line-of-business organization. This structure can be tailored to specific circumstances.
The main features of the default organization are as follows:
• Responsibility for process definition and maintenance is specific to a cohesive line of business.
• Responsibility for process automation is an organizational role and is equal in importance to the process definition role.
• Organizational roles may be fulfilled by a single individual or by several different teams, depending on the scale of the organization.
• Resource management
• Stakeholder satisfaction
• Risk management
• Assignment of personnel
• Project control and scope definition
• Quality assurance
SOFTWARE ARCHITECTURE TEAM
PROCESS AUTOMATION
▪ The three levels of process are:
▪ Metaprocess: an organization's policies, procedures, and practices for managing a software-intensive line of business. The automation support for this level is called an infrastructure. An infrastructure is an inventory of preferred tools, artifact templates, microprocess guidelines, macroprocess guidelines, a project performance repository, a database of organizational skill sets, and a library of precedent examples of past project plans and results.
▪ Macroprocess: a project's policies, procedures, and practices for producing a complete software product within certain cost, schedule, and quality constraints. The automation support for a project's process is called an environment. An environment is a specific collection of tools to produce a specific set of artifacts as governed by a specific project plan.
▪ Microprocess: a project team's policies, procedures, and practices for achieving an artifact of the software process. The automation support for generating an artifact is generally called a tool. Typical tools include requirements management, visual modeling, compilers, editors, and debuggers.
▪ MANAGEMENT
▪ There are many opportunities for automating the project planning and control activities of the management workflow. Software cost estimation tools and WBS tools are useful for generating the planning artifacts. For managing against a plan, workflow management tools and a software project control panel that can maintain an on-line version of the status assessment are advantageous. This automation support can considerably improve the insight gained from metrics collection and reporting.
▪ ENVIRONMENT
ROUND-TRIP ENGINEERING
▪ Round-trip engineering is the environment support necessary to maintain consistency among the engineering artifacts.
▪ Figure 12-2 depicts some important transitions between information repositories. The automated translation of design models to source code (both forward and reverse engineering) is fairly well established. The automated translation of design models to process (distribution) models is less mature.
CHANGE MANAGEMENT
Change management is as critical to iterative processes as planning. Tracking changes in the technical artifacts is crucial to understanding the true technical progress trends and quality trends toward delivering an acceptable end product or interim release. In a modern process, in which requirements, design, and implementation set artifacts are captured in rigorous notations early in the life cycle and are evolved through multiple generations, change management has become fundamental to all phases and almost all activities.
SOFTWARE CHANGE ORDERS
▪ The atomic unit of software work that is authorized to create, modify, or
obsolesce components within a configuration baseline is called a software
change order (SCO). Software change orders are a key mechanism for
partitioning, allocating, and scheduling software work against an
established software baseline and for assessing progress and quality. The
example SCO shown in Figure 12-3 is a good starting point for describing a
set of change primitives. It shows the level of detail required to achieve the
metrics and change management rigor necessary for a modern software
process.
▪ The basic fields of the SCO are title, description, metrics, resolution, assessment, and disposition.
▪ Title: the title is suggested by the originator and is finalized upon acceptance by the configuration control board (CCB).
▪ Description: the problem description includes the name of the originator, date of origination, CCB-assigned SCO identifier, and relevant version identifiers of related support software.
▪ Metrics: the metrics collected for each SCO are important for planning, for scheduling, and for assessing quality improvement. Change categories are type 0 (critical bug), type 1 (bug), type 2 (enhancement), type 3 (new feature), and type 4 (other).
▪ Resolution: This field includes the name of the person responsible for
implementing the change, the components changed, the actual metrics, and
a description of the change
▪ Assessment: This field describes the assessment technique as inspection,
analysis, demonstration, or test. Where applicable, it should also reference
all existing test cases and new test cases executed, and it should identify all
different test configurations, such as platforms, topologies, and compilers.
▪ Disposition: The SCO is assigned one of the following states by the CCB:
▪ Proposed: written, pending CCB review
▪ Accepted: CCB-approved for resolution
▪ Rejected: closed, with rationale, such as not a problem, duplicate, obsolete
change, resolved by another SCO
▪ Archived: accepted but postponed until a later release
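As an illustration, the SCO fields and disposition states listed above can be modeled directly; the class and the example values below are hypothetical, not part of any real change management tool:

```python
from dataclasses import dataclass
from enum import Enum

class ChangeType(Enum):
    """Change categories from the metrics field of the SCO."""
    CRITICAL_BUG = 0
    BUG = 1
    ENHANCEMENT = 2
    NEW_FEATURE = 3
    OTHER = 4

class Disposition(Enum):
    """States assigned to an SCO by the CCB."""
    PROPOSED = "written, pending CCB review"
    ACCEPTED = "CCB-approved for resolution"
    REJECTED = "closed, with rationale"
    ARCHIVED = "accepted but postponed until a later release"

@dataclass
class SoftwareChangeOrder:
    title: str                  # finalized upon acceptance by the CCB
    description: str            # originator, date, SCO id, related versions
    change_type: ChangeType
    resolution: str = ""        # responsible person, components changed
    assessment: str = ""        # inspection, analysis, demonstration, or test
    disposition: Disposition = Disposition.PROPOSED

# Hypothetical example record.
sco = SoftwareChangeOrder(
    title="Fix message-routing overflow",
    description="Originator: J. Doe; SCO-042; baseline v2.1",
    change_type=ChangeType.BUG,
)
print(sco.disposition.name)     # PROPOSED
```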
CONFIGURATION BASELINE
A configuration baseline is a named collection of software components and supporting documentation that is subject to change management and is upgraded, maintained, tested, statused, and obsolesced as a unit.
There are generally two classes of baselines: external product releases and internal testing releases.
A baseline is controlled formally because it is a packaged exchange between groups.
▪ Appendixes
▪ Current process assessment
▪ Software process improvement plan.
▪ Some of the typical components of an organization’s automation
building blocks are as follows:
▪ Standardized tool selections (through investment by the
organization in a site license or negotiation of a favorable
discount with a tool vendor so that project teams are motivated
economically to use that tool), which promote common
workflows and a higher ROI on training.
▪ Standard notations for artifacts, such as UML for all design
models, or Ada 95 for all custom-developed, reliability-critical
implementation artifacts.
▪ Tool adjuncts such as existing artifact templates (architecture
description, evaluation criteria, release descriptions, status
assessment) or customizations.
▪ Activity templates (iteration planning, major milestone
activities, configuration control boards).
▪ Other indirectly useful components of an organization's infrastructure include the following:
▪ A reference library of precedent experience for planning, assessing, and improving process performance parameters; answers to the questions: How well? How much? Why?
▪ Existing case studies, including objective benchmarks of performance for successful projects that followed the organization's process.
▪ A library of project artifact examples such as software development
plans, architecture descriptions and status assessment histories.
▪ Mock audits and compliance traceability for external process
assessment frameworks.
Quality Indicators
The quality indicators are based on measurement of the changes that occur in the software.
SEVEN CORE METRICS OF SOFTWARE PROJECT
Software metrics instrument the activities and products of the software
development/integration process. Metrics values provide an important perspective
for managing the process. The most useful metrics are extracted directly from the
evolving artifacts.
There are seven core metrics that are used in managing a modern process.
Seven core metrics related to project control:
Management Indicators
1.Work and Progress
2.Budgeted cost and expenditures
3.Staffing and team dynamics
Quality Indicators
4. Change traffic and stability
5. Breakage and modularity
6. Rework and adaptability
7. Mean time between failures (MTBF) and maturity
MANAGEMENT INDICATORS:
1. Work and progress
This metric measures the work performed over time. Work is the effort to be accomplished to complete a certain set of tasks. The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure, then tracking progress (work completed over time) against that plan.
The default perspectives of this metric are:
Software architecture team: use cases demonstrated
Software development team: SLOC under baseline change management, SCOs closed
Software assessment team: SCOs opened, test hours executed, evaluation criteria met
Software management team: milestones completed
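A minimal sketch of tracking work completed over time against a planned estimate; the monthly percentages are hypothetical:

```python
# Hypothetical planned vs. actual progress (percent complete per month).
planned = {1: 5, 2: 15, 3: 30, 4: 50, 5: 70, 6: 85, 7: 100}
actual  = {1: 4, 2: 12, 3: 28, 4: 45}

for month, done in actual.items():
    variance = done - planned[month]      # negative: behind plan
    print(f"Month {month}: actual {done}% vs plan {planned[month]}% "
          f"({variance:+d} points)")
```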
2. Budgeted cost and expenditures
This metric measures financial status over time in terms of three primitives:
Budgeted cost: the planned expenditure profile over the life cycle of the project.
Actual cost: the actual spending profile for a project over its actual schedule.
Earned value: the value that represents the planned cost of the actual progress.
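Using these definitions, the standard earned-value comparisons can be sketched as follows; all figures are hypothetical:

```python
# Earned value = planned cost of the actual progress achieved.
budgeted_cost_of_work_scheduled = 400_000   # budgeted cost to date
budgeted_cost_of_work_performed = 350_000   # earned value
actual_cost = 420_000                       # actual spending to date

cost_variance = budgeted_cost_of_work_performed - actual_cost
schedule_variance = budgeted_cost_of_work_performed - budgeted_cost_of_work_scheduled
print(f"Cost variance: {cost_variance:+,}")          # negative: over budget
print(f"Schedule variance: {schedule_variance:+,}")  # negative: behind plan
```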
3. Staffing and team dynamics
This metric measures personnel changes over time, which involves staffing additions and reductions. An iterative development should start with a small team until the risks in the requirements and architecture have been suitably resolved. Depending on the overlap of iterations and other project-specific circumstances, staffing can vary. An increase in staff can slow overall project progress, as new people consume the productive time of existing people while coming up to speed. Low attrition of good people is a sign of success. The default perspectives of this metric are people per month added and people per month leaving. Together, these three management indicators provide insight into technical progress, financial status, and staffing progress.
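A small sketch of the two default perspectives (people per month added and people per month leaving), with hypothetical monthly figures:

```python
# Net staffing over time from monthly additions and departures.
added   = [4, 6, 5, 2, 1, 0]   # people per month added (hypothetical)
leaving = [0, 1, 1, 1, 2, 3]   # people per month leaving (hypothetical)

headcount = 0
for month, (a, l) in enumerate(zip(added, leaving), start=1):
    headcount += a - l
    print(f"Month {month}: +{a}/-{l}, headcount {headcount}")
```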
4. Change traffic and stability
This metric measures the change traffic over time. The number of software change orders opened and closed over the life cycle is called change traffic. Stability specifies the relationship between opened and closed software change orders. This metric can be collected by change type, by release, across all releases, by team, by component, by subsystem, and so on.
The figure below shows the stability expectation over a healthy project's life cycle.
Figure: the expected stability trend runs from volatile early in the life cycle, through moderate, to stable at the end.
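A minimal sketch of the stability relationship, computed as cumulative opened versus closed SCOs; the traffic figures are hypothetical:

```python
# Change traffic: SCOs opened and closed per period (hypothetical data).
opened = [10, 18, 25, 20, 12, 6]
closed = [2, 10, 20, 24, 16, 10]

cum_open, cum_closed = 0, 0
for period, (o, c) in enumerate(zip(opened, closed), start=1):
    cum_open += o
    cum_closed += c
    backlog = cum_open - cum_closed   # a shrinking backlog suggests stability
    print(f"Period {period}: opened {cum_open}, closed {cum_closed}, "
          f"backlog {backlog}")
```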
METRICS AUTOMATION:
Many opportunities are available to automate the project control activities of
a software project. A Software Project Control Panel (SPCP) is essential for
managing against a plan. This panel integrates data from multiple sources to
show the current status of some aspect of the project. The panel can support
standard features and provide extensive capability for detailed situation
analysis. An SPCP is one example of a metrics automation approach that collects, organizes, and reports values and trends extracted directly from the evolving engineering artifacts.
SPCP:
To implement a complete SPCP, the following are necessary:
➢ Metrics primitives: trends, comparisons, and progressions
➢ A graphical user interface
➢ Metrics collection agents
➢ A metrics data management server
➢ Metrics definitions: actual metrics presentations for requirements progress, implementation progress, assessment progress, design progress, and other progress dimensions
➢ Actors: the monitor and the administrator
The monitor defines panel layouts, graphical objects, and linkages to project data. Specific monitors (called roles) include software project managers, software development team leads, software architects, and customers.
The format and content of any project panel are configurable to the software project manager's preference for tracking metrics of top-level interest. The basic operation of an SPCP can be described by a simple use case: the user starts the SPCP, selects a panel preference, selects the metrics to be displayed, and drills down into any trend that needs further analysis.
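As an illustration only (the metric names, values, and panel layout are hypothetical), an SPCP-style panel boils down to collecting metric values and rendering a configurable subset of them:

```python
# A toy software project control panel: collect metric values from
# (hypothetical) sources and render a configurable status panel.
def collect_metrics():
    # In a real SPCP these would come from metrics collection agents
    # reading the evolving engineering artifacts.
    return {
        "work_and_progress": 0.45,   # fraction of planned work completed
        "cost_expended": 0.50,       # fraction of budget spent
        "sco_backlog": 37,           # open software change orders
    }

def render_panel(metrics, layout):
    """Render only the metrics the monitor selected for this panel."""
    for name in layout:
        print(f"{name:20s} {metrics[name]}")

render_panel(collect_metrics(), layout=["work_and_progress", "cost_expended"])
```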
aspects of the design that aren't worth studying at this early point, and,
finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower, computer time, and management judgment) is the test phase.
This is the phase of greatest risk in terms of cost and schedule.
It occurs at the latest point in the schedule, when backup alternatives are least available, if at all. The previous three recommendations were all aimed at uncovering and solving problems before entering the test phase. However, even after doing these things, there is still a test phase and there are still important things to be done, including:
(1) employ a team of test specialists who were not responsible for the original design;
(2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of two, and jumps to wrong addresses (do not use the computer to detect this kind of thing; it is too expensive);
(3) test every logic path;
(4) employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has committed himself at earlier points before final delivery. There are three points following requirements definition where the insight, judgment, and commitment of the customer can bolster the development effort. These include a "preliminary software review" following the preliminary program design step, a sequence of "critical software design reviews" during program design, and a "final software acceptance review".
UNIT 5:
CCPDS-R Case Study and Future Software Project Management Practices Modern
Project Profiles, Next-Generation software Economics, Modern Process Transitions
COMMAND CENTER PROCESSING AND DISPLAY SYSTEM-REPLACEMENT
(CCPDS-R)
The metrics histories were all derived directly from the artifacts of the project's process. These data were used to manage the project and were embraced by practitioners, managers, and stakeholders.
There are very few well-documented projects with objective descriptions of what worked, what didn't, and why. This was one of my primary motivations for providing the level of detail contained in this appendix. It is heavy in project-specific details, approaches, and results, for three reasons:
1. Generating the case study wasn't much work. CCPDS-R is unique in its
detailed and automated metrics approach. All the data were derived directly from
the historical artifacts of the project's process.
2. This sort of objective case study is a true indicator of a mature organization and
a mature project process. The absolute values of this historical perspective are
only marginally useful. However, the trends, lessons learned, and relative
priorities are distinguishing characteristics of successful software development.
1. The Common Subsystem was the primary missile warning system within the
Cheyenne Mountain Upgrade program. It required about 355,000 source lines of
code, had a 48-month software development schedule, and laid the foundations
for the subsystems that followed (reusable components, tools, environment,
process, procedures). The Common Subsystem included a primary installation in
Cheyenne Mountain, with a backup system deployed at Offutt Air Force Base,
Nebraska.
• The CD phase was very similar in intent to the inception phase. The primary
products were a system specification (a vision document), an FSD phase
proposal (a business case, including the technical approach and a fixed-
price-incentive and award-fee cost proposal), and a software development
plan. The CD phase also included a system design review, technical
interchange meetings with the government stakeholders (customer and
user), and several contract-deliverable documents. These events and
products enabled the FSD source selection to be based on demonstrated
performance of the contractor-proposed team as well as the FSD proposal.
CCPDS-R was also a very large software development activity and was one of
the first projects to use the Ada programming language. There was serious
concern that the Ada development environments, contractor processes, and
contractor training programs might not be mature enough to use on a full-
scale development effort. The purpose of the software engineering exercise
was to demonstrate that the contractor's proposed software process, Ada
environment, and software team were in place, were mature, and were
demonstrable
• Conduct a mock design review with the customer 23 days after receipt of
the specification.
• Three milestones were conducted and more than 30 action items resolved.
• Several needed improvements to the process and the tools were identified.
The concept of evolving the plan, requirements, process, design, and
environment at each major milestone was considered potentially risky but
was implemented with rigorous change management.
4. Test and Simulation (TAS). This CSCI comprised test scenario generation,
test message injection, data recording, and scenario playback.
Continuous Integration
In the iterative development process, the overall architecture of the project is created first, and then all the integration steps are evaluated to identify and eliminate design errors. This approach avoids problems such as downstream integration, late patches, and shoe-horned software fixes by performing sequential, continuous integration rather than large-scale integration at project completion.
In the modern project profile, the distribution of cost among the various workflows of a project is completely different from that of the traditional project profile, as shown below:

WORKFLOW        CONVENTIONAL  MODERN
Management            5%        10%
Environment           5%        10%
Requirements          5%        10%
Design               10%        15%
Implementation       30%        25%
Assessment           40%        25%
Deployment            5%         5%
As shown in the table, modern projects spend only 25% of their budget on integration and assessment activities, whereas traditional projects spend almost 40% of their total budget on these activities. This is because traditional projects involve inefficient large-scale integration and late identification of design issues.
▪ To obtain a useful perspective on risk management, the project life cycle should be viewed through the 80:20 principles of software management, for example:
▪ 80% of the software scrap and rework is caused by 20% of the changes.
▪ The following figure shows the risk management profile of a modern project.
EVOLUTIONARY REQUIREMENTS
▪ In the project life cycle, the first preference is given to the requirements and the second to the design. The third preference is given to the traceability between the requirements and the design components. These preferences are set so that the design structure can evolve into an organization that parallels the structure of the requirements organization.
As shown in the above figure, the top-category system requirements are maintained as the vision, whereas the lower-category requirements are evaluated. The motive behind these artifacts is to gain fidelity as the project life cycle progresses. This is a significant difference from the traditional approach, in which fidelity is expected early in the project life cycle.
The table below shows the tangible results of major milestones in a modern process.
▪ From the above table, it can be observed that the project cannot progress unless all the demonstration objectives are satisfied. This does not prevent the renegotiation of objectives when the demonstration results allow further trade-offs among the requirements, design, plans, and technology.
▪ Modern iterative processes that rely on demonstration results need all their stakeholders to be well educated and to have good analytical ability, so that they can distinguish between apparently negative results and real issues.
If the architecture is the focus of the initial stages, there will be a good foundation for the 20% of the significant elements (the requirements, components, use cases, risks, and errors) that are responsible for the overall success of the project. In other words, if the components involved in the architecture are well known, the expenditure caused by scrap and rework will be comparatively low.
2. Develop an iterative life-cycle process that identifies the risks at an early stage.
The quantity of human-generated source code and customized development can be reduced by concentrating on individual components rather than individual lines of code. The complexity of software is directly proportional to the number of artifacts it contains; that is, if the solution is smaller, the complexity associated with its management is lower.
5. Improve change freedom with the help of automated tools that support round-trip engineering.
Design artifacts that are modeled using a model-based notation such as UML are rich in graphics and text. These modeled artifacts facilitate the following tasks:
▪ Complexity control
▪ Objective fulfillment
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail.
Here, 'levels of detail' refers to the level of understanding of the requirements and architecture. The requirements, iteration content, implementations, and acceptance testing can be organized using cohesive usage scenarios.
▪ According to the Airlie Software Council, there are about nine best practices associated with software management. These practices are implemented in order to reduce the complexity of large projects and to improve software management discipline.
1. Formal risk management: early risk management can be achieved by making use of an iterative life-cycle process that identifies risks at an early stage.
5. Binary quality gates at the inch-pebble level: this practice is frequently misunderstood. Many organizations have developed an expensive, highly detailed plan during the initial phase of the life cycle, only to find that most of the detail had to change later because of small changes in the requirements or architecture. The practice actually means that planning should start with a coarse understanding of the requirements and architecture; milestones are established through the engineering stage, and inch-pebble-level planning is applied in the production stage.
6. Program-wide visibility of progress versus plan: this practice involves direct communication among the team members of a project so that they can discuss significant project issues and see the actual progress of the project in comparison with the estimated progress.
7. Defect tracking against quality targets: this practice is similar to the architecture-first approach and the objective quality control principles of modern software management.
▪ The architecture stage cost model should reflect a certain diseconomy of scale (exponent greater than 1.0) because it is based on research- and development-oriented concerns, whereas the production stage cost model should reflect an economy of scale (exponent less than 1.0) appropriate for the production of commodities.
The process exponent can be less than 1.0 at the time of production because large systems have more automated process components and architectures that are easily reusable.
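The two-stage scaling behavior can be sketched as follows; the coefficients and sizes are hypothetical, and only the contrast between the exponents carries the point:

```python
# Diseconomy of scale (exponent > 1.0) for the architecture/engineering
# stage; economy of scale (exponent < 1.0) for the production stage.
def stage_cost(size, coefficient, exponent):
    return coefficient * size ** exponent

for size in (10_000, 100_000, 1_000_000):       # size in arbitrary units
    architecture = stage_cost(size, 3.0, 1.2)   # exponent > 1.0
    production = stage_cost(size, 3.0, 0.9)     # exponent < 1.0
    print(f"size {size:>9,}: architecture {architecture:,.0f}, "
          f"production {production:,.0f}")
```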
▪ It is technology or schedule-based
• It is cost-based
▪ A rigorous notation should be applied to design artifacts so that predictions of design scale can be improved. Next-generation software cost models should automate the measurement of design scale directly from UML diagrams. There should be two major improvements:
▪ The use of rigorous design notations. This will enable the automation and standardization of scale measures so that they can be easily traced, which helps to determine the total cost associated with production.
▪ The next-generation software process has two potential breakthroughs:
It will reduce the four sets of fundamental technical artifacts to three sets. This is achieved by automating the activities related to human-generated source code so as to eliminate the need for a separate implementation set.
Identifying and solving a software problem in the design phase is almost 100 times more cost-effective than solving the same problem after delivery.
This metric serves as a basis for most software processes. Modern processes, component-based development techniques, and architectural frameworks mainly focus on improving this relationship. Architectural errors are resolved by implementing an architecture-first approach, and a modern process plays a crucial role in the early identification of risks.
Most experts in the software industry find it more difficult to maintain software than to develop it. The ratio between development and maintenance can be measured by comparing productivity rates. One of the interesting facts of iterative development is that the dividing line between development and maintenance is vanishing. Moreover, a good iterative process and architecture will reduce scrap and rework levels, so this 2:1 ratio can be reduced to 1:1.
Both the software development cost and the maintenance cost are dependent on the number of lines in the source code.
This metric was applicable to conventional cost models, which did not account for commercial components, reuse techniques, automated code generators, and the like. The adoption of commercial components, reuse techniques, and automated code generators makes this metric inappropriate; development cost becomes dependent instead on which commercial components and reuse techniques are used and on the cost of integrating them.
Personnel skills, teamwork, and the motivation of employees are the crucial factors responsible for the success or failure of any project. The next-generation cost models must therefore account for these people factors.
As computers become more and more popular, the need for software and hardware applications also increases. Hardware components are becoming cheaper, whereas software applications are becoming more complicated; as a result, highly skilled professionals are needed for developing and controlling the software applications, which in turn increases the cost. In 1955 the software-to-hardware cost ratio was 15:85, and by 1985 this ratio was 85:15. This ratio continues to increase with the need for a variety of software applications. Certain software applications have already been developed that provide automated configuration control and quality assurance analysis. The next-generation cost models must focus on the automation of production and testing.
A software system costs three times as much per SLOC as an individual software program, and a software-system product costs nine times as much.
Next-generation cost models must reduce this diseconomy of scale. An economy of scale should be achievable for custom software systems built with a common architecture, a common environment, and a common process.
▪ Walkthroughs and other forms of human inspection catch only surface and style issues. The critical issues are not caught by walkthroughs, so this metric does not prove to be reliable.
Only 20% of the contributors are responsible for 80% of the contributions.
Lower-level and middle-level managers should participate in project development.
Any organization that has an employee count of 25 or fewer does not need pure managers. The responsibility of the managers in this type of organization is similar to that of a project manager. Pure managers are
needed when personnel resources exceed 25. These managers first understand the status of the project, then develop the plans and estimate the results. The manager should participate in developing the plans. This transition affects the software project managers.
This transition will unmask all the engineering and process issues, so it is mostly resisted by the management team and widely accepted by users, customers, and the engineering team.
The performance of the project can be determined earlier in the life cycle.
The success or failure of any project depends on the planning and architectural phases of the life cycle, so these phases must employ highly skilled professionals; the remaining phases can work well with an average team. The requirements and designs of any successful project evolve along with continuous negotiations and trade-offs. The difference between the real and apparent issues of a successful project can easily be determined. This transition may affect any team of stakeholders.
Most contractors for any software contracting firm focus only on pushing their profit margins beyond the acceptable range of 5% to 15%. They do not look to the quality of the finished product, and as a result the customers are affected. For the software industry to succeed, the finished product must be of good quality and offered at a reasonable rate; then the customer will not worry about the profit the contractor has made.