SPPM Unit 2 Notes Jntuh
UNIT – II
Conventional Software Management: The waterfall model, conventional software management performance.
Evolution of Software Economics: Software Economics, pragmatic software cost estimation.
Improving Software Economics: Reducing Software product size, improving software processes, improving team
effectiveness, improving automation, Achieving required quality, peer inspections.
The Old Way and the New: The principles of conventional software engineering, principles of modern software engineering,
transitioning to an iterative process
Life Cycle Phases and Process artifacts: Engineering and Production stages, inception, elaboration, construction, transition
phase, The artifact sets, management artifacts, engineering artifacts and pragmatic artifacts.
Model Based Software Architecture: A management perspective and technical perspective.
The waterfall model organizes development into the following sequential phases:
1. Requirements
2. Analysis
3. Design
4. Coding
5. Testing
6. Operation
3. The basic framework described in the waterfall model is risky and invites failure. The testing phase that
occurs at the end of the development cycle is the first event for which timing, storage, input/output
transfers, etc., are experienced as distinguished from analyzed. The resulting design changes are likely
to be so disruptive that the software requirements upon which the design is based are likely violated.
Either the requirements must be modified or a substantial design change is warranted.
1. Program design comes first. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase. By this technique, the program designer
assures that the software will not fail because of storage, timing, and data flux (continuous
change). As analysis proceeds in the succeeding phase, the program designer must impose on the
analyst the storage, timing, and operational constraints in such a way that he senses the consequences.
If the total resources to be applied are insufficient or if the embryonic (in an early stage of development)
operational design is wrong, it will be recognized at this early stage and the iteration with requirements
and preliminary design can be redone before final design, coding, and test commences. How is this
program design procedure implemented?
2. Document the design. The amount of documentation required on most software programs is quite a lot,
certainly much more than most programmers, analysts, or program designers are willing to do if left to their own
devices. Why do we need so much documentation? (1) Each designer must communicate with interfacing
designers, managers, and possibly customers. (2) During early phases, the documentation is the design. (3) The
real monetary value of documentation is to support later modifications by a separate test team, a separate
maintenance team, and operations personnel who are not software literate.
3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the version
finally delivered to the customer for operational deployment is actually the second version insofar as critical
design/operations are concerned. Note that this is simply the entire process done in miniature, to a time scale
that is relatively small with respect to the overall effort. In the first version, the team must have a special broad
competence where they can quickly sense trouble spots in the design, model them, model alternatives, forget
the straightforward aspects of the design that aren't worth studying at this early point, and, finally, arrive at an
error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources (manpower,
computer time, and/or management judgment) is the test phase. This is the phase of greatest risk in terms of
cost and schedule. It occurs at the latest point in the schedule, when backup alternatives are least available, if
at all. The previous three recommendations were all aimed at uncovering and solving problems before entering
the test phase. However, even after doing these things, there is still a test phase and there are still important
things to be done, including: (1) employ a team of test specialists who were not responsible for the original
design; (2) employ visual inspections to spot the obvious errors like dropped minus signs, missing factors of
two, jumps to wrong addresses (do not use the computer to detect this kind of thing, it is too expensive); (3)
test every logic path; (4) employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has committed himself
at earlier points before final delivery. There are three points following requirements definition where the insight,
judgment, and commitment of the customer can bolster the development effort. These include a "preliminary
software review" following the preliminary program design step, a sequence of "critical software design reviews"
during program design, and a "final software acceptance review".
1.1.2 IN PRACTICE
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically been applied,
which is not necessarily as it was intended. Projects destined for trouble frequently exhibit the following
symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and interface
ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of suboptimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once, then integrated.
Table 1-1 provides a typical profile of cost expenditures across the spectrum of software activities.
Late risk resolution
A serious issue associated with the waterfall life cycle was the lack of early risk resolution.
Figure 1.3 illustrates a typical risk profile for conventional waterfall model projects. It includes four distinct
periods of risk exposure, where risk is defined as the probability of missing a cost, schedule, feature, or quality
goal. Early in the life cycle, as the requirements were being specified, the actual risk exposure was highly
unpredictable.
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an intermediate artifact
and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a final
version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and contractors.
8. Software systems and products typically cost 3 times as much per SLOC as individual software
programs. Software-system products (i.e., system of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors
The relationships among these parameters and the estimated cost can be written as follows:

Effort = (Personnel)(Environment)(Quality)(Size^Process)
One important aspect of software economics (as represented within today's software cost models) is that the
relationship between effort and size exhibits a diseconomy of scale. The diseconomy of scale of software
development is a result of the process exponent being greater than 1.0. Contrary to most manufacturing processes,
the more software you build, the more expensive it is per unit item.
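The diseconomy of scale can be sketched numerically. The coefficient and exponent below are illustrative assumptions, not calibrated values from any real cost model:

```python
# Illustrative COCOMO-style effort model: effort = c * size**p.
# With a process exponent p > 1.0, cost per unit of size grows with size,
# which is the diseconomy of scale described above. The values of c and p
# here are hypothetical, not calibrated to any real organization.

def effort_person_months(size_ksloc: float, c: float = 3.0, p: float = 1.2) -> float:
    """Estimated effort in person-months for a product of size_ksloc KSLOC."""
    return c * size_ksloc ** p

for size in (10, 100, 1000):
    effort = effort_person_months(size)
    print(f"{size:>5} KSLOC: effort = {effort:8.1f} PM, "
          f"unit cost = {effort / size:.2f} PM/KSLOC")
```

Note how the unit cost (person-months per KSLOC) rises as the product grows, which is the opposite of most manufacturing processes.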
Figure 2-1 shows three generations of basic technology advancement in tools, components, and processes.
The required levels of quality and personnel are assumed to be constant. The ordinate of the graph refers to
software unit costs (pick your favorite: per SLOC, per function point, per component) realized by an organization.
The three generations of software development are defined as follows:
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom processes, and
virtually all custom components built in primitive languages. Project performance was highly predictable
in that cost, schedule, and quality objectives were almost always underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some of the
components (<30%) were available as commercial products, including the operating system, database
management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in the
use of managed and measured processes, integrated automation environments, and mostly
(70%) off-the-shelf components. Perhaps as few as 30% of the components need to be custom built.
Technologies for environment automation, size reduction, and process improvement are not independent of
one another. In each new era, the key is complementary growth in all technologies. For example, the process
advances could not be used successfully without new component technologies and increased tool automation.
Organizations are achieving better economies of scale in successive technology eras, with very large projects
(systems of systems), long-lived products, and lines of business comprising multiple similar projects. Figure 2-2
provides an overview of how a return on investment (ROI) profile can be achieved in subsequent efforts across
life cycles of various domains.
COCOMO is one of the most open and well-documented cost estimation models. The general accuracy of conventional cost models (such
as COCOMO) has been described as "within 20% of actuals, 70% of the time."
Most real-world use of cost models is bottom-up (substantiating a target cost) rather than top-down
(estimating the "should" cost). Figure 2-3 illustrates the predominant practice: The software project manager
defines the target cost of the software, and then manipulates the parameters and sizing until the target cost can be
justified. The rationale for the target cost may be to win a proposal, to solicit customer funding, to attain internal
corporate funding, or to achieve some other goal.
The process described in Figure 2-3 is not all bad. In fact, it is absolutely necessary to analyze the cost risks
and understand the sensitivities and trade-offs objectively. It forces the software project manager to examine the
risks associated with achieving the target costs and to discuss this information with other stakeholders.
A good software cost estimate has the following attributes:
It is conceived and supported by the project manager, architecture team, development team, and test
team accountable for performing the work.
It is accepted by all stakeholders as ambitious but realizable.
It is based on a well-defined software cost model with a credible basis.
It is based on a database of relevant project experience that includes similar processes, similar
technologies, similar environments, similar quality requirements, and similar people.
It is defined in enough detail so that its key risk areas are understood and the probability of success is
objectively assessed.
Extrapolating from a good estimate, an ideal estimate would be derived from a mature cost model with an
experience base that reflects multiple similar projects done by the same team with the same mature processes and
tools.
The most significant way to improve affordability and return on investment (ROI) is usually to produce a product
that achieves the design goals with the minimum amount of human-generated source material. Component-based
development is introduced as the general term for reducing the "source" language size to achieve a software
solution.
Reuse, object-oriented technology, automatic code production, and higher order programming languages are all
focused on achieving a given system with fewer lines of human-specified source directives (statements).
Size reduction is the primary motivation behind improvements in higher order languages (such as C++, Ada 95,
Java, Visual Basic), automatic code generators (CASE tools, visual modeling tools, GUI builders), reuse of
commercial components (operating systems, windowing environments, database management systems,
middleware, networks), and object-oriented technologies (Unified Modeling Language, visual modeling tools,
architecture frameworks).
The reduction is defined in terms of human-generated source material. In general, when size-reducing
technologies are used, they reduce the number of human-generated source lines.
3.1.1 LANGUAGES
Universal function points (UFPs) are useful estimators for language-independent, early life-cycle estimates. The
basic units of function points are external user inputs, external outputs, internal logical data groups, external data
interfaces, and external inquiries. SLOC metrics are useful estimators for software after a candidate solution is
formulated and an implementation language is known. Substantial data have been documented relating SLOC to
function points. Some of these results are shown in Table 3-2.
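As a rough illustration of such SLOC-to-function-point data, the conversion below uses assumed ratios; published tables differ by source and by language version, so these numbers are placeholders, not figures from Table 3-2:

```python
# Early-life-cycle size estimation: convert unadjusted function points to SLOC.
# The SLOC-per-function-point ratios below are assumptions for illustration only.

SLOC_PER_FP = {"assembly": 320, "c": 128, "ada": 71, "c++": 55, "java": 53}

def estimate_sloc(function_points: int, language: str) -> int:
    """Rough SLOC estimate once an implementation language is known."""
    return function_points * SLOC_PER_FP[language.lower()]

fp = 200  # hypothetical early estimate in unadjusted function points
for lang in ("c", "c++", "java"):
    print(f"{lang:>4}: ~{estimate_sloc(fp, lang):,} SLOC")
```

The same functionality costs far fewer human-generated source lines in a higher order language, which is the size-reduction argument made above.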
Object-oriented technology is not germane to most of the software management topics discussed here, and books
on object-oriented technology abound. Object-oriented programming languages appear to benefit both software
productivity and software quality. The fundamental impact of object-oriented technology is in reducing the
overall size of what needs to be developed.
People like drawing pictures to explain something to others or to themselves. When they do it for software
system design, they call these pictures diagrams or diagrammatic models and the very notation for them a
modeling language.
These are interesting examples of the interrelationships among the dimensions of improving software economics.
1. An object-oriented model of the problem and its solution encourages a common vocabulary between
the end users of a system and its developers, thus creating a shared understanding of the problem being
solved.
2. The use of continuous integration creates opportunities to recognize risk early and make incremental
corrections without destabilizing the entire development effort.
Note: Function point metrics provide a standardized method for measuring the various functions of a software application.
1. A ruthless focus on the development of a system that provides a well understood collection of essential
minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not afraid
to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. Reuse is pursued in order to minimize
development costs while achieving all the other required attributes of performance, feature set, and quality. Try
to treat reuse as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported by organizations
with the following characteristics:
In a perfect software engineering world with an immaculate problem description, an obvious solution space, a
development team of experienced geniuses, adequate resources, and stakeholders with common goals, we could
execute a software development process in one iteration with almost no scrap and rework. Because we work in
an imperfect world, however, we need to manage engineering activities so that scrap and rework profiles do not
have an impact on the win conditions of any stakeholder. This should be the underlying premise for most process
improvements.
3.3 IMPROVING TEAM EFFECTIVENESS
Teamwork is much more important than the sum of the individual skills. With software teams, a project manager
needs to configure a balance of solid talent with highly skilled people in the leverage positions. Some maxims of
team management include the following:
A well-managed project can succeed with a nominal engineering team.
A mismanaged project will almost never succeed, even with an expert team of engineers.
A well-architected system can be built by a nominal team of software builders.
A poorly architected system will flounder even with an expert team of builders.
Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right person in the right
job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a prerequisite for
success.
3. Decision-making skill. The jillion books written about management have failed to provide a clear
definition of this attribute. We all know a good leader when we run into one, and decision-making
skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress, exploit
eccentric prima donnas, transition average people into top performers, eliminate misfits, and
consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on decisions
and priorities, sell candidates on job positions, sell changes to the status quo in the face of resistance, and
sell achievements against objectives. In practice, selling requires continuous negotiation, compromise,
and empathy.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on requirements
completeness and traceability late in the life cycle, and focusing throughout the life cycle on a balance
between requirements evolution, design evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it evolves from a
high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous configuration control,
change management, rigorous design methods, document automation, and regression test automation
Using visual modeling and higher level languages that support architectural control, abstraction, reliable
programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based evaluations
Conventional development processes stressed early sizing and timing estimates of computer program
resource utilization. However, the typical chronology of events in performance assessment was as follows:
Project inception. The proposed design was asserted to be low risk with adequate performance margin.
Initial design review. Optimistic assessments of adequate design margin were based mostly on paper
analysis or rough simulation of the critical threads. In most cases, the actual application algorithms and
database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as early
benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating fundamental changes
in the architecture. The underlying infrastructure was usually the scapegoat, but the real culprit was
immature use of the infrastructure, immature architectural solutions, or poorly understood early design
trade-offs.
Transitioning engineering information from one artifact set to another, thereby assessing the consistency,
feasibility, understandability, and technology constraints inherent in the engineering artifacts
Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in the
context of relevant use cases
Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure representation
rigor, consistency, completeness, and change control
Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements
compliance
Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality products. All authors of
software and documentation should have their products scrutinized as a natural by-product of the process.
Therefore, the coverage of inspections should be across all authors rather than across all components.
1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its achievement.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality include involving
the customer, prototyping, simplifying design, conducting inspections, and hiring the best people
3. Give products to customers early. No matter how hard you try to learn users' needs during the requirements
phase, the most effective way to determine real needs is to give users a product and let them play with it
4. Determine the problem before writing the requirements. When faced with what they believe is a problem,
most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore all the alternatives and
don't be blinded by the obvious solution
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of architectures
and algorithms. You certainly do not want to use an "architecture" simply because it was used in the requirements
specification.
6. Use an appropriate process model. Each project must select a process that makes the most sense for that project
on the basis of corporate culture, willingness to take risks, application area, volatility of requirements, and the extent
to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal thirst for simple solutions to complex
problems has driven many to declare that the best development method is one that uses the same notation throughout the life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the software's structure should be as close as
possible to the real-world structure
9. Put techniques before tools. An undisciplined software engineer with a tool becomes a dangerous, undisciplined
software engineer
10. Get it right before you make it faster. It is far easier to make a working program run faster than it is to make a
fast program work. Don't worry about optimization during initial coding
11. Inspect code. Inspecting the detailed design and code is a much better way to find errors than testing
12. Good management is more important than good technology. Good management motivates people to do their
best, but there are no universal "right" styles of management.
13. People are the key to success. Highly skilled people with appropriate experience, talent, and training are key.
14.Follow with care. Just because everybody is doing something does not make it right for you. It may be right, but
you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?" Even when software
fails, we rarely ask this. The fact is that in any engineering discipline, the best methods can be used to produce awful
designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the functionality
delivered late if they could have 10% of it on time.
17. The more they see, the more they need. The more functionality (or performance) you provide a user, the more
functionality (or performance) the user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a product is entirely
new. Such brand-new applications, architectures, interfaces, or algorithms rarely work the first time.
19. Design for change. The architectures, components, and specification techniques you use must accommodate
change.
20. Design without documentation is not design. I have often heard software engineers say, "I have finished the
design. All that is left is the documentation."
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricky constructs that perform a function
correctly, but in an obscure way. Show the world how smart you are by avoiding tricky code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that is easier to test
and much easier to maintain.
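The information-hiding idea in principle 23 can be sketched with a minimal class whose internal representation sits behind a small public interface. The `Account` class is a hypothetical example, not from the text:

```python
# Information hiding: clients use only the public interface, so the internal
# representation (cents stored as an integer) can change without breaking callers.

class Account:
    def __init__(self, balance_cents: int = 0):
        self._balance_cents = balance_cents  # hidden internal detail

    def deposit(self, cents: int) -> None:
        if cents <= 0:
            raise ValueError("deposit must be positive")
        self._balance_cents += cents

    @property
    def balance(self) -> float:
        """Balance in whole currency units, derived from the hidden field."""
        return self._balance_cents / 100

acct = Account()
acct.deposit(2550)
print(acct.balance)  # 25.5
```

Because callers never touch `_balance_cents` directly, the integer representation could later be replaced (say, by a decimal type) with no change to client code, which is what makes such software easier to test and maintain.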
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure software's inherent
maintainability and adaptability
25. Use the McCabe complexity measure. Although there are many metrics available to report the inherent
complexity of software, none is as intuitive and easy to use as Tom McCabe's
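McCabe's measure can be computed as V(G) = E - N + 2P from a program's control-flow graph, where E is the number of edges, N the number of nodes, and P the number of connected components. The graph below is a hand-built sketch of a loop whose body contains an if/else:

```python
# McCabe cyclomatic complexity: V(G) = E - N + 2P for a control-flow graph.

def cyclomatic_complexity(edges, num_components=1):
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2 * num_components

cfg = [
    ("entry", "loop"),
    ("loop", "if"), ("loop", "exit"),    # loop test: iterate or exit
    ("if", "then"), ("if", "else"),      # two-way branch in the body
    ("then", "loop"), ("else", "loop"),  # both branches return to the loop test
]
print(cyclomatic_complexity(cfg))  # 7 edges - 6 nodes + 2 = 3
```

The result, 3, matches the intuitive count of one plus the number of decision points (the loop test and the if).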
26. Don't test your own software. Software developers should never be the primary testers of their own
software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by preventing it than it is
to find and fix it. One way to do this is to analyze the causes of errors as they are detected
28. Realize that software's entropy increases. Any software system that undergoes continuous change will grow
in complexity and will become more and more disorganized
29. People and time are not interchangeable. Measuring a project solely by person-months makes little sense
30.Expect excellence. Your employees will do much better if you have high expectations for them.
Top 10 principles of modern software management are as follows. (The first five, which are the main themes of my definition of an
iterative process, are summarized in Figure 4-1.)
1. Base the process on an architecture-first approach. This requires that a demonstrable balance be achieved
among the driving requirements, the architecturally significant design decisions, and the life- cycle plans
before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early. With today's sophisticated software
systems, it is not possible to define the entire problem, design the entire solution, build the software, and
then test the end product in sequence. Instead, an iterative process that refines the problem understanding,
an effective solution, and an effective plan over several iterations encourages a balanced treatment of all
stakeholder objectives. Major risks must be addressed early to increase predictability and avoid expensive
downstream scrap and rework.
3. Transition design methods to emphasize component-based development. Moving from a line-of- code
mentality to a component-based mentality is necessary to reduce the amount of human-generated source
code and custom development.
4. Establish a change management environment. The dynamics of iterative development, including
concurrent workflows by different teams working on shared artifacts, necessitate objectively controlled
baselines.
5. Enhance change freedom through tools that support round-trip engineering. Round-trip
engineering is the environment support necessary to automate and synchronize engineering information
in different formats (such as requirements specifications, design models, source code, executable code, test
cases).
6. Capture design artifacts in rigorous, model-based notation. A model based approach (such as UML)
supports the evolution of semantically rich graphical and textual design notations.
7. Instrument the process for objective quality control and progress assessment. Life-cycle assessment of
the progress and the quality of all intermediate products must be integrated into the process.
8. Use a demonstration-based approach to assess intermediate artifacts.
9. Plan intermediate releases in groups of usage scenarios with evolving levels of detail. It is essential
that the software management process drive toward early and continuous demonstrations within the
operational context of the system, namely its use cases.
10. Establish a configurable process that is economically scalable. No single process is suitable for all
software developments.
Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles of a modern
process.
Modern software development processes have moved away from the conventional waterfall model, in which each
stage of the development process is dependent on completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an iterative
development process are significant but difficult to quantify. As one benchmark of the expected economic impact
of process improvement, consider the process exponent parameters of the COCOMO II model. (Appendix B
provides more detail on the COCOMO model.) This exponent can range from 1.01 (virtually no diseconomy of
scale) to 1.26 (significant diseconomy of scale). The parameters that govern the value of the process exponent are
application precedentedness, process flexibility, architecture risk resolution, team cohesion, and software process
maturity.
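The economic impact of this exponent range can be sketched numerically. Coefficients are omitted so that only the scale effect shows, and the sizes are illustrative:

```python
# COCOMO II process exponent: effort scales as size**E, with E ranging from
# roughly 1.01 (little diseconomy of scale) to 1.26 (significant diseconomy).

def relative_effort(size_ksloc: float, exponent: float) -> float:
    return size_ksloc ** exponent

for size in (10, 100, 1000):
    ratio = relative_effort(size, 1.26) / relative_effort(size, 1.01)
    print(f"{size:>5} KSLOC: effort ratio (E=1.26 vs E=1.01) = {ratio:.1f}x")
```

The gap widens with size: on a 1,000 KSLOC system the high-exponent process needs several times the effort of the low-exponent one, which is why the exponent parameters (precedentedness, flexibility, architecture risk resolution, team cohesion, process maturity) matter most on large projects.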
The following paragraphs map the process exponent parameters of COCOMO II to my top 10 principles of
a modern process.
Application precedentedness. Domain experience is a critical factor in understanding how to plan and
execute a software development project. For unprecedented systems, one of the key goals is to confront risks
and establish early precedents, even if they are incomplete or experimental. This is one of the primary reasons
that the software industry has moved to an iterative life-cycle process. Early iterations in the life cycle
establish precedents from which the product, the process, and the plans can be elaborated in evolving levels
of detail.
Process flexibility. Development of modern software is characterized by such a broad solution space and so
many interrelated concerns that there is a paramount need for continuous incorporation of changes. These
changes may be inherent in the problem understanding, the solution space, or the plans. Project artifacts must
be supported by efficient change management commensurate with project needs. A configurable process
that allows a common framework to be adapted across a range of projects is necessary to achieve a software
return on investment.
Architecture risk resolution. Architecture-first development is a crucial theme underlying a successful
iterative development process. A project team develops and stabilizes an architecture before developing all the
components that make up the entire suite of application components. An architecture-first and component-based
development approach forces the infrastructure, common mechanisms, and control mechanisms to be
elaborated early in the life cycle and drives all component make/buy decisions into the architecture process.
Team cohesion. Successful teams are cohesive, and cohesive teams are successful. Successful teams and
cohesive teams share common objectives and priorities. Advances in technology (such as programming
languages, UML, and visual modeling) have enabled more rigorous and understandable notations for
communicating software engineering information, particularly in the requirements and design artifacts that
previously were ad hoc and based completely on paper exchange. These model-based formats have also
enabled the round-trip engineering support needed to establish change freedom sufficient for evolving
design representations.
Software process maturity. The Software Engineering Institute's Capability Maturity Model (CMM) is a
well-accepted benchmark for software process assessment. One of its key themes is that truly mature
processes are enabled through an integrated environment that provides the appropriate level of automation
to instrument the process for objective quality control.
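The notion of instrumenting the process for objective quality control can be made concrete with a small sketch. The metric functions below are illustrative assumptions, not values or measures mandated by the CMM:

```python
# Sketch: objective quality-control metrics that an integrated
# environment might collect automatically. The metric choices are
# illustrative assumptions, not CMM-mandated figures.

def defect_density(defects_found: int, size_ksloc: float) -> float:
    """Defects per thousand source lines of code (KSLOC)."""
    return defects_found / size_ksloc

def rework_ratio(rework_hours: float, total_hours: float) -> float:
    """Fraction of total effort spent on scrap and rework."""
    return rework_hours / total_hours

if __name__ == "__main__":
    print(defect_density(45, 30.0))   # 1.5 defects/KSLOC
    print(rework_ratio(120, 1000))    # 0.12
```

An environment that computes such indicators from its change-management and test data, rather than from manual reports, is what gives the assessment its objectivity.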
Important questions
1. Explain briefly the waterfall model. Also explain conventional software management performance.
2. Define software economics. Also explain pragmatic software cost estimation.
3. Explain the five staffing principles offered by Boehm. Also explain peer inspections.
To achieve economies of scale and higher returns on investment, we must move toward a software
manufacturing process driven by technological improvements in process automation and component-based
development. The two stages of the life cycle are:
1. The engineering stage, driven by less predictable but smaller teams doing design and synthesis
activities
2. The production stage, driven by more predictable but larger teams doing construction, test, and
deployment activities
The transition between engineering and production is a crucial event for the various stakeholders. The
production plan has been agreed upon, and there is a good enough understanding of the problem and the solution
that all stakeholders can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and elaboration, and the production
stage into construction and transition. These four phases of the life-cycle process are loosely mapped to the
conceptual framework of the spiral model, as shown in Figure 5-1.
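As a minimal sketch, the stage/phase decomposition described above can be captured in a lookup table; the focus labels are paraphrased from these notes, not a standard API:

```python
# Sketch: the two life-cycle stages and their four phases, as
# described in the text. Purely illustrative data structure.
LIFE_CYCLE = {
    "inception":    {"stage": "engineering", "focus": "scope, vision, business case"},
    "elaboration":  {"stage": "engineering", "focus": "architecture baseline, plans"},
    "construction": {"stage": "production",  "focus": "component development, test"},
    "transition":   {"stage": "production",  "focus": "deployment, user acceptance"},
}

def phases_of(stage: str) -> list[str]:
    """Return the phases belonging to a given stage, in life-cycle order."""
    return [p for p, info in LIFE_CYCLE.items() if info["stage"] == stage]
```

For example, `phases_of("engineering")` yields the two smaller-team design phases, matching the decomposition above.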
INCEPTION PHASE
PRIMARY OBJECTIVES
Establishing the project's software scope and boundary conditions, including an operational concept,
acceptance criteria, and a clear understanding of what is and is not intended to be in the product
Discriminating the critical use cases of the system and the primary scenarios of operation that will
drive the major design trade-offs
Demonstrating at least one candidate architecture against some of the primary scenarios
Estimating the cost and schedule for the entire project (including detailed estimates for the
elaboration phase)
Estimating potential risks (sources of unpredictability)
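Estimating cost and schedule for the entire project is usually done with a parametric model. As a hedged illustration, the basic COCOMO equations for Boehm's organic mode can be sketched as follows; a real inception estimate would calibrate the constants to local project data:

```python
# Sketch: basic COCOMO, organic mode (Boehm). The constants a=2.4,
# b=1.05 and the schedule exponent 0.38 are the published organic-mode
# values; treat them as a starting point, not a calibrated estimate.

def effort_person_months(ksloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Effort = a * KSLOC^b, in person-months."""
    return a * ksloc ** b

def schedule_months(effort_pm: float) -> float:
    """Development time TDEV = 2.5 * Effort^0.38, in months."""
    return 2.5 * effort_pm ** 0.38

if __name__ == "__main__":
    e = effort_person_months(10.0)   # ~26.9 person-months for 10 KSLOC
    t = schedule_months(e)           # ~8.7 months
    print(round(e, 1), round(t, 1))
```

The wide uncertainty of such an estimate at inception is exactly why the notes call for detailed estimates only for the elaboration phase.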
ESSENTIAL ACTIVITIES
Formulating the scope of the project. The information repository should be sufficient to define the
problem space and derive the acceptance criteria for the end product.
Synthesizing the architecture. An information repository is created that is sufficient to demonstrate the
feasibility of at least one candidate architecture and an initial baseline of make/buy decisions so that
the cost, schedule, and resource estimates can be derived.
Planning and preparing a business case. Alternatives for risk management, staffing, iteration plans, and
cost/schedule/profitability trade-offs are evaluated.
PRIMARY EVALUATION CRITERIA
Do all stakeholders concur on the scope definition and cost and schedule estimates?
Are requirements understood, as evidenced by the fidelity of the critical use cases?
Are the cost and schedule estimates, priorities, risks, and development processes credible?
Do the depth and breadth of an architecture prototype demonstrate the preceding criteria? (The primary
value of prototyping candidate architecture is to provide a vehicle for understanding the scope and
assessing the credibility of the development group in solving the particular technical problem.)
Are actual resource expenditures versus planned expenditures acceptable?
ELABORATION PHASE
At the end of this phase, the "engineering" is considered complete. The elaboration phase activities must
ensure that the architecture, requirements, and plans are stable enough, and the risks sufficiently mitigated,
that the cost and schedule for the completion of the development can be predicted within an acceptable
range. During the elaboration phase, an executable architecture prototype is built in one or more iterations,
depending on the scope, size, and risk.
PRIMARY OBJECTIVES
Baselining the architecture as rapidly as practical (establishing a configuration-managed snapshot in which
all changes are rationalized, tracked, and maintained)
Baselining the vision
Baselining a high-fidelity plan for the construction phase
Demonstrating that the baseline architecture will support the vision at a reasonable cost in a reasonable
time
ESSENTIAL ACTIVITIES
Elaborating the vision.
Elaborating the process and infrastructure.
Elaborating the architecture and selecting components.
CONSTRUCTION PHASE
PRIMARY OBJECTIVES
Minimizing development costs by optimizing resources and avoiding unnecessary scrap and rework
Achieving adequate quality as rapidly as practical
Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical
ESSENTIAL ACTIVITIES
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
TRANSITION PHASE
PRIMARY OBJECTIVES
Achieving user self-supportability
Achieving stakeholder concurrence that deployment baselines are complete and consistent with the
evaluation criteria of the vision
Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
Synchronization and integration of concurrent construction increments into consistent deployment
baselines
Deployment-specific engineering (cutover, commercial packaging and production, sales rollout kit
development, field personnel training)
Assessment of deployment baselines against the complete vision and acceptance criteria in the
requirements set
EVALUATION CRITERIA
Is the user satisfied?
Are actual resource expenditures versus planned expenditures acceptable?
Management Set
Management set artifacts are evaluated, assessed, and measured through a combination of the following:
Relevant stakeholder review
Analysis of changes between the current version of the artifact and previous versions
Major milestone demonstrations of the balance among all artifacts and, in particular, the accuracy of
the business case and vision artifacts
Design Set
UML notation is used to engineer the design models for the solution. The design set contains varying levels
of abstraction that represent the components of the solution space (their identities, attributes, static
relationships, dynamic interactions). The design set is evaluated, assessed, and measured through a combination
of the following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with the requirements models
Translation into implementation and deployment sets and notations (for example, traceability, source
code generation, compilation, linking) to evaluate the consistency and completeness and the semantic
balance between information in the sets
Analysis of changes between the current version of the design model and previous versions (scrap,
rework, and defect elimination trends)
Subjective review of other dimensions of quality
Implementation Set
The implementation set includes source code (programming language notations) that represents the tangible
implementations of components (their form, interface, and dependency relationships).
Implementation sets are human-readable formats that are evaluated, assessed, and measured through a
combination of the following:
Analysis of consistency with the design models
Translation into deployment set notations (for example, compilation and linking) to evaluate the
consistency and completeness among artifact sets
Assessment of component source or executable files against relevant evaluation criteria through
inspection, analysis, demonstration, or testing
Execution of stand-alone component test cases that automatically compare expected results with
actual results
Analysis of changes between the current version of the implementation set and previous versions
(scrap, rework, and defect elimination trends)
Subjective review of other dimensions of quality
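The stand-alone component test cases mentioned above, which automatically compare expected results with actual results, might look like the following sketch; the component under test (parse_sensor_record) is hypothetical:

```python
# Sketch: a stand-alone component test that compares expected and
# actual results automatically. The component (parse_sensor_record)
# is hypothetical, for illustration only.
import unittest

def parse_sensor_record(line: str) -> dict:
    """Parse an 'id,value' line into a record; the component under test."""
    ident, value = line.split(",")
    return {"id": ident.strip(), "value": float(value)}

class ParseSensorRecordTest(unittest.TestCase):
    def test_nominal_record(self):
        # Expected result is compared automatically with the actual result.
        self.assertEqual(parse_sensor_record("S1, 3.5"),
                         {"id": "S1", "value": 3.5})

    def test_malformed_record_raises(self):
        # A line without a comma cannot be unpacked into two fields.
        with self.assertRaises(ValueError):
            parse_sensor_record("no-comma-here")
```

Running such cases with `python -m unittest` yields the machine-checked pass/fail results that make the implementation set self-assessing.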
Deployment Set
The deployment set includes user deliverables and machine language notations, executable software, and the
build scripts, installation scripts, and executable target specific data necessary to use the product in its target
environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
Testing against the usage scenarios and quality attributes defined in the requirements set to evaluate the
consistency and completeness and the semantic balance between information in the two sets
Testing the partitioning, replication, and allocation strategies in mapping components of the
implementation set to physical resources of the deployment system (platform type, number, network
topology)
Testing against the defined usage scenarios in the user manual such as installation, user-oriented
dynamic reconfiguration, mainstream usage, and anomaly management
Analysis of changes between the current version of the deployment set and previous versions (defect
elimination trends, performance changes)
Subjective review of other dimensions of quality
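Testing a deployment baseline can start with a simple smoke check that the installation laid down the expected artifacts. The sketch below is illustrative; the artifact list and file names are assumptions, and a real check would be driven by the release specification:

```python
# Sketch: smoke-check a deployment baseline. The expected artifact
# list and file names are hypothetical; a real list would come from
# the release specification for that baseline.
from pathlib import Path
import tempfile

EXPECTED_ARTIFACTS = ["install.sh", "app.bin", "config/default.cfg"]

def missing_artifacts(root: Path, expected=EXPECTED_ARTIFACTS) -> list[str]:
    """Return the expected deployment artifacts absent under root."""
    return [rel for rel in expected if not (root / rel).exists()]

if __name__ == "__main__":
    # Simulate a partial installation in a scratch directory.
    with tempfile.TemporaryDirectory() as d:
        root = Path(d)
        (root / "config").mkdir()
        for rel in ["install.sh", "config/default.cfg"]:
            (root / rel).write_text("placeholder")
        print(missing_artifacts(root))  # ['app.bin']
```

A check like this is deliberately shallow; the scenario and reconfiguration testing listed above still has to exercise the installed product itself.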
Each artifact set is the predominant development focus of one phase of the life cycle; the other sets take on check
and balance roles. As illustrated in Figure 6-2, each phase has a predominant focus: requirements are the focus
of the inception phase; design, the elaboration phase; implementation, the construction phase; and deployment,
the transition phase. The management artifacts also evolve, but at a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change management,
documentation, spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and test
management tools
5. Deployment: test coverage and test automation tools, network management tools, commercial components
(operating systems, GUIs, RDBMS, networks, middleware), and installation tools.
The inception phase focuses mainly on critical requirements, usually with a secondary focus on an initial
deployment view. During the elaboration phase, there is much greater depth in requirements, much more
breadth in the design set, and further work on implementation and deployment issues. The main focus of the
construction phase is design and implementation. The main focus of the transition phase is on achieving
consistency and completeness of the deployment set in the context of the other sets.
Management set. The release specifications and release descriptions capture the objectives, evaluation
criteria, and results of an intermediate milestone. These artifacts are the test plans and test results
negotiated among internal project teams. The software change orders capture test results (defects,
testability changes, requirements ambiguities, enhancements) and the closure criteria associated with
making a discrete change to a baseline.
Requirements set. The system-level use cases capture the operational concept for the system and the
acceptance test case descriptions, including the expected behavior of the system and its quality
attributes. The entire requirement set is a test artifact because it is the basis of all assessment activities
across the life cycle.
Design set. A test model for non-deliverable components needed to test the product baselines is
captured in the design set. These components include such design set artifacts as a seismic event
simulation for creating realistic sensor data; a "virtual operator" that can support unattended, after-hours
test cases; specific instrumentation suites for early demonstration of resource usage, transaction
rates, or response times; and use case test drivers and component stand-alone test drivers.
Implementation set. Self-documenting source code representations for test components and test drivers
provide the equivalent of test procedures and test scripts. These source files may also include
human-readable data files representing certain statically defined data sets that are explicit test source
files. Output files from test drivers provide the equivalent of test reports.
Deployment set. Executable versions of test components, test drivers, and data files are provided.
Business Case
The business case artifact provides all the information necessary to determine whether the project is worth
investing in. It details the expected revenue, expected cost, technical and management plans, and backup
data necessary to demonstrate the risks and realism of the plans. The main purpose is to transform the vision
into economic terms so that an organization can make an accurate ROI assessment. The financial forecasts
are evolutionary, updated with more accurate forecasts as the life cycle progresses. Figure 6-4 provides a
default outline for a business case.
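At its simplest, transforming the vision into economic terms reduces to a return-on-investment calculation over the forecast horizon. The sketch below omits discounting and risk adjustment, and the figures are illustrative:

```python
# Sketch: a simple ROI assessment from expected revenue and cost
# forecasts. A real business case would discount cash flows and
# attach confidence ranges that narrow as the life cycle progresses.

def roi(expected_revenue: float, expected_cost: float) -> float:
    """Return on investment: (revenue - cost) / cost."""
    return (expected_revenue - expected_cost) / expected_cost

if __name__ == "__main__":
    # Inception-phase forecast with illustrative numbers.
    print(roi(1_500_000, 1_000_000))  # 0.5, i.e. a 50% return
```

Because the inputs are forecasts, the computed ROI should be re-evaluated at each major milestone as the estimates gain fidelity.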
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived from the vision statement
as well as many other sources (make/buy analyses, risk management concerns, architectural considerations, shots
in the dark, implementation constraints, quality thresholds). These artifacts are intended to evolve along with the
process, achieving greater fidelity as the life cycle progresses and requirements understanding matures.
Figure 6-6 provides a default outline for a release specification.
Release Descriptions
Release description documents describe the results of each release, including performance against each of the
evaluation criteria in the corresponding release specification. Release baselines should be accompanied by a
release description document that describes the evaluation criteria for that configuration baseline and provides
substantiation (through demonstration, testing, inspection, or analysis) that each criterion has been addressed in
an acceptable manner. Figure 6-7 provides a default outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the software project
manager's risk assessment, quality indicators, and management indicators. Typical status assessments should
include a review of resources, personnel staffing, financial data (cost and revenue), top 10 risks, technical
progress (metrics snapshots), major milestone plans and results, total project or product scope, and action items.
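The top 10 risks item in a status assessment is typically ranked by risk exposure, i.e., probability times impact. A minimal sketch, with hypothetical risk entries:

```python
# Sketch: rank risks by exposure = probability * impact and keep the
# top N for the periodic status assessment. The example risks are
# made up for illustration.

def top_risks(risks: list[dict], n: int = 10) -> list[dict]:
    """Return the n risks with the highest exposure (probability * impact)."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"],
                  reverse=True)[:n]

if __name__ == "__main__":
    risks = [
        {"name": "late COTS delivery", "probability": 0.3, "impact": 8},
        {"name": "key staff turnover", "probability": 0.5, "impact": 6},
        {"name": "requirements churn", "probability": 0.7, "impact": 5},
    ]
    for r in top_risks(risks, n=2):
        print(r["name"])
```

Tracking how this ranked list changes from one assessment to the next is itself a useful management indicator.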
Environment
An important emphasis of a modern approach is to define the development and maintenance environment as a
first-class artifact of the process. A robust, integrated development environment must support automation of the
development process. This environment should include requirements management, visual modeling, document
automation, host and target programming tools, automated regression testing, continuous and integrated
change management, and feature and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several document
subsets for transitioning the product into operational status. In big contractual efforts in which the system is
delivered to a separate maintenance organization, deployment artifacts may include computer system operations
manuals, software installation manuals, plans and procedures for cutover (from a legacy system), site surveys,
and so forth. For commercial software products, deployment artifacts may include marketing plans, sales rollout
kits, and training courses.
Vision Document
The vision document provides a complete vision for the software system under development and supports the
contract between the funding authority and the development organization. A project vision is meant to be
changeable as understanding of the requirements, architecture, plans, and technology evolves, but a good
vision document should change slowly. Figure 6-9 provides a default outline for a vision document.
Architecture Description
The architecture description provides an organized view of the software architecture under development. It is
extracted largely from the design model and includes views of the design, implementation, and deployment sets
sufficient to understand how the operational concept of the requirements set will be achieved. The breadth of the
architecture description will vary from project to project depending on many factors. Figure 6-10 provides a
default outline for an architecture description.
Important questions
1. Explain briefly the two stages of the life cycle: engineering and production.
2. Explain the different phases of the life-cycle process.
3. Explain the goals of the inception, elaboration, construction, and transition phases.
4. Explain the overview of the artifact sets.
5. Write a short note on (a) management artifacts, (b) engineering artifacts, and (c) pragmatic artifacts.
The requirements model addresses the behavior of the system as seen by its end users, analysts, and testers. This
view is modeled statically using use case and class diagrams, and dynamically using sequence, collaboration,
statechart, and activity diagrams.
The use case view describes how the system's critical (architecturally significant) use cases are realized
by elements of the design model. It is modeled statically using use case diagrams, and dynamically
using any of the UML behavioral diagrams.
The design view describes the architecturally significant elements of the design model. This view, an
abstraction of the design model, addresses the basic structure and functionality of the solution. It is
modeled statically using class and object diagrams, and dynamically using any of the UML behavioral
diagrams.
The process view addresses the run-time collaboration issues involved in executing the architecture on
a distributed deployment model, including the logical software network topology (allocation to
processes and threads of control), interprocess communication, and state management. This view is
modeled statically using deployment diagrams, and dynamically using any of the UML behavioral
diagrams.
The component view describes the architecturally significant elements of the implementation set. This
view, an abstraction of the design model, addresses the software source code realization of the system
from the perspective of the project's integrators and developers, especially with regard to releases and
configuration management. It is modeled statically using component diagrams, and dynamically using
any of the UML behavioral diagrams.
The deployment view addresses the executable realization of the system, including the allocation of
logical processes in the distribution view (the logical software topology) to physical resources of the
deployment network (the physical system topology). It is modeled statically using deployment diagrams,
and dynamically using any of the UML behavioral diagrams.
Generally, an architecture baseline should include the following:
Requirements: critical use cases, system-level quality objectives, and priority relationships among
features and qualities
Design: names, attributes, structures, behaviors, groupings, and relationships of significant classes
and components
Implementation: source component inventory and bill of materials (number, name, purpose, cost) of
all primitive components
Deployment: executable components sufficient to demonstrate the critical use cases and the risk
associated with achieving the system qualities