SPPM Unit 2
Conventional Software Management: The waterfall model, conventional software management performance.
Evolution of Software Economics: Software economics, pragmatic software cost estimation.
Improving Software Economics: Reducing software product size, improving software processes, improving
team effectiveness, improving automation, achieving required quality, peer inspections.
Analysis
Design
Coding
Testing
Operation
3. The basic framework described in the waterfall model is risky and invites failure. The testing
phase that occurs at the end of the development cycle is the first event for which timing,
storage, input/output transfers, etc., are experienced as distinguished from analyzed. The
resulting design changes are likely to be so disruptive that the software requirements upon
which the design is based are likely violated. Either the requirements must be modified or a
substantial design change is warranted.
1. Program design comes first. Insert a preliminary program design phase between the software
requirements generation phase and the analysis phase. By this technique, the program
designer assures that the software will not fail because of storage, timing, and data flux
(continuous change). As analysis proceeds in the succeeding phase, the program designer must
impose on the analyst the storage, timing, and operational constraints in such a way that he
senses the consequences. If the total resources to be applied are insufficient or if the
embryonic(in an early stage of development) operational design is wrong, it will be recognized
at this early stage and the iteration with requirements and preliminary design can be redone
before final design, coding, and test commences. How is this program design procedure
implemented?
2. Document the design. The amount of documentation required on most software programs is quite a
lot, certainly much more than most programmers, analysts, or program designers are willing to do if left
to their own devices. Why do we need so much documentation? (1) Each designer must communicate
with interfacing designers, managers, and possibly customers. (2) During early phases, the documentation
is the design. (3) The real monetary value of documentation is to support later modifications by a separate
test team, a separate maintenance team, and operations personnel who are not software literate.
3. Do it twice. If a computer program is being developed for the first time, arrange matters so that the
version finally delivered to the customer for operational deployment is actually the second version
insofar as critical design/operations are concerned. Note that this is simply the entire process done in
miniature, to a time scale
that is relatively small with respect to the overall effort. In the first version, the team must have a
special broad competence where they can quickly sense trouble spots in the design, model them,
model alternatives, forget the straightforward aspects of the design that aren't worth studying at this
early point, and, finally, arrive at an error-free program.
4. Plan, control, and monitor testing. Without question, the biggest user of project resources
(manpower, computer time, and/or management judgment) is the test phase. This is the phase of
greatest risk in terms of cost and schedule. It occurs at the latest point in the schedule, when backup
alternatives are least available, if at all. The previous three recommendations were all aimed at
uncovering and solving problems before entering the test phase. However, even after doing these
things, there is still a test phase and there are still important things to be done, including: (1) employ a
team of test specialists who were not responsible for the original design; (2) employ visual inspections
to spot the obvious errors like dropped minus signs, missing factors of two, jumps to wrong addresses
(do not use the computer to detect this kind of thing, it is too expensive); (3) test every logic path; (4)
employ the final checkout on the target computer.
5. Involve the customer. It is important to involve the customer in a formal way so that he has
committed himself at earlier points before final delivery. There are three points following requirements
definition where the insight, judgment, and commitment of the customer can bolster the development
effort. These include a "preliminary software review" following the preliminary program design step, a
sequence of "critical software design reviews" during program design, and a "final software acceptance
review".
1.1.2 IN PRACTICE
Some software projects still practice the conventional software management approach.
It is useful to summarize the characteristics of the conventional process as it has typically been
applied, which is not necessarily as it was intended. Projects destined for trouble frequently exhibit the
following symptoms:
Early success via paper designs and thorough (often too thorough) briefings.
Commitment to code late in the life cycle.
Integration nightmares (unpleasant experience) due to unforeseen implementation issues and
interface ambiguities.
Heavy budget and schedule pressure to get the system working.
Late shoehorning of nonoptimal fixes, with no time for redesign.
A very fragile, unmaintainable product delivered late.
In the conventional model, the entire system was designed on paper, then implemented all at once, then
integrated. Table 1-1 provides a typical profile of cost expenditures across the spectrum of software
activities.
Late risk resolution. A serious issue associated with the waterfall life cycle was the lack of early risk
resolution. Figure 1.3 illustrates a typical risk profile for conventional waterfall model projects. It
includes four distinct periods of risk exposure, where risk is defined as the probability of missing a cost,
schedule, feature, or quality goal. Early in the life cycle, as the requirements were being specified, the
actual risk exposure was highly unpredictable.
Requirements-Driven Functional Decomposition: This approach depends on specifying requirements
completely and unambiguously before other development activities begin. It naively treats all
requirements as equally important, and depends on those requirements remaining constant over the
software development life cycle. These conditions rarely occur in the real world. Specification of
requirements is a difficult and important part of the software development process.
Another property of the conventional approach is that the requirements were typically specified in a
functional manner. Built into the classic waterfall process was the fundamental assumption that the
software itself was decomposed into functions; requirements were then allocated to the resulting
components. This decomposition was often very different from a decomposition based on object-
oriented design and the use of existing components. Figure 1-4 illustrates the result of requirements-
driven approaches: a software structure that is organized around the requirements specification structure.
The following sequence of events was typical for most contractual software efforts:
1. The contractor prepared a draft contract-deliverable document that captured an intermediate
artifact and delivered it to the customer for approval.
2. The customer was expected to provide comments (typically within 15 to 30 days).
3. The contractor incorporated these comments and submitted (typically within 15 to 30 days) a
final version for approval.
This one-shot review process encouraged high levels of sensitivity on the part of customers and
contractors.
The relationships among these parameters and the estimated cost can be written as follows:
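The equation itself does not survive in this text. In Royce's formulation, the personnel, environment, and quality parameters combine multiplicatively, with the process parameter acting as an exponent on size. The sketch below assumes that form; all numeric values are illustrative assumptions, not calibrated data:

```python
# Sketch of the software economics relationship (assumed form):
#   effort = personnel * environment * quality * size ** process
# All parameter values below are illustrative, not calibrated.

def estimated_effort(size, process, personnel, environment, quality):
    """Return estimated effort (illustrative units, e.g. person-months)."""
    return personnel * environment * quality * size ** process

# A process exponent above 1.0 models diseconomy of scale:
small = estimated_effort(size=10, process=1.2, personnel=1.0,
                         environment=1.0, quality=1.0)
large = estimated_effort(size=100, process=1.2, personnel=1.0,
                         environment=1.0, quality=1.0)

# Ten times the size costs more than ten times the effort (ratio ~15.8):
print(large / small)
```

The point of the exponent is that, for an immature process, cost grows faster than linearly with product size, which is why the size-reduction and process-improvement techniques discussed later matter.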
1) Conventional: 1960s and 1970s, craftsmanship. Organizations used custom tools, custom
processes, and virtually all custom components built in primitive languages. Project
performance was highly predictable in that cost, schedule, and quality objectives were almost
always underachieved.
2) Transition: 1980s and 1990s, software engineering. Organizations used more-repeatable processes
and off-the-shelf tools, and mostly (>70%) custom components built in higher level languages. Some
of the components (<30%) were available as commercial products, including the operating system,
database management system, networking, and graphical user interface.
3) Modern practices: 2000 and later, software production. This book's philosophy is rooted in the
use of managed and measured processes, integrated automation environments, and
mostly (70%) off-the-shelf components. Perhaps as few as 30% of the components
need to be custom built.
Technologies for environment automation, size reduction, and process improvement are not
independent of one another. In each new era, the key is complementary growth in all technologies. For
example, the process advances could not be used successfully without new component technologies and
increased tool automation.
Organizations are achieving better economies of scale in successive technology eras, with very large
projects (systems of systems), long-lived products, and lines of business comprising multiple similar
projects. Figure 2-2 provides an overview of how a return on investment (ROI) profile can be achieved
in subsequent efforts across life cycles of various domains.
2.2 PRAGMATIC SOFTWARE COST ESTIMATION
One critical problem in software cost estimation is a lack of well-documented case studies of projects
that used an iterative development approach. Because the software industry has inconsistently defined
metrics or atomic units of measure, the data from actual projects are highly suspect in terms of
consistency and comparability. It is hard enough to collect a homogeneous set of project data within one
organization; it is extremely difficult to homogenize data across different organizations with different
processes, languages, domains, and so on.
There have been many debates among developers and vendors of software cost estimation models and
tools.
Three topics of these debates are of particular interest here:
1 Function point metrics provide a standardized method for measuring the various functions of a software application.
The basic units of function points are external user inputs, external outputs, internal logical data groups, external data
interfaces, and external inquiries.
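A function point count can be sketched as a weighted sum over these five element types. The weights below are the commonly cited average-complexity weights and are an assumption for illustration; real counting follows detailed complexity rules:

```python
# Unadjusted function point count as a weighted sum of the five
# basic element types. Weights are the commonly cited
# average-complexity values (an assumption for illustration).
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts):
    """counts: dict mapping element type -> number of occurrences."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical application profile:
app = {"external_inputs": 20, "external_outputs": 15,
       "external_inquiries": 10, "internal_logical_files": 5,
       "external_interfaces": 2}
print(unadjusted_function_points(app))  # 80 + 75 + 40 + 50 + 14 = 259
```

The appeal of function points for cost estimation is exactly what this sketch shows: the count is independent of programming language and can be taken from the specification, before any code exists.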
Booch also summarized five characteristics of a successful object-oriented project.
1. A ruthless focus on the development of a system that provides a well understood collection of
essential minimal characteristics.
2. The existence of a culture that is centered on results, encourages communication, and yet is not
afraid to fail.
3. The effective use of object-oriented modeling.
4. The existence of a strong architectural vision.
5. The application of a well-managed iterative and incremental development life cycle.
3.1.3 REUSE
Reusing existing components and building reusable components have been natural software engineering
activities since the earliest improvements in programming languages. The aim of reuse is to minimize
development costs while achieving all the other required attributes of performance, feature set, and
quality. Try to treat reuse as a mundane part of achieving a return on investment.
Most truly reusable components of value are transitioned to commercial products supported by
organizations with the following characteristics:
Boehm's five staffing principles are:
1. The principle of top talent: Use better and fewer people.
2. The principle of job matching: Fit the tasks to the skills and motivation of the people available.
3. The principle of career progression: An organization does best in the long run by helping its
people to self-actualize.
4. The principle of team balance: Select people who will complement and harmonize with one
another.
5. The principle of phase-out: Keeping a misfit on the team doesn't benefit anyone.
Software project managers need many leadership qualities in order to enhance team effectiveness. The
following are some crucial attributes of successful software project managers that deserve much more
attention:
1. Hiring skills. Few decisions are as important as hiring decisions. Placing the right
person in the right job seems obvious but is surprisingly hard to achieve.
2. Customer-interface skill. Avoiding adversarial relationships among stakeholders is a
prerequisite for success.
3. Decision-making skill. The jillion books written about management have failed to provide a
clear definition of this attribute. We all know a good leader when we run into one, and
decision-making skill seems obvious despite its intangible definition.
4. Team-building skill. Teamwork requires that a manager establish trust, motivate progress,
exploit eccentric prima donnas, transition average people into top performers, eliminate
misfits, and consolidate diverse opinions into a team direction.
5. Selling skill. Successful project managers must sell all stakeholders (including themselves) on
decisions and priorities, sell candidates on job positions, sell changes to the status quo in the face
of resistance, and sell achievements against objectives. In practice, selling requires continuous
negotiation, compromise, and empathy.
3.4 IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS
The tools and environment used in the software process generally have a linear effect on the
productivity of the process. Planning tools, requirements management tools, visual modeling tools,
compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interfaces provide
crucial automation support for evolving the software engineering artifacts. Above all, configuration
management environments provide the foundation for executing and instrumenting the process. At first
order, the isolated impact of tools and automation generally allows improvements of 20% to 40% in
effort. However, tools and environments must be viewed as the primary delivery vehicle for process
automation and improvement, so their impact can be much higher.
Automation of the design process provides payback in quality, the ability to estimate costs and
schedules, and overall productivity using a smaller team.
Round-trip engineering describes the key capability of environments that support iterative development.
As we have moved into maintaining different information repositories for the engineering artifacts, we
need automation support to ensure efficient and error-free transition of data from one artifact to another.
Forward engineering is the automation of one engineering artifact from another, more abstract
representation. For example, compilers and linkers have provided automated transition of source code
into executable code.
Reverse engineering is the generation or modification of a more abstract representation from an existing
artifact (for example, creating a visual design model from a source code representation).
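As a toy illustration of the forward-engineering direction (the model format and names here are invented for the example; real environments drive this from UML models and code generators), a code skeleton can be produced mechanically from a more abstract representation:

```python
# Toy forward-engineering step: generate a class skeleton from a
# more abstract model. The model format is invented for this
# illustration, not taken from any particular tool.
model = {"name": "Account", "attributes": ["owner", "balance"]}

def generate_skeleton(model):
    """Mechanically translate the abstract model into source code."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(model["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for attr in model["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

source = generate_skeleton(model)
print(source)
```

Round-trip engineering is the discipline of keeping such transitions consistent in both directions, so that a change made in the code can be reflected back into the design model rather than silently diverging from it.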
Economic improvements associated with tools and environments: It is common for tool vendors to make
relatively accurate individual assessments of life-cycle activities to support claims about the potential
economic impact of their tools. For example, it is easy to find statements such as the following from
companies marketing a particular tool.
Requirements analysis and evolution activities consume 40% of life-cycle costs.
Software design activities have an impact on more than 50% of the resources.
Coding and unit testing activities consume about 50% of software development effort and schedule.
Key practices that improve overall software quality include the following:
Focusing on driving requirements and critical use cases early in the life cycle, focusing on
requirements completeness and traceability late in the life cycle, and focusing throughout the
life cycle on a balance between requirements evolution, design evolution, and plan evolution
Using metrics and indicators to measure the progress and quality of an architecture as it
evolves from a high-level prototype into a fully compliant product
Providing integrated life-cycle environments that support early and continuous configuration
control, change management, rigorous design methods, document automation, and regression
test automation
Using visual modeling and higher level languages that support architectural control,
abstraction, reliable programming, reuse, and self-documentation
Early and continuous insight into performance issues through demonstration-based evaluations
Conventional development processes stressed early sizing and timing estimates of computer
program resource utilization. However, the typical chronology of events in performance assessment
was as follows:
Project inception. The proposed design was asserted to be low risk with adequate performance
margin.
Initial design review. Optimistic assessments of adequate design margin were based mostly on
paper analysis or rough simulation of the critical threads. In most cases, the actual application
algorithms and database sizes were fairly well understood.
Mid-life-cycle design review. The assessments started whittling away at the margin, as early
benchmarks and initial tests began exposing the optimism inherent in earlier estimates.
Integration and test. Serious performance problems were uncovered, necessitating fundamental
changes in the architecture. The underlying infrastructure was usually the scapegoat, but the
real culprit was immature use of the infrastructure, immature architectural solutions, or poorly
understood early design trade-offs.
3.6 PEER INSPECTIONS: A PRAGMATIC VIEW
Peer inspections are frequently overhyped as the key aspect of a quality system. In my experience, peer
reviews are valuable as secondary mechanisms, but they are rarely significant contributors to quality
compared with the following primary quality mechanisms and indicators, which should be emphasized
in the management process:
Transitioning engineering information from one artifact set to another, thereby assessing the
consistency, feasibility, understandability, and technology constraints inherent in the engineering
artifacts
Major milestone demonstrations that force the artifacts to be assessed against tangible criteria
in the context of relevant use cases
Environment tools (compilers, debuggers, analyzers, automated test suites) that ensure
representation rigor, consistency, completeness, and change control
Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and
requirements compliance
Change management metrics for objective insight into multiple-perspective change trends and
convergence or divergence from quality and progress goals
Inspections are also a good vehicle for holding authors accountable for quality products. All
authors of software and documentation should have their products scrutinized as a natural by-
product of the process. Therefore, the coverage of inspections should be across all authors rather
than across all components.
Conventional and Modern Software Management: The principles of conventional software
Engineering, principles of modern software management, transitioning to an iterative process.
Life cycle phases: Engineering and Production stages, Inception, Elaboration, Construction,
Transition Phases.
1. Make quality #1. Quality must be quantified and mechanisms put into place to motivate its
achievement.
2. High-quality software is possible. Techniques that have been demonstrated to increase quality
include involving the customer, prototyping, simplifying design, conducting inspections, and hiring the
best people.
3. Give products to customers early. No matter how hard you try to learn users' needs during the
requirements phase, the most effective way to determine real needs is to give users a product and let
them play with it.
4. Determine the problem before writing the requirements. When faced with what they believe is a
problem, most engineers rush to offer a solution. Before you try to solve a problem, be sure to explore
all the alternatives and don't be blinded by the obvious solution.
5. Evaluate design alternatives. After the requirements are agreed upon, you must examine a variety of
architectures and algorithms. You certainly do not want to use an "architecture" simply because it was
used in the requirements specification.
6. Use an appropriate process model. Each project must select a process that makes the most sense for
that project on the basis of corporate culture, willingness to take risks, application area, volatility of
requirements, and the extent to which requirements are well understood.
7. Use different languages for different phases. Our industry's eternal thirst for simple solutions to
complex problems has driven many to declare that the best development method is one that uses the
same notation throughout the life cycle.
8. Minimize intellectual distance. To minimize intellectual distance, the software's structure should be
as close as possible to the real-world structure.
9. Put techniques before tools. An undisciplined software engineer with a tool becomes a dangerous,
undisciplined software engineer.
10. Get it right before you make it faster. It is far easier to make a working program run faster than it
is to make a fast program work. Don't worry about optimization during initial coding.
11. Inspect code. Inspecting the detailed design and code is a much better way to find errors than
testing.
12. Good management is more important than good technology. Good management motivates people
to do their best, but there are no universal "right" styles of management.
13. People are the key to success. Highly skilled people with appropriate experience, talent, and
training are key.
14. Follow with care. Just because everybody is doing something does not make it right for you. It may
be right, but you must carefully assess its applicability to your environment.
15. Take responsibility. When a bridge collapses we ask, "What did the engineers do wrong?" Even
when software fails, we rarely ask this. The fact is that in any engineering discipline, the best methods
can be used to produce awful designs, and the most antiquated methods to produce elegant designs.
16. Understand the customer's priorities. It is possible the customer would tolerate 90% of the
functionality delivered late if they could have 10% of it on time.
17. The more they see, the more they need. The more functionality (or performance) you provide a
user, the more functionality (or performance) the user wants.
18. Plan to throw one away. One of the most important critical success factors is whether or not a
product is entirely new. Such brand-new applications, architectures, interfaces, or algorithms rarely
work the first time.
19. Design for change. The architectures, components, and specification techniques you use must
accommodate change.
20. Design without documentation is not design. I have often heard software engineers say, "I have
finished the design. All that is left is the documentation. "
21. Use tools, but be realistic. Software tools make their users more efficient.
22. Avoid tricks. Many programmers love to create programs with tricky constructs that perform a
function correctly, but in an obscure way. Show the world how smart you are by avoiding tricky
code.
23. Encapsulate. Information-hiding is a simple, proven concept that results in software that is
easier to test and much easier to maintain.
24. Use coupling and cohesion. Coupling and cohesion are the best ways to measure software's
inherent maintainability and adaptability
25. Use the McCabe complexity measure. Although there are many metrics available to report the
inherent complexity of software, none is as intuitive and easy to use as Tom McCabe's.
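McCabe's measure, cyclomatic complexity, can be computed from a program's control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch:

```python
# Cyclomatic complexity from a control-flow graph:
#   V(G) = E - N + 2P
# (E edges, N nodes, P connected components; P = 1 for one routine)
def cyclomatic_complexity(edges, nodes, components=1):
    return edges - nodes + 2 * components

# The control-flow graph of a single if/else,
#   entry -> (then | else) -> exit,
# has 4 nodes and 4 edges, so V(G) = 4 - 4 + 2 = 2: two logic paths.
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
```

The result is also the number of linearly independent paths through the routine, which connects directly to the earlier advice to test every logic path.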
26. Don't test your own software. Software developers should never be the primary testers of their
own software.
27. Analyze causes for errors. It is far more cost-effective to reduce the effect of an error by preventing
it than it is to find and fix it. One way to do this is to analyze the causes of errors as they are detected.
28. Realize that software's entropy increases. Any software system that undergoes continuous change
will grow in complexity and will become more and more disorganized.
29. People and time are not interchangeable. Measuring a project solely by person-months makes little
sense.
30. Expect excellence. Your employees will do much better if you have high expectations for them.
4.2 THE PRINCIPLES OF MODERN SOFTWARE MANAGEMENT
The top 10 principles of modern software management are as follows. (The first five, which are the main
themes of my definition of an iterative process, are summarized in Figure 4-1.)
1. Base the process on an architecture-first approach. This requires that a demonstrable balance be
achieved among the driving requirements, the architecturally significant design decisions, and the
life-cycle plans before the resources are committed for full-scale development.
2. Establish an iterative life-cycle process that confronts risk early. With today's sophisticated
software systems, it is not possible to define the entire problem, design the entire solution, build
the software, and then test the end product in sequence. Instead, an iterative process that refines
the problem understanding, an effective solution, and an effective plan over several iterations
encourages a balanced treatment of all stakeholder objectives. Major risks must be addressed early
to increase predictability and avoid expensive downstream scrap and rework.
3. Transition design methods to emphasize component-based development. Moving from a line-of-
code mentality to a component-based mentality is necessary to reduce the amount of human-
generated source code and custom development.
Table 4-1 maps the top 10 risks of the conventional process to the key attributes and principles of a
modern process.
4.3 TRANSITIONING TO AN ITERATIVE PROCESS
Modern software development processes have moved away from the conventional waterfall model, in
which each stage of the development process is dependent on completion of the previous stage.
The economic benefits inherent in transitioning from the conventional waterfall model to an
iterative development process are significant but difficult to quantify. As one benchmark of the expected
economic impact of process improvement, consider the process exponent parameters of the COCOMO II
model. (Appendix B provides more detail on the COCOMO model.) This exponent can range from 1.01
(virtually no diseconomy of scale) to 1.26 (significant diseconomy of scale). The parameters that govern
the value of the process exponent are application precedentedness, process flexibility, architecture risk
resolution, team cohesion, and software process maturity.
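To see what this exponent range means in effort terms, here is a back-of-the-envelope sketch using the nominal COCOMO form effort = A * size^E. The multiplier A = 2.94 is COCOMO II's commonly quoted nominal constant, but treat all the numbers here as illustrative rather than as a calibrated estimate:

```python
# Effect of the COCOMO II process exponent E on estimated effort,
#   effort = A * size**E   (size in KSLOC, effort in person-months).
# A = 2.94 is the commonly quoted nominal multiplier; values are
# illustrative, not a calibrated estimate.
A = 2.94

def effort(ksloc, exponent):
    return A * ksloc ** exponent

best = effort(500, 1.01)   # precedented, flexible, cohesive, mature
worst = effort(500, 1.26)  # unprecedented project, immature process

# For a 500-KSLOC system the diseconomy of scale alone multiplies
# effort by 500**0.25, roughly a factor of 4.7:
print(round(best), round(worst), round(worst / best, 1))
```

This is why the process exponent parameters are a useful benchmark for process improvement: on large systems, moving them even part of the way toward 1.01 dwarfs most other savings.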
The following paragraphs map the process exponent parameters of COCOMO II to my top 10
principles of a modern process.
1. The engineering stage, driven by less predictable but smaller teams doing design and
synthesis activities
2. The production stage, driven by more predictable but larger teams doing construction,
test, and deployment activities
The transition between engineering and production is a crucial event for the various stakeholders. The
production plan has been agreed upon, and there is a good enough understanding of the problem and the
solution that all stakeholders can make a firm commitment to go ahead with production.
The engineering stage is decomposed into two distinct phases, inception and elaboration, and the production
stage into construction and transition. These four phases of the life-cycle process are loosely mapped to
the conceptual framework of the spiral model, as shown in Figure 5-1.
5.2 INCEPTION PHASE
The overriding goal of the inception phase is to achieve concurrence among stakeholders on the
life-cycle objectives for the project.
PRIMARY OBJECTIVES
Establishing the project's software scope and boundary conditions, including an operational
concept, acceptance criteria, and a clear understanding of what is and is not intended to be in
the product
Discriminating the critical use cases of the system and the primary scenarios of operation
that will drive the major design trade-offs
Demonstrating at least one candidate architecture against some of the primary scenarios
Estimating the cost and schedule for the entire project (including detailed estimates
for the elaboration phase)
Estimating potential risks (sources of unpredictability)
ESSENTIAL ACTIVITIES
1. Formulating the scope of the project. The information repository should be sufficient to
define the problem space and derive the acceptance criteria for the end product.
2. Synthesizing the architecture. An information repository is created that is sufficient to
demonstrate the feasibility of at least one candidate architecture and an initial baseline of
make/buy decisions so that the cost, schedule, and resource estimates can be derived.
3. Planning and preparing a business case. Alternatives for risk management, staffing, iteration
plans, and cost/schedule/profitability trade-offs are evaluated.
EVALUATION CRITERIA
1) Do all stakeholders concur on the scope definition and cost and schedule estimates?
2) Are requirements understood, as evidenced by the fidelity of the critical use cases?
3) Are the cost and schedule estimates, priorities, risks, and development processes credible?
4) Do the depth and breadth of an architecture prototype demonstrate the preceding criteria?
(The primary value of prototyping candidate architecture is to provide a vehicle for
understanding the scope and assessing the credibility of the development group in solving the
particular technical problem.)
5) Are actual resource expenditures versus planned expenditures acceptable?
5.3 ELABORATION PHASE
At the end of this phase, the "engineering" is considered complete. The elaboration phase activities must
ensure that the architecture, requirements, and plans are stable enough, and the risks sufficiently
mitigated, that the cost and schedule for the completion of the development can be predicted within an
acceptable range. During the elaboration phase, an executable architecture prototype is built in one or
more iterations, depending on the scope, size, and risk.
PRIMARY OBJECTIVES
1. Baselining the architecture as rapidly as practical (establishing a configuration-managed snapshot
in which all changes are rationalized, tracked, and maintained)
2. Baselining the vision
3. Baselining a high-fidelity plan for the construction phase
4. Demonstrating that the baseline architecture will support the vision at a reasonable cost in a
reasonable time
ESSENTIAL ACTIVITIES
Elaborating the vision.
Elaborating the process and infrastructure.
Elaborating the architecture and selecting components.
EVALUATION CRITERIA
Is the vision stable?
Is the architecture stable?
Does the executable demonstration show that the major risk elements have been addressed and
credibly resolved?
Is the construction phase plan of sufficient fidelity, and is it backed up with a credible basis of
estimate?
Do all stakeholders agree that the current vision can be met if the current plan is executed to
develop the complete system in the context of the current architecture?
Are actual resource expenditures versus planned expenditures acceptable?
5.4 CONSTRUCTION PHASE
During the construction phase, all remaining components and application features are integrated into the
application, and all features are thoroughly tested. Newly developed software is integrated where required. The
construction phase represents a production process, in which emphasis is placed on managing resources and controlling
operations to optimize costs, schedules, and quality.
PRIMARY OBJECTIVES
Minimizing development costs by optimizing resources and avoiding unnecessary scrap and rework
Achieving adequate quality as rapidly as practical
Achieving useful versions (alpha, beta, and other test releases) as rapidly as practical
ESSENTIAL ACTIVITIES
Resource management, control, and process optimization
Complete component development and testing against evaluation criteria
Assessment of product releases against acceptance criteria of the vision
5.5 TRANSITION PHASE
PRIMARY OBJECTIVES
Achieving user self-supportability
Achieving stakeholder concurrence that deployment baselines are complete and consistent
with the evaluation criteria of the vision
Achieving final product baselines as rapidly and cost-effectively as practical
ESSENTIAL ACTIVITIES
Synchronization and integration of concurrent construction increments into consistent deployment
baselines
Deployment-specific engineering (cutover, commercial packaging and production, sales rollout kit
development, field personnel training)
Assessment of deployment baselines against the complete vision and acceptance criteria in the
requirements set
EVALUATION CRITERIA
Is the user satisfied?
Are actual resource expenditures versus planned expenditures acceptable?
Artifacts of the process: The artifact sets, Management artifacts, Engineering artifacts, Pragmatic
artifacts.
Model-based software architectures: A management perspective and a technical perspective.
Design Set
UML notation is used to engineer the design models for the solution. The design set contains
varying levels of abstraction that represent the components of the solution space (their identities,
attributes, static relationships, dynamic interactions). The design set is evaluated, assessed, and
measured through a combination of the following:
Analysis of the internal consistency and quality of the design model
Analysis of consistency with the requirements models
Translation into implementation and deployment sets and notations (for example, traceability,
source code generation, compilation, linking) to evaluate the consistency and completeness
and the semantic balance between information in the sets
Analysis of changes between the current version of the design model and previous versions
(scrap, rework, and defect elimination trends)
Subjective review of other dimensions of quality
Implementation set
The implementation set includes source code (programming language notations) that represents the
tangible implementations of components (their form, interface, and dependency relationships).
Implementation sets are human-readable formats that are evaluated, assessed, and measured
through a combination of the following:
Analysis of consistency with the design models
Translation into deployment set notations (for example, compilation and linking) to evaluate
the consistency and completeness among artifact sets
Assessment of component source or executable files against relevant evaluation criteria
through inspection, analysis, demonstration, or testing
Execution of stand-alone component test cases that automatically compare expected results
with actual results
Analysis of changes between the current version of the implementation set and previous
versions (scrap, rework, and defect elimination trends)
Subjective review of other dimensions of quality
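The "stand-alone component test cases that automatically compare expected results with actual results" can be illustrated with a small self-checking program. This is only a sketch: the component (a moving-average filter) and its expected values are hypothetical, and Python is assumed as the implementation language.

```python
def moving_average(samples, window):
    """Hypothetical component under test: a simple moving-average filter."""
    if window <= 0 or window > len(samples):
        raise ValueError("window must be between 1 and len(samples)")
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

def test_moving_average():
    """Stand-alone test case: expected results are compared automatically
    with actual results, so the test is repeatable and self-checking."""
    expected = [1.5, 2.5, 3.5]
    actual = moving_average([1, 2, 3, 4], 2)
    assert actual == expected, f"expected {expected}, got {actual}"

test_moving_average()
```

Because the expected results are embedded in the test itself, rerunning it after any change to the component gives an unambiguous pass/fail verdict, which is what makes such tests usable as repeatable artifacts rather than one-off procedures.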
Deployment Set
The deployment set includes user deliverables and machine language notations, executable software, and
the build scripts, installation scripts, and executable target specific data necessary to use the product in
its target environment.
Deployment sets are evaluated, assessed, and measured through a combination of the following:
Testing against the usage scenarios and quality attributes defined in the requirements set to
evaluate the consistency and completeness and the semantic balance between information in
the two sets
Testing the partitioning, replication, and allocation strategies in mapping components of the
implementation set to physical resources of the deployment system (platform type, number,
network topology)
Testing against the defined usage scenarios in the user manual such as installation, user-
oriented dynamic reconfiguration, mainstream usage, and anomaly management
Analysis of changes between the current version of the deployment set and previous versions
(defect elimination trends, performance changes)
Subjective review of other dimensions of quality
Each artifact set is the predominant development focus of one phase of the life cycle; the other sets take
on check and balance roles. As illustrated in Figure 6-2, each phase has a predominant focus:
Requirements are the focus of the inception phase; design, the elaboration phase; implementation, the
construction phase; and deployment, the transition phase. The management artifacts also evolve, but at
a fairly constant level across the life cycle.
Most of today's software development tools map closely to one of the five artifact sets.
1. Management: scheduling, workflow, defect tracking, change management, documentation,
spreadsheet, resource management, and presentation tools
2. Requirements: requirements management tools
3. Design: visual modeling tools
4. Implementation: compiler/debugger tools, code analysis tools, test coverage analysis tools, and
test management tools
5. Deployment: test coverage and test automation tools, network management tools, commercial
components (operating systems, GUIs, RDBMS, networks, middleware), and installation tools
Implementation Set versus Deployment Set
The separation of the implementation set (source code) from the deployment set (executable code) is
important because there are very different concerns with each set. The structure of the information
delivered to the user (and typically the test organization) is very different from the structure of the source
code information. Engineering decisions that have an impact on the quality of the deployment set but are
relatively incomprehensible in the design and implementation sets include the following:
Dynamically reconfigurable parameters (buffer sizes, color palettes, number of servers, number
of simultaneous clients, data files, run-time parameters)
Effects of compiler/link optimizations (such as space optimization versus speed optimization)
Performance under certain allocation strategies (centralized versus distributed, primary and
shadow threads, dynamic load balancing, hot backup versus checkpoint/rollback)
Virtual machine constraints (file descriptors, garbage collection, heap size, maximum record
size, disk file rotations)
Process-level concurrency issues (deadlock and race conditions)
Platform-specific differences in performance or behavior
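The impact of dynamically reconfigurable parameters can be sketched in a few lines: the defaults live in the implementation set, while the deployment set supplies an override file read at run time, so operational tuning never requires a rebuild. The file name, parameter names, and JSON format here are hypothetical assumptions, with Python standing in for the implementation language.

```python
import json
import os

# Defaults compiled into the implementation set; the deployment set can
# override them at run time without touching or rebuilding the source code.
DEFAULTS = {"buffer_size": 4096, "max_clients": 32, "num_servers": 1}

def load_runtime_config(path="server.json"):
    """Merge deployment-time overrides (if the file exists) over the
    built-in defaults, rejecting unknown parameter names."""
    config = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            overrides = json.load(f)
        # Accept only known parameters so a typo fails loudly at startup
        # rather than silently misconfiguring the deployed system.
        unknown = set(overrides) - set(DEFAULTS)
        if unknown:
            raise KeyError(f"unknown parameters: {sorted(unknown)}")
        config.update(overrides)
    return config
```

Testing such parameters belongs to the deployment set precisely because their effects (buffer exhaustion, client limits) are invisible in the source code alone.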
The inception phase focuses mainly on critical requirements usually with a secondary focus on an
initial deployment view. During the elaboration phase, there is much greater depth in requirements,
much more breadth in the design set, and further work on implementation and deployment issues.
The main focus of the construction phase is design and implementation. The main focus of the
transition phase is on achieving consistency and completeness of the deployment set in the context
of the other sets.
6.1.4 TEST ARTIFACTS
The test artifacts must be developed concurrently with the product from inception through
deployment. Thus, testing is a full-life-cycle activity, not a late life-cycle activity.
The test artifacts are communicated, engineered, and developed within the same
artifact sets as the developed product.
The test artifacts are implemented in programmable and repeatable formats (as software
programs).
The test artifacts are documented in the same way that the product is documented.
Developers of the test artifacts use the same tools, techniques, and training as the
software engineers developing the product.
Test artifact subsets are highly project-specific; the following example clarifies the relationship
between test artifacts and the other artifact sets. Consider a project to perform seismic data
processing for the purpose of oil exploration. This system has three fundamental subsystems: (1) a
sensor subsystem that captures raw seismic data in real time and delivers these data to (2) a
technical operations subsystem that converts raw data into an organized database and manages
queries to this database from (3) a display subsystem that allows workstation operators to examine
seismic data in human-readable form. Such a system would result in the following test artifacts:
Management set. The release specifications and release descriptions capture the
objectives, evaluation criteria, and results of an intermediate milestone. These artifacts
are the test plans and test results negotiated among internal project teams. The software
change orders capture test results (defects, testability changes, requirements ambiguities,
enhancements) and the closure criteria associated with making a discrete change to a
baseline.
Requirements set. The system-level use cases capture the operational concept for the
system and the acceptance test case descriptions, including the expected behavior of the
system and its quality attributes. The entire requirement set is a test artifact because it is
the basis of all assessment activities across the life cycle.
Design set. A test model for nondeliverable components needed to test the product
baselines is captured in the design set. These components include such design set artifacts
as a seismic event simulation for creating realistic sensor data; a "virtual operator" that
can support unattended, after-hours test cases; specific instrumentation suites for early
demonstration of resource usage, transaction rates, or response times; and use case test
drivers and component stand-alone test drivers.
Implementation set. Self-documenting source code representations for test components
and test drivers provide the equivalent of test procedures and test scripts. These source
files may also include human-readable data files representing certain statically defined
data sets that are explicit test source files. Output files from test drivers provide the
equivalent of test reports.
Deployment set. Executable versions of test components, test drivers, and data files are
provided.
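As a sketch of the design-set "seismic event simulation" mentioned above, the following hypothetical generator produces repeatable synthetic sensor traces (background noise plus one decaying event) that a test driver could feed to the technical operations subsystem in place of live sensor data. All names, waveforms, and parameters are illustrative assumptions, not part of the original example.

```python
import math
import random

def simulate_seismic_trace(num_samples=100, event_at=40, amplitude=5.0, seed=7):
    """Generate a synthetic sensor trace: Gaussian background noise plus
    one decaying sinusoid standing in for a seismic event."""
    rng = random.Random(seed)  # fixed seed => identical trace on every run
    trace = []
    for i in range(num_samples):
        sample = rng.gauss(0.0, 0.2)  # background noise
        if i >= event_at:
            t = i - event_at
            # Decaying sinusoid: the simulated seismic event.
            sample += amplitude * math.exp(-t / 10.0) * math.sin(t / 2.0)
        trace.append(sample)
    return trace
```

The fixed seed is the important design choice: it makes the simulation a repeatable test artifact, so a failure seen in after-hours unattended testing can be reproduced exactly the next morning.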
Release Specifications
The scope, plan, and objective evaluation criteria for each baseline release are derived from the vision
statement as well as many other sources (make/buy analyses, risk management concerns, architectural
considerations, shots in the dark, implementation constraints, quality thresholds). These artifacts are
intended to evolve along with the process, achieving greater fidelity as the life cycle progresses and
requirements understanding matures. Figure 6-6 provides a default outline for a release specification
Release Descriptions
Release description documents describe the results of each release, including performance against each
of the evaluation criteria in the corresponding release specification. Release baselines should be
accompanied by a release description document that describes the evaluation criteria for that
configuration baseline and provides substantiation (through demonstration, testing, inspection, or
analysis) that each criterion has been addressed in an acceptable manner. Figure 6-7 provides a default
outline for a release description.
Status Assessments
Status assessments provide periodic snapshots of project health and status, including the software project
manager's risk assessment, quality indicators, and management indicators. Typical status assessments
should include a review of resources, personnel staffing, financial data (cost and revenue), top 10 risks,
technical progress (metrics snapshots), major milestone plans and results, total project or product scope,
and action items.
Environment
An important emphasis of a modern approach is to define the development and maintenance
environment as a first-class artifact of the process. A robust, integrated development environment must
support automation of the development process. This environment should include requirements
management, visual modeling, document automation, host and target programming tools, automated
regression testing, continuous and integrated change management, and feature and defect tracking.
Deployment
A deployment document can take many forms. Depending on the project, it could include several
document subsets for transitioning the product into operational status. In big contractual efforts in which
the system is delivered to a separate maintenance organization, deployment artifacts may include
computer system operations manuals, software installation manuals, plans and procedures for cutover
(from a legacy system), site surveys, and so forth. For commercial software products, deployment
artifacts may include marketing plans, sales rollout kits, and training courses.
Management Artifact Sequences
In each phase of the life cycle, new artifacts are produced and previously developed artifacts are updated
to incorporate lessons learned and to capture further depth and breadth of the solution. Figure 6-8
identifies a typical sequence of artifacts across the life-cycle phases.
6.3 ENGINEERING ARTIFACTS
Most of the engineering artifacts are captured in rigorous engineering notations such as UML,
programming languages, or executable machine codes. Three engineering artifacts are explicitly
intended for more general review, and they deserve further elaboration.
Vision Document
The vision document provides a complete vision for the software system under development and
supports the contract between the funding authority and the development organization. A project vision
is meant to be changeable as understanding of the requirements, architecture, plans, and technology
evolves; even so, a good vision document should change slowly. Figure 6-9 provides a default outline
for a vision document.
Architecture Description
The architecture description provides an organized view of the software architecture under development.
It is extracted largely from the design model and includes views of the design, implementation, and
deployment sets sufficient to understand how the operational concept of the requirements set will be
achieved. The breadth of the architecture description will vary from project to project depending on
many factors. Figure 6-10 provides a default outline for an architecture description.
Software User Manual
The software user manual provides the user with the reference documentation necessary to support the
delivered software. Although content is highly variable across application domains, the user manual
should include installation procedures, usage procedures and guidance, operational constraints, and a
user interface description, at a minimum. For software products with a user interface, this manual should
be developed early in the life cycle because it is a necessary mechanism for communicating and
stabilizing an important subset of requirements. The user manual should be written by members of the
test team, who are more likely to understand the user's perspective than the development team.
1. Explain briefly the two stages of the life cycle: engineering and production.
2. Explain the different phases of the life-cycle process.
3. Explain the goals of the Inception, Elaboration, Construction, and Transition phases.
4. Give an overview of the artifact sets.
5. Write a short note on
(a) Management Artifacts (b) Engineering Artifacts (c) Pragmatic Artifacts
7. MODEL-BASED SOFTWARE ARCHITECTURE
7.1 ARCHITECTURE: A MANAGEMENT PERSPECTIVE
The most critical technical product of a software project is its architecture: the infrastructure, control,
and data interfaces that permit software components to cooperate as a system and software designers to
cooperate efficiently as a team. When the communications media include multiple languages and
intergroup literacy varies, the communications problem can become extremely complex and even
unsolvable. If a software development team is to be successful, the interproject communications, as
captured in the software architecture, must be both accurate and precise.
From a management perspective, there are three different aspects of architecture.
1. An architecture (the intangible design concept) is the design of a software system; this
includes all engineering necessary to specify a complete bill of materials.
2. An architecture baseline (the tangible artifacts) is a slice of information across the
engineering artifact sets sufficient to satisfy all stakeholders that the vision (function and
quality) can be achieved within the parameters of the business case (cost, profit, time,
technology, and people).
3. An architecture description (a human-readable representation of an architecture, which is one
of the components of an architecture baseline) is an organized subset of information extracted
from the design set model(s). The architecture description communicates how the intangible
concept is realized in the tangible artifacts.
The number of views and the level of detail in each view can vary widely.
The importance of software architecture and its close linkage with modern software development
processes can be summarized as follows:
Achieving a stable software architecture represents a significant project milestone at which the
critical make/buy decisions should have been resolved.
Architecture representations provide a basis for balancing the trade-offs between the problem
space (requirements and constraints) and the solution space (the operational product).
The architecture and process encapsulate many of the important (high-payoff or high-risk)
communications among individuals, teams, organizations, and stakeholders.
Poor architectures and immature processes are often given as reasons for project failures.
A mature process, an understanding of the primary requirements, and a demonstrable
architecture are important prerequisites for predictable planning.
Architecture development and process definition are the intellectual steps that map the
problem to a solution without violating the constraints; they require human innovation and
cannot be automated.