ADA216827

The CCPDS-R Software Engineering Exercise (SEE)

G. A. Huff
S. M. Maciorowski

November 1989

Prepared for Program Manager for CCPDS-R Program
Electronic Systems Division, AFSC

Approved for public release; distribution unlimited.

The MITRE Corporation
UNCLASSIFIED
SECURITY CLASSIFICATION OF THIS PAGE

REPORT DOCUMENTATION PAGE

Report Security Classification: Unclassified
Distribution/Availability of Report: Approved for public release; distribution unlimited.
Performing Organization: The MITRE Corporation, Burlington Road, Bedford, MA 01730
Funding/Sponsoring Organization: Program Manager, ESD/SR
Procurement Instrument Identification Number: F19628-89-C-0001
Sponsor Address: Electronic Systems Division, AFSC, Hanscom AFB, MA 01731-5000
Title (Include Security Classification): The CCPDS-R Software Engineering Exercise (SEE)
Personal Author(s): Huff, G. A.; Maciorowski, S. M.
Type of Report: Final
Date of Report (Year, Month): 1989 November
Page Count: 97
Subject Terms: Ada; Contractor Assessment; Software Acquisition

Abstract:
To evaluate the software engineering capabilities of potential offerors during the Command Center Processing and Display System-Replacement (CCPDS-R) Full-Scale Development/Production source selection, ESD and MITRE project personnel devised a software engineering exercise (SEE) to be carried out by all offerors. The SEE, first used on CCPDS-R, has since been utilized as a standard source selection technique by ESD and other agencies. This report describes the CCPDS-R SEE concept and provides a history of the activities and decisions made in defining and carrying out this first SEE. It documents the SEE material contained in the CCPDS-R Request for Proposal package and the SEE ground rules and exercise specification provided to the offerors.
ACKNOWLEDGMENT
This report has been prepared by The MITRE Corporation under Project No. 022A, Contract
No. F19628-89-C-0001. The contract is sponsored by the Electronic Systems Division, Air Force
Systems Command, United States Air Force, Hanscom Air Force Base, Massachusetts 01731-5000.
The authors wish to thank Steven D. Litvintchouk and John A. Maurer of The MITRE
Corporation for the contributions they made to the Government's effort. Their support contributed
greatly to the overall success of the CCPDS-R Software Engineering Exercise.
TABLE OF CONTENTS

SECTION                                                              PAGE

1   Introduction                                                        1

    2.1   SEE Team                                                      7
    2.2   Selected Tools and Methodologies                              8
    2.3   Schedule                                                      8
    2.4   Requirements Analysis Activities                             10
    2.5   Design Activities                                            13
    2.6   Resulting SEE Design                                         15
    2.7   SEE Documentation                                            15
          2.7.1   SEE Specification                                    15
          2.7.2   Instructions for the Offeror                         16

    3.1   Source Selection Terminology Overview                        21
    3.2   SEE Source Selection Approach                                21
          3.2.1   SEE Evaluation Criteria                              21
          3.2.2   SEE Scoring Method                                   22
          3.2.3   Discriminator Issues                                 23
    3.3   Evaluation Tools and Techniques                              24
          3.3.1   EASE                                                 24
          3.3.2   Checklist Questions                                  25
    3.4   Release of SEE to Offerors                                   25
    3.5   Government Evaluation Approach                               27
          3.5.1   First-Pass Evaluation                                27
          3.5.2   Audit                                                27

4   Dry Run Lessons Learned                                            29
    4.1   Requirements Analysis                                        29
          4.1.1   Time Allocation                                      29
          4.1.2   Government Interaction                               30
          4.1.3   Formal Methodology                                   30
          4.1.4   Completion of Phase                                  30
    4.2   Object-Oriented Design                                       30
    4.3   Ada                                                          31
          4.3.1   Flow of Control                                      31
          4.3.2   ADL                                                  32
          4.3.3   Personnel                                            32
    4.4   DOD-STD-2167                                                 34

7   Offeror Feedback                                                   47
    7.1   Size                                                         47
    7.2   Appropriateness                                              47
    7.3   Resources                                                    47
    7.4   Benefits                                                     47

8   Conclusions/Recommendations                                        49
    8.1   Dry Run                                                      49
          8.1.1   Objectives                                           49
          8.1.2   Software Engineering and Ada                         50
    8.2   Actual Source Selection                                      51
          8.2.1   CCPDS-R SEE Objectives                               52
          8.2.2   General Observations                                 52

List of References                                                     55

Glossary                                                               85
LIST OF ILLUSTRATIONS

FIGURE                                                               PAGE

1   Projected MITRE SEE Dry-Run Schedule                                9
2   Actual MITRE SEE Dry-Run Schedule                                  11
3   MITRE's Overall SEE Architecture                                   12
4   Sample Buhr Diagram: RETRIEVE_EVENT                                14
SECTION 1
INTRODUCTION
During the source selection for a software intensive system, an offeror is usually evaluated on
his software engineering approach for managing and developing the software for the subject system.
Areas of evaluation include the offeror's methodologies, tool sets, software development plan (SDP)
and staffing. While evaluation of an offeror's software engineering approach during source selection
gives insight into how the offeror intends to implement the software for the system, the evaluation is
limited because it cannot give insight into the offeror's own expertise with the selected methodology.
Frequently, the Government has assessed an offeror's software engineering approach during source
selection as adequate only to discover once on contract that the offeror is not well versed in the
proposed methodology and tool set or that the offeror does not follow the SDP. As a result of the
offeror's lack of expertise in the selected software development approach or failure to follow a firm
plan, significant cost and schedule slips are often encountered during the software development phase.
In an attempt to limit further occurrences of this situation, the Electronic Systems Division
(ESD) of the Air Force Systems Command (AFSC) determined the need for a method to be used during
source selection for evaluating not only an offeror's software development plan but also the offeror's
expertise in the proposed software development approach. The need for this method was perceived as
even greater within the next few years due to the recent Department of Defense (DOD) directives that
mandated the use of Ada as an implementation language. It was feared that proposals would be
submitted by offerors who were not well trained in Ada as a software engineering methodology. To
resolve this situation, therefore, ESD and MITRE conceived the idea of a source selection software
engineering exercise (SEE). As conceived, the purpose of the exercise was to measure the degree of
risk associated with the offeror's software development approach by testing the offeror's proposed
methodology, as demonstrated through the offeror's actual implementation of a small exercise system,
and the offeror's ability to organize a SEE team knowledgeable in the proposed software engineering
approach and Ada.
The Command Center Processing and Display System-Replacement (CCPDS-R) program was
the first of five ESD programs to date to use a SEE during source selection. ESD, with technical
support from The MITRE Corporation, tailored the SEE concept for CCPDS-R, determined the
approach for incorporating the SEE into the source selection process, and drafted the actual exercise
specification which would serve as the basis for the CCPDS-R SEE. Prior to actually using the SEE
during the CCPDS-R source selection, the Government determined that the best way to finalize the SEE
concept, specification and evaluation approach was for MITRE to implement the exercise itself. In that
way, the Government would best be able to assess the feasibility of the SEE, including the scope of the
SEE and the time that would be made available to the offerors for conducting the SEE; to ensure that the
exercise specification which would be given to the offerors was well written and sufficiently
challenging; and to identify meaningful criteria that would be used to evaluate the offeror's SEE results.
This report provides a history of the activities and decisions made in defining and carrying out
the SEE for the CCPDS-R full scale development/production (FSD/P) phase source selection, together
with a rationale for those activities and decisions, and a discussion of the Government's experiences
using the SEE on CCPDS-R. In particular, it describes MITRE's dry run of the SEE as part of the
source selection preparation effort, and identifies lessons learned prior to the start of the source
selection. It also describes the actual execution of the SEE during the CCPDS-R source selection and
the lessons learned during that period. Appendices to this report contain the CCPDS-R SEE exercise
specification and ground rules provided to the CCPDS-R FSD/P offerors, as well as the SEE
information included in the CCPDS-R FSD/P request for proposal (RFP) package.
As defined by the new integrated tactical warning and attack assessment (ITW&A) architecture,
the CCPDS-R will consist of four subsystems. These are the CMAFB subsystem, the Offutt
Processing and Correlation Center (OPCC) subsystem, the SAC subsystem, and the processing and
display subsystem (PDS). The PDS is to be located at the CMAFB, NMCC, ANMCC, OPCC, and
SAC.
The CMAFB and OPCC subsystems, referred to as the common subsystems, will have
identical hardware and software. They will interface to all ballistic missile sensors via survivable and
non-survivable media, process the information received from those sensors, generate displays for local
consoles, integrate the missile warning information with other manually entered data on air, space, and
intelligence, and have the capability for distributing this correlated information to other command
centers and subscribers. The two common subsystems will process the same sensor information and
serve as mutual backups in case of failure of critical components. Both subsystems will be able to
distribute correlated ITW&A data to subscribers at a given time.
The SAC subsystem will be physically separate from the OPCC subsystem and will be solely devoted to the support of the SAC force management and force survival missions. It will receive data from either the OPCC or CMAFB common subsystems, from PDS, and from command-unique interfaces. It will process the data and generate displays for consoles located at the SAC command center and other locations at Offutt Air Force Base.
The PDS subsystem will be capable of receiving and displaying correlated ITW&A information
from the common subsystem, direct ballistic missile sensor data, and communication systems status
from the Survivable Communications Integration System (SCIS). It will be the primary system for
presentation of ITW&A information at the NMCC, ANMCC, and SAC.
1.1.2 Program Description
The CCPDS-R acquisition program consists of two phases: a concept definition/design (CD/D)
phase and a full-scale development/production phase. The CCPDS-R FSD/P effort is primarily a
software intensive effort using Department of Defense Standard (DOD-STD) 2167 [1], Ada as the
design language, and, unless a waiver is granted, Ada as the implementation language. The CCPDS-R
FSD/P effort thus requires that contractors be prepared to design and develop a real-time system in Ada
using modern software engineering practices. The CCPDS-R FSD/P contract was awarded in June
1987 to TRW.
The CCPDS-R CD/D phase, a year-long effort that concluded in August 1986, was primarily a
study effort. The CD/D contractors were TRW and Ford Aerospace and Communications Corporation,
both of which were expected to bid on the FSD/P contract. During the CD/D phase, the contractors
were required to submit draft SDPs. They were also required to perform several Ada-related activities
to analyze the feasibility of using Ada as the CCPDS-R implementation language and to demonstrate the
contractor's capability to design and develop a system in Ada, should the contractor propose to use Ada
as the implementation language. The specific Ada-related activities included:
a. Assess the feasibility of using Ada as the CCPDS-R implementation language, and evaluate
current Ada programming support environments for their suitability of meeting CCPDS-R
requirements
b. Devise a plan for efficient transition to Ada, if it is determined that the use of Ada is not
feasible on this program at the present time
c. Define and conduct a demonstration that shows the contractor's readiness to use Ada, if it
is determined that the use of Ada is feasible on this program now
d. Provide the rationale for choosing the Ada-based design language (ADL) and present an
example of the ADL.
Despite the above activities, the Government determined that the CD/D contractors had not yet adequately demonstrated their ability to design and implement a real-time system in Ada using modern software engineering practices. In general, the contractors had not demonstrated an end-to-end application of their methodologies and Ada; they had only demonstrated the features of their tool sets. Therefore, since software development and Ada constitute major risks on CCPDS-R, the Government recognized the need for an additional method to better evaluate the software engineering and Ada capabilities of these two contractors and, more importantly, of any other offerors who might submit proposals for the FSD/P phase. To that end, the Government developed the software engineering exercise as part of the FSD/P source selection process.
1.2 OVERVIEW
The Government viewed the SEE as a practical way to assess each offeror's software
engineering capability prior to FSD/P contract award. The SEE, which consists of a small system to be
designed by each of the offerors, was intended to measure the degree of risk associated with the
offeror's software development methodology, as documented in the offeror's software development
plan. It focused specifically on the offeror's software development methodology, as demonstrated by
the actual application of the methodology to the exercise system, and on the offeror's ability to organize
a team for the SEE, fully knowledgeable in the proposed methodology and Ada.
Based on the offerors' performance on the SEE, the Government expected that it would be
better able to evaluate the offerors' probability of success. In particular, if an offeror failed the exercise,
it would be assessed that the offeror had low probability of implementing CCPDS-R within the
proposed cost and schedule. If an offeror successfully completed the exercise, it would not guarantee
that the offeror would be able to complete CCPDS-R successfully; however, it would provide some
level of confidence in the offeror's ability to implement CCPDS-R. In either case, it would provide
early identification of problem areas in the offeror's software approach, thereby enabling the
Government to concentrate on these areas immediately at the start of the FSD/P phase, should the
offeror be awarded the contract.
The Government did not intend to evaluate every aspect of software development via the SEE.
In particular, the Government did not plan to evaluate those areas that either would not scale up to a
large software development effort or would not provide meaningful or discriminating source selection
information. As eventually defined, the SEE was intended to evaluate the requirements analysis and
design methodologies, the actual SEE design, and the team expertise for the exercise. The SEE was not
intended to evaluate coding, testing, integration, productivity, quality assurance, configuration
management, software metrics, full compliance with DOD-STD-2167, schedule, and management of
subcontractors. The Government elected to evaluate the offerors' proposals in these areas by following
the traditional source selection evaluation approach.
1.2.2 CCPDS-R SEE System Description
For the CCPDS-R SEE to be a meaningful measure of an offeror's ability to design and develop a real-time system like CCPDS-R, the Government felt that the SEE system would have to require analysis of quantitative performance requirements and concurrent processing, like CCPDS-R; be relevant to the CCPDS-R mission; and be of suitable size and complexity so that it could be done in a reasonably short period of time. The system devised for the SEE consists of a missile-warning scenario generator and simulator. The exercise system allows the user to create and edit scenarios consisting of missile-warning events (where an event is a missile launch or nuclear detonation) and to run in real time a missile warning simulation controlled by a selected scenario. The exercise system also allows the user to run a particular scenario simulation while editing that same scenario file. This requirement forces the offerors to address real-time, concurrent operations comparable to those found in CCPDS-R. Appendix A contains the CCPDS-R SEE system specification as provided to the offerors.
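To make the kind of data involved concrete, the sketch below declares a hypothetical Ada scenario event type. It is an illustration only: the package name, field names, and types are assumptions made for this discussion and are not taken from the SEE system specification in appendix A.

   --  Hypothetical sketch: names and types are illustrative assumptions,
   --  not definitions from the SEE system specification.
   package Scenario_Types is

      type Event_Kind is (Missile_Launch, Nuclear_Detonation);

      type Event is
         record
            Kind      : Event_Kind;
            Offset    : Duration;   -- time from scenario start
            Latitude  : Float;      -- assumed event location fields
            Longitude : Float;
         end record;

      Max_Events : constant := 100; -- assumed upper bound on scenario size

      type Event_Count is range 0 .. Max_Events;
      type Event_List  is array (1 .. Max_Events) of Event;

      type Scenario is
         record
            Length : Event_Count := 0;
            Events : Event_List;
         end record;

   end Scenario_Types;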
1.3 SCOPE
This report summarizes the Government's efforts on the software engineering exercise both in
preparation for and during the CCPDS-R FSD/P source selection. Sections 2 through 4 address
Government SEE efforts prior to the start of the CCPDS-R FSD/P source selection. In particular,
section 2 addresses MITRE's own approach for dry running the SEE, section 3 describes the plans
devised by the Government for evaluating the offerors' SEE results during source selection, and section
4 summarizes the lessons learned from the MITRE dry run of the SEE. Sections 5 through 7 describe
the actual execution of the SEE during the CCPDS-R FSD/P source selection. Section 5 summarizes
the actual source selection conduct of the SEE, including the process of issuing the SEE to the offerors,
the SEE products received from the offerors, and the Government's approach to evaluation of the
offeror's SEE products. Section 6 describes the lessons learned from administering the SEE during
source selection, and section 7 summarizes the offerors' feedback concerning the use of the SEE.
Finally, section 8 provides an overall summary of conclusions and recommendations reached both prior to conducting the SEE during the CCPDS-R FSD/P source selection and as a result of using the SEE in the CCPDS-R FSD/P source selection.
SECTION 2
Once the Government devised the SEE concept and completed the SEE requirements definition
and preliminary SEE system specification (also referred to as the exercise specification), but prior to
giving the SEE to the offerors, MITRE assembled a team to dry run the SEE. The primary objectives of
the dry run were to generate a clearly defined SEE system specification, to develop the ground rules for
the offerors to follow when conducting the SEE, to identify a set of discriminating SEE source selection
evaluation criteria, and to assess whether the SEE could reasonably be done in the time allotted to the
offerors. The secondary objectives of the effort were to further educate CCPDS-R staff in requirements
analysis and design methodologies, ADL, and Ada, and to gain familiarity with DOD-STD-2167, a new
software development standard for DOD acquisitions.
This section describes the MITRE dry run of the SEE. In particular, it discusses the makeup of
the MITRE SEE team; the tools, techniques, and methodologies selected for carrying out the
implementation, and the approaches taken for educating team members in them; the schedule; the
activities that occurred during requirements analysis; the activities that occurred during the design phase;
an overview of the resulting SEE design; and the source selection documentation that was produced for
the SEE using the results of the dry run.
2.1 SEE TEAM

The MITRE SEE team consisted of eight people with assigned roles. The particular roles,
along with the planned percentage of total time to be devoted to the SEE, are as shown below.
Role                                      Number    Percent of Time
User/"Government" Representative             1             30
Software Development Manager                 1             30
Technical Lead                               2           80, 50
Ada Consultant                               1              5
Designer                                     2           80, 60
Designer/Recorder                            1             80
All team members had in common a computer software background and knowledge of PASCAL or similar higher order languages. Only two team members could be considered software engineering/Ada experts with extensive experience. Two other team members had considerable Ada experience, while the remaining team members had little or no actual Ada experience.
At the start of the dry run, the team had no software development methodology or tool set in place.
b. For a design methodology, the team selected object-oriented design (OOD) as defined by
Grady Booch in "Software Engineering with Ada" [3]. Booch's version of OOD was
selected because it was one of the better known and documented methodologies, several of
the team members were acquainted with it, and it was expected that potential CCPDS-R
FSD/P offerors might propose a similar methodology.
c. For a software development environment, the team chose the VAX/VMS Ada environment
since it was readily accessible via the MITRE Bedford Computer Center, most team
members were familiar with it, and it provided sufficient capabilities to meet the demands
of the exercise.
d. As a graphical design representation technique, the team selected Buhr diagrams as defined
in "System Design With Ada" [4] because the technique is designed for use with Ada, it is
compatible with Booch's OOD, and it provided a more extensive mapping from Ada and
design constructs than did Booch's notation.
e. For the Ada-based design language, the team chose a draft ADL standard that had been
developed for another ESD project. A number of team members were acquainted with it.
f. The team did not select any particular methodology for requirements analysis, primarily
because the team initially felt that the exercise was relatively small, all members understood
the requirements clearly, and no data flow/data dictionary tools were readily available.
To become educated in all of these selected tools and procedures, the team studied numerous
articles, participated in group discussions, and conducted demonstrations under the tutelage of the
technical leads and consultant. The total time allocated throughout the effort for education in the tools
and methodologies was minimal, estimated at approximately five days distributed over a 2-week period.
2.3 SCHEDULE
Prior to commencing the actual design and development of the SEE, the team technical leads
developed a schedule and work plan for the effort. Figure 1 depicts this initial projected schedule. This
schedule, though longer than that anticipated for the CCPDS-R offerors, was considered justifiable
since the team required training in the methodology, which the offerors should not; the team members
were not dedicated full time, as the offerors' members were expected to be; and the team needed to
prepare additional documentation not required of the offerors. As figure 1 reveals, the projected effort
would extend over a 2 1/2 month period, with 1 week allocated for requirements analysis, 3 1/2 weeks
for design, 1 1/2 weeks for coding of selected portions, and 2 1/2 weeks for developing the source
selection documentation (e.g., final exercise specification, evaluation criteria, etc.). This initial
schedule represented an accelerated effort based on an anticipated 15 July 1986 release of the CCPDS-R
RFP package. The actual schedule followed for the SEE dry run, however, turned out to be
considerably longer. Figure 2 depicts the actual MITRE SEE dry run schedule. The primary reasons
why the team deviated from the original schedule were that team members were unable to devote as
much time as originally planned, particular efforts, such as requirements analysis, took much longer
than estimated (see section 4.1.2), and unrelated delays occurred in the CCPDS-R RFP release which
obviated the need for the original, accelerated schedule.
2.4 REQUIREMENTS ANALYSIS ACTIVITIES
The team commenced its dry run of the SEE by conducting an analysis of the SEE specification
requirements. Input to this requirements analysis effort was the draft SEE system specification
described in section 1.2.2. The team assumed at the start of the requirements analysis phase that the
draft exercise specification was essentially free of major ambiguities and inconsistencies. This
assumption was based on a quick reading of the draft specification and the feeling of the team that such
a short specification probably did not have any serious problems in it. The team's main objectives for
this phase were to define a software architecture that clearly identified the computer software configuration items (CSCIs) for the exercise system, to create adequately detailed software requirements specifications (SRSs) that provided the technically important portions of the DOD-STD-2167 data item description (DID), such as the definition of inter-CSCI interfaces, and to identify any ambiguities
remaining in the exercise specification.
The team's first step in the requirements analysis effort was the development of an overall
software architecture for the exercise system. The software architecture which the team developed
consisted of four CSCIs: two application-level CSCIs, the missile warning simulator (MWS) and the
scenario generator (SG); a user-system interface (USI) CSCI; and a file manager (FM) CSCI. Figure 3
uses Buhr notation to depict the major components of this architecture and the control and data flows among them. As figure 3 reveals, there are two major, independent control threads that tie the system together: the first passes from USI through SG to FM, and the second passes from USI through the
MWS to FM. In the absence of other guidelines, the team decided to allocate particular requirements to
each CSCI so as to reflect most accurately and straightforwardly the requirements breakdown in the
draft exercise specification. Also, the team decided to decompose the system into these four distinct
CSCIs rather than one CSCI with four computer software components (CSCs) for two primary
reasons: first, the team wanted to make the design non-trivial so that the team would be forced to deal
immediately with issues of interface definition and performance allocation; and second, the team wished
to view the exercise specification as if it was a real specification that required a high level decomposition
and the creation of at least two SRSs. I
Upon development of the overall software architecture for the exercise system, the SEE team
carried out a number of other activities and generated specific products. The specific activities
conducted and products generated during the requirements analysis phase included the following:
a. Documentation for most SRS sections for all four CSCIs. Sections of the SRSs not prepared included adaptation requirements, qualification requirements, and quality factors. These sections were not generated because the team did not have sufficient time to prepare them, because the team did not expect the offerors to complete these sections in their allotted time to conduct the SEE, or because the sections did not provide any elaboration of requirements contained in the exercise specification.
b. A partial allocation of timing budgets to CSCIs. A complete allocation was not performed
because there was insufficient time to finish this task properly and because the team did not
have control over the target execution environment (a time-shared VAX).
d. Data flow diagrams for selected functions and a global data dictionary.
f. A revised draft of the system specification that reflected the discussions held during the
definition of the CSCIs and their interfaces (see section 2.7.1).
Although the team did not apply a formal requirements analysis methodology, the use of a data
dictionary and data flow diagrams was sufficient for the team to complete the other efforts identified
above, to develop the overall software architecture, and hence, to achieve all of the requirements
analysis phase objectives. The team did feel that a more comprehensive exercise than the one defined
would have forced the use of a formal methodology.
2.5 DESIGN ACTIVITIES

The SEE team started the design phase of the dry run upon completion of the mock software specification review (SSR) and
review of the draft SRS documents. The team's objective for this phase was to raise as many Ada-
related methodology and design issues as possible. It was not the team's objective to develop a
complete, fully documented design. The team selected two of the CSCIs, scenario generator and file
manager, for which to conduct preliminary design. The team picked these two CSCIs for four reasons:
first, these CSCIs shared a non-trivial interface that required the joint, consistent specification of data
elements, control flow, and timing budgets; second, these CSCIs formed a portion of one of the two
major independent control threads in the exercise system; third, the correct operation of the file manager
required that the preliminary design show evidence that certain concurrent read/write issues had been resolved; and fourth, of the SRS documents prepared by the team, the SRSs for these CSCIs were the most detailed.
As stated in section 2.2, the team carried out preliminary design for the two selected CSCIs
using Booch's OOD methodology. In addition, the team developed Buhr diagrams and ADL for the SG
and FM CSCIs. Figure 4 is a sample of one of the Buhr diagrams produced for a file manager
function, Retrieve_Event. The team did not develop formal software top-level design documents
(STLDDs) for these CSCIs, due to lack of time; however, the team made most of the technical decisions
needed for these documents and presented the results at a mock preliminary design review (PDR). The
team did no further design work following the mock PDR, the reasons being that team members could
no longer devote large amounts of time to the dry run and members felt that no additional discriminating
design issues would be raised by continued design decomposition.
At the conclusion of the design phase, the team identified a number of Ada-related methodology and design issues which were encountered during the dry run. The most prominent methodology issue was the transition to preliminary design from requirements analysis, following the guidelines in Booch's OOD methodology, and in particular, the introduction of ADL. The most important design issue was related to the Ada tasking model: the prioritization of tasks and the avoidance of deadlock, race conditions, and task starvation. A second important issue concerned the treatment of system initialization and termination, and their interrelation with the Ada elaboration rules. With the identification of these and other issues (see section 4 for a discussion of these issues), the team satisfied its objective for the design phase.
2.6 RESULTING SEE DESIGN

At the completion of the design-phase dry run, the design for the SEE system which emerged
consisted of a menu-driven system containing four CSCIs, each running asynchronously. In the
design, USI is an Ada task which generates the menus used to solicit input commands from the user; validates all user inputs; forwards valid scenario generator and missile warning simulator commands to SG and MWS, respectively, for processing; and accepts data from SG and MWS and generates appropriate
menus to solicit input commands or missile warning displays. SG is an Ada task that performs scenario
generation processing, allowing the user to create, edit, delete and save scenario files consisting of
missile launch and nuclear detonation message events. MWS is an Ada task that performs the
simulation processing for a particular scenario file; that is to say, it performs processing on the event
messages in the simulation scenario, calculates missile warning display information elements based on
the contents of the event messages, and makes the information elements available for display by USI.
Finally, FM is an Ada task that provides a common set of mechanisms for SG and MWS to access a
centralized database of scenario files and to prioritize requests by SG and MWS for access to the
scenario files.
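The following Ada sketch suggests, purely for illustration, the kind of task structure such a design implies; the entry names, the scenario identifier type, and the write-before-read priority rule are assumptions made here, not MITRE's actual FM design.

   --  Illustrative sketch only: entry names, parameter types, and the
   --  write-before-read priority rule are assumptions, not the FM design.
   procedure FM_Sketch is

      type Scenario_Id is new Integer;

      task File_Manager is
         entry Read_Event  (Scenario : in Scenario_Id);
         entry Write_Event (Scenario : in Scenario_Id);
      end File_Manager;

      task body File_Manager is
      begin
         loop
            select
               accept Write_Event (Scenario : in Scenario_Id) do
                  null;  -- update the scenario file here
               end Write_Event;
            or
               --  Accept a read only when no write is pending: one simple
               --  way to prioritize SG and MWS requests for the same file.
               when Write_Event'Count = 0 =>
                  accept Read_Event (Scenario : in Scenario_Id) do
                     null;  -- retrieve event data here
                  end Read_Event;
            or
               terminate;
            end select;
         end loop;
      end File_Manager;

   begin
      null;  -- in the dry-run design, the SG and MWS tasks call these entries
   end FM_Sketch;

Centralizing file access behind a single task in this way is one way to let a scenario be edited while it is being simulated without uncontrolled concurrent access to the file.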
2.7 SEE DOCUMENTATION

As stated in section 2, two of the primary objectives of the MITRE SEE dry run were to
generate a clearly defined SEE system specification and to develop the ground rules for the offerors to
follow when conducting the SEE. The Government considered these products essential to scope the
exercise, to achieve a meaningful exercise, and to ensure commonality among offeror approaches and
efforts (e.g., what hardware could be used and what products were to be generated by the offerors as
part of the exercise), thereby enabling the Government to evaluate the offerors' SEE results in an
objective and consistent manner. The actual SEE system specification and ground rules, or detailed
instructions for the offeror, which were generated upon completion of the dry run of the SEE are contained in appendix A. A description of these documents and their derivation is contained below.
raised as to the appropriate level of detail for the specification; some team members wanted very detailed
requirements in the specification, and some felt that very detailed requirements were inappropriate since
they implied design. The resolution for this dilemma was to incorporate the very detailed requirements
into the specification only if they were necessary to bound the scope of the exercise; otherwise, the
detailed requirements were omitted and left as a design issue for the offerors. Thus, the major
modifications the team made to the draft specification as a result of the dry run concerned the areas of hardware, growth and flexibility, and performance requirements.
2.7.1.1 Hardware
The draft SEE system specification stated only that no special hardware was needed for the
exercise system. The team modified the SEE specification, however, to require that no special graphics
hardware or capabilities be used and that the user interface be designed to operate on a single dumb
terminal with keyboard entry device. The team made these changes to ensure a level of commonality
among the offerors' designs and to preclude the offerors from focusing their efforts on sophisticated
graphics capabilities at the expense of addressing key software design issues.
2.7.1.2 Growth and Flexibility

Initially, the SEE system specification had no requirements for growth and flexibility of the
exercise system. Since growth and flexibility are key requirements of the CCPDS-R system, the team
elected to add requirements to the SEE specification in these areas so that the offerors could be evaluated
on their approaches for handling growth and flexibility. In particular, the team added both a general and
a detailed set of growth and flexibility requirements. The general requirement specified that the design
be modular to facilitate changes in software components which are needed to accommodate future
changes in operational requirements. The detailed requirements specified that the system include the
capability for the user to query an individual scenario file based on a fixed set of criteria, and that the
system be flexible enough to allow as future growth the capability for the user to query across multiple
scenario files for this same set of fixed criteria.
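As an illustration of how such a growth path might be kept open (the package, type, and subprogram names below are invented for this discussion and are not requirements from the SEE specification or any offeror's design), the query interface can be declared over a set of scenario files from the outset, so that the required single-file query is simply the one-element case. The sketch is an interface specification only.

   --  Illustrative sketch only: all names are assumptions.
   package Scenario_Query is

      type Event_Kind is (Missile_Launch, Nuclear_Detonation);

      subtype File_Name is String (1 .. 12);
      type File_Set is array (Positive range <>) of File_Name;

      --  Stand-in for the specification's fixed set of query criteria.
      type Criteria is
         record
            Kind : Event_Kind;
         end record;

      --  A single-file query passes a one-element File_Set; the future
      --  multiple-file query needs no change to this interface.
      function Matching_Event_Count (Files : File_Set;
                                     Which : Criteria) return Natural;

   end Scenario_Query;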
interaction, the offerors' methodologies and tools, team composition, and deliverable products and their formats.
Based on the results of the dry run of the SEE, the team reached the following conclusions
regarding the scope and duration of the exercise:
c. Allowing the exercise period to exceed four weeks was not considered beneficial.
Just as a college "take home" examination reaches a point beyond which no further improvement in quality is achieved, it was determined that no further discriminatory information could be obtained by allowing the exercise period to extend beyond four weeks to, for example, six or eight weeks. In fact, having the exercise go beyond four weeks
could be detrimental since it could result in an overload of SEE material for the Government
to evaluate.
Given the above conclusions, the team specified in the detailed instructions for the offerors that
the offerors develop a complete software architecture for the exercise system; conduct requirements
analysis and preliminary design for two or more components of that architecture, with the components
to be selected by the offeror, and conduct detailed design for one or more components of the
architecture, again with the components selected by the offeror. This would provide the Government
with sample products from each major software development phase with minimal burden on the offeror.
Also, the team specified that the exercise duration, from offeror receipt of the SEE specification and
ground rules until delivery of the completed products, be limited to 3 1/2 weeks.
2.7.2.2 Government Interaction
The MITRE dry run of the SEE was conducted in accordance with DOD-STD-2167, as tailored
for CCPDS-R [2]. As such, it included some of the typical reviews held during the software
development effort, such as the software specification review and preliminary design review. During
these reviews, participating personnel assumed the roles of Government acquisition agencies,
Government using agencies, and contractors. Conducting these reviews provided the "contractors" the
opportunity to submit questions to the "Government" to obtain clarification of requirements, resolution
of specification ambiguities, and design verification. During the conduct of these reviews, it became
evident that CCPDS-R offerors might develop similar questions during their implementation of the SEE
which would require resolution. In the interest of fairness, it was considered undesirable to have any
interaction between the Government and the offerors during the exercise period, since one offeror might
inadvertently be given more information or direction than another. Therefore, in the recommended
instructions for the offeror, the team explicitly stated that there would be no interaction between the
offerors and the Government during the offerors' execution of the exercise. Should the offerors have
any questions on the exercise, the offerors were instructed to identify appropriate assumptions, to
document those assumptions, and to proceed with the exercise based on those assumptions.
During the dry run of the SEE, the observation was made that while a particular methodology
may be considered complete and satisfactory in theory, it may turn out to require modification once it is
actually used on a real application. This was considered true for the object-oriented design
methodology used by the SEE team (see section 4.2). The instructions for the offeror specified that all
offerors must follow their proposed requirements analysis and design methodologies as documented in
the SDPs submitted with the CCPDS-R technical proposal; however, the offerors were also allowed to
submit with their delivered SEE products changes to their SDPs which provided further concise,
technical details regarding the methodologies used during the SEE requirements analysis and design
phases. These changes would be considered part of the offeror's technical proposal and subject to
Government evaluation.
The observation was also made that familiarity with the selected tool set was essential in order
to promote ease of design and development. The fact that a number of the team members were not well versed in the selected VAX Ada environment and tool set slowed progress. However, to require that the CCPDS-R offerors use the actual tool sets proposed for CCPDS-R did not appear suitable since the
offerors might not have all the tools in house. (It was not considered proper for the Government to
mandate that offerors expend funds to obtain these tools for the SEE.) Consequently, in the
recommended instructions for the offeror, the team specified only that the offerors use the tool set
proposed for CCPDS-R to the maximum extent practical, as this would be viewed more favorably by
the Government.
2.7.2.5 Deliverable Products
During the course of the SEE dry run, the question arose as to what materials the offerors
should submit for evaluation and in what format the products should be delivered. The team concluded
that, for those software architecture components the offerors chose to analyze and design, the offerors
should submit all requirements analysis and design products, both textual and graphical, that they
generated as part of their methodology and which are required per DOD-STD-2167, as tailored for
CCPDS-R. These products included, for example, SRSs, STLDDs, software detailed design
documents (SDDDs), and performance analyses. Also, the team concluded that the offerors should
submit all textual products of the exercise, including requirements analysis conclusions and
documentation, ADL listings, and other design documentation both in hardcopy form and in machine-readable form on 9-track tape. The tape format provided the Government the capability to browse through the
text, to apply certain design analysis tools to the ADL, and to verify that the offerors' ADL was
compilable. Finally, the team concluded that the offerors should present a briefing to the Government
on their SEE results. This briefing would take place following initial Government evaluation of the
SEE products and would enable the Government to verify its rating of the offerors' SEE performance
and to assess the knowledge of the offerors' team members, as described in section 2.7.2.4. The
instructions for the offeror were written to include these specific directions.
SECTION 3
To evaluate the offerors' performance on the SEE, the MITRE SEE team developed a set of
evaluation criteria based on a set of possible discriminating issues found during the dry run of the SEE.
This section presents a general overview of source selection evaluation terminology. It then describes
the source selection approach chosen for the SEE and delineates how these discriminators were used to
derive a set of objective source selection evaluation criteria. Next, this section presents a description of
some tools and techniques selected to assist the Government in evaluating the offerors' SEE products.
Finally, this section details how the Government presented the SEE to the offerors and how the
Government planned to evaluate the offerors' products.
3.1 SOURCE SELECTION TERMINOLOGY OVERVIEW

As defined in Air Force Regulation (AFR) 70-15, "Source Selection Policy and Procedures" [5] and Electronic Systems Division Supplement 1 to AFR 70-15 [6], during source selection, offerors' proposals are evaluated against a set of predefined criteria. The evaluation criteria are correlated to
important aspects of the program which are significant to the selection decision and particularly to
aspects of the program that constitute high risk. The evaluation criteria are arranged as evaluation areas
which are broken down further into items, which in turn may be broken down into evaluation factors
and possibly subfactors. The evaluation criteria and order of importance are described to the prospectivc
offerors in section M of the RFP; however, normally the evaluation factors and subfactors are not
identified in the RFP, section M.
During source selection, offerors' proposals are rated against the evaluation criteria using
predefined standards and scoring methods. At the lowest applicable evaluation criteria category (e.g.,
item, factor, subfactor), standards are prepared and used as positive indicators of the minimum
performance or compliance acceptable to enable an offeror to meet the requirements of that evaluation
criteria. Thus, standards are the measures by which the Government scores an offeror's proposal as
acceptable or unacceptable.
Based on the dry run, the team determined that the critical issues for evaluating the CCPDS-R
FSD/P SEE products were the robustness and cohesion of the offeror's requirements analysis,
preliminary design, and detailed design methodologies; the offeror's familiarity with the methodologies
and tools; the offeror's Ada/software engineering expertise; the robustness, cohesion, and completeness
of the submitted exercise design; the offeror's ability to address and analyze real-time requirements and
issues; the offeror's clarity and communication of design, including the use of ADL to express design;
and the offeror's compliance with the SEE system specification and the offeror's own SDP. The team
assessed that an evaluation of these issues as reflected in the offeror's SEE products would provide
sufficient evidence as to the offeror's ability to design and develop a real-time system in Ada using
modern software engineering practices. Any other issues such as coding, metrics, and full compliance
with DOD-STD-2167 were considered unnecessary. Thus, the Government included only the above
high-level evaluation criteria for the SEE in section M of the CCPDS-R RFP.
Given these high-level criteria, the Government identified where the SEE should be included in
the CCPDS-R FSD/P source selection process. Since the CCPDS-R source selection approach
included only two evaluation areas, technical and cost, the Government determined that the SEE be
included as one of the four items, of equal importance, in the technical area. The Government felt that
the SEE should not be incorporated under the source selection general considerations area, since this
area carries less weight than the evaluation areas. Also, the Government concluded that the SEE item
should be decomposed into three factors and associated subfactors as follows:
a. Factor: methodologies
1. Subfactor: requirements analysis methodology
2. Subfactor: design methodology
3. Subfactor: interrelationship between requirements analysis
and design methodology
b. Factor: design
c. Factor: team expertise
1. Subfactor: methodologies
2. Subfactor: team composition
Since these factors and subfactors only reflected a consolidation and reorganization of the SEE criteria
already contained in the RFP, section M, the Government elected not to include these factors and
subfactors in the section M provided to the prospective offerors [2].
weaknesses could not be readily or reasonably corrected. With this approach, a rating of fail for the
SEE did not render an offeror automatically ineligible for award.
Prior to the commencement of the CCPDS-R source selection technical evaluation, the MITRE
SEE team developed preliminary standards for each of the above SEE factors and subfactors based in
part on the recommended evaluation criteria and a set of lower level discriminating issues identified
during the course of the SEE dry run. The lower level discriminators related to identification of
specification ambiguities, allocation of timing requirements across system components, behavioral
aspects of the exercise system, interface specification, and consistent representation of design
information across ADL, text and graphics.
In the dry run of the exercise, the team discovered instances of incompleteness and ambiguities
in the draft exercise specification. Many of these instances were uncovered during requirements
analysis only after discussion among the team members; initially, each member thought he or she
understood the intent of the requirements and only when two members had to agree did the
incompleteness become apparent. Many of these ambiguities were impossible to resolve fully until
derived requirements were presented at the SRS level. An example is the interpretation of the
requirement that the exercise system will "simulate the CCPDS-R missile warning capability in real-
time" (see appendix A). Other areas of incompleteness related to the difficulty of stating performance
requirements concisely; for example, the requirement that the "time from completion of [data] entry [by
the user] until the database is modified to reflect the update shall not exceed two seconds" (see
appendix A). Such requirements force end-to-end performance measurement across different
components. To prevent incorrect interpretations of specification ambiguities from having later
catastrophic and costly results, it is imperative that the methodology employed for requirements analysis
include approaches for detecting and resolving specification ambiguities and inconsistencies. The
offerors' SEE products were therefore expected to reflect a thorough identification and resolution of
specification ambiguities.
3.2.3.2 Timing Requirements
As a result of the dry run, the team found that allocation of timing budgets to software
components for the SEE was very difficult to support analytically. As mentioned above, this was due
in part to inherent problems in stating quantitative performance requirements in a rigorous, testable
manner. But the primary problem was due to the nature of the exercise: the analysis to support timing
budget allocation requires simulation and/or prototyping activities, and the tools and time needed to do
this were not available to the team during the exercise period. The team also found that timing analyses
must be done during the requirements analysis phase to do a proper allocation of requirements. The
offerors' SEE results were therefore expected to include an explicit performance analysis activity, done
during requirements analysis, which would provide input to the SRSs.
3.2.3.3 Behavioral Aspects

A number of technical questions arose during the exercise dry run that related to the correct,
reliable operation of the system. The team felt that these issues should be addressed in the preliminary
design by means of explicit use of Ada language features. These issues were the clear identification (from the ADL and the graphical representation) of the major control threads running throughout the system; the synchronization and prioritization of concurrent tasks; the avoidance of system-wide deadlock; the effectiveness of the mechanisms used to initialize and terminate the exercise system (these mechanisms can be implicit via reliance on Ada elaboration order or can involve explicitly implemented procedures); and the effective use of Ada exception handling.
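By way of illustration only (the task, entry names, and timing values below are assumptions, not anything produced during the dry run or by an offeror), explicit initialization and termination entries and a last-chance exception handler are one way to address these behavioral concerns directly in Ada rather than relying on elaboration order:

   --  Illustrative sketch only: names and timing values are assumptions.
   with Text_IO;
   procedure Behavior_Sketch is

      task Simulator is
         entry Start;       -- explicit initialization
         entry Shut_Down;   -- explicit, orderly termination
      end Simulator;

      task body Simulator is
         Running : Boolean := True;
      begin
         accept Start;      -- do nothing until explicitly started
         while Running loop
            select
               accept Shut_Down do
                  Running := False;   -- release resources here
               end Shut_Down;
            else
               delay 1.0;             -- one simulation "tick"
            end select;
         end loop;
      exception
         when others =>
            Text_IO.Put_Line ("Simulator stopped on an unhandled exception");
      end Simulator;

   begin
      Simulator.Start;
      delay 3.0;                      -- stand-in for a running scenario
      Simulator.Shut_Down;
   end Behavior_Sketch;

Making initialization and termination explicit entries in this way also keeps their ordering visible in the design representation rather than implicit in elaboration order.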
3.2.3.4 Interfaces
Interface consistency has typically plagued DOD software development efforts over the years,
and the advantages of Ada for producing consistent interface package specifications are obvious. While
the team did not really expect that an offeror would fail to use Ada properly for data definition on such a
small exercise, the team felt nevertheless that effective use of Ada should be demonstrated in the SEE
products.
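A minimal sketch of what such effective use might look like is given below; the package, type, and constant names are invented for illustration and are not the SEE or CCPDS-R interface definitions.

   --  Illustrative sketch only: names and contents are assumptions.
   package SG_FM_Interface is

      subtype Scenario_Name is String (1 .. 12);

      type Command_Kind is (Create, Edit, Delete, Save);

      type Command is
         record
            Kind : Command_Kind;
            Name : Scenario_Name;
         end record;

      Max_Outstanding_Requests : constant := 8;

   end SG_FM_Interface;

Because both CSCIs on either side of the interface would import this single specification, a mismatch in the exchanged data is caught by the Ada compiler rather than discovered during integration.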
3.2.3.5 Consistent Representation
The SEE system specification requires that both graphical representation and ADL be used to
describe design information and that they be employed consistently. The team found in the SEE dry run
that graphical representations are necessary and useful to depict top-level and detailed views of the
software architecture as well as relationships among components. ADL is then used to fill in details and
to enhance definitions. The team found that these techniques must supplement one another since they
lose effectiveness if used to describe different things; consequently, the offerors' SEE products were
expected to reflect compatible ADL and graphical design representations.
3.3 EVALUATION TOOLS AND TECHNIQUES
Given the rather low level of detail described above against which the offerors' SEE products
would be evaluated together with the potentially large amount of data to be submitted by the offerors,
the SEE team identified the possible need for some additional tools and techniques to assist the
Government in evaluating the SEE products. To that end, the team recommended that the Government
use the ESD acquisition support environment (EASE) and a set of evaluation checklist questions for the
SEE source selection.
3.3.1 EASE
EASE is a prototype workstation-based tool intended to support Government review of contract
technical documentation. Specifically, EASE, which was under development at the time of the SEE dry
run, is oriented towards the review of contractor products relating to the acquisition of Ada software.
These products will primarily consist of ADL and Ada code. At maturity, EASE will support a wide
range of analytic activities, including RFP preparation, modeling, requirements analysis, design
analysis, and tool assessment. EASE is not intended, however, to support management functions.
The EASE prototype executes on a Sun-3 UNIX®-based workstation. EASE takes full advantage of the Sun's large bit-mapped display and windowing system. Different tools execute in their
own windows, and information is managed in a common database hidden from the user. At the time of
the CCPDS-R source selection, the only tools integrated with EASE were the GNU Emacs editor, the
Verdix Ada compiler and several utilities delivered with the compiler.
For the SEE, the team proposed the use of EASE specifically for browsing through the
offerors' textual products and for assisting in the evaluation of the ADL submitted with the products.
Since the SEE system specification required that the offerors' design be documented in compilable ADL, the team recommended that the Government use EASE to test whether the offerors' ADL did in
fact compile. The Government elected to follow these recommendations.
3.3.2 Checklist Questions

In addition to the use of EASE, the SEE team recommended that a set of informal checklist
questions be employed to assist the SEE evaluators in their rating of the offerors' SEE products against
the factors and standards. The questions would be correlated with specific factors and standards and
would highlight particular issues which must be addressed to determine if a standard is met. The
questions would serve two purposes: for those source selection evaluators who participated in the dry
run, the questions would serve as reminders of key points to look for in the SEE products, and for
those evaluators who were not familiar with the SEE prior to source selection, the questions would
serve as a checklist for evaluating the products and determining whether or not standards had been met.
In support of this recommendation, the SEE team prepared an extensive list of evaluation questions to
serve as a basis for the checklist.
3.4 RELEASE OF SEE TO OFFERORS

Based upon MITRE's dry run of the SEE, it was expected that both the offeror preparation of
the SEE and the Government evaluation of the resulting products would be intensive and time
consuming. Furthermore, the Government resources to review the SEE products would be limited,
since the evaluators, about eight people, would most likely be responsible for reviewing both the SEE
products and the offerors' technical proposals. Therefore, to give the Government time to review the
offerors' CCPDS-R technical proposals and SDPs as well as to give the offerors adequate time to
prepare both the technical proposals and the SEE, the Government elected not to begin the SEE until
after receipt of the offerors' technical proposals and SDPs. Consequently, the Government did not
include the SEE system specification and detailed instructions for the offeror in the RFP released on
10 October 1986; it only included a copy of the SEE section M evaluation criteria and a preliminary set
of SEE instructions for the offeror in the RFP instructions for proposal preparation (IFPP). As stated
in section 3.2.1, the SEE section M evaluation criteria identified the basis on which the offerors' SEE
products would be judged. The preliminary instructions for the offeror contained the general ground
rules for the conduct of the SEE and a brief description of the SEE products to be generated and
submitted by the offerors for Government evaluation. Upon receipt of the offerors' technical
proposals, due on 10 November 1986, the Government planned to supply each offeror the actual SEE
system specification and the detailed instructions for the offeror. Figure 5 shows the interaction of
Government and offeror activities during the timeframe of source selection. Copies of the SEE system
specification and detailed instructions for the offeror, the RFP IFPP preliminary instructions for the
offeror, and the RFP section M may be found in appendices A, B, and C, respectively.
[Figure 5: Government and offeror SEE activities during the source selection timeframe]
3.5 GOVERNMENT EVALUATION APPROACH

The Government's planned approach for evaluating the offerors' SEE products, delivered 3 1/2
weeks after receipt of the SEE system specification and detailed instructions for the offeror, consisted of
a first-pass evaluation, an in-house audit at each offeror's facility, and a completed evaluation. The
Government's intent was that upon receipt of the offerors' SEE products, the Government would
perform a preliminary evaluation of the products, allocating approximately one week for each offeror.
Following that evaluation period, the Government would conduct a 1-day audit at each offeror's
facility. The offeror's SEE products and results of the audit would then be factored into a final
evaluation to be completed by the Government within a week of the audit. The following paragraphs
describe the process of the first-pass evaluation and the audit.
The purpose of the first-pass evaluation was to obtain a preliminary assessment of each
offeror's performance on the SEE and to identify strengths and weaknesses in the offeror's SEE
products. The Government would carry out the first-pass evaluation by scoring each offeror's SEE
products against the predefined source selection factors and standards. For each offeror, the
Government would prepare draft documentation which would describe the offeror's SEE products, the
offeror's strengths and weaknesses relative to the factors and standards, and an overall assessment of
the offeror's performance on the SEE. Also, the Government would prepare a set of questions tailored
for each offeror which would be posed to the offeror during the on-site audit. The intent of these
questions was to verify the Government's interpretation and evaluation of the SEE products and to
assess the offeror's SEE team capabilities in software engineering, Ada, and the selected methodologies
and tools.
3.5.2 Audit
As stated above, the purpose of the Government audit at each offeror's facility was to verify the
Government's preliminary assessment of the offeror's SEE products and to obtain additional
information to complete its evaluation. As planned, the in-house SEE audit would consist of two parts:
an offeror briefing and a question and answer session. The briefing would provide an opportunity for
the offeror to explain the methodology proposed for CCPDS-R and employed on the SEE. The briefing
would include, at a minimum, a summary of the offeror's management approach, an overview of the
requirements analysis approach, an overview of the preliminary and detailed design approaches, an
identification of any assumptions made while carrying out the SEE and generating the SEE products,
and an identification of any deviations made from the SDP along with the rationale for those deviations.
The briefing would not include any discussion of further work which the offeror may have
completed following the submission of the SEE products, since the Government would not evaluate this
additional work. The question and answer session would provide an opportunity for the Government
to obtain clarifying information about the offeror's SEE products and to query individual offeror team
members spontaneously to test their expertise with the selected methodologies and tools. Each member
of the offeror's SEE team would be required to be present during the audit to respond to specific
questions directed to that individual. The Government would maintain a transcript of the questions and
answers. This transcript together with the briefing presentation material and the SEE products delivered
at the end of the 3 1/2-week exercise period would be considered part of the offeror's proposal and
included in the Government's final evaluation of the offeror's SEE results.
SECTION 4
DRY RUN LESSONS LEARNED
As a result of the MITRE dry run of the SEE, numerous lessons were learned. These lessons
may be divided into two categories: administrative issues that relate to defining, organizing and
including the SEE as part of the source selection process, and technical issues that concern software
development and Ada in general. Lessons learned relating to administrative issues have been described
throughout sections 2 and 3. This section summarizes the major technical lessons learned relating to
software engineering and specifically requirements analysis, object oriented design, Ada, and DOD-
STD-2167. It also highlights how those lessons were factored into the CCPDS-R FSD/P
program and/or the standards prepared for the CCPDS-R source selection.
During the course of MITRE's dry run of the SEE, the team made observations regarding the
time allocated for requirements analysis, Government interaction, a formal requirements analysis
methodology, and the completion of the requirements analysis phase.
4.1.1 Time Allocation
The team discovered that much more time than anticipated was needed to produce a thorough
requirements analysis and associated documentation. As reflected in section 2.3, the team spent
approximately three times longer on the requirements analysis effort than originally planned. This extra
time was due to the following conditions:
a. The team members initially assumed that the requirements in the exercise specification were
clear and would not require extensive analysis;
b. The original 1-week allotment for requirements analysis was overly optimistic but was
necessary to achieve the scheduled 15 July 1986 RFP release;
c. The team lacked a formal approach to requirements analysis at the start of the exercise; and
d. The DOD-STD-2167 SRS DID was new, requiring a learning curve, and it specified a
lower level of detail than anticipated by the team members.
Eliminating the extra time spent due to conditions a through c above, it was estimated that as
much as twice the originally scheduled time was spent on requirements analysis due to the DOD-STD-
2167 required level of detail. Based on this observation, the Government developed a projected
CCPDS-R FSD/P phase schedule which included approximately one additional month for requirements
analysis beyond that typically estimated. Also, the Government elected to scrutinize carefully during
source selection the offerors' proposed CCPDS-R software development schedules to ensure that
adequate time had been allocated for requirements analysis.
4.1.2 Government Interaction
During the dry run of the SEE, the team observed that the presence of a "Government"
representative during the requirements analysis effort greatly facilitated progress during that phase.
This person was able to assist the development team by clarifying ambiguities and identifying incorrect
assumptions. As mentioned in section 2.7.2.2, the team recognized that the Government could not play
a similar role during the offerors' execution of the SEE. However, based upon this SEE observation,
the Government elected to include in the CCPDS-R statement of work (SOW) a provision for the
Government to maintain a representative on-site in the contractor's facility throughout the requirements
analysis phase to monitor the contractor's effort and to assist in obtaining responses to contractor
questions.
about the problem and without concerning themselves with the structure of the solution. Finally, the
third phase consists of formalizing the strategy. Using the informal strategy developed in the second
phase, nouns and verbs are extracted and become the objects and operations in the solution. The nouns
are used to imply abstract data types and specific real-world objects. The verbs are used to define real-
world operations with particular objects. Also, adverb phrases are extracted to identify attributes of the
operations, and interfaces between objects are described. Finally, the operations previously identified
for each object are implemented in executable form (e.g., ADL). This process is repeated until a point
is reached where the level of decomposition is understandable without further modularity.
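As a hypothetical illustration of this noun/verb mapping (the fragment below is a minimal sketch and is not taken from the SEE products or from Booch's text), consider an informal strategy sentence such as "The system queues incoming warning messages and delivers them to the display." The nouns suggest an abstract data type packaged with its operations, and the verbs become the visible operations; bodies would remain stubbed until detailed design:

    -- Hypothetical ADL fragment: nouns ("message", "queue") become an abstract
    -- data type and its enclosing package; verbs ("queues", "delivers") become
    -- the operations visible to the rest of the design.
    package Message_Queue is

       type Message is private;                   -- noun: abstract data type

       procedure Enqueue (Item : in  Message);    -- verb: "queues ... messages"
       procedure Deliver (Item : out Message);    -- verb: "delivers them to the display"

       Queue_Empty : exception;                   -- error condition made explicit in design

    private
       type Message is
          record
             Time_Of_Receipt : Natural;           -- placeholder attributes only
             Source          : Natural;
          end record;
    end Message_Queue;

Because such a specification is legal Ada, it can be compiled to check interface consistency before any bodies are written, which is the property exploited by the SEE requirement for compilable ADL.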
During the dry run of the SEE, it appeared to the team that Booch's OOD was an incomplete
methodology. While it provided practical guidance for object identification, it lacked support for
requirements traceability and completeness, performance analysis, concurrency (i.e., multitasking),
initialization and termination conditions, and error detection and handling. It did not clearly specify
how to transition from requirements analysis to design nor did it specify guidelines for the completion
of detailed design. Furthermore, the team discovered that the use of OOD's informal strategy was non-
productive in practice. Having prepared the SRS and then having to write the informal strategy resulted in
duplication of effort. The team observed that in many cases the informal strategy could be developed
so as to produce contrived results.
As an outcome of these observations, the Government included specific SEE factors and
standards to ensure that the offerors' design methodologies were complete, robust, and that they
contained specific procedures to resolve the above issues. In particular, the Government planned to
assess whether the offerors' design methodologies contained clearly defined procedures for
transitioning between requirements analysis and design phases and for handling initialization and
termination, exception handling, concurrency, and performance analysis.
4.3 ADA
While dry running the SEE, the team identified several observations relating to both the
technical and management aspects of Ada. These issues concerned control flow, ADL, and personnel.
the software development methodology must include techniques for designing effective controls for the
detection and/or prevention of deadlock and process starvation.
During the dry run, these issues were dealt with in part through the use of canonical task idioms
and strategies that provided controlled access to shared resources. Given the team's conclusion about
the importance of these control issues and the fact that the SEE subsystem was to be designed using
ADL, the Government chose to consider during source selection, as part of the completeness and
robustness of the offerors' design methodologies, the ability of the offerors' design methodologies to
address flow control, in general, and deadlock and process starvation, in particular.
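The report does not reproduce the team's actual task idioms, but one generic example of the kind of canonical idiom referred to above is a single server task that owns a shared data structure, so that all access is serialized through the Ada rendezvous rather than through unprotected shared variables. The sketch below is hypothetical; the package and entry names are illustrative only:

    -- Hypothetical "resource manager task" idiom: one task owns the shared data;
    -- clients rendezvous with its entries instead of touching the data directly,
    -- which centralizes the synchronization structure in one place.
    package Track_Store is

       type Track_Id is range 1 .. 100;
       type Track_Data is
          record
             Latitude  : Float;
             Longitude : Float;
          end record;

       task Manager is
          entry Update (Id : in Track_Id; Data : in  Track_Data);
          entry Read   (Id : in Track_Id; Data : out Track_Data);
       end Manager;

    end Track_Store;

    package body Track_Store is

       task body Manager is
          Store : array (Track_Id) of Track_Data;  -- initialization elided in this sketch
       begin
          loop
             select
                accept Update (Id : in Track_Id; Data : in Track_Data) do
                   Store (Id) := Data;
                end Update;
             or
                accept Read (Id : in Track_Id; Data : out Track_Data) do
                   Data := Store (Id);
                end Read;
             or
                terminate;  -- orderly shutdown when no clients remain
             end select;
          end loop;
       end Manager;

    end Track_Store;

Because every access is a rendezvous with a single server task, readers and writers cannot interleave destructively, and the remaining deadlock and starvation concerns are confined to the entry queueing discipline and to the pattern of calls among this and other server tasks.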
4.3.2 ADL
The MITRE SEE team designed the SEE system using ADL, documented the design with the
ADL incorporated into the DOD-STD-2167 products prepared by the team, and presented the design to
the "Government" representatives via the ADL at a mock PDR. The basic conclusion the team reached
from these efforts was that the use of ADL by itself does not present a global picture of the entire
system to the developers. In its design meetings, the team came to rely on Buhr diagrams as the
primary design representation. Moreover, the team found that presenting only ADL at the dry-run PDR
failed to convey design information clearly to all reviewers. As a result of this conclusion, the
Government modified the CCPDS-R system specification to require the use of graphical notation to
convey design information in conjunction with the ADL. Furthermore, since the SEE system
specification contained the same graphical notation requirements as the CCPDS-R system specification,
the Government opted to consider during source selection, as part of the completeness and robustness
of the offerors' design methodologies and clarity and communication of design, the offerors' graphical
notation to ensure that it was well-defined, it was consistent with the ADL, and it contained enough
features to convey the information available via the ADL constructs.
4.3.3 Personnel
During the course of dry running the SEE, the team encountered several issues related to Ada
and personnel. These consisted of personnel training, retention of Ada-trained staff, and presence of
Ada experts on development teams.
4.3.3.1 Training
The team observed during the dry run that Ada training must occur at all levels of the software
development and acquisition teams; from users, programmers, and designers, to program managers and
reviewers. The team also noted that training for Ada programmers and designers is slower and more
difficult than training for other programming languages, primarily because Ada imposes the software
engineering discipline of a methodology on its users. To a greater extent than in other languages, an
Ada programmer must be a software engineer and must be knowledgeable of the methodologies
employed, the graphical notation used, ADL, and the Ada language itself. The Ada programmer must
be well versed in all these issues at all stages of development; simply learning Ada syntax and semantics
is not enough. Based on the SEE dry-run results, the team estimated that training for Ada could be at
least two to three times longer than for other languages. Finally, the team concluded that the SEE dry
run served as an excellent vehicle to teach Ada as well as software engineering and software acquisition.
The SEE dry run served as a far more substantive approach for teaching software engineering and Ada
than the typical 5-day courses offered in these areas, which usually concentrate only on theory rather
than practical applications. Specifically, the SEE dry run provided team members hands-on training in
all aspects of software development, including Ada, methodologies, DOD-STD-2167, requirements
analysis, design, and software management. The one major limitation of the SEE dry run was that it
did not cover the complete software development cycle since it did not progress all the way through
code and testing.
As a result of the above lessons learned, the Government decided to examine the CCPDS-R
FSD/P offerors to ensure that the offerors' companies provided in-depth Ada training which was geared
for all offeror personnel associated with CCPDS-R software development, as appropriate for assigned
roles, and which exceeded the typical 1- to 5-day courses. Furthermore, the Government made plans to
train its own CCPDS-R project personnel in Ada after source selection by rerunning the SEE from
requirements analysis through testing, with select project personnel serving as the team members, and
by having all project individuals participate in at least some typical, formal Ada courses, as appropriate
for their given roles and responsibilities.
One of the primary risks associated with Ada today is the ability to obtain and retain highly
qualified Ada engineers, since the number of such individuals is extremely small. The MITRE SEE
team itself experienced problems in these areas during the dry run of the SEE. MITRE had difficulty in
assembling a sufficient number of Ada-trained people who could devote a significant amount of time to
the SEE, and the SEE team itself experienced the departure of one of the technical leads. As a result of
confirming this Ada risk during the SEE dry run, the Government decided to assess each CCPDS-R
offeror's ability to assemble and manage a team of engineers for the SEE who were well versed in Ada,
as well as the proposed methodologies, software development tools and procedures, and ADL. In
addition, to help deter the departure of the FSD/P contractor's key Ada/software engineering personnel,
the Government included an award fee plan in the CCPDS-R FSD/P model contract [2]. The contract
requires that the contractor flow down 50 percent of the award fee directly to the contractor employees
working on CCPDS-R, and not to the company as a whole. As defined, the award fee is tied to the
successful completion of specific CCPDS-R milestones.
As mentioned in section 2.1, the MITRE SEE team included two Ada/software engineering
experts who were familiar with all aspects of Ada and particularly its more complex constructs. These
individuals had significant software development experience as well as a deep understanding of the
relevant and often more complex software engineering issues. The presence of these two experts was
crucial to the SEE development progress. They served as mentors to the rest of the team and as such
were able to keep the rest of the team on track, to point out areas overlooked by the team members, and
to answer or resolve detailed software engineering and Ada questions.
Given the importance of these Ada experts on the MITRE SEE team, the Government
concluded that the software development risks on a complex Ada development could be substantially
reduced if the contractor's team included at least one or more strong Ada technical leads/experts who
were well versed in all the detailed aspects of Ada and software engineering. As a result of this
observation, the Government elected to consider, as part of the offeror's team expertise, the offeror's
ability to organize a SEE team that included strong Ada/software engineering technical leads.
4.4 DOD-STD-2167
The SEE team performed the dry run in accordance with DOD-STD-2167, as tailored for
CCPDS-R. As the dry run progressed, the team observed that the already existing CCPDS-R tailoring
of DOD-STD-2167 required further tailoring because of what the team considered inappropriate
requirements of the standard. For example, the DOD-STD-2167 DID for the SRS requires data which
seems premature and in some cases impossible to obtain during the requirements analysis stage of
development. In particular, the input, processing, and output sections of the SRS DID require the
specification of items such as units of measure and ranges for inputs and outputs, the exact intent of the
operation, error detection, and algorithms. However, the team found that during the requirements
analysis phase, units of measure at this level may be impossible to define and that delineation of the
processing section seemed to force the conceptualization of a design, which contradicts the intent of the
requirements analysis effort. The SRS DID also requires the specification of timing and sizing data
against which the software will be tested, since the SRS is the baseline document for software formal
qualification testing to the Government. For current Ada developments, this is almost impossible to do,
since previous data on programs developed in Ada is minimal. Thus, the team determined that any
timing and sizing estimates entered into an SRS during the requirements analysis phase for an Ada
development were especially weak; the possibility was extremely high that the timing and sizing data
contained in an authenticated SRS would hold no validity later in the development effort.
SECTION 5
FORMAL CONDUCT OF THE SEE
With the completion of the MITRE dry run of the SEE and the associated SEE exercise
specification, ground rules, and evaluation criteria, the Government was fully prepared to conduct the
SEE as part of the CCPDS-R FSD/P source selection. This section describes how the Government
conducted the SEE relative to the plans described in section 3. It details how the Government released
the SEE to the offerors, the products delivered by the offerors, the Government's evaluation team, the
tools and techniques the team used to aid in the evaluation of the SEE products, and the Government's
overall approach for evaluation of the offerors' SEE products.
The SEE was issued to the offerors following the plan described in section 3.4. The offerors
received a copy of the SEE section M evaluation criteria and a preliminary set of SEE instructions to the
offeror in the request for proposal package, issued on 10 October 1986. Upon submission of the
offerors' proposals to the Government on 10 November 1986, the offerors received the SEE detailed
instructions to the offeror and the SEE system specifications. The offerors were then given 3 1/2 weeks
to deliver their SEE products, due on 3 December 1986. For each offeror, the Government spent
approximately four days evaluating the delivered SEE products, one day conducting an audit at the
offeror's facility, and two days finalizing the evaluation results.
This method of issuing the SEE to the offerors worked out beneficially. First, the offerors
benefitted by not having to write their proposals and develop the SEE products at the same time.
Second, it allowed the Government SEE evaluation team time during the proposal evaluation period to
review each offeror's SDP prior to receiving the SEE products. (The SDP defines the offeror's
software engineering approach -- methodology, tool set, ADL, and terminology -- and is the baseline
against which the offeror's products were to be evaluated.) If the Government had to review each
offeror's SDP and the SEE products at the same time, either the Government would have required a
longer time period to review the SEE products (as opposed to 4 days per offeror) or the staff-hours
required of the SEE evaluation team would have been overwhelming.
As discussed in section 2.7.2.5, the Government expected the offerors to submit all
requirements analysis and design products, both textual and graphic, which the offerors generated as
part of their methodology and which are required by DOD-STD-2167, as tailored for CCPDS-R (e.g.,
software requirements specifications, software top-level design documents, software detailed design
documents (SDDDs), performance analyses, etc.); all textual products the offerors generated (e.g.,
requirements analysis conclusions and documentation, Ada design language listings, etc.) in both
hardcopy form and on machine-readable, 9-track tape; and a briefing to the Government on the offerors'
SEE results.
In general, the Government did receive most of the expected products from the offerors. In
some cases, the Government received documentation that was not required (e.g., diaries of the entire
SEE effort). However, the Government did not receive all of the expected "intermediate" products
(e.g., data flow diagrams), leading the Government to conclude that in the future the instructions may
need to be clarified to ensure that the offerors are aware that the "intermediate" products are required.
Overall, the SEE products delivered were of sufficient quality, content and scope to conduct a thorough
analysis of the offeror's software engineering capabilities.
evaluators who had not been part of the original MITRE dry-run team as a means of coming up to speed
on the type of details the Government was looking for, and by the experienced team members simply as
reminders. They were not used as a means to determine whether or not factors and standards had been
met, or as the basis for the questions to be asked during the in-house visit.
As discussed in section 3.5, upon receipt of the offerors' SEE products, the Government
intended to perform a first-pass evaluation of each offeror's SEE products, lasting approximately one
week per offeror, using each offeror's proposed CCPDS-R FSD/P SDP as the definition of the
offeror's software engineering methodology; conduct a 1-day audit at each offeror's facility; and, using
the first-pass evaluation and the results of the audit, to produce a final evaluation of each offeror's SEE
products within one week of the audit.
The Government evaluated the offerors' SEE products against the prepared factors and
standards. As a basis for this evaluation, the Government used each offeror's SDP (evaluated during
source selection prior to receiving the SEE products), along with any augmentations to it, to determine
whether or not the offeror's methodologies, as described in the SDP, were followed during the
development of the SEE products. The Government evaluation was to determine not only that each
offeror's SDP was followed in the development of the SEE products, but that the requirements analysis
and design methodologies defined by the SDPs were adequate.
As strengths and weaknesses in an offeror's SEE products or SDP were identified, vis-a-vis
the factors and standards, the evaluators documented them. For those instances where the evaluators
could not find the information necessary to evaluate a standard, were not sure of the offeror's
motivation or rationale, had any questions about the products, or where the evaluation information
could not be ascertained directly from the SEE products delivered, the evaluators prepared questions to
be asked during the in-house audit. In addition, the Government evaluation team generated questions to
verify its own evaluation of whether or not system requirements had been met.
Although it was originally planned that one list of questions would be prepared for each
offeror, and would be presented to the offeror during the question and answer period of the in-house
audit, the Government concluded during the first-pass evaluation that the questions for each offeror fell
into two categories: those from which the Government would benefit most by allowing the offeror 24
hours to prepare an answer, and those from which the Government would benefit most by allowing the
offeror no more than 5 minutes to prepare an answer. Therefore, during the
first-pass evaluation, the Government prepared two sets of questions for each offeror based on the
results of the Government's evaluation of the following SEE factors.
5.5.1.1 Methodology Factor
Each offeror's requirements analysis and design methodologies were evaluated to ensure that
they adequately addressed the major issues in each phase and that the methodologies were compatible.
This was accomplished by reviewing the SEE products to determine if the methodologies were robust
and cohesive and to ensure that the methodologies were consistent with each other and provided a
distinction between the end of requirements analysis and the beginning of design. A methodology was
considered robust if it adequately and completely addressed modern software engineering issues for a
real-time system. Attributes of the SEE products that contribute to requirements analysis methodology
robustness include, but are not limited to, inclusion of performance analysis during requirements
analysis, detection and resolution of specification ambiguities known to exist in the SEE system
specification, effective employment of measures for tracing requirements, and identification of derived
requirements. In addition, the SEE products were reviewed to ensure that the SEE products contained
acceptable ADL and graphical representations consistent throughout the SEE products. Questions
generated for the in-house audit were intended to clarify methodology questions.
One set of questions was given to the offerors 24 hours in
advance, for which the offerors were required to respond during the audit as well as to provide formal,
written responses to the Government. The other set of questions was given to the offerors 5 minutes in
advance, for which the offerors were required to respond immediately and for which the Government
maintained a record via cassette tape.
5.5.3 Evaluation Completion
Following the in-house audit at each offeror's facility, the Government easily completed its
evaluation of the offerors' SEE products within a week of the in-house audit. The completion consisted
of updating the first-pass evaluation assessments and associated identification of strengths and
weaknesses to reflect the additional, clarifying information obtained from the in-house audit. In
addition, the Government transcribed the cassette-recorded responses to the spontaneous "five-minute"
questions. The transcripts, together with the formal responses to the "twenty-four hour" questions,
were then entered into the offerors' official submission of SEE products.
SECTION 6
As a result of conducting the SEE during the CCPDS-R FSD/P source selection, the
Government identified a number of lessons learned concerning the administration of a software
engineering exercise. These lessons learned relate to deliverable products, exercise scope and duration,
Government evaluation tools and techniques, and Government evaluation approach. The lessons
learned, presented herein, are intended to describe not only how the SEE might be changed for future
use or what did not work out as well as possible, but also to discuss those aspects of the SEE that did
work well and should be repeated in the future.
In some cases, offerors did not submit all "intermediate" SEE products (e.g., data flow
diagrams) which were expected by the Government. Therefore, future programs which elect to carry
out a SEE may need to evaluate their instructions to the offerors to see if they must be clarified to ensure
that the offerors are aware that all requirements analysis and design products, including "intermediate"
products, are deliverable to the Government.
As a result of the use of the SEE during source selection, the Government concluded that the
time, level, and coverage of the SEE were adequate. The volume and depth of the offerors' delivered
SEE products indicate that 3 1/2 weeks was sufficient time.
It was apparent during the evaluation of the offerors' performance on the SEE that it did
provide the Government with the answers it was looking for concerning the offerors' ability to
assemble a SEE team and address software engineering and Ada issues, in the context of the offerors'
SDP. The level of requirements in the system specification provided the opportunity for the offerors to
demonstrate their ability in the pertinent areas (e.g., real-time system design, modern software
engineering practices, Ada). It is not felt that a more difficult set of requirements would have added
anything to the Government's knowledge of the offerors' ability. To have increased the coverage of the
SEE requirements, or to have broadened the system, could have had the negative impact of forcing the
offerors to cover more area with less depth. The Government feels that no significant amount of new
evaluation information would have been gained, had more time been allocated to the offerors for
completing the exercise. More ADL might have been generated, or the products might have been more
complete, but it would not have added anything to the Government's assessment of the offerors' ability
to perform requirements analysis and design a real-time system.
6.3 EVALUATION TOOLS AND TECHNIQUES
As mentioned in section 5.4, the Government used the ESD acquisition support environment
and a set of checklist questions to assist in its evaluation of the offerors' SEE products. In addition, the
Government relied heavily on word processors to expedite its evaluation and associated documentation
efforts. The Government made the following observations regarding the use of these tools during the
evaluation of the offerors' SEE products.
6.3.1 EASE
As discussed in section 5.4, EASE use was attempted and largely abandoned during source
selection. The Government had trained a large portion of the evaluation team in the use of EASE;
however, the investment was not worth the return due to the limited EASE functionality and the
logistical problems associated with using a computer facility remote from the source selection. The
team members also felt that EASE was not really essential, given the volume of SEE materials
submitted.
At the time of the CCPDS-R FSD/P source selection, EASE provided text editing and Ada
compilation functions, but did not provide tools to assist in identifying control flows and data flows,
perform syntax-related browsing and cross-referencing, or assess compliance to coding/design
standards. It required significant manual overhead for such activities as loading tapes and providing
backups, and because EASE was not collocated with the source selection facility, there was time-
consuming travel to transport materials to the EASE facility for evaluation. The EASE facility also had
to be locked and other EASE users could not have access while source selection sensitive materials were
installed. Not until these largely logistical deficiencies can be overcome will EASE and similar tools
become useful tools for SEE evaluations. Therefore, before using EASE or similar tools on future
software engineering exercises, programs should first assess the functionality, ease-of-use, logistics,
and potential benefits. If the selected tool is deficient in any of these areas, then its exact use during the
evaluation should be clearly specified prior to source selection. If programs opt to use automated tools
in the future, regardless of whether or not any of these deficiencies still exist, these programs should
consider training fewer evaluation team members since the cost and time required to undergo such
training is likely to be significant.
6.3.2 Checklist Questions
As discussed in section 5.4.2, the Government concluded that the checklist questions prepared
prior to source selection did not prove as useful as had been anticipated and were not worth the amount
of time it took to prepare them. As an evaluation tool, the questions were not very beneficial and
serious consideration should be given to either not using them or not investing so much time in
preparing them.
6.3.3 Word Processing Capabilities
At the start of the evaluation of the offerors' SEE products, the Government SEE evaluation
team had only minimal word-processing capabilities. As the evaluation continued, more word-
processing capability was secured, and though it was helpful, it was still not at an adequate level. For
the CCPDS-R FSD/P source selection, the SEE evaluation would have been expedited if there had been
a separate word processor for each of the three groups which made up the team. A laser printer capable
of producing letter-quality text and viewgraphs is also necessary. In the future, programs conducting a
SEE should ensure that sufficient word-processing capability is provided so that there is no contention
for resources when documenting the SEE evaluation results against the factors and standards.
As a result of conducting the in-house audit, the Government concluded that a 1-day audit at the
offerors' facilities was both beneficial and of sufficient time. Future programs which institute software
engineering exercises are strongly encouraged to conduct such audits if time and logistics permit.
Furthermore, as a result of conducting the in-house audit, the Government made several observations
regarding the submission of detailed questions to the offerors and the maintenance of a transcript of
offeror responses.
As discussed in section 5.5.1, the Government altered its method for presenting detailed
questions to the offerors as a result of the first-pass evaluation. The revised approach, consisting of
two sets of questions, one submitted 24 hours in advance and one 5 minutes in advance, proved
successful; its use is therefore recommended for other programs that may conduct a SEE audit. In the
case of the 24-hour set, the Government was able to get answers to questions that the offerors could not
have answered as completely or in as much detail if they had not had some time to prepare. For the 5-
minute set, the Government was able to evaluate the offerors based on their ability to answer questions
extemporaneously which should have required no preparation time, assuming the offerors' teams were
fully trained in the methodologies as claimed in their proposals. The Government was able to direct
many questions to particular offeror SEE team members based on their area of responsibility on the
SEE, lending substance to the evaluation of the offerors' entire SEE team. The use of two sets of
questions for the offerors provided discriminating information that could not have been attained through
the use of only one set of questions.
As mentioned in section 5.5.3, the Government completed its evaluation of the offerors' SEE
products by updating the first-pass evaluations to reflect the audit results. Transcripts of the offerors'
recorded spontaneous audit responses, together with the offerors' formal responses to the "twenty-four
hour" questions, were entered into the offerors' official SEE product submissions.
In general, this approach to completing the SEE evaluation worked well. The one major
drawback was the method chosen for documenting the offerors' responses to the spontaneous
questions. Originally, the SSEB planned to have the offerors maintain the written transcripts of these
responses, but this decision was overruled. Consequently, the CCPDS-R FSD/P SSEB had to
maintain the transcript. However, transcribing the cassette tapes placed an overwhelming burden on the
limited SSEB resources since it was such an extremely tedious, time-consuming process. Therefore, it
is suggested that in the future, if SEE audits are held, either the SSEB be allowed to have the
offerors maintain the written transcripts or some alternative method be found, such as only
requiring magnetic recordings or videotapes of the responses.
SECTION 7
OFFEROR FEEDBACK
In addition to making its own observations regarding administering and conducting a software
engineering exercise, the Government solicited feedback from the CCPDS-R FSD/P offerors on their
impressions of the SEE. The Government accomplished this by providing to the offerors an optional
questionnaire at the conclusion of the in-house audit (see appendix D). In general, the offerors
responding to the questionnaire felt that the SEE was a valuable exercise. This section summarizes the
offeror feedback, addressing the areas of the size of the SEE, the exercise appropriateness, the
resources expended, and the benefits to the offerors.
7.1 SIZE
On the average, the offerors considered the size of the SEE to be of an appropriate level, both in
terms of the time required and the time allowed by the Government, and in terms of the SEE system that
the offerors were required to design. The offerors made no recommendations to either increase or
decrease the scope of the SEE or the time allotted for it.
7.2 APPROPRIATENESS
Generally speaking, the offerors considered the SEE to be a challenging, appropriate exercise in
relation to CCPDS-R FSD/P source selection. The offerors indicated that the focus of the SEE on
requirements analysis and design was very appropriate because of the perceived high level of risk
associated with the requirements analysis and design of an Ada system. Some offerors felt that metrics
should have been included in the SEE, since metrics collection, reporting, and evaluation are integral to
program management. Additionally, some offerors felt that not enough opportunity was provided to
demonstrate products that they had expended resources on (e.g., prototypes) for use in CCPDS-R
FSD/P.
7.3 RESOURCES
On the average, the offerors considered the resources expended on the SEE to be of a
reasonable level. The percentage of time allotted by the offerors was fairly equally divided between
each of the phases (i.e., requirements analysis, top-level design, detailed design, and preparation for
and participation in the in-house audit). The average amount of resources expended by the offerors was
a little less than 1 staff-year.
7.4 BENEFITS
Overall, the offerors assessed the SEE as beneficial, for several reasons. First, the SEE
provided the offerors an opportunity to exercise and refine their software engineering methodologies.
Prior to the SEE, the requirements analysis and design methodologies included in the offerors' SDPs
had not been fully utilized. The SEE provided an opportunity for the offerors to exercise their
methodologies on an actual, albeit small, program and to receive feedback internal to the offerors'
organizations on those methodologies and the products delivered as a result of utilizing them. This
feedback provided the offerors with an opportunity to refine their methodologies where necessary, prior
to utilizing them on a large program such as CCPDS-R. Second, the SEE provided an actual illustration
of the benefits of various software engineering approaches (e.g., prototyping, reusable components,
etc.). This provided offeror insight into the value of these approaches and/or the need to modify these
software engineering approaches for CCPDS-R FSD/P. Third, the SEE provided CCPDS-R related
experience which can then be applied to the CCPDS-R FSD/P phase. Finally, successfully
accomplishing the SEE using their chosen methodologies and Ada provided the offerors with increased
confidence in those methodologies, their Ada expertise, and their software development core team.
SECTION 8
CONCLUSIONS/RECOMMENDATIONS
Overall, both the SEE dry run and the incorporation of the SEE into the FSD/P source selection
were successful. This section summarizes the Government's conclusions and recommendations
resulting from this successful CCPDS-R SEE, first from the perspective of the dry run of the SEE and
then from the actual use of the SEE during source selection.
The Government assessed the dry run of the SEE as extremely beneficial, given the lessons
learned from that effort. More importantly, however, based on the results of the dry run, the
Government determined that the software engineering exercise demonstrated strong potential for being
an effective and discriminating source selection technique. This section summarizes the Government's
pre-source selection SEE conclusions and recommendations relative to the dry-run objectives, software
engineering, and Ada.
8.1.1 Objectives
As stated in section 2, the primary objectives of the MITRE SEE dry run were to generate a
clearly defined SEE system specification, to develop the ground rules for the offerors to follow when
conducting the SEE, to identify a discriminating set of evaluation criteria, and to assess whether the
SEE could reasonably be done in the time allotted to the offerors. The secondary objective was to
educate staff in software methodologies, Ada, ADL and DOD-STD-2167. The SEE dry run achieved
all these objectives satisfactorily. As a result of dry running the SEE, the Government was able to
a. Analyze the draft SEE system specification thoroughly, identify weaknesses in the draft
specification, resolve these weaknesses, and generate a final, concise SEE system
specification that contained heretofore omitted requirements pertinent to CCPDS-R, that
would serve as key technical discriminators
b. Develop a set of offeror instructions by which the Government scoped the SEE, expedited
both the offeror preparation effort and the Government evaluation, and maintained the
fairness and objectivity of the SEE effort
c. Identify a low-level set of technical discriminators geared specifically to the SEE system
specification and Ada, which the Government felt would enable it to separate form from
substance in the offerors' results and thus distinguish those offerors who have strong
software engineering/Ada capabilities from those who do not
d. Verify that the SEE, as scoped per the detailed offeror instructions, could reasonably be
done within the 3 1/2 weeks allotted to the offerors
e. Gain further knowledge, depending on the skill of the individual SEE team member, in
requirements analysis and design methodologies, Ada, ADL, and DOD-STD-2167, all of which
would prove useful to the CCPDS-R program office during both the source selection and
the FSD/P phase.
MITRE expended approximately fifteen staff-months of effort dry running the SEE, from initial
conceptualization of the SEE to completion of all SEE RFP documentation. Thus, the Government
considered the SEE dry-run effort somewhat costly; however, the Government considered that the
benefits far outweighed the costs. Since the SEE was a new source selection technique at ESD, the
Government considered the dry running of the SEE mandatory to test out the concept of the SEE, to
verify that the SEE was a reasonable and workable source selection technique, and to ensure the overall
success of the SEE as a source selection technique for CCPDS-R and other future programs.
Moreover, by dry running the SEE, the Government was better able to identify specification
requirements and evaluation criteria it felt would serve as true discriminators during source selection and
to train staff in software engineering methodologies and Ada for both present and future use. Given
these benefits, it is strongly recommended that when a program includes a software engineering
exercise as part of its source selection approach, it dry run the exercise to some extent before the release
of the exercise to offerors. As a minimum, the dry run should focus on the generation of an appropriate
system specification and on the development of discriminating evaluation criteria.
8.1.2 Software Engineering and Ada
A number of software engineering and Ada lessons learned resulted from the MITRE SEE dry
run. The major conclusions and associated recommendations are as follows:
a. More time is needed for requirements analysis than is traditionally allocated. This extra
time is due to the difficulty of correctly interpreting user requirements and intentions, as
well as the increased level of detail required by DOD-STD-2167. Programs with a large
software development component should therefore plan appropriately for this additional
time.
Therefore, programs using DOD-STD-2167 should consider tailoring the standard so that
specification and authentication of data occurs at achievable and realistic milestones.
e. With the Ada language and its tasking construct, applications software, and not just
operating system software, must consider and handle control flow issues such as deadlock
and process starvation. Thus, programs using Ada should ensure that the software
development methodologies employed on the program include techniques for designing
effective controls for the detection and/or prevention of deadlock and process starvation.
f. An Ada-based design language by itself is not a sufficient tool for effecting clarity and
communication of global system design information either among developers or between
developers and the Government. Hence, programs using ADL should require that a
graphical design representation technique, consistent with the ADL, also be used for
portraying design information.
g. Availability and retention of qualified Ada engineers constitutes a high risk on Ada
developments. Consequently, programs using Ada should investigate the use of different
contracting vehicles and incentives to obtain and retain qualified Ada engineers both within
the Government agencies and the contractors' organizations.
h. For programs using Ada, Ada training must occur at all levels of the software development
and acquisition teams. Proper training in Ada, however, takes longer than for other
languages. Therefore, programs designing and/or implementing in Ada should require
extensive Ada training for both Government and contractor personnel, as appropriate, and
should plan and account for any additional time and effort required to do so.
The Government also concluded that a software engineering exercise serves as an extremely
effective vehicle for training personnel in all aspects of software acquisition and software engineering.
The unique benefit of the SEE as a training approach is that it provides practical, interactive, hands-on
experience not offered in typical non-interactive theoretical courses, and it covers a range of issues,
such as requirements analysis methodologies, design methodologies, Ada, DOD-STD-2167, software
specifications and reviews, and software tools and techniques. While the SEE is an effective training
technique, it is costly to conduct since participants must dedicate significant amounts of time and effort
to reap the benefits. However, the benefits are considered to far outweigh the cost.
At the start of the CCPDS-R FSD/P source selection, the Government considered the purpose
of the SEE to provide discriminating information that would enable the Government to determine the
degree of risk associated with each CCPDS-R offeror's proposed software development methodology
and to determine the offeror's ability to organize a team fully knowledgeable in that methodology and in
Ada, the required CCPDS-R implementation language. This section provides a summary of the
conclusions the Government reached regarding the SEE versus its CCPDS-R objectives. It also
provides some general observations regarding the conduct of future software engineering exercises.
8.2.1 CCPDS-R SEE Objectives
At the completion of the FSD/P source selection, the Government concluded that the CCPDS-R
SEE satisfied its objectives resoundingly. General conclusions regarding the CCPDS-R SEE as a
source selection technique may be summarized as follows:
a. The SEE was an extremely beneficial source selection technical area evaluation technique.
By having offerors develop actual products using their proposed software development
approach, the SEE provided the Government invaluable insights as to what an offeror
really can do versus what an offeror claims he can do. It provided a concrete example that
demonstrated the degree of robustness of an offeror's methodology, the offeror's ability to
follow the proposed SDP, and the offeror's expertise in the proposed methodology and
tool set. It clearly demonstrated whether or not an offeror's proposed CCPDS-R FSD/P
team had sufficient expertise to design and develop a real-time system in Ada, as is required
for CCPDS-R.
b. The SEE served as an excellent vehicle by which to identify early problems in an offeror's
software approach. For example, the use of the SEE helped to point out incomplete
methodologies that did not address all of the software engineering issues, areas where the
requirements analysis and design methodologies conflicted, and inadequate ADL and
graphical design representation techniques. By uncovering these problems during source
selection, the Government was better able to focus on these problems immediately at
FSD/P contract award, rather than waiting until they become apparent in the development
phase, when problems are more costly and difficult to correct and the contractor is less
willing to make changes.
In addition to meeting its stated objectives, the SEE also provided some additional benefits not
originally anticipated. In particular, the SEE assisted in the source selection cost area evaluation by
yielding valuable information on offeror capabilities in such areas as level of experience with the
selected programming language and tools. This additional insight into actual offeror capabilities enabled
the Government to generate more representative inputs for its software cost estimation models and
thereby to assess cost and schedule risk associated with an offeror's software development approach for
the CCPDS-R FSD/P phase. Also, as the offeror feedback indicates, the SEE forced offerors to
solidify and test out their methodologies and teams and thus to make modifications, as appropriate, to
eliminate problems on their own prior to the FSD/P phase.
Given the overwhelming benefits that were reaped from the CCPDS-R FSD/P SEE, the
Government SEE team strongly recommends the use of a software engineering exercise for other
acquisition programs. However, the team does so with the following caveats:
a. To conduct a software engineering exercise is costly, both for the Government and for the
offerors. For the CCPDS-R SEE, the Government expended approximately twenty staff-
months to dry run the SEE and to evaluate the offerors' SEE products during source
selection. Offerors expended approximately ten staff-months each to carry out the exercise.
As the Government becomes more used to conducting SEEs, the level of Government
effort expended will decrease, perhaps to ten staff-months. However, in any case, if a
program elects to conduct a software engineering exercise, it should be aware of and able to
accommodate the additional cost.
b. Evaluation of a software engineering exercise may add significant time to a source selection
if many offerors respond or if the Government evaluation team is not well prepared in
advance. Consequently, programs that opt to conduct a SEE should consider approaches
for minimizing the time required to conduct and evaluate a SEE. Possible approaches
include staggered release of the technical exercise, Government dry running of the exercise as was
done for CCPDS-R, and strong management teams to evaluate the SEE products.
d. While a software engineering exercise provides discriminating source selection
information, it cannot be relied upon solely as a means to select a contractor. For example,
situations may occur, such as offerors not following the ground rules and using resources
not proposed in the SDP, which may invalidate the SEE results and consequently its
usefulness as an evaluation item. Thus, programs which elect to carry out a software
engineering exercise should include other evaluation items besides the exercise upon which
to make a source selection decision.
In some respects, the CCPDS-R program was fortunate in that it was the first program at ESD
to conduct a software engineering exercise. Consequently, industry was not sure what to expect and,
therefore, industry followed the SEE instructions completely and satisfied the SEE intent fully.
However, as software engineering exercises become more common, the response of industry may be to
develop "professional exercise teams" analogous to the specialized proposal preparation teams now
evident, or to bring in outside consultants or employ other similar vehicles (e.g., submitting too much
material) which will in essence circumvent or negate the intent of the exercise. Future programs that
choose to conduct software engineering exercises must be aware of this possibility and thus take
additional precautions where necessary to prevent this situation from arising during source selection.
LIST OF REFERENCES
4. Buhr, R., System Design With Ada, Englewood Cliffs, New Jersey: Prentice-Hall, Inc.,
1984.
5. Headquarters, U. S. Air Force, "Contracting and Acquisition: Source Selection Policy and
Procedures," AF Regulation (AFR) 70-15, 22 February 1984.
6. Headquarters, Electronic Systems Division, "Contracting and Acquisition: Source Selection
Policy and Procedures," ESD Supplement I to AFR 70-15, 21 March 1986.
APPENDIX A
SEE INSTRUCTIONS FOR THE OFFEROR
AND EXERCISE SPECIFICATION
CCPDS-R SOFTWARE ENGINEERING EXERCISE
1.0 PURPOSE
The purpose of the software engineering exercise (SEE) is to permit the Government to evaluate
an actual application of each offeror's software development methodology as proposed for use during
the CCPDS-R full-scale development/production (FSD/P) phase. The SEE will concentrate exclusively
on the offerors' approach to requirements analysis, design, and their interrelationship. The
Government will not evaluate as part of the SEE the offeror's approach to implementation, integration,
test, quality assurance, configuration management, staffing level, productivity measures, software
metrics collection, and other development activities not explicitly mentioned in the following
paragraphs.
Each offeror will provide a prototypical example of his proposed software development
approach, as applied to a sample problem taken from the missile warning domain. [The attachment],
"Exercise Specification," presents the requirements for the sample problem. In performing the exercise,
the offeror shall comply with all provisions of his proposed software development plan and with section
3.3 of the CCPDS-R system specification. To the maximum extent practical, the offeror shall make use
of development tools and procedures that are proposed for the CCPDS-R FSD/P phase, as this will be
viewed more favorably by the Government; deviations shall be noted by the offerors.
Participation in the exercise shall be limited to those individuals identified in the offeror's
proposal as part of the CCPDS-R full-scale development team. Subcontractors who will be responsible
for software development on CCPDS-R shall be active participants. Consultants shall be precluded
from participating. Each offeror will deliver to the Government all requested materials, in the formats
described in section 3, no later than 12 noon local time, 3 December 1986. The Government will
review this material for a period of time not to exceed two (2) calendar weeks. Following completion of
the Government review, a Government team will conduct an on-site visit at the offeror's facility, at
which time the offeror shall brief his approach and provide responses to Government requests for
clarification. The Government will coordinate the schedule for the on-site visit with the offeror upon
receipt of the offeror's exercise results. Preliminary plans are for the Government to conduct the on-site
visit during the week of 15-19 December 1986. Note that there will be no interaction between the
offeror and the Government during the offeror's implementation of the exercise. Should the offeror
have any questions on the exercise, the offeror is instructed to identify appropriate assumptions, to
document these assumptions, and proceed with the exercise based on those assumptions.
The Government will conduct its evaluation of the offeror's delivered materials and assess the
offeror's proposed methodologies using as a primary reference the offeror's Software Development
Plan (SDP) submitted with the CCPDS-R proposal, and particularly the software standards and
procedures contained within the SDP. The offeror may submit with the SEE materials delivered on
3 December 1986 an augmentation to the SDP, not to exceed fifteen (15) pages, which provides further
concise, technical, and explicit details regarding the offeror's proposed software development approach
and methodologies. The Government will consider any such augmentation as part of the offeror's
proposal and subject to Government evaluation.
The Government will employ automated tools to conduct its evaluation of the offeror's
delivered materials. Therefore, as described in section 3, the offeror is required to deliver some of the
exercise products in machine-readable format. In order to assess the compatibility of the Government's
tools and the offeror's machine-readable products, the offeror is requested to deliver to the Government
no later than 12 noon local time, 19 November 1986, a demonstration tape containing sample files of
the offeror's methodology products (e.g., Ada-based design language (ADL) listings, etc.) in the same
format as will be submitted at the conclusion of the exercise period. The Government will not evaluate
the contents of this demonstration tape, but will merely use the tape to study and resolve any
compatibility issues that may develop between the Government's tools and the offeror's tape output.
The sample files on the demonstration tape do not need to represent actual products of the exercise; they
need only represent general products of the offeror's proposed methodologies, the types of which the
offeror will submit for evaluation at the end of the exercise period.
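
For illustration only, the kind of ADL product such a demonstration tape might carry is sketched
below as a compilable Ada package specification whose comments record design intent; the unit and
all of its names are hypothetical and are taken from no offeror's submission.

   --  Hypothetical sample ADL unit suitable for a demonstration tape.
   --  Purpose   : exercises tape transfer and the Government's analysis tools only.
   --  Interfaces: a single operation with explicitly typed inputs and outputs.
   package Sample_Component is

      type Status is (Ok, Failed);

      --  Process accepts a text input and reports success or failure.
      procedure Process (Input  : in  String;
                         Result : out Status);

   end Sample_Component;
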
At the conclusion of the exercise period on 3 December 1986, the offeror shall deliver the
following items to the Government for evaluation:
a. A complete software architecture for the sample problem. This architecture shall contain an
identification of software components, an allocation of functions to these software
components, a preliminary specification of interfaces, and an indication of control and data
flow throughout the system.
b. For two or more offeror-selected components of the system, all requirements analysis
conclusions reached and documentation. With respect to the selected components, the
requirements analysis shall represent a complete utilization of the tools and procedures
proposed by the offeror for use on CCPDS-R. The offeror shall identify any deviations
from these tools and procedures and the associated rationale for these deviations in his
briefing to the Government.
c. For two or more offeror-selected components of the system, all preliminary design
documentation, including requirements traceability, ADL listings, and graphics products.
With respect to the selected components, the preliminary design documentation shall
represent a complete utilization of the tools and procedures proposed by the offeror for
CCPDS-R. The offeror shall identify any deviations from these tools and procedures and
the associated rationale for these deviations in his briefing to the Government.
d. For at least one offeror-selected component of the system, all detailed design
documentation, including requirements traceability, ADL listings, and graphics products.
With respect to the selected component(s), the detailed design documentation shall
represent a complete utilization of the tools and procedures proposed by the offeror for
CCPDS-R. The offeror shall identify any deviations from these tools and procedures and
the associated rationale for these deviations in his briefing to the Government.
All textual products of the exercise, including requirements analysis conclusions and
documentation, ADL listings, and other design documentation shall be delivered to the Government
both in hardcopy form and in machine-readable, 9-track tape. Exception will be made for materials that
the offeror does not propose to create and/or maintain online during the CCPDS-R FSD/P contract. In
particular, graphical representations shall be submitted in hardcopy form. The tape shall be in 9-track
1600 bpi format in accordance with ANSI X3.27-1978, ASCII labelled, and with an identified record
size and block size. The block size shall be 512 bytes. For readability, all tabs should be expanded to
spaces. The offeror shall provide ten (10) copies of all hardcopy products. The products delivered
shall be clear, coherent, legible, and prepared in sufficient detail for effective evaluation. Elaborate
documentation, expensive binding, detailed art work or other embellishments are unnecessary. The
offeror shall include with these products indices delineating the subject and contents of the hardcopy
material package and the 9-track tape; the operating system command(s) used to create the tape; a list of
ADL compilation units; and a list of the compilation order of these units.
In addition to the delivered products described above, the offeror shall provide a briefing to the
Government during the on-site visit that summarizes his experience in carrying out the exercise and
describes the products generated. The briefing shall not exceed three (3) hours in duration. The topics
presented shall include the following:
ATTACHMENT
EXERCISE SPECIFICATION
1.0 SCOPE
The exercise system will create scenarios under user direction and will simulate the CCPDS-R
missile warning (MW) capability in real time.
3.0 REQUIREMENTS
The exercise system shall maintain MW information and display the information in tabular form
in real time. Specifically, the exercise system shall create scenarios under user direction and store each
created scenario in a separate scenario file. It shall use a generated scenario to run the MW simulation in
real time. The system shall provide the capability for the user to run a simulation while editing,
deleting, creating, saving, or querying a scenario file (possibly the same file). The design for the
exercise system shall be modular to facilitate changes in software components which are needed to
accommodate future changes in operational requirements.
3.2 HARDWARE
The exercise system will generate tabular displays only. No special graphics hardware or
capabilities shall be used. The user interface shall be designed to operate on a single dumb terminal
with keyboard entry device.
3. There shall be five missile launch origin locations, designated as
MLOC1 through MLOC5, and five predicted impact/nuclear detonation locations,
designated as IPLOC1 through IPLOC5.
4. Sensor connectivity shall be from each sensor to the command center.
5. The exercise system shall simulate the transmission and processing delay incurred
from the time a sensor transmits a missile warning message until the message has been
processed by the system and made ready for display. The processing delay parameter shall
be user selectable from 0-99 seconds and shall be constant during a given missile warning
simulation.
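
For illustration only, the location designators and processing delay range stated in items 3 and 5
above might be captured in Ada declarations such as the following; the package and type names are
assumptions, not part of the specification.

   --  Hypothetical declarations for the values stated in items 3 and 5 above.
   package Exercise_Types is

      --  Five missile launch origin locations and five predicted
      --  impact/nuclear detonation locations.
      type Launch_Location is (MLOC1, MLOC2, MLOC3, MLOC4, MLOC5);
      type Impact_Location is (IPLOC1, IPLOC2, IPLOC3, IPLOC4, IPLOC5);

      --  User-selectable transmission and processing delay, in seconds,
      --  held constant during a given missile warning simulation.
      subtype Processing_Delay_Seconds is Integer range 0 .. 99;

   end Exercise_Types;
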
3.3.2 Missile Warning Data
Missile warning data shall consist of missile launches and nuclear detonations (NUDETs). A
missile launch message shall consist of launch origin location, launch type (ICBM, SLBM), reporting
sensor, position of predicted impact, and time of launch. Each launch shall be detected by (i.e.,
associated with) only one sensor. A nuclear detonation message shall consist of time and location.
Launch locations and impact locations shall be designated as described in 3.3.1.
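
A hedged sketch of how the message contents above could be modelled in Ada follows, building on
the hypothetical Exercise_Types package sketched under 3.3.1; the record layouts and the sensor
identifiers (taken from the summary display figure later in this specification) are illustrative only.

   with Exercise_Types;  --  hypothetical package sketched under 3.3.1 above

   package MW_Messages is

      type Launch_Kind is (ICBM, SLBM);

      --  Sensor set as shown in the summary display figure.
      type Sensor_Id is (PPE, PPW, BMEWS_I, BMEWS_II, BMEWS_III, IR_I, IR_II);

      --  Missile launch message: origin, launch type, reporting sensor,
      --  position of predicted impact, and time of launch.
      type Launch_Message is record
         Origin           : Exercise_Types.Launch_Location;
         Kind             : Launch_Kind;
         Reporting_Sensor : Sensor_Id;
         Predicted_Impact : Exercise_Types.Impact_Location;
         Launch_Time      : Duration;
      end record;

      --  Nuclear detonation (NUDET) message: time and location.
      type NUDET_Message is record
         Detonation_Time : Duration;
         Location        : Exercise_Types.Impact_Location;
      end record;

   end MW_Messages;
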
The user interface shall be menu driven and user friendly. All user inputs shall be validated for
proper format and range of values. The user shall be notified of any entries that are erroneous or that
cannot be processed for any other reason. Error messages shall be self-explanatory and shall specify,
to the extent practical, the cause and location of the error.
General user capabilities to be provided shall include the capability to start and stop a session;
the capability to terminate the scenario generator (SG) and/or missile warning simulator (MWS) and exit
to the main menu upon user request; the capability to display the directory of scenario file names; the
capability to select the processing delay parameter (see 3.3.1); and the capability to interface with the
scenario generator and missile warning simulator as described in 3.5 and 3.6, respectively.
All user inputs shall be acknowledged within one second of the input. For data entered by the
user, the time from completion of entry until the database is modified to reflect the update shall not
exceed two seconds. An advisory shall be provided within two and one half seconds if the system
cannot complete such an update. At a minimum, these performance requirements shall be met on
dedicated processing equipment and with at least twenty stored scenario files, consisting on the average
of 5,000 combined missile launch and NUDET events.
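
For reference, the timing and sizing figures above can be restated as Ada constants; this is a sketch
only, and the package and constant names are assumptions.

   --  Hypothetical constants restating the performance figures above.
   package UI_Performance is

      Input_Acknowledge_Limit : constant Duration := 1.0;  --  acknowledge any user input
      Database_Update_Limit   : constant Duration := 2.0;  --  entry complete to database updated
      Update_Advisory_Limit   : constant Duration := 2.5;  --  advisory if the update cannot complete

      Min_Stored_Scenarios    : constant := 20;            --  at least twenty stored scenario files
      Avg_Events_Per_Scenario : constant := 5_000;         --  combined launch and NUDET events

   end UI_Performance;
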
The exercise system shall be able to generate three displays for MW data: a missile launch
summary display, a predicted impact/NUDET summary display, and a message display. The summary
displays shall present the MW information received by the command center as generated by a selected
scenario, summarized from the start of the scenario, in real time, and in accordance with the specified
processing delay (see 3.3.1). The formats for the missile launch summary display and the predicted
impact/NUDET summary display shall be as specified in figures 1 and 2, respectively. The message
display shall sequentially list the messages received by the command center, as received in real time.
The capability shall be provided to display the contents of at least the five most recently received
messages in the scenario. Display updates shall be processed and reflect a scenario event within one-
half second of the activation time of the event. (Activation time is defined in section 3.5.)
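
The display set and update constraint above might be declared as follows; this is a sketch only, with
assumed names.

   --  Hypothetical declarations for the three MW displays described above.
   package MW_Displays is

      type Display_Kind is (Launch_Summary, Impact_NUDET_Summary, Message_Display);

      Recent_Message_Depth : constant := 5;             --  at least the five most recent messages
      Update_Limit         : constant Duration := 0.5;  --  from event activation to display update

   end MW_Displays;
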
The SG shall only be activated and deactivated as a result of user action. The SG shall be able
to create, delete, edit and save files containing scenario data. Edit capabilities for a selected scenario file
shall include changing the contents of events in the scenario file, adding events to the scenario file, and
deleting events from the scenario file. The capability shall be provided to save a scenario and any
changes to it as a new file or as the current file. Each event in a scenario shall have a unique activation
time to the nearest tenth of a second, where the activation time represents the time the reporting sensor
transmits the missile warning message. The user shall be precluded from entering multiple events into a
scenario with the same activation time. The user shall be able to query an individual scenario file to
search for events based on reporting sensor and/or time of event activation. The design for the exercise
system shall be flexible to allow as future growth the capability to perform this query across all scenario
files. The SG shall accept inputs from the keyboard to perform the above functions. There shall be a
default scenario file consisting of a total of 5,000 individual missile launch and NUDET events and their
associated times of activation covering a twenty-minute scenario period. The SG shall support a total of
at least 40,000 missile launch events and 10,000 NUDET events contained in one or more scenarios.
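
A minimal interface sketch covering the SG capabilities above is given below. It is not any offeror's
design; it builds on the hypothetical MW_Messages package sketched under 3.3.2, and all names,
parameters, and ranges are assumptions.

   with MW_Messages;  --  hypothetical package sketched under 3.3.2 above

   package Scenario_Generator is

      type Scenario_File is private;

      --  Activation time to the nearest tenth of a second (range assumed).
      type Activation_Time is delta 0.1 range 0.0 .. 86_400.0;

      --  Raised when an event is entered with an activation time already
      --  present in the scenario; activation times must be unique.
      Duplicate_Activation_Time : exception;

      procedure Create (File : out Scenario_File; Name : in String);
      procedure Delete (Name : in String);
      procedure Save   (File : in Scenario_File; As_Name : in String);

      procedure Add_Launch (File    : in out Scenario_File;
                            At_Time : in Activation_Time;
                            Msg     : in MW_Messages.Launch_Message);

      procedure Add_NUDET  (File    : in out Scenario_File;
                            At_Time : in Activation_Time;
                            Msg     : in MW_Messages.NUDET_Message);

      --  Query of a single scenario file by reporting sensor (a query by
      --  activation time would be declared similarly).
      function Events_From (File   : Scenario_File;
                            Sensor : MW_Messages.Sensor_Id) return Natural;

   private
      type Scenario_File is record
         Name  : String (1 .. 64) := (others => ' ');
         Count : Natural := 0;
      end record;
   end Scenario_Generator;
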
3.6 MW SIMULATION
The MWS shall provide the user with the capability to select and run a scenario contained in a
scenario file. The MWS shall run this scenario in real time, generating the missile launch summary
display, the predicted impact/NUDET summary display, or the message display, as specified by the
user. The MWS shall be activated or deactivated only upon user request. Capabilities shall be provided
for the user to select the processing delay parameter (see 3.3.1), to suspend the simulation, to resume
the simulation, to fast forward the simulation (where fast forward means the run time between event
activations is reduced by two), and to stop the fast forward capability and return to the normal run time
between event activations. The user shall also have the capability to select which of the three MW
displays he wishes to view, and to move to other displays while the simulation is running.
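
A minimal sketch of the fast-forward timing rule follows, interpreting "reduced by two" as a factor of
two; the function name and structure are assumptions, not a mandated design.

   --  Hypothetical helper: wall-clock wait to use between event activations.
   function Run_Interval (Gap_Between_Activations : Duration;
                          Fast_Forward            : Boolean) return Duration is
   begin
      if Fast_Forward then
         --  Fast forward: the run time between event activations is halved.
         return Gap_Between_Activations / 2;
      else
         --  Normal playback: real-time gap between activations.
         return Gap_Between_Activations;
      end if;
   end Run_Interval;
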
[Figure 2. Predicted impact/NUDET summary display format (not reproduced): sensor rows for PPE,
PPW, BMEWS I, BMEWS II, BMEWS III, IR I, and IR II; entries for predicted impacts (PI), total PI,
and total number of NUDETs.]
The exercise system shall provide the capability for the user to run the MWS and SG
simultaneously, either on the same or different scenario files, while still meeting the performance
requirements specified herein. Formats for the displays when both are running simultaneously will be
contractor defined as part of the design effort.
When both the SG and the MWS are processing the same scenario, the MWS displays shall
reflect a modification to an event in the scenario only if the event has not yet been processed by the
MWS; otherwise, the MWS displays shall not reflect the changes.
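
A minimal sketch of the rule above, under the assumption that the MWS tracks the activation time of
the last event it has processed; the names are illustrative only.

   --  Hypothetical check: an edit to a scenario event is reflected by the
   --  running MWS only if that event has not yet been processed.
   function Change_Is_Visible (Event_Activation : Duration;
                               Last_Processed   : Duration) return Boolean is
   begin
      return Event_Activation > Last_Processed;
   end Change_Is_Visible;
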
APPENDIX B
This appendix contains the information included in the CCPDS-R RFP Instructions for
Proposal Preparation for incorporating the software engineering exercise as part of the CCPDS-R
source selection. This information, provided to the offerors in the initial release of the RFP, identifies
the requirement for all offerors to carry out the SEE as part of the CCPDS-R proposal effort. It also
provides a high-level set of instructions detailing to the offerors what is expected of them in carrying out
the SEE.
CCPDS-R IFPP SEE MATERIAL
Software Engineering Exercise (SEE). The offeror shall carry out a software engineering
exercise which will be defined by the Government. [The attachment] contains the general ground rules
for the conduct of the SEE and a brief description of the SEE products to be generated and submitted by
the offeror for Government evaluation. The Government will provide the SEE specification and the
detailed SEE ground rules following receipt of proposal, at which time the SEE shall commence.
ATTACHMENT
PRELIMINARY INSTRUCTIONS FOR THE OFFEROR
1.0 PURPOSE
The purpose of the software engineering exercise (SEE) is to permit the Government to evaluate
an actual application of each offeror's software development methodology as proposed for the
CCPDS-R full-scale development/production (FSD/P) phase. The SEE will concentrate exclusively on
the offerors' approach to requirements analysis, design, and their interrelationship. The offeror's
approach to implementation, integration, test, quality assurance, configuration management, and other
development activities not explicitly mentioned in the following paragraphs will not be evaluated by the
Government as part of the SEE.
Each offeror will provide a prototypical example of his proposed software development
approach, as applied to a sample problem taken from the missile warning domain. The Government
will define the sample problem and provide the SEE problem specification to the Offeror following
receipt of proposal. In performing the exercise, the offeror shall comply with all provisions of his
proposed Software Development Plan and with section 3.3 of the CCPDS-R System Specification;
deviations shall be noted by the offerors.
Participation in the exercise shall be limited to those individuals identified in the offeror's
proposal as part of the CCPDS-R full-scale development team. Subcontractors who will be responsible
for software development on CCPDS-R shall be active participants. Consultants shall be precluded
from participating.
Each offeror will be allocated a period of four (4) calendar weeks from receipt of the exercise
materials until delivery to the Government of all requested materials in the formats described below.
The Government will review this material for a period of time not to exceed two (2) calendar weeks.
Following completion of Government review, the Government will conduct an on-site visit at the
offeror's facility, at which time the offeror shall brief his methodology approach to the Government and
provide responses to Government requests for clarification. The Government will coordinate the
schedule for the on-site visit with the offeror upon receipt of the offeror's exercise results. Note that
there will be no interaction between the offeror and the Government during the four week exercise
period. Should the offeror have any questions on the exercise, the offeror is instructed to identify
appropriate assumptions, to note these assumptions, and proceed with the exercise based on those
assumptions.
c. For two or more offeror-selected components of the system, all preliminary design
documentation, including requirements traceability, Ada-based design language (ADL)
listings, and graphics products
d. For at least one offeror-selected component of the system, all detailed design
documentation, including requirements traceability, ADL listings, and graphics products.
All textual products of the exercise, including requirements analysis conclusions and
documentation, ADL listings, and other design documentation shall be delivered to the Government
both in hardcopy form and in machine-readable, 9-track 1600/6250 bpi tape format in accordance with
ANSI X3.27-1978. Exception will be made for materials which the offeror does not propose to create
and/or maintain online during the CCPDS-R FSD/P contract. In particular, graphical representations
shall be submitted in hardcopy form. The offeror shall provide six (6) copies of all hardcopy products.
The products delivered shall be clear, coherent, legible, and prepared in sufficient detail for effective
evaluation. Elaborate documentation, expensive binding, detailed art work or other embellishments are
unnecessary.
In addition to the delivered products described above, the offeror shall provide a briefing to the
Government that summarizes his experience in carrying out the exercise and describes the products
produced. The briefing shall not exceed three (3) hours in duration. The topics presented shall include
the following:
1. Management approach
The briefing to the Government shall be presented between one and two calendar weeks after
delivery to the Government of the products of the exercise described in points (a) - (d) above. The
briefing shall not include any discussion of further work which the offeror may have completed
following completion of the four week SEE period. All participants in the exercise shall be present at
the briefing to respond to Government requests for clarification. All offeror responses to these
Government clarification requests together with the briefing presentation material and the products
identified in items (a)-(d) above shall be considered part of the offeror's proposal and subject to
evaluation by the Government.
a. Additional work accomplished on the SEE after the initial 4-week period
b. Level of staffing
APPENDIX C
Listed below is the material included in the CCPDS-R RFP section M, evaluation criteria, for
the SEE. This material identifies the basis on which an offeror's SEE products will be judged by the
Government.
The offeror will be evaluated on his familiarity with the selected software
development methodology and on his capability to utilize Ada. The offeror will be
evaluated on his corporate Ada/Software Engineering expertise; his requirements
analysis and design approaches and their inter-relationships; the robustness and
cohesion of his requirements analysis and design methodologies; his familiarity
and expertise with the methodologies; his familiarity with the tool set and the
development environment; the robustness, cohesion, and completeness of his
exercise design; his ability to address and analyze real-time requirements and
issues; his clarity and communication of design, including the use of ADL to
express design; and his compliance with the exercise specification requirements
and SDP. A visit to each offeror will be scheduled approximately six (6) weeks
after receipt of proposals to evaluate the software engineering exercise. The
evaluation will be considered as pass/fail; there will be no opportunity to re-
accomplish the exercise. The visiting Government team will be assisted by
It should be noted that while the Software Engineering Institute (SEI) was identified in section M as a
possible member of the Government SEE evaluation team, no representatives of the SEI did in fact
participate.
APPENDIX D
SEE QUESTIONNAIRE
This appendix contains the optional questionnaire which was submitted to all CCPDS-R FSD/P
SEE offerors.
SEE QUESTIONNAIRE
PART I. PLEASE CIRCLE YOUR RESPONSE FOR EACH OF THE QUESTIONS BELOW.
b. somewhat beneficial
c. not beneficial
9. The instructions given for the briefing were
a. adequate
b. somewhat adequate
c. not adequate
10. The questions you were given the day before the briefing were
a. too numerous
b. adequate in number
c. too few
11. The questions you were given the day before the briefing were
a. relevant
b. somewhat relevant
c. not relevant
12. The questions you were given during the briefing were
a. too numerous
b. adequate in number
c. too few
13. The questions you were given during the briefing were
a. relevant
b. somewhat relevant
c. not relevant
14. Assembling the SEE team was
a. difficult
b. somewhat difficult
c. not difficult
PART II. DESCRIBE THE MAJOR BENEFITS YOU GOT FROM PARTICIPATION IN THE SEE.
PART III. FOR EACH OF THE FOLLOWING PHASES OF DEVELOPMENT FOR THE SEE,
DESCRIBE THE LEVEL OF EFFORT SPENT IN EACH PHASE (PERCENT OF TOTAL SEE
EFFORT) AND ANY DIFFICULTIES YOU RAN INTO DURING EACH PHASE. ALSO,
IDENTIFY THE TOTAL EFFORT (I.E., NUMBER OF STAFF-MONTHS EXPENDED ON THE
SEE).
A. Requirements Analysis
B. Top-Level Design
C. Detailed Design
D. Prototyping
E. Briefing
PART IV. WAS THERE ANYTHING YOU WOULD HAVE LIKED THE GOVERNMENT TO
HAVE SEEN IN THE SEE PRODUCTS BUT THERE WAS NO PLACE TO PUT IT?
PART V. HOW SHOULD THE SEE BE MODIFIED TO INCREASE ITS BENEFITS TO FUTURE
ACQUISITIONS?
PART VI. USE THIS SPACE FOR ANY ADDITIONAL COMMENTS.
GLOSSARY
Acronyms
FM file manager
FSD/P full-scale development/production