Handbook for Object-Oriented Technology in Aviation (OOTiA): Volume 2: Considerations and Issues, January 30, 2004
This Handbook does not constitute Federal Aviation Administration (FAA) policy or guidance nor is it
intended to be an endorsement of OOT. This Handbook is not to be used as a standalone product but, rather,
as input when considering issues in a project-specific context.
Considerations and Issues
Contents
2.1 INTRODUCTION.............................................................................................................................................1
2.1.1 Background.............................................................................................................................................2
2.1.2 Purpose and Organization of Volume 2.................................................................................................2
2.2 CONSIDERATIONS BEFORE MAKING THE DECISION TO USE OOT...............................................................3
2.2.1 Reality of Benefits...................................................................................................................................4
2.2.2 Project Characteristics...........................................................................................................................4
2.2.3 OOT Specific Resources.........................................................................................................................4
2.2.4 Regulatory Guidance..............................................................................................................................5
2.2.5 Technical Challenges.............................................................................................................................5
2.3 CONSIDERATIONS AFTER MAKING THE DECISION TO USE OOT.................................................................7
2.3.1 Considerations for the Planning Process...............................................................................................8
2.3.2 Considerations for Development Processes -- Requirements, Design, Code, and Integration.............9
2.3.3 Considerations for Integral Processes.................................................................................................13
2.3.4 Additional Considerations....................................................................................................................18
2.4 OPEN ISSUES..............................................................................................................................................19
2.5 SUMMARY..................................................................................................................................................20
2.6 REFERENCES...............................................................................................................................................21
APPENDIX A RESULTS OF THE BEYOND THE HANDBOOK SESSION............................................24
APPENDIX B MAPPING OF ISSUE LIST TO CONSIDERATIONS........................................................26
APPENDIX C ADDITIONAL CONSIDERATIONS FOR PROJECT PLANNING..................................38
Considerations and Issues
Figures
Figure 2.2-1 Original Classification Scheme for the Beyond the Handbook Questions...............................................3
2.1 Introduction
The introduction of object oriented techniques and tools to aviation software development presents challenges to
understanding their effect on safety and certification. As discussed in Volume 1, there is an increasing desire among
aviation software developers to use object-oriented technology (OOT), including object oriented modeling, design,
programming, and analysis, in the development of aviation applications. These desires are fueled, at least in part, by
claims of increased efficiency in the development of complex systems through using reusable components. Object
oriented (OO) design, with the ability to encapsulate design decisions, is considered by some to be “the most
important low-level design technology in modern software engineering” [19].
In response to the aviation industry’s desire to use OOT, the Federal Aviation Administration (FAA) enlisted the
National Aeronautics and Space Administration (NASA) to help start the Object Oriented Technology in Aviation
(OOTiA) project. This project is sponsoring research and conducting workshops designed to identify concerns about
OOT relevant to safety and certification and to develop recommendations for its safe, DO-178B-compliant use.
The OOTiA project was initially based on work by the Aerospace Vehicle Systems Institute (AVSI). AVSI is a
research consortium for the aerospace industry working to reduce the costs of complex subsystems in aircraft. As
part of this consortium, Boeing, Honeywell, Goodrich, and Rockwell Collins collaborated on an AVSI project titled
Certification Issues for Embedded Object-Oriented Software, the goal of which was to mitigate the risk that
individual projects face when certifying systems with OO software. The AVSI project proposed a number of
guidelines for producing object-oriented DO-178B compliant software [2].
In 2001, a committee including representatives from the AVSI project, FAA, and NASA Langley Research Center,
was formed to extend the AVSI work for the benefit of the entire aviation software community. This committee
developed the following approach for accomplishing this purpose:
Establish a web site dedicated to collecting data on safety and certification concerns
Hold public workshops to which the aviation software community would be invited to discuss concerns
Document each key concern raised either through the web site or the workshops
Adapt the AVSI guidelines to address the concerns
Produce a handbook.
This report is the second volume of the OOTiA handbook. This volume focuses expressly on the concerns and
questions about OOT that have been collected through the OOTiA web site and workshops, with the goal of raising
awareness of aspects of OOT that may complicate compliance with DO-178B. Consequently, the tone of this report
may seem overly negative to some readers, just as the tone of other volumes may seem overly positive to others.
Readers of this report should carefully note the following:
Comments recorded through the OOTiA activities are cited throughout the report. The purpose of citing
recorded comments is not to imply their individual validity, but to account for the data that has been
collected and show the basis for a concise set of key concerns relevant to DO-178B compliance that are
derived from the data as a whole.
The key concerns documented in this report do not constitute a complete set of safety and certification
concerns.
Nearly all of the submitted issues used a particular nomenclature for OO concepts and constructs (class,
subclass, superclass, method). For simplicity, the same nomenclature is used in this volume. This usage
does not imply a preference for languages or tools that use these terms over those that do not.
This volume does not discuss approaches for how to resolve the concerns. Other volumes of the handbook
(Volume 3 in particular) are intended to provide resolutions.
Note also that this volume assumes that the reader has a fundamental understanding of OOT concepts and
languages. Further information and references on these can be found in Volume 1.
Considerations and Issues
2.1.1 Background
On September 14, 2001, the OOTiA web site http://shemesh.larc.nasa.gov/foot/ was launched by NASA Langley
Research Center, and the aviation software community was invited by email to participate in a dialogue about OOT.
The email distribution list comprised over 900 individuals who had expressed an interest in or attended software
related functions sponsored by the FAA. Individuals were invited to participate by submitting comments or concerns
about OOT to an issue list kept on the OOTiA web site, by attending public workshops organized by the OOTiA
committee, and by reviewing products from the effort.
To date, 103 separate concerns¹ about various aspects of OOT have been collected through the web site. The web
site initially requested that each submission include a topic, a statement of the concern, and a proposed solution (if
known). Neither individual nor company names were recorded with the submittals. No specific guidance was given
regarding what could or could not be submitted. Later updates of the web site simply requested that concerns be
emailed to a point of contact at NASA Langley. The web site continues to accept new submissions.
Each submission to the web site is added to a list titled “Issues and Comments about Object Oriented Technology in
Aviation.” This issue list is posted on the web site and updated as new issues are submitted. Every entry is added
to the list exactly as submitted; entries to the list are not edited. As of the date of publication
of this report, the web site is operational. Decisions about how to respond to future submissions or when to close the
web site have not been made.
Considerable overlap and similarities are evident when reviewing the entries in the issue list. The OOTiA committee
originally determined that most of the issues on the issue list related to the following eight topics: single inheritance,
multiple inheritance, reuse and dead/deactivated code, tools, templates, overloading, type conversion, and inlining.
The OOTiA committee drafted papers for each of these topics, drawing heavily from the original AVSI documents.
In April 2002, a public workshop was held to introduce the OOTiA project, to discuss the draft papers, and to
provide an opportunity for people to raise additional concerns about OOT. Workshop attendance included 13 FAA
representatives and more than 100 aviation industry representatives supporting both airborne and ground-based
applications. After this workshop, the individual draft papers were revised and collated into a single document:
“Handbook for Object Oriented Technology in Aviation.” Also, a ninth topic, traceability, was added, and a paper
on that topic was included in the draft handbook.
The draft handbook served as the basis for discussion at a second public workshop, held in March 2003. Attendance
at this workshop was similar in number and composition to the first workshop. Results of both workshops are
available on the OOTiA web site. Most of the workshop was devoted to individual sessions on specific chapters of
the handbook. The purpose of those sessions was to review and modify the draft chapters, and to document new
issues, if raised. Two other sessions at the workshop were not directly tied to the handbook: Beyond the Handbook
and Open Issues. The Beyond the Handbook session provided participants with an opportunity to discuss questions
that should be answered before making a decision to use OOT on a project. The Open Issues session provided
participants the opportunity to discuss any concerns they thought had not been adequately covered in the draft
handbook.
¹ There are actually 107 entries to the list, but 4 of them are duplicates.
2. DO-178 issues: Are all the objectives (Annex A) in DO-178B compatible with OOT? Is there anything specific to OOT that is not addressed in DO-178B?
3. Handbook issues: How should the handbook be applied to a practical project? For example, is it a certification aid or best practice? Is it adequate – that is, what needs to be extended, added, and changed?
Figure 2.2-1 Original Classification Scheme for the Beyond the Handbook Questions
Appendix A contains the original questions recorded during the brainstorming session. Since that time, the questions
from the session have been re-examined and placed in one of the following five categories:
Reality of the Benefits
Project Characteristics
OOT Specific Resources
Regulatory Guidance
Technical Challenges
The rest of this section briefly discusses each of these five categories and the key questions and associated issues
within each.
OOT-specific resources include coding standards for OO source code languages (for example, Ada95, Java, and C++). Other important standards include
internal process standards that define life cycle activities and data associated with OOT and how those map to
activities and data specified in DO-178B. Companies that commit to OOT may also have standards for packaging
OO components for reuse. For example, standards may cover packaging development and verification artifacts from
one project such that they do not conflict with other artifacts when they are brought together to build a larger or
different system.
OO tools are another important resource to consider. Some OO tools introduce new levels of abstraction, such as the
visual model level, that might not directly correspond to abstraction levels (high- or low-level requirements or
design) in DO-178B. Factors to consider here include compatibility of new OO tools with existing tools and
integrated development environments, notations, and processes; configuration management; and qualification costs.
The project characteristics together with the OOT specific resources within a company will influence the level of
involvement, or degree of oversight, that the FAA has with a project. This is a non-trivial consideration with respect
to both time and cost. The level of FAA involvement will dictate the number of software reviews, the stages of
involvement, and the nature of the review [16]. This level of regulatory involvement is closely related to the fourth
of the high-level questions raised at the workshop.
2.2.5.1 Requirements
Several questions asked whether OOT is an adequate approach for requirements development and implementation.
That is, do OO approaches to requirements help with the correct specification and implementation of intended
functionality? At least two points were raised: (1) the difference between approaches to requirements decomposition
(functional versus object), and (2) documentation of requirements (text versus graphics).
With functional decomposition, the typical programming unit is some form of subprogram, such as a function,
subroutine, or procedure. Each subprogram typically performs a single specific function, where good programming
practice calls for maximizing functional cohesion within a subprogram and minimizing coupling between
subprograms. Applications are built by sequencing these functional building blocks—“first do this, then do that.”
Verification, in turn, starts with the functionality of an individual subprogram and works its way up by testing
increasing levels of functionality.
In contrast to functional decomposition, OOT focuses on objects and the operations performed by or to those
objects. In an OO program, a class, which is a set of objects that share a common structure and a common behavior,
is the structural element most comparable to a subprogram. Operations related to a given functional requirement
often are distributed among objects associated with different classes. The concern here was whether the distribution
of functionality inherent in OO systems complicates assurance of the intended functionality.
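To make the contrast concrete, consider the following hypothetical sketch (the class names and the pitch-rate requirement are invented for illustration, not taken from any submitted issue), in which a single functional requirement, limiting the commanded pitch rate, is implemented across several collaborating C++ classes:

    // Hypothetical sketch: one functional requirement ("limit the commanded
    // pitch rate") implemented across several collaborating classes.
    class RateLimiter {
    public:
        explicit RateLimiter(double maxRate) : maxRate_(maxRate) {}
        double limit(double rate) const {
            if (rate >  maxRate_) return  maxRate_;
            if (rate < -maxRate_) return -maxRate_;
            return rate;
        }
    private:
        double maxRate_;
    };

    class PitchCommand {
    public:
        PitchCommand(double rate, const RateLimiter& limiter)
            : rate_(limiter.limit(rate)) {}   // part of the requirement lives here
        double rate() const { return rate_; }
    private:
        double rate_;
    };

    class FlightControlLaw {
    public:
        explicit FlightControlLaw(const RateLimiter& limiter) : limiter_(limiter) {}
        PitchCommand computeCommand(double pilotInput) const {
            // ...control-law computation elided...
            return PitchCommand(pilotInput, limiter_);   // and part lives here
        }
    private:
        const RateLimiter& limiter_;
    };

Verifying that the rate-limiting requirement is correctly implemented now involves the interaction of all three classes rather than the behavior of a single subprogram.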
DO-178B organizes objectives for development and verification around the decomposition of requirements from
high-level requirements to low-level requirements to source code. With a structured programming approach, the
requirements specification is largely text-based, with diagrams such as Visio® diagrams, data and control flow
diagrams, and sequence diagrams included in the text. OOT is much more focused on a graphical representation of
the system. Typically, requirements for OO systems are developed with use cases, scenarios, and various diagrams
such as class, object, and activity diagrams. Determining how to map these, and their subsequent refinements, onto
the DO-178B objectives was thought by session participants to be difficult. The number and diversity of the
diagrams used to describe the system can add additional complication.
Requirements definition by any method is a significant challenge to developing a correct and safe system [18].
Developers should consider whether OOT makes this challenge easier or more difficult for their project.
2.2.5.2 Verification
In addition to the questions raised about the suitability of OOT for requirements development, a similar number of
questions were raised about verification. The questions about verification are not unrelated to the concerns raised
about requirements. The following sentiment exemplifies the opinion of many in the brainstorming session:
“object oriented programs are generally more complex than their procedural counterparts. This added complexity
results from inheritance, polymorphism, and the complex data interactions tied to their use. Although these features
provide power and flexibility, they increase complexity and require more testing” [1].
Several of the questions discussed in the session sought to explore the extent that OO software can be verified:
Can we analyze OO software?
Can we adequately test OO software?
Can we determine the error cases unique to OOT?
That is, do we have the same level of confidence in our ability to adequately analyze and test OO programs as we do
with structured programs? Some specific analysis issues included source to object code traceability, and control and
data flow analysis. Several participants in the session argued for the application of static analysis and logic-based
methods. Most of those participants would likely argue for static analysis and formal methods even in a functional
approach. However, the broader question is whether additional verification methods are needed for OOT to meet the
same level of assurance that could be obtained under a functional approach.
Lastly, many participants acknowledged the need for additional research to better understand error classes that are
unique to OOT, such as research by Offutt [26], and to better understand the extent to which existing methods are
adequate for verifying OOT. Several error classes introduced by OOT have been submitted as entries to the issue
list, and are discussed in section 2.3.
2.2.5.3 Safety
The final technical challenge mentioned in the questions concerns the ability to conduct effective system and
software safety assessment. Participants discussed whether system and safety assessments can be easily and
accurately derived from an OO program. Current safety analysis is often based on determining that a function, as
implemented, is both correct and safe. In an OO program, operations related to a function can be widely distributed
among objects that interact with each other by exchanging messages. Assessing the interaction among distributed
objects complicates safety analysis and makes functionality difficult to trace. In [20], Nancy Leveson argues that
engineers find functional decomposition a more natural approach to the design of control systems: “That
naturalness translates into easier to understand and review, easier to design without errors, easier to analyze to
determine whether the system does what the engineer wants and does it safely.”
Although safety analysis is not one of the life cycle activities specified in DO-178B, connections with safety
assessment are mentioned in DO-178B [10] (e.g., DO-178B sections 5.1.2j and 5.2.2d). Hence, the effect of OO
design and implementation on safety analysis should be carefully considered.
The data from the Beyond the Handbook session represents only a small portion of the data collected in the OOTiA
project. The majority of the data deals with issues specific to OO methods and languages; that is, the decision to use
OOT is assumed. As might be guessed, many of the questions for deciding whether to use OOT are directly related
to issues raised about OO features discussed in the next section.
² Some concerns span multiple life cycle processes. Determining which process is most influenced is necessarily subjective. Overlap of concerns is evident throughout the discussion.
2.3.1.3 Restrictions
The final topic relevant to planning deals with restrictions. Section 4.5c of DO-178B states, “the software
development standards should disallow the use of constructs or methods that produce outputs that cannot be verified
or that are not compatible with safety-related requirements” [10]. Some OO languages have features that could make
it extremely difficult or impossible to satisfy the objectives of DO-178B. In many cases, a well-defined subset of the
language may be identified and documented in the coding standards that will allow compliance with the objectives for a
given software level. For example, ANSI C++ has some language features, such as multiple inheritance, that may
make it difficult to meet some DO-178B objectives. Two of the entries from the issue list spoke to the potential need
for restrictions or special rules:
Multiple inheritance should be avoided in safety critical, certified systems. (IL 38)
How can we enforce the rules that restrict the use of specific OO features? (IL 58)
The key concern is that language features, such as multiple inheritance, should be evaluated carefully in the planning
process and restrictions or rules established, documented, and followed as warranted for a particular project.
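As a hypothetical illustration of why such restrictions are often imposed, the following C++ sketch (the sensor class names are invented) shows the classic “diamond” that unrestricted multiple implementation inheritance can create:

    #include <iostream>

    // Hypothetical sketch of a multiple-inheritance "diamond".
    class Sensor {
    public:
        virtual ~Sensor() = default;
        virtual double read() const { return 0.0; }
    };

    class FilteredSensor : public Sensor {
    public:
        double read() const override { return 1.0; }
    };

    class RedundantSensor : public Sensor {
    public:
        double read() const override { return 2.0; }
    };

    // AirDataSensor contains two distinct Sensor subobjects, and an unqualified
    // call to read() is ambiguous. A coding standard might forbid this construct
    // outright or require virtual inheritance plus an explicit override.
    class AirDataSensor : public FilteredSensor, public RedundantSensor {};

    int main() {
        AirDataSensor s;
        // std::cout << s.read();              // error: ambiguous
        std::cout << s.FilteredSensor::read(); // the call must be qualified
    }

A project-specific coding standard would state which, if any, of these constructs are permitted and how compliance is checked.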
When a descendant adds an extension method that defines an inherited state variable, an inconsistent state
definition can occur. (IL 95)
2.3.2.1.2 Inconsistent Type Use
Another subtyping problem is inconsistent type use. When a descendant class does not override any inherited
method (that is, there is no polymorphic behavior), anomalous behavior can occur if the descendant class has
extension methods resulting in an inconsistent inherited state. (IL 91)
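A minimal C++ sketch of this concern (the queue classes are invented, and the ancestor is assumed to maintain the invariant that its count is never negative) is:

    // Hypothetical sketch of Inconsistent Type Use: the descendant overrides
    // nothing, but an extension method leaves the inherited state inconsistent.
    class MessageQueue {
    public:
        void push() { ++count_; }
        void pop()  { if (count_ > 0) --count_; }
        int  size() const { return count_; }   // intended invariant: count_ >= 0
    protected:
        int count_ = 0;
    };

    class ResettableQueue : public MessageQueue {
    public:
        // Extension method: manipulates the inherited state directly and can
        // produce a state the ancestor's own methods never produce.
        void rewind(int n) { count_ -= n; }    // may drive count_ below zero
    };

Because ResettableQueue overrides nothing, there is no polymorphic behavior to review, yet code using a ResettableQueue through a MessageQueue reference can observe a negative size, violating the ancestor's invariant.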
2.3.2.2.2 Overriding
Overriding is the redefinition of an operation or method in a subclass. The key concern here is that unintentionally
overriding an operation is easy in some OO languages because of the lack of restrictions on name overloading (the
use of the same name for different operators or behavioral features, operations or methods, visible within the same
scope). The consequence is that a method of the expected name but of a different type might be called in a program.
It is important that the overriding of one operation by another and the joining of operations inherited from
different sources always be intentional rather than accidental. (IL 32)
A subclass-specific implementation of a superclass method is [accidentally] omitted. As a result, that
superclass method might be incorrectly bound to a subclass object, and a state could result that was valid
for the superclass but invalid for the subclass owing to a stronger subclass invariant. For example, object-
level methods like isEqual or copy are not overridden with a necessary subclass implementation. (IL
20)
It is unclear whether the normal overload resolution rules should apply between operations inherited from
different superinterfaces or whether they should not (as in C++). (IL 31)
Offutt has identified the following five classes of errors associated with overriding [26]:
1 If a computation performed by an overriding method is not semantically equivalent to the computation of
the overridden method with respect to a variable, a behavior anomaly can result. (IL 94) This is referred to
as a State Defined Incorrectly (SDI) problem.
2 If a descendant class provides an overriding definition of a method which uses variables defined in the
descendant’s state space, a data flow anomaly can occur. (IL 96) This is referred to as an Anomalous
Construction Behavior (ACB1) problem.
3 If a descendant class provides an overriding definition of a method which uses variables defined in the
ancestor’s state space, a data flow anomaly can occur. (IL 97) This is referred to as an Anomalous
Construction Behavior (ACB2) problem.
4 If refining methods do not provide definitions for inherited state variables that are consistent with
definitions in an overridden method, a data flow anomaly can occur. (IL 92) This is referred to as a State
Definition Anomaly (SDA) problem.
5 When private state variables exist, a data flow anomaly can exist if an overriding method in a descendant
class does not call the overridden method in the ancestor class. (IL 99) This is referred to as a State
Visibility Anomaly (SVA) problem.
As mentioned above, overriding is affected by the use of overloading. In theory, overloading enhances readability
when the overloaded operators, operations or methods are semantically consistent. (IL 60) However, overloaded
operators, operations, and methods contribute to confusion and human error when they introduce methods that have
the same name but different semantics.
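A hypothetical C++ sketch (invented display classes) shows how easily an intended override can silently become an unrelated overload that hides the inherited operation:

    #include <string>

    // Hypothetical sketch: an intended override that silently fails.
    class Display {
    public:
        virtual ~Display() = default;
        virtual void show(const std::string& msg) { /* render msg */ }
    };

    class AlertDisplay : public Display {
    public:
        // Intended to override Display::show, but the parameter type differs,
        // so this declares a new function and hides the inherited one.
        void show(const char* msg) { /* render msg in alert colors */ }
        // Declaring "void show(const std::string&) override" would have let
        // the compiler reject the mismatch.
    };

    void notify(Display& d) {
        d.show("caution");  // dispatches to Display::show, not AlertDisplay::show
    }

Language features such as an explicit override marker, or coding-standard rules against reusing a name with a different signature, are aimed at exactly this class of accident.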
Traditional allocation and deallocation algorithms are unpredictable in terms of their worst-case memory use and
execution times, resulting in indeterminate execution profiles (IL 66).
2.3.2.3.2 Initialization
Incorrect initialization of variables and constants is dealt with in sections 6.3.4 and 6.4.3 of DO-178B. In OO
programs, class hierarchies (deep hierarchies in particular) may lead to initialization problems. The key concern is
that a subclass method might be called (via dynamic dispatch) by a higher level constructor before the attributes
associated with the subclass have been initialized. (IL 19) This can lead to the incomplete (failed) construction
problem identified by Offutt [26]. (IL 98) According to Offutt, there are two possible faults, depending on
programming language:
“First, the construction process may have assigned an initial value to a particular state variable, but it is the wrong
value. That is, the computation used to determine the initial value is in error. Second, the initialization of a particular
state variable may have been overlooked. In this case, there is a data flow anomaly between the constructor and each
of the methods that will first use the variable after construction (and any other uses until a definition occurs)” [26].
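The following C++ sketch (a deliberately simplified, hypothetical filter hierarchy) illustrates the hazard; which of Offutt's two faults appears depends on the language, as the quotation above suggests:

    #include <iostream>

    // Hypothetical sketch of the incomplete (failed) construction problem.
    class Filter {
    public:
        Filter() {
            // The base constructor calls a virtual method. In C++ this binds to
            // Filter::initialGain (the derived override is not yet active), so a
            // derived class expecting its own value silently gets the base one.
            // In languages such as Java the override would run here, before the
            // subclass field below has been initialized.
            gain_ = initialGain();
        }
        virtual ~Filter() = default;
        virtual double initialGain() const { return 1.0; }
        double gain() const { return gain_; }
    private:
        double gain_;
    };

    class NotchFilter : public Filter {
    public:
        double initialGain() const override { return configuredGain_; }
    private:
        double configuredGain_ = 0.25;  // not yet initialized when Filter() runs
    };

    int main() {
        NotchFilter f;
        std::cout << f.gain();  // prints 1, not 0.25: the intended value was lost
    }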
A key concern here is the deactivated code that results from using general-purpose libraries and OO frameworks when not all elements of the libraries or frameworks are used.
This key concern applies equally well to non-OO systems. However, some might argue that dependence on libraries
and frameworks may be more extensive in an OO system. In any case, use of libraries must be carefully considered
and verified for proper functionality.
Deactivated Code will be found in any application that uses general purpose libraries or object-oriented
frameworks. (Note that this is the case where unused code is NOT removed by smart linkers.) (IL 1)
How can deactivated code be removed from an application when general purpose libraries and object-
oriented frameworks are used but not all of the methods and attributes of the classes are needed by a
particular application? (IL 57)
How can we meet the control and data flow analysis requirements of DO-178B with respect to dynamic
dispatch? (IL 56)
Flow analysis, recommended for Levels A-C, is complicated by dynamic dispatch (just which method in
the inheritance hierarchy is going to be called?). (IL 2)
Flow analysis and structural coverage analysis, recommended for Levels A-C, are complicated by multiple
implementation inheritance (just which of the inherited implementations of a method is going to be called
and which of the inherited implementations of an attribute is going to be referenced?). The situation is
complicated by the fact that inherited elements may reference one another and interact in subtle ways
which directly affect the behavior of the resulting system. (IL 16)
OO language features such as inlining can also complicate flow analysis because inlining can cause substantial
differences between the flow apparent in the source code and the actual flow in the object code.
Flow Analysis, recommended for levels A-C, is impacted by Inlining (just what are the data coupling and
control coupling relationships in the executable code?). The data coupling and control coupling
relationships can transfer from the inlined component to the inlining component. (IL 43)
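A small hypothetical C++ example (invented actuator classes) shows why the call target is not visible in the source at the point of call:

    // Hypothetical sketch: the statically visible call site does not identify
    // which implementation will execute.
    class Actuator {
    public:
        virtual ~Actuator() = default;
        virtual void command(double position) = 0;
    };

    class HydraulicActuator : public Actuator {
    public:
        void command(double position) override { /* drive hydraulic servo */ }
    };

    class ElectricActuator : public Actuator {
    public:
        void command(double position) override { /* drive electric motor */ }
    };

    void setSurface(Actuator& a, double position) {
        // Source-level analysis sees a single call; the control flow actually
        // taken depends on the run-time type of 'a'.
        a.command(position);
    }

One conservative analysis model treats each such call as a branch over every override of command() in the class hierarchy, which is the case-statement view of dispatch described in IL 55 in the next subsection.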
2.3.3.1.2 Structural Coverage Analysis
Structural coverage analysis is required in DO-178B for software levels A-C. The intent of structural coverage
analysis, in the DO-178B context, is to complement requirements-based testing by: (1) providing evidence that the
code structure was verified to the degree required for the applicable software level; (2) providing a means to support
demonstration of absence of unintended functions; and, (3) establishing the thoroughness of requirements-based
testing [11].
Structural coverage analysis is complicated by dynamic dispatch because structural coverage changes when going
from subclass to superclass. The key concern is that structural coverage in an OO program is not meaningful unless
coverage measurements are context dependent; that is, based on the class of the specific object on which the
methods were executed. “Coverage achieved in the context of one derived class should not be taken as evidence that
the method has been fully tested in the context of another derived class” [15]. The following entries in the issue list
attest to the complications:
Structural coverage analysis, recommended for Levels A-C, is complicated by dynamic dispatch (just
which method in the inheritance hierarchy does the execution apply to?). (IL 5)
The use of inheritance and polymorphism may cause difficulties in obtaining structural coverage,
particularly decision coverage and MC/DC (IL 11)
The unrestricted use of certain object-oriented features may impact our ability to meet the structural
coverage criteria of DO-178B. (IL 48)
Statement coverage when polymorphism, encapsulation or inheritance is used. (IL 49)
How can we meet the structural coverage requirements of DO-178B with respect to dynamic dispatch?
There is cause for concern because many current Structural Coverage Analysis tools do not “understand”
dynamic dispatch, i.e. do not treat it as equivalent to a call to a dispatch routine containing a case statement
that selects between alternative methods based on the run-time type of the object. (IL 55)
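A hypothetical sketch of this context dependence (invented monitor classes, with a deliberately simplified method) is:

    // Hypothetical sketch: an inherited method whose branches depend on a
    // virtual helper, so coverage is context dependent.
    class Monitor {
    public:
        virtual ~Monitor() = default;
        int classify(double value) const {
            if (value > threshold())   // which branch runs depends on threshold()
                return 1;              // exceedance
            return 0;                  // nominal
        }
        virtual double threshold() const { return 10.0; }
    };

    class EngineMonitor : public Monitor {
    public:
        double threshold() const override { return 0.0; }
    };

    class FuelMonitor : public Monitor {
    public:
        double threshold() const override { return 1.0e9; }
    };

Structural coverage of classify() obtained entirely with EngineMonitor objects says nothing about how the method behaves when it executes in the context of a FuelMonitor, which is the concern quoted from [15] above.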
The following entries from the issue list refer to the effect that OO language constructs such as inlining and
templates have on structural coverage analysis:
With inlining, the “logical” coverage of the inline expansions on the original source code is not clear. This
is generally only a problem when inlined code is optimized. If statements are removed from the inlined
version of a component, then coverage of the inlined component is no longer sufficient to assert coverage
of the original source code. (IL 45)
Inlining may affect tool usage and make structural coverage more difficult for levels A, B, and C. (IL 47)
Templates can be compiled using code sharing or macro-expansion. Code sharing is highly parametric,
with small changes in actual parameters resulting in dramatic differences in performance. Code coverage,
therefore, is difficult and mappings from a generic unit to object code can be complex when the compiler
uses the "code sharing" approach. (IL 52)
Code sharing involves the sharing of code by more than one class or component, for example, by means of
implementation inheritance or delegation. There are many ways to support code sharing. One risk is that inheritance
can be misused to support only the sharing of code and data structure, without attempting to follow behavioral
subtyping rules.
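One common form of this misuse, shown in the hypothetical C++ sketch below (invented container names), is inheriting purely to reuse an implementation, which leaves operations visible that break the intended abstraction:

    #include <cstddef>
    #include <vector>

    // Hypothetical sketch: inheritance used only to share code, not to create
    // a behavioral subtype.
    template <typename T>
    class BoundedStack : public std::vector<T> {  // a BoundedStack "is-a" vector? Not really.
    public:
        explicit BoundedStack(std::size_t limit) : limit_(limit) {}
        void push(const T& x) {
            if (this->size() < limit_) this->push_back(x);
        }
        // Every public std::vector operation (insert, erase, operator[], ...)
        // remains inherited and callable, so clients can bypass the bound and
        // violate the invariant the subclass is supposed to maintain.
    private:
        std::size_t limit_;
    };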
2.3.3.1.3 Timing and Stack Analysis
Timing analysis, worst-case execution time in particular, and stack usage are both part of review and analysis of
source code in DO-178B section 6.3.4f. Stack overflow errors are listed in section 6.4.3f of DO-178B as errors that
are typically found in requirements-based hardware/software integration testing. Timing and stack analysis are
complicated by certain implementations of dynamic dispatch. With some implementations of dynamic dispatch, it is
difficult to know just how much time will be expended determining which method to call. (IL 3) If polymorphism
and dynamic binding are implemented, stack size can grow, making analysis of the optimal stack size difficult. (IL
107)
Timing and stack analysis are also affected by inlining, templates, and macro-expansion. Inline expansion can
eliminate parameter passing, which can affect the amount of information pushed on the stack as well as the total
amount of code generated. This, in turn, can affect the stack usage and timing analysis.
Stack Usage and Timing Analysis, recommended for levels A-D, are impacted by Inlining (just what are
the stack usage and worst-case timing relationships in the executable code?). Since inline expansion can
eliminate parameter passing, this can affect the amount of information pushed on the stack as well as the
total amount of code generated. This, in turn, can affect the stack usage and the timing analysis. (IL 44)
Templates are instantiated by substituting a specific type argument for each formal type parameter defined
in the template class or operation. Passing a test suite for some but not all instantiations cannot guarantee
that an untested instantiation is bug free. (IL 50)
Macro-expansion can result in memory and timing issues, similar to those identified for inlining. (IL 53)
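The instantiation concern quoted above (IL 50) can be seen in a hypothetical sketch as small as a single function template; the names are invented:

    #include <cmath>
    #include <cstdlib>

    // Hypothetical sketch: the same template source yields different object
    // code and different behavior for each instantiation.
    template <typename T>
    bool withinTolerance(T measured, T expected, T tolerance) {
        return std::abs(measured - expected) <= tolerance;
    }

    // withinTolerance<int> performs exact integer arithmetic, while
    // withinTolerance<double> is subject to rounding; the compiler may emit
    // shared or separate object code for the two, complicating both the
    // source-to-object mapping and the coverage measurement. Tests passed on
    // one instantiation say nothing about the others.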
2.3.3.1.4 Source to Object Trace
Source to object code traceability tends to be a controversial issue; object orientation does not improve the situation.
As discussed in DO-178B, for level A software, it is necessary to establish whether the object code is directly
traceable to the source code. If the object code is not directly traceable to the source code, then additional
verification should be performed [10]. Dynamic dispatch complicates source to object code traceability because it
might be difficult to determine how the dynamically dispatched call is represented in the object code. (IL 6) In
addition, source to object code correspondence will vary between compilers for inheritance and polymorphism,
along with constructors/destructors and other language helper functions. (IL 12) Additional entries from the issue
list related to source to object traceability include:
Dynamic dispatch presents a problem with regard to the traceability of source code to object code that
requires “additional verification” for level A systems as dictated by DO-178B section 6.4.4.2b. (IL 8)
Are there unique challenges for source to object code traceability in non-Level A systems? Where should
this be addressed? Multiple tools and ways of addressing source to object traceability? (IL 81)
Some OO language features, such as inlining and implicit type conversion, can also complicate source to object code
traceability:
Conformance to the guidelines in DO-178B concerning traceability from source code to object code for
Level A software is complicated by inlining (is the object code traceable to the source code at all points of
inlining/expansion?). Inline expansion may not be handled identically at different points of expansion.
This can be especially true when inlined code is optimized. (IL 46)
Implicit type conversion raises certification issues related to source to object code traceability, the potential
loss of data or precision, and the ability to perform various forms of analysis called for by DO-178B
including structural coverage analysis and data and control flow analysis. It may also introduce significant
hidden overheads that affect the performance and timing of the application (IL 59)
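A small hypothetical C++ illustration of such hidden conversions (invented names and values) is:

    #include <cstdint>

    // Hypothetical sketch: conversions the compiler inserts with no explicit
    // source-code construct to trace to.
    double scale(double x) { return x * 0.1; }

    void example() {
        std::int64_t samples = 9007199254740993;  // not exactly representable in a double
        double d = samples;   // implicit int64 -> double: silent loss of precision
        float  f = scale(d);  // implicit double -> float: narrowing conversion
        int    n = f;         // implicit float -> int: truncation toward zero
        (void)n;
    }

Each of these conversions produces object code with no corresponding source-level operator, which is the traceability, precision, and analysis concern raised in IL 59.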
³ Some of the books themselves are huge; for example, Binder’s Testing Object-Oriented Systems, Models, Patterns, and Tools [5] is over 1000 pages!
Are there other types of OO tools that need to be addressed? Need to anticipate other classes of tools that
may come onto the scene; e.g., traceability tool for OO, transformation tools, CM tools, refactoring tools
(tool to restructure source code to meet new requirements). (IL 86)
Should there be consideration of maintenance of large OO programs? Should the handbook offer guidance
for long term maintainability?
Moving target of language standard (e.g., Java moves every 6 months)
Should we be addressing other programming paradigms?
What about the consistency of guidelines among ground-based, space-based, and airborne systems?
Is it worth looking for other instances of object oriented use that should be advised against (such as those
given in the multiple inheritance chapter)?
Mapping OO life cycle data to DO-178B section 11 life cycle data; e.g., what are requirements, design,
and code in OO?
How do you review code that has been generated by a non-qualified code generator?
If you are going to go OO, you may require more processing power and memory for the delivered system
than if you had not used OO.
Formal Methods:
o Why aren’t correct-by-construction and static verification recognized as valuable within the
aviation community?
o Ignorance about static verification.
o Documentation of best practices that include formal methods for producing better software.
o Formal methods should be included in DO-178C, acknowledging the maturity of formal methods.
o Determine the gain you get from formal methods by showing how it affects Annex A.
2.5 Summary
Identifying safety and certification concerns is an important step in developing guidelines for safely using OOT in
aviation applications. Because OOT has been in use for some time, there is industrial experience to help shed light on
possible problems. In this report, we focus specifically on potential pitfalls to using OOT in aviation applications, as
reported in the form of concerns and questions about OOT that have been collected through the OOTiA project web
site and workshops. In general, two sets of challenges are presented: challenges to consider before making the
decision to use OOT on a program, and challenges to consider once that decision is made.
During a brainstorming session at the second OOTiA workshop, participants proposed that the following subjects
should be evaluated as part of the decision-making process for using OOT:
Reality of the benefits of using OOT
Project characteristics
Resources specific to implementing OOT
Regulatory guidance
Technical challenges in the areas of requirements, verification, and safety.
Other challenges to safely implementing OOT in compliance with DO-178B were captured through the OOTiA web
site on a list of Issues and Comments about Object Oriented Technology in Aviation. The key areas of concern, as
outlined below, are organized with respect to DO-178B life cycle activities:
2.3.1 Considerations for the Planning Process
2.3.1.1 Defining Life Cycle Data
2.3.1.2 Requirements Methods and Notations
2.3.1.3 Restrictions
2.3.2 Considerations for Development Processes -- Requirements, Design, Code, and Integration
For each of the above topics, key concerns are identified based on input to the OOTiA program. This list is certainly
incomplete; however, it provides a starting point for developing guidelines for the safe use of OOT in aviation
applications. Volume 3 of the OOTiA handbook provides guidelines for developing OOT applications in systems to
be certified by the FAA. Volume 4 provides an approach for certification authorities and designees to ensure that
OOT issues have been addressed in the projects they are reviewing or approving.
2.6 References
1. Alexander, Roger T., September/ October 2001, “Improving the Quality of Object-Oriented Programs,” IEEE
Software, pp. 90-91.
2. Aerospace Vehicle Systems Institute, Guide to the Certification of Systems with Embedded Object-Oriented
Software, version 1.2, 31 October 2001.
3. Aerospace Vehicle Systems Institute, Guide to the Certification of Systems with Embedded Object-Oriented
Software, version 1.6.
4. Basili, V., L. Briand and W. Melo, “How Reuse Influences Productivity in Object-Oriented Systems,”
Communications of the ACM, vol. 39, no. 10, 1996, pp. 104-116.
5. Binder, Robert V., Testing Object-Oriented Systems, Addison-Wesley, Reading, MA, 2000.
6. Briand, L., E. Arisholm, S. Counsell, F. Houdek, and P. Thévenod-Fosse, “Empirical Studies of Object-
Oriented Artifacts, Methods, and Processes: State of the Art and Future Directions,” Technical Report ISERN-
99-12, 1999.
7. Certification Authorities Software Team (CAST), “Object-Oriented Technology (OOT) in Civil Aviation
Projects: Certification Concerns,” Position Paper CAST-4, January 2000, available at
http://www2.faa.gov/certification/aircraft/av-info/software/CAST_Papers.htm. Visited on 29 July 2003.
8. Certification Authorities Software Team (CAST), “Use of the C++ Programming Language,” Position Paper
CAST-8, January 2002, available at http://www2.faa.gov/certification/aircraft/av-info/software/CAST_Papers.htm.
Visited on 29 July 2003.
9. Cuthill, Barbara, “Applicability of Object-Oriented Design Methods and C++ to Safety-Critical Systems”,
Proceedings of the Digital Systems Reliability and Safety Workshop, 1993.
10. RTCA, Inc., Software Considerations in Airborne Systems and Equipment Certification, RTCA/DO-178B,
December 1992, Washington, D. C.
11. RTCA, Inc., Final Report for Clarification of DO-178B “Software Considerations in Airborne Systems and
Equipment Certification”, RTCA/DO-248B, 12 October 2001, Washington, D. C.
12. RTCA, Inc., Guidelines for Communication, Navigation, Surveillance, and Air Traffic Management
(CNS/ATM) Systems Software Integrity Assurance, RTCA/DO-278, 5 March 2002, Washington, D. C.
13. Glass, Robert L, May/June 2002, “The Naturalness of Object Orientation: Beating a Dead Horse?” IEEE
Software, pp. 103-104.
14. Hayhurst, Kelly J., C. Michael Holloway, “Challenges in Software Aspects of Aviation Systems,” Proceedings
of the 26th Annual NASA Goddard Software Engineering Workshop, 27-29 November 2001, Greenbelt, MD,
pp. 7-13.
15. Information Processing Ltd., “Advanced Coverage Metrics for Object-Oriented Software”, available at
http://www.iplbath.com/pdf/p0833.pdf. Visited on 30 October 2003.
16. FAA Aircraft Certification Service, Conducting Software Reviews Prior to Certification, Job Aid, June 1998,
available at http://www2.faa.gov/certification/aircraft/av-info/software/Job_Aids.htm. Visited on 29 July 2003.
17. Knight, J.; Evans, D.; and Offutt, J.: Object Oriented Programming in Safety-Critical Software. A white paper.
18. Hanks, Kimberly S., John C. Knight, Elisabeth A. Strunk, “Erroneous Requirements: A Linguistic Basis for
Their Occurrence and an Approach to Their Reduction,” Proceedings of the 26th Annual NASA Goddard
Software Engineering Workshop, 27-29 November 2001, Greenbelt, MD, pp. 115-119.
19. Knight, John C., Object-Oriented Techniques and Dependability, white paper.
20. Leveson, Nancy, “Re: object-orientation vs. safety-critical” in Safety-Critical Mailing List, 2002, archived at
http://www.cs.york.ac.uk/hise/safety-critical-archive/2002/0203.html. Visited on 28 July 2003.
21. Liskov, Barbara H., Jeanette M. Wing: A Behavioral Notion of Subtyping, ACM Transactions on
Programming Languages and Systems, Nov. 1994, vol. 16, no. 6, pp. 1811-1841.
22. Rierson, Leanna: “Object-Oriented Technology (OOT) in Civil Aviation Projects: Certification Concerns,”
Proceedings of the 18th Digital Avionics Systems Conference, St. Louis, MO, Oct. 24-29, 1999.
23. The Memory Management Reference Beginner's Guide Overview, archived at
http://www.memorymanagement.org/articles/begin.html. Visited on 28 August 2003.
24. Meyer, Bertrand. Object-Oriented Software Construction. Prentice Hall, 2nd edition, 1997.
25. Moynihan, Tony, 1996, “An Experimental Comparison of Object-Orientation and Functional-Decomposition as
Paradigms for Communicating System Functionality to Users,” The Journal of Systems and Software, vol. 33,
pp. 163-169.
26. Offutt, Jeff, Roger Alexander, Ye Wu, Quansheng Xiao, Chuck Hutchinson, November 2001, “A Fault Model
for Subtype Inheritance and Polymorphism,” The 12th IEEE International Symposium on Software Reliability
Engineering, Hong Kong, PRC, pp. 84–95.
27. Object Management Group, March 2003, OMG Unified Modeling Language Specification, Version 1.5,
formal/03-03-01.
28. Rierson, Leanna K., FAA’s Next Steps for OOTiA, presented at the Object Oriented Technology in Aviation
Workshop 2, 27 March 2003, available at http://shemesh.larc.nasa.gov/foot/next-steps-end.ppt. Visited on 29
July 2003.
29. Rosay, Cyrille, Is DO-178B still compatible with modern modeling methods?, white paper from JAA/CEAT,
draft 3-15, Friday 22 February 2002.
30. Hayhurst, Kelly J., Cheryl Dorsey, John Knight, Nancy Leveson, G. Frank McCormick, August 1999,
Streamlining Software Aspects of Certification: Report on the SSAC Survey, NASA/TM-1999-209519.
31. Vessey, Iris, and Sue A. Conger, “Requirements Specification: Learning Object, Process, and Data
Methodologies,” Communications of the ACM, vol. 37, no. 5, May 1994, pp. 102-113.
32. Webster, Bruce F., Pitfalls of Object-Oriented Development, M&T Books, New York, New York, 1995. (out of
print)
33. Whitford, S. A., Software Safety Code Analysis of an Embedded C++ Application, Proceedings of the 20th
International System Safety Conference, Denver, CO, August 5-9, 2002, pp. 422-429.
34. Wood, M, J. Daly, J. Miller, and M. Roper, “Multi-Method Research: An Empirical Investigation of Object-
Oriented Technology,” The Journal of Systems and Software, 1999, no. 34, pp. 13-26.
Appendix A Beyond the Handbook Session
At OOTiA Workshop 2, a brainstorming session called “Beyond the Handbook” was held where participants were
asked to suggest questions that should be answered before a program commits to using OOT. During the session,
participants produced a list of fifty-one questions related to making a decision about whether to use OOT. The
questions are listed below in the order recorded during the session.
Is it appropriate to use commercially available processes/products for developing aviation software, or what
steps need to be taken to make it so?
Are we subjecting OOT to extra scrutiny because it is new to aviation community?
Does the OO paradigm actually fit your problem domain?
How much of current good practice is non-applicable to OOT?
Do we need to make a distinction between the design process and the develop/test process? Does OOD <->
OO implementation?
Do you have a plan for failure?
Where is dynamic dispatch really useful? Would the non-OO alternative be any easier to analyse?
How mature are your requirements?
Is there an existing tool set to support your effort?
Has the company analysed the benefits & risks of using OOT over their current established approaches to
software engineering?
Have you run out of steam in regards to existing techniques for really large systems and if so does OOT
help us manage such systems better?
Is there an agreement on what is OOT? What is essential, what is not etc.?
Does OOT help us write the requirements correctly and implement them properly? Can we integrate
Formal Methods into the process?
Does OOT help us identify what we really want in the system and document this in requirements?
Will our system interface with other systems that are not OOT based?
Do you understand the issues presented in the handbook and why are they issues? Does anyone?
Can your company make a case for using OOT in a system in such a way that you can document and verify
the system?
Why is company X using OOT?
Can the system & software safety assessments be derived easily from OOT or is it a difficult task to ensure
safety?
Why is company Y not using OOT?
Is OO the best method from the engineering point of view for reuse?
Are the objectives in DO-178B sufficient to define ‘correctly’ for OOT development?
Are your requirements implementable using OOT?
What measures or metrics will you use to determine success/failure for your project?
Is control-flow analysis, data-flow analysis, Z-flow analysis applicable to OO software, or should we be
looking for something else?
10 Inheritance, polymorphism, and linkage can lead to ambiguity. [Development; 2.3.2.2.1 unclear intent]

11 The use of inheritance and polymorphism may cause difficulties in obtaining structural coverage, particularly decision coverage and MC/DC. [Integral (Verification); 2.3.3.1.2 structural coverage analysis]

12 Source to object code correspondence will vary between compilers for inheritance and polymorphism. [Integral (Verification); 2.3.3.1.4 source to object trace]

13 Polymorphic and overloaded functions may make tracing and verifying the code difficult. [Integral (Traceability); 2.3.3.4.2 complexity]

14* Requirements Testing, recommended for Levels A-D, and Structural Coverage Analysis, recommended for Levels A-C, are complicated by Inheritance, Overriding and Dynamic Dispatch (just how much of the existing verification of the parent class can be reused in its subclasses?). [Note: this is exactly the same issue as IL 4.] [Integral (Verification); 2.3.3.2.2 test case reuse]

15 Multiple interface inheritance can introduce cases in which the developer’s intent is ambiguous (when the same definition is inherited from more than one source, is it intended to represent the same operation or a different one?). [Development; 2.3.2.2.1 unclear intent]

16 Flow Analysis and Structural Coverage Analysis, recommended for Levels A-C, are complicated by Multiple Implementation Inheritance (just which of the inherited implementations of a method is going to be called and which of the inherited implementations of an attribute is going to be referenced?). The situation is complicated by the fact that inherited elements may reference one another and interact in subtle ways which directly affect the behavior of the resulting system. [Integral (Verification); 2.3.3.1.1 flow analysis]

17 Use of inheritance (either single or multiple) raises issues of compatibility between classes and subclasses. [Development; 2.3.2.1.1 type substitutability]

18 Inheritance and overriding raise a number of issues with respect to testing: “Should you retest inherited methods? Can you reuse superclass tests for inherited and overridden methods? To what extent should you exercise interaction among methods of all superclasses and of the subclass under test?” [Integral (Verification); 2.3.3.2.2 test case reuse]

19 Inheritance can introduce problems related to initialization. “Deep class hierarchies [in particular] can lead to initialization bugs.” There is also a risk that a subclass method will be called (via dynamic dispatch) by a higher level constructor before the attributes associated with the subclass have been initialized. [Development; 2.3.2.3.2 initialization]

20 “A subclass-specific implementation of a superclass method is [accidentally] omitted. As a result, that superclass method might be incorrectly bound to a subclass object, and a state could result that was valid for the superclass but invalid for the subclass owing to a stronger subclass invariant. For example, Object-level methods like isEqual or copy are not overridden with a necessary subclass implementation”. [Development; 2.3.2.2.2 overriding]
70 The difference between dead and deactivated code is not always clear when using OOT. Without good traceability, identifying dead vs. deactivated code may be difficult or impossible. [Development; Integral (Traceability); 2.3.2.4.1 identifying dead and deactivated code]

71 When a design contains abstract base classes, portions of the implementations of these classes may be overridden in more specialized subclasses, resulting in deactivated code. [Development; 2.3.2.4.1 identifying dead and deactivated code]

72 Traceability is made more difficult because there is often a lack of OO methods or tools for the full software lifecycle. [Integral (Traceability); 2.3.3.4.3 tracing through OO views]

73 Formal specification languages are generally accessible only to those specially trained to use them. To make formal specifications accessible to developers and the authors of test cases, we must map such formal specifications to natural language and/or other less formal notations (e.g. UML). There, however, is currently no well defined means of doing so. This issue applies to both preliminary and detailed design. [Planning; 2.3.1.2 requirements methods and notations]

74 Change impact analysis may be difficult or impossible due to difficulty in tracing functional requirements through implementation. [Integral (Configuration Management); 2.3.3.3.2 configuration control]

75 Limitations of UML may limit how non-functional and cross-cutting requirements of realtime, safety critical, distributed, fault-tolerant, embedded systems are captured in UML and traced to the design, implementation, and test cases. [Planning; Development; 2.3.1.2 requirements methods and notations]

76 Configuration management may be difficult in OO systems, causing traceability problems. If the objects and classes are considered configuration items, they can be difficult to trace, when used multiple times in slightly different manners. [Integral (Configuration Management); 2.3.3.3.1 configuration identification]

77 What is “low level requirements” for OO? Affects how we do low-level testing. If we don’t know what low-level requirements are, we don’t know the appropriate level of testing.
   * High level = WHAT
   * Low level = HOW
   Related to issue raised in tools session – relation between artifacts. Should be addressed in the handbook. [Planning; 2.3.1.1 defining life cycle data]

78 Addressing derived requirements for OO – how does this happen? How is it different than traditional and how does it tie up to the safety assessment. Not really unique for OO. Will be addressed when we do the artifact mapping. [Planning; 2.3.1.2 requirements methods and notations]

79 Difficult to identify individual atomic requirements in OO. UML tends to group requirements in a graphical format. Would complicate matters if considered derived. [Planning; 2.3.1.2 requirements methods and notations]
Considerations and Issues
mean in OO? What is req, design, code? Transition from cycle data
text-based to model-based artifacts.
*** May need to clarify this up front in the handbook,
when making the tie between DO-178B and the
handbook.
88 Configuration management and incremental development Integral 2.3.3.3.2
of OO projects and tools. When CM comes into play (Configuration configuration control
during the development process may be different than our Management)
current practices, when using an UML tool. Doing more
iterations in OO. How to “get credit” on iterations. Not
necessarily OO-specific, but might be more prevalent
with OO because of the multiple iterations.
89 Is dynamic dispatch compatible with DO-178B required Integral 2.3.3.1.1 flow
forms of static analysis? Mention that dynamic dispatch (Verification) analysis
hinders some forms of static analysis including (see DO-
178B section 6.3.4f). Tools can treat this if complete
closure exists. DO-178B requires complete closure. In
cases of incomplete closure, need to define ways to
implement.
90 Fundamental pre-requisite language issues need Development 2.3.2.1.1 type
clarification prior to adopting LSP and DBC. How can substitutability
LSP be implemented using available languages?
Strongly consider a language subset that is amenable to
use of LSP and DBC. Concern is how far to take this
subset.
91 Inconsistent Type Use (ITU): Development 2.3.2.1.2
inconsistent type use
When a descendant class does not override any inherited
method (i.e., no polymorphic behavior), anomalous
behavior can occur if the descendant class has extension
methods resulting in an inconsistent inherited state. 5
92 State Definition Anomaly (SDA): Development 2.3.2.2.2 overriding
If refining methods do not provide definitions for
inherited state variables that are consistent with
definitions in an overridden method, a data flow anomaly
can occur. 6
93 State Definition Inconsistency (SDIH): Development 2.3.2.2.2 overriding
If an indiscriminately-named local state variable is
introduced, a data flow anomaly can result.
94 State Defined Incorrectly (SDI): Development 2.3.2.2.2 overriding
If a computation performed by an overriding method is
not semantically equivalent to the computation of the
overridden method wrt a variable, a behavior anomaly
can result. 7
95 Indirect Inconsistent State Definition (IISD): Development 2.3.2.1.1 type
5
Inconsistent Type Use (ITU): This is addressed by verification of subtyping (LSP). Where we assume that the meaning of "inconsistent state" is
"violates the class invariant". [26]
6
State Definition Anomaly (SDA): This goes beyond an initialization problem to LSP and breaking promises made by superclasses, i.e. by not
keeping postconditions. [26]
7
State Defined Incorrectly (SDI): Violates LSP in that promises to clients by superclasses are not kept. “Incorrect” means either breaking an
invariant or breaking a postcondition by defining an incorrect “v” (per example). [26]
Verification Pitfalls
Neglecting component testing: Although it is possible to take a subsystem or system level testing approach to an
OO system, there are advantages to testing individual components. These components may be individual classes or
larger units (assemblies of classes representing libraries or subsystems). They, however, should correspond to
reusable entities. Testing at this level makes it easier to enforce the principles of Design by Contract [24], makes it
easier to “inherit” test cases, and makes it possible to deliver individual components, their test cases, documentation,
etc. as a single package.
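As a hypothetical C++ sketch of component-level contract checking (invented class and limits, with assertions standing in for a fuller Design by Contract mechanism):

    #include <cassert>

    // Hypothetical sketch: a component whose contract is checked at its
    // boundary, so component-level tests can probe the precondition and
    // postcondition directly, and subclasses inherit the same checks.
    class Altimeter {
    public:
        virtual ~Altimeter() = default;
        // Precondition: 0 < pressureRatio <= 1.
        // Postcondition: returned altitude is within the rated range.
        double altitudeFeet(double pressureRatio) const {
            assert(pressureRatio > 0.0 && pressureRatio <= 1.0);  // precondition
            double alt = computeAltitude(pressureRatio);
            assert(alt >= 0.0 && alt <= 60000.0);                 // postcondition
            return alt;
        }
    protected:
        virtual double computeAltitude(double pressureRatio) const {
            return (1.0 - pressureRatio) * 60000.0;  // simplified placeholder
        }
    };

Packaging such a class together with test cases that exercise these boundary checks supports the test-case "inheritance" mentioned above: a subclass that overrides computeAltitude is still checked against the same contract.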
Reuse Pitfalls
Having or setting unrealistic expectations: OOT is often sold on promises related to reuse. Reuse, however, is not
easy, and is not free. It comes at a cost (in terms of analysis and design) that many organizations are unwilling to
pay. It also may only make economic sense if the organization plans to build three or more additional systems that
are closely related to one another (form a product family), or if an individual component or subsystem can be reused
at least three times.
Being too focused on code reuse: “The most important creation to come from an OOT project is often the
architecture and the design, not the code implementation.” [32] This is true because these artifacts are typically more
general, and more easily applied to new systems and new applications.