INDEPENDENT SOFTWARE VERIFICATION AND VALIDATION
Disclaimer:
This ISVV Guide is the first draft issue of the document. It is available for use and review by the European
Space Industry. The ISVV Guide is provided as is: the Agency gives no warranty or guarantee
whatsoever as to its completeness, adequacy or suitability and shall not be held liable for any direct,
indirect or consequential damages. Use of the ISVV Guide by readers/users is made fully at the latter's
own risk.
Feedback:
Readers/users of this guide are requested to provide comments to ESA on any inconsistencies in the
guide or suggestions for its improvement.
APPROVAL
Title: ESA Guide for Software Verification and Validation, issue 1 revision 0
Issue: 1 Revision: 0
Contents:
1.0 Introduction
1.1 Background and Motivation
1.2 Purpose
1.3 Definitions
1.4 Acronyms
1.5 References
1.6 Outline
2.0 What is Independent Software Verification and Validation?
2.1 Types of Independence
2.2 Objectives of ISVV
2.3 ISVV is Complementary to Developer’s V&V
3.0 ISVV Process Overview
4.0 ISVV Process Management
4.1 Activity Overview
4.1.1 Roles and Responsibilities
4.1.1.1 Responsibilities of Software Suppliers
4.1.1.2 Interface with Software Validation Facility Supplier
4.1.2 Criticality Analysis, Definition of Scope and Budgeting
4.1.3 Scheduling and Milestones
4.1.4 Quality Management System
4.1.5 Non-Disclosure and Security
4.1.6 Competence
4.2 Activity Inputs and Prerequisites
4.2.1 Software Criticality Analyses
4.2.2 Documents and Code from Software Development
4.2.3 ISVV Findings Resolution Report
4.3 Activity Outputs
4.3.1 ISVV Plan
4.3.2 Requests for Clarification
4.3.3 ISVV Report (with ISVV Findings)
4.3.4 Progress Reports
4.4 Activity Management
4.4.1 Initiating and Terminating Events
4.4.2 Completion Criteria
4.4.3 Relations to other Activities
4.5 Task Descriptions
4.5.1 ISVV Process Planning
4.5.2 ISVV Process Execution, Monitoring and Control
4.6 Methods
5.0 Criticality Analysis
5.1 Activity Overview
5.1.1 ISVV Levels and Software Criticality Categories
5.1.2 Adjusting the ISVV Level
5.1.3 Treatment of Diverse Criticality Categories
5.2 Activity Inputs and Prerequisites
5.3 Activity Outputs
Figures:
Figure 1: ISVV Process Activities
Figure 2: Software Engineering and ISVV Processes
Figure 3: ISVV process management in context
Figure 4: ISVV Process Management Tasks
Figure 5: ISVV cost model
Figure 6: Criticality Analysis in context
Figure 7: ISVV Criticality Analysis Tasks
Figure 8: Visualisation of a software architecture with diverse category levels assigned
Figure 9: Technical Specification Analysis in context
Figure 10: Technical Specification Analysis activity
Figure 11: Software Requirements Independent Verification
Figure 12: Design Analysis in context
Figure 13: Software Design Analysis
Figure 14: Software Architectural Design Independent Verification
Figure 15: Software Detailed Design Independent Verification
Figure 16: Software User Manual Independent Verification
Figure 17: Code Analysis in context
Figure 18: Code Analysis
Figure 19: Software Source Code Independent Verification
Figure 20: Integration/Unit Test Procedures and Test Data Verification
Figure 22: Independent Software Validation
Figure 23: Subtasks to "Identification of Test Cases"
Figure 24: Subtasks to "Construction of Test Procedures"
Figure 25: Subtasks to "Execution of Test Procedures"
Tables:
Table 1: Competence requirements for ISVV personnel
Table 2: ISVV levels
Table 3: Default mapping from Software Criticality Category to ISVV level
Table 4: Matrix to derive ISVV level from Software Criticality Category and Error Potential
Table 5: Dependency between ISVV level, input and analysis
Table 6: RID Form
Table 7: RID Problem Type Categories
Table 8: RID Severity Classes
Table 9: Error Potential Questionnaire
Table 10: Mapping from error potential score to error potential level
Table 11: Software criticality categories for manned mission
Table 12: Software criticality categories for unmanned mission
Table 13: System reliability criticality categories
Table 14: System hazard severity categories
Table 15: UML 2 diagram types
Table 16: UML 2.0 to UML 1.x mapping
Foreword
This ISVV Guide is the result of work carried out for the European Space Agency by a
consortium of European Companies under ESA contract no. 18466/04/NL/AG. The companies
involved were:
• Det Norske Veritas (N)
• Terma (DK)
• SciSys (UK)
• Critical Software (P)
In addition, representatives of the primes have contributed valuable input during dedicated
industry workshops.
1.0 Introduction
1.1 Background and Motivation
Independent Software Verification and Validation (ISVV) is an engineering practice intended to
improve the quality and reduce the costs of a software product, as well as reduce development risks, by
having an organisation independent of the software developer perform verification and
validation of the specifications and code of the software product.
The global objective of this guide is to help establish an improved and coherent ISVV
process across the European space industry by consolidating existing practice. Special
emphasis is placed on process efficiency. It is hoped that the guide will also prove
useful in other industries where software is a component of safety- and dependability-critical
systems (e.g. automotive, rail systems, medical systems).
The guide defines an ISVV process with management, criticality analysis, verification, and
validation activities. It provides advice on ISVV roles, responsibilities, planning, and
communication as well as methods to use for the various verification and validation tasks.
1.2 Purpose
The purpose of this guide is to:
• Define a uniform, cost effective and reproducible ISVV process across Projects, and to guide
in adapting it to each specific project;
• Assist the industry in getting predictable cost and quality out of the ISVV process;
• Clarify the benefits of applying ISVV;
• Improve ISVV project execution by highlighting the many different issues that need to be
clarified and considered in the various phases of the project;
• Disseminate best practices with respect to recommended methods for the different
verification and validation activities;
• Present a summary of the required capabilities of the Independent SVF in preparation for the
development and utilisation of a project-specific facility.
The assumed readership of the ISVV Guide is primarily the customers and suppliers of ISVV
services, but also software developers, system suppliers (primes) and system customers are
likely to find the guide useful, be they verification / validation personnel, quality assurance
managers or technical managers.
The guide should be used in the preparation of a request for quotation for an ISVV service, in the
preparation of a bid, and during planning, execution and re-planning of an ISVV project.
1.3 Definitions
The definitions presented herein are provided for the readability of this document. These
definitions prevail when any discrepancy occurs with the definitions of other standards.
activity: A defined body of work to be performed, including its required input and
output information [IEEE 1074:1997].

critical item: Component, material, software, sub-assembly, function, process or
technology which requires special project attention [ECSS-P-001B:2004].
NOTE: In this document, critical item is used as a common term denoting critical system
function, critical software requirement, critical software component, or critical software unit.

critical software: List of critical software components as determined by the Design Analysis
Criticality Analysis (ISVV task), with assigned software criticality categories.

safety: System state where an acceptable level of risk with respect to:
- fatality,
- injury or occupational illness,
- damage to launcher hardware or launch site facilities,
- damage to an element of an interfacing manned flight system,
- the main functions of a flight system itself,
- pollution of the environment, atmosphere or outer space, and
- damage to public or private property
is not exceeded [ECSS-P-001B:2004].
NOTE 1: The term “safety” is defined differently in ISO/IEC Guide 2 as
“freedom from unacceptable risk of harm”.

software: See ‘software product’.

software component: Part of a software system.
NOTE 1: Software component is used as a general term.
NOTE 2: Components can be assembled and decomposed to form new
components. In the production activities, components are implemented as
modules, tasks or programs, any of which can be configuration items. This
usage of the term is more general than in ANSI/IEEE parlance, which defines
a component as a “basic part of a system or program”; in this Standard,
components are not always “basic” as they can be decomposed.
[ECSS-E-40B:2003]

software critical item list: A general term covering critical system functions list, critical
software requirements list, critical software components list, and critical software units list.

software criticality analysis: An analysis resulting in the definition of a software critical
item list; it is carried out with the purpose of defining the scope of ISVV. Criticality is related
to safety and dependability but may refer to, for example, security, maintainability or any
other property defined by the ISVV Customer.
NOTE: This definition deviates from the usage (there is no definition) in
[ECSS-E-40B:2003] and [ECSS-Q-80B:2003], where software criticality
analysis is a safety and dependability analysis and a requirement for all software.

software criticality category: A number or letter designating the criticality of a failure mode
or an item. Software criticality categories are defined as part of a software criticality scheme.

software criticality scheme: The definition of a set of software criticality categories used for
a specific project or purpose. The categories are ordered from low to high criticality.
NOTE 1: There are usually 4 or 5 software criticality categories in a software
criticality scheme.
NOTE 2: In space projects, software criticality categories are usually named A
to D, with A being the most critical. For ISVV, software criticality categories
are numbered 1 to 4. This numerical scale will normally correspond to the
alphabetical software criticality categories so that 4 is equivalent to A, 3 to B,
2 to C, and 1 to D. However, the software criticality categories assigned to
functions, software products, software requirements, software components,
and software units for the purposes of ISVV may be different from those
assigned for development. The two scales are intended to avoid confusion.

software product: Set of computer programs, procedures, documentation and their associated
data [ECSS-E-40B:2003].
NOTE: software and software item are synonyms of software product.
1.4 Acronyms
AR Acceptance Review
B The B method (Formal Methods)
CA Code Analysis
CAR Code Analysis Review
CCS Calculus of Communicating Systems (Formal Methods)
CDR Critical Design Review
CFL Critical Function List
CR Criticality Analysis
CSP Communicating Sequential Processes (Formal Methods)
DAR Design Analysis Review
DDF Design Definition File
DDR Detailed Design Review
DJF Design Justification File
ESA European Space Agency
FDIR Fault Detection, Isolation and Recovery
FMEA Failure Mode and Effects Analysis
FMECA Failure Modes, Effects, and Criticality Analysis
FSM Finite State Machines
FTA Fault Tree Analysis
HOOD Hierarchical Object Oriented Design
HRT-HOOD Hard Real-Time HOOD
1.5 References
[AFISC:1985] Software System Safety, AFISC SSH 1-1, Headquarters Air Force
Inspection and Safety Center, 5 September 1985
[ARTHUR:1999] James D. Arthur, Markus K. Gröner, Kelly J. Hayhurst, and C.
Michael Holloway, “Evaluating the Effectiveness of Independent
Verification and Validation,” IEEE Computer, October 1999.
[AUDSLEY:1991] Hard Real-Time Scheduling: the Deadline-Monotonic Approach, N.
C. Audsley, A. Burns, M. F. Richardson, and A. J. Wellings, IEEE
Workshop on Real-Time Operating Systems, 1991
[BS 7799-2:2002] BS 7799 Part 2, Specification for information security management
systems, 2002.
[BURNS:1993] HRT-HOOD: A Structured Design Method for Hard Real-Time Ada
Systems, A. Burns, A. Wellings, University of York, Version 2.0
Reference Manual, September 1993
[DETECT:1995] Comparing Detection Methods For Software Requirements
Inspections, IEEE Transactions on Software Engineering, 06/1995
[DNV ISVV:1992] Sven-Arne Solnørdal, Torbjørn Skramstad, and Jan Tore
Henriksen, Presentation of the ISVV Concept, ESSDE Reference
Facility Project (ESA 8900/90/NL/US(SC)), Doc.Ref.
ESSDE/MISC/B31, DNV Technical Report, April 9, 1992.
[DO-178B:1992] RTCA, DO-178B: Software Considerations in Airborne Systems
and Equipment Certification, December 1992.
[ECSS-E-40B:2003] ECSS, ECSS-E-40 Part 1B, Space engineering, Software – Part 1:
Principles and requirements, 28 November, 2003.
[ECSS-M-00-03B:2004] ECSS, ECSS-M-00-03B, Space project management, Risk
Management, 16 August, 2004.
[ECSS-P-001B:2004] ECSS, ECSS-P-001B, Glossary of terms, 14 July 2004.
[ECSS-Q-40B:2002] ECSS, ECSS-Q-40B, Space product assurance - Safety, 17 May
2002.
[ECSS-Q-80-03d:2004] ECSS, ECSS-Q-80-03 Draft 1, Space product assurance, Methods
and techniques to support the assessment of software
dependability and safety, 8 April 2004.
[ECSS-Q-80B:2003] ECSS, ECSS-Q-80B, Space product assurance, Software product
assurance, 10 October, 2003.
[EN 50128:1997] CENELEC, EN 50128: Railway Applications: Software for Railway
Control and Protection Systems, 1997.
[HRTOSK:1991] Hard Real-Time Operating System Kernel: Overview and Selection
of Hard Real-Time Scheduling Model, British Aerospace and
University of York, ESTEC Contract “HRTOSK” - Task 1 Report,
1991
[IEC 60880:1986] IEC, IEC 60880: Software for Computers in Safety Systems of
Nuclear Power Stations, 1986.
[IEC 61508-1:1998] IEC, IEC 61508: Functional safety of
electrical/electronic/programmable electronic safety-related
systems – Part 1: General requirements, First Edition, 1998.
[IEEE 1012:1998] IEEE, IEEE Standard 1012: IEEE Standard for Software Verification
and Validation, 1998.
[IEEE 1074:1997] IEEE, IEEE Standard 1074: IEEE Standard for Developing
Software Life Cycle Processes, 1997.
[INSPEC:1976] Design and Code Inspections to Reduce Errors in Program
Development, IBM Systems Journal, Vol. 15 No. 3, 1976.
[ISO 9000:2000] ISO, ISO 9000: Quality management systems – Fundamentals and
vocabulary, 2000.
[ISVV TN2:2005] ISVV Process and Facility, TN 2 - Methods And Tools Tradeoff
Analysis, DNV Report No.: 2005-1033, Rev. 1.2, 28 August 2005.
[LEVESON:1987] Safety Analysis Using Petri Nets, Leveson, Nancy G., Janice L.
Stolzy, IEEE Transactions on Software Engineering, Vol. SE-13,
No. 3, The Institute of Electrical and Electronics Engineers, March
1987
[NASA IV&V] NASA, Software Independent Verification and Validation
(IV&V)/Independent Assessment (IA) Criteria,
http://ivvcriteria.ivv.nasa.gov.
[NIST5589:1995] A Study on Hazard Analysis in High Integrity Software Standards
and Guidelines, U.S. Department of Commerce, Technology
Administration, National Institute of Standards and Technology,
January 1995
[PASCON WO12-TN2.1:2000] RAMS related static methods, techniques and procedures
concerning software, Issue 1.0, 2 May 2000
1.6 Outline
The document consists of the following sections:
• The introduction, of which this outline is a part, describes the background and motivation for
ISVV as well as the purpose of the ISVV guide.
• Section 2.0 elaborates on the topic of ISVV, describing types of independence, the
objectives of ISVV as well as its relationship to development verification and validation.
• Section 3.0 provides an overall view of the ISVV process.
• Section 4.0 describes the ISVV process management activity, detailing on ISVV roles,
responsibilities, tasks and other aspects of management.
• Section 5.0 describes the Criticality Analysis activity, and how Criticality Analysis can be
used to identify the scope of ISVV.
• Sections 6.0, 7.0 and 8.0 describe the verification activities of Technical Specification Analysis,
Design Analysis, and Code Analysis, respectively.
• Section 9.0 describes the Independent Validation Activity.
• Finally, there are a number of annexes providing more detailed information related to the
various ISVV activities.
2.0 What is Independent Software Verification and Validation?
ISVV also implies verification and validation additional and complementary to that carried out by
the software developer. Research has shown that such independence in verification and
validation of software produces better software for less money [ARTHUR:1999].
2.1 Types of Independence
The fundamental idea of independence is that some verification and validation activities are
carried out by a person other than the person responsible for (the development/design of) the
product or process being verified. Independence is strengthened by increasing the emotional
and organisational distance between the developer and the verifier. Many safety-related
standards (e.g. [IEC 61508-1:1998]) thus distinguish between:
• independent person,
• independent department, and
• independent organisation.
The higher the criticality of the system (and the software), the more independence is required.
The independent person may belong to the same department as the writer/developer, but
should not have been involved in writing the specification or the code. This is the minimum level
of independence, frequently used for document reviews or desk checking within most
companies. The independent department requires verification to be carried out by people from
a different department within the same organisation. The department could be the quality
assurance department, or a department dedicated to V&V on a specific project. For two
organisations to be independent they must be different legal entities with different management
groups and preferably different owners. This level of independence is required for auditors in
financial auditing as well as various types of third party certification.
Two of the main benefits of independence mentioned above are that it provides a different point
of view and separation of concerns. The IEEE Standard for Software Verification and
Validation [IEEE 1012:1998] distinguishes between different types of independence addressing
these concerns:
• technical independence,
• managerial independence, and
• financial independence.
• Technical independence requires the V&V effort to utilize personnel who are not involved in
the development of the software. The IV&V effort must formulate its own understanding of
the problem and how the proposed system is solving the problem. Technical independence
("fresh viewpoint") is an important method to detect subtle errors overlooked by those too
close to the solution. For software tools, technical independence means that the IV&V effort
uses or develops its own set of test and analysis tools separate from the developer's tools.
Sharing of tools is allowable for computer support environments (e.g., compilers,
assemblers, utilities) or for system simulations where an independent version would be too
costly. For shared tools, IV&V conducts qualification tests on tools to ensure that the
common tools do not contain errors which may mask errors in the software being analyzed
and tested.
• Managerial independence requires that the responsibility for the IV&V effort be vested in an
organization separate from the development and program management organizations.
Managerial independence also means that the IV&V effort independently selects the
segments of the software and system to analyze and test, chooses the IV&V techniques,
defines the schedule of IV&V activities, and selects the specific technical issues and
problems to act upon. The IV&V effort provides its findings in a timely fashion simultaneously
to both the development and program management organizations. The IV&V effort must be
allowed to submit to program management the IV&V results, anomalies, and findings without
any restrictions (e.g., without requiring prior approval from the development group) or
adverse pressures, direct or indirect, from the development group.
• Financial independence requires that control of the IV&V budget be vested in an
organization independent of the development organization. This independence prevents
situations where the IV&V effort cannot complete its analysis or test or deliver timely results
because funds have been diverted or adverse financial pressures or influences have been
exerted.
The primary purpose of technical independence is thus to ensure a different point of view, while
the purpose of managerial and financial independence is separation of concerns. An
independent person may be sufficient to achieve technical independence. However, being part
of the same technical culture may still lead to basic assumptions being unquestioned.
Managerial independence requires at least an independent department. The same is true for
financial independence. However, for all of these types of independence, an increased
organisational distance will increase the independence and thus reduce the risk of assessments
being unduly influenced by non-relevant (to the verification/validation objective) concerns.
In European space industry, full technical, managerial and financial independence is required for
ISVV of critical software. The ISVV supplier is required to be an organisation independent of the
software supplier as well as the prime (system integrator).
In European space projects, the ISVV customer has traditionally been either the prime or the
end customer. The prime should not be the ISVV customer if the prime itself (or any of its
subsidiaries) is also developing software subject to ISVV.
The recommendation of this ISVV guide is that the ISVV supplier should be a fully independent
company and that the ISVV customer should be the end customer or the prime (unless the
prime is developing software subject to ISVV).
2.2 Objectives of ISVV
As with any verification and validation activity, the objective of ISVV is to find faults and to raise
confidence in the software subject to the ISVV process. The emphasis of either of these
objectives may vary, depending on the maturity of the software, budget, time, the maturity of the
software supplier, the complexity of the software product, as well as the distribution of
responsibility between the software developer’s V&V and the ISVV supplier’s V&V.
The effectiveness of the ISVV process is evident when faults are actually found. However, if a
lot of faults continue to be found, the software will be considered immature and it cannot be
trusted. If, on the other hand, faults are not found, the reason may either be that the software is
actually devoid of problems or that the ISVV process is not effective. To make the ISVV process
a confidence raising measure for the software thus requires trust in the process itself and in the
people executing it.
Raising the confidence is particularly important for critical software, whose failure may lead to
hazardous events, damage to health, environmental damage, grave economic losses, or loss of
reputation. ISVV is therefore usually targeted to find critical faults with respect to safety or
dependability. This is also the main emphasis of this guide. However, in other cases, ISVV may
target other quality attributes, including security, maintainability, reusability, and usability.
2.3 ISVV is Complementary to Developer’s V&V
ISVV should provide added value over the verification and validation carried out by the software
developer. The approach of the ISVV supplier thus has to be complementary. What does this
mean in practice?
Both the developer’s verification and validation team and the ISVV team share the objective of
finding faults as early as possible. This requires a “destructive” attitude contrary to the
“constructive” attitude of developers. However, especially when pressed for time and budget,
the tendency towards positive thinking in the developer organisation (they have developed the product
so they know it) may work to the detriment of the quality of the product, in particular as
regards robustness. The ISVV team, not being subject to the same pressures, can focus
solely on finding possible weaknesses and faults, trying to break the software.
The developer’s verification and validation process will have to comply with the requirements of
standards forming the basis for the contract with the software customer as well as company
internal requirements. Still there is a lot of room for customisation and interpretation. Even in
the same company, the verification and validation plan of one software product may thus look
different from that of another. The developer’s verification and validation plan is one of the
factors that the ISVV vendor should take into account to ensure complementarity.
ISVV may choose to use methods and tools different from those of the development
organisation. In some cases, one method is an alternative to another; in others, methods are
complementary and not substitutable.
There are often many different tools in the market supporting the same method. Where the
functionality of tools overlaps considerably, the complementarity of using a different tool may be
questioned. However, different implementations may still yield slightly different results,
reflecting particular strengths and weaknesses of the tools themselves. Where two tools
produce the same results, at least the confidence in the findings is increased.
Even if a verification or validation task is repeated, using the same methods and tools as before,
having it done by another person may still yield interesting results. Few verification and
validation methods and techniques are deterministic, unless wholly automated by a tool. Two
persons carrying out the same verification activity will not necessarily get the same results or
discover the same problems; problems overlooked by one person may be found by the other.
3.0 ISVV Process Overview
[Figure 1: ISVV Process Activities - MAN Management, MAN.CR Criticality Analysis, IVE.DA Design Analysis, IVE.CA Code Analysis, IVA Validation]
ISVV Process Management (MAN.PM) is concerned with issues such as roles, responsibilities,
planning, budgeting, communication, competence, confidentiality etc. It involves responsibilities
of both the ISVV customer and the ISVV supplier.
Criticality Analysis (MAN.CR) is an activity supporting both ISVV Process Management and the
Verification and Validation tasks. It provides important input for ISVV planning: How can the
available budget best be used? The activity defines the scope and rigour of subsequent V&V
activities by assigning software criticality categories and ISVV levels to software requirements,
components and units.
Design Analysis (IVE.DA) is verification of the Software Architectural Design and the Software
Detailed Design. The activity ends with a Design Analysis Review (DAR).
Code Analysis (IVE.CA) is verification of the software source code. The activity ends with a
Code Analysis Review (CAR).
Validation (IVA) is testing of the software to demonstrate that the implementation meets the
Technical Specification in a consistent, complete, efficient and robust way. The activity ends
with an Independent Validation Review (IVR).
Figure 2 relates the ISVV activities to the software development processes and the review
milestones defined by [ECSS-E-40B:2003]. In addition, four ISVV reviews are defined. The figure
indicates possible early and likely start times as well as end times of the activities. More specific
guidance is provided with the individual activity descriptions.
Each of the ISVV activities is described in detail in the following sections. The ISVV Process
Management activity is given special treatment, but otherwise the main structure of an activity
description is:
• Activity overview
• Activity inputs and prerequisites
• Activity outputs
• Activity management
− Initiating and terminating events
− Completion criteria
− Relations to other activities
• Task descriptions
• Methods
Every activity is broken down into tasks and sometimes sub-tasks. Each task (with the
exception of the project management tasks) is described in a table format with the following
fields:
Start Event: Start constraint for the task (might be tailored depending on the
characteristics/objectives of specific ISVV projects)
End Event: End constraint for the task (might be tailored depending on the
characteristics/objectives of specific ISVV projects)
Responsible: Identification of the party responsible for task execution, either the ISVV supplier or
the ISVV customer.
Sub Tasks (per ISVV Level): Task breakdown into subtasks, organised per ISVV level.
A specific ISVV project may include all, one, or some of the verification and validation activities
referred to above and, in turn, all or some of their tasks and subtasks. There are dependencies
between the activities; output of previous activities is often used as input for later activities. If
one or more of the activities are defined to be outside the scope of the ISVV project, some of
the tasks of the activity may nevertheless have to be performed for the prerequisites (in terms
of required input) of other activities to be fulfilled. The dependencies are described as part of
the individual activity descriptions.
[Figure 3: ISVV process management in context]
[Figure 4: ISVV Process Management Tasks; labels include the Software Development Plan, Software Product Assurance Plan, Software Verification and Validation Plan, Documents and Code from Software Development, Criticality Analyses, the ISVV Process Planning task, the ISVV Plan, Requests for Clarification, and Progress Reports]
The figure also shows the inputs and outputs of each task.
The following topics are discussed in the subsections below, introducing important aspects to
consider for the management of any ISVV process:
• Roles, responsibilities, and tasks;
• Criticality analysis, definition of scope, and budgeting;
• Scheduling and milestones;
• Communication and reporting;
• Competence and motivation;
• Non-disclosure and security.
4.1.1 Roles and Responsibilities
Independent Software Verification and Validation is a service provided by an ISVV supplier to
an ISVV customer. In addition, the ISVV process may have interfaces to other roles:
• Software supplier (software developer)
• Software validation facility supplier
• System supplier (system integrator, prime, software customer)
• System customer (system owner)
One of the latter two roles is also likely to be the ISVV customer.
The responsibilities of the ISVV customer and the ISVV supplier are clearly indicated in the
task descriptions in section 4.5.
[1] Note that the figure shows only the most important inputs and outputs.
The following subsections describe the interfaces to the software supplier and the software
validation facility supplier.
4.1.1.1 Responsibilities of Software suppliers
The involvement of the software supplier in ISVV includes:
• Providing documents and code for ISVV;
• Assisting the ISVV customer in responding to requests for clarifications from the ISVV
supplier;
• Assisting the ISVV customer in assessing the findings of the ISVV supplier, their criticality
and resolution;
• Investigating and following up software problem reports resulting from ISVV findings.
All communication between the ISVV supplier and any of the software suppliers (when allowed)
shall be copied to the ISVV customer.
4.1.1.2 Interface with Software Validation Facility supplier
The Software Validation Facility supplier is the party providing the Software Validation Facility
for the ISVV supplier’s independent validation activity.
The involvement of the SVF supplier could be minimal, i.e. just providing the SVF for a given
period, or it could involve tasks such as specification and execution of test procedures, and
reporting of test results.
It is the ISVV customer’s responsibility to ensure the ISVV supplier gets (or gets access to) the
SVF. The recommendation of this ISVV guide is that the SVF be provided to the ISVV
supplier. This secures the ISVV project's access to the SVF, including in critical phases of the
project where resource contention would otherwise easily occur.
4.1.2 Criticality Analysis, Definition of Scope and Budgeting
The budget for ISVV should reflect the criticality of the software to be scrutinised. It is also
desirable to distinguish between criticality categories so that more effort can be spent on the
verification and validation of highly critical software than on software of lesser criticality.
This is the objective of the so-called Criticality Analysis (see definition in section 1.3), which
identifies the Software Criticality Category and ISVV Level of software items at various levels of
specification (software requirements, component, unit), both reducing the number of items
subject to ISVV and determining which verification and validation tasks to carry out for each
individual item.
As already indicated, Criticality Analysis is carried out throughout the ISVV project, refining the
scope as software development becomes more detailed. However, the first Criticality Analysis
task is the most important one as it is carried out by the ISVV customer and forms the basis for
allocating a budget for the ISVV contract. The ISVV supplier may also repeat the analysis to
verify the realism of the budget.
Later analyses are intended to refine the scope even further and should be done by the ISVV
supplier, with the ISVV customer reviewing the results and accepting the specified scope. The
criticality analyses may lead to updates to the ISVV plan and budget.
[Figure 5: ISVV cost model, relating resource attributes (competence, experience, productivity), product attributes (component maturity, complexity, size) and process attributes (ISVV task, ISVV methods, number of rounds) to man-hours and cost]
Figure 5 shows a model breaking ISVV costs into two major components, man-hour costs and
tool costs (there may be other costs, but these are for the moment ignored).
Man-hour costs depend on hourly rates and the estimated number of hours. Work hours are
spent on specific verification/validation tasks as well as on management. The number of hours
is a function of the competence, experience and productivity of the person carrying out the
activity, the size, complexity, and maturity of the work product (document or code) under
scrutiny, the number of rounds of verification/validation (repetitions), the ISVV task carried out,
as well as the type of verification/validation method applied. The rigour with which an ISVV
task is to be carried out (if it is to be carried out at all), depends on the ISVV Level. The ISVV
task is supported by one or more ISVV methods. The number of repetitions may have a big
impact on costs, and for fixed-price contracts it is thus crucial that the number of repetitions is
defined.
Tools support specific methods for specific verification and validation tasks. Tool costs could
be broken down into investment costs (or depreciation costs) and costs for using the tool
(based on hourly rates). The use of tools may greatly increase the efficiency of carrying out
ISVV tasks, thereby reducing man-hours.
Determining the total ISVV cost requires calculation of man-hour costs and tool costs for all of
the V&V tasks. To ensure a repeatable process, the work breakdown structure should be
defined. The ISVV Level will affect the cost through the set of verification and validation tasks
defined, and the rigour with which tasks are carried out.
A fundamental question is whether the ISVV budget should somehow be linked to the budget
of the software development project, e.g. as a percentage adjusted according to the ISVV level
of the software. The advantage is that the budget is immediately scaled to reflect the financial
realities of the project and the expectations of stakeholders; it makes for a good rule of thumb.
The alternative is a bottom-up approach calculating the budget from the number of critical
functions and their ISVV levels.
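To make the bottom-up alternative concrete, the sketch below walks through a purely illustrative calculation in Python. The activity list, hourly rate, number of rounds and tool cost are hypothetical assumptions introduced for the example; this guide prescribes no specific rates or hours.

# Illustrative bottom-up ISVV cost estimate. All figures are hypothetical
# assumptions for the sake of the example, not values prescribed by this guide.

HOURLY_RATE_EUR = 95.0    # assumed average man-hour rate
ROUNDS = 2                # assumed number of verification/validation rounds agreed in the contract
TOOL_COSTS_EUR = 15000.0  # assumed tool investment (or depreciation) plus usage costs

# Hypothetical work breakdown: estimated hours per activity and per round, already
# adjusted for ISVV level, component size/complexity/maturity and analyst productivity.
hours_per_round = {
    "Technical Specification Analysis": 120,
    "Design Analysis": 160,
    "Code Analysis": 200,
    "Independent Validation": 240,
    "ISVV Process Management": 80,
}

man_hour_cost = sum(hours_per_round.values()) * ROUNDS * HOURLY_RATE_EUR
total_cost = man_hour_cost + TOOL_COSTS_EUR
print(f"Man-hour cost: {man_hour_cost:,.0f} EUR; total ISVV cost: {total_cost:,.0f} EUR")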
With any estimation activity there is uncertainty. As work progresses through the phases of
the ISVV project (technical specification analysis, design analysis, code analysis) more
information becomes available (e.g. about the maturity of documents) and better estimates can
be made for the remaining phases. The criticality analyses performed also provide input to the
scoping of the ISVV activities, both in terms of which software items to submit to ISVV and in
terms of which ISVV tasks to carry out (derived from the ISVV Level).
4.1.3 Scheduling and Milestones
Scheduling the ISVV project is difficult, as there is a strong dependence on the progress of the
software development projects. Delays in software development activities will cause
corresponding delays in ISVV activities. A scheduled date for the start of activities may be
provided, with the understanding that this date may have to change if deliverables from the
software development projects are late.
This guide does not identify specific initiating events for ISVV activities. A general
recommendation is that input documents and code from the software suppliers should be
sufficiently mature. This is usually the case after the corresponding development reviews have been held:
PDR for Technical Specification, DDR for Architectural and Detailed Design, and CDR for
Code. The Independent Validation activity consists of three major tasks: identification of test
cases, specification of test procedures, and execution of test procedures. Identification of test
cases may start as early as PDR, when stable documentation becomes available. Carrying out
the independent software validation testing effectively requires completion of the software
development validation testing – this usually finishes at QR.
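As a compact, non-normative summary of the start points mentioned above, the dependencies can be captured as a simple mapping (illustrative only; the actual milestones are agreed per project):

# Illustrative only: earliest recommended start points for ISVV activities, expressed
# as the development review after which the corresponding inputs are usually mature.
EARLIEST_START_AFTER = {
    "Technical Specification Analysis": "PDR",
    "Design Analysis (architectural and detailed)": "DDR",
    "Code Analysis": "CDR",
    "Independent Validation - identification of test cases": "PDR",
    "Independent Validation - execution of test procedures": "QR",
}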
In some cases, documents may mature earlier, or it may be desirable to have ISVV provide
input to the development review – along with other review comments. In this case, ISVV
activities might start earlier. More guidance on initiation of the different verification and
validation activities is provided with the description of the activities.
A set of additional milestones has been defined for the ISVV project. These milestones are
presented in section 3.0.
4.1.4 Quality Management System
The ISVV supplier should have a suitable quality management system (fulfilling the
requirements of [ISO 9000:2000]) as well as an information management system (see section
4.1.5).
The ISVV supplier should have a proper software documentation and configuration
management system to manage and control all inputs from the software supplier, as well as all
of its ISVV activities' outputs (verification reports and test documentation, procedures and
reports).
As mentioned earlier, the ISVV process has many uncertainties (availability of inputs
from the software supplier, etc.) and risk elements (maturity of the elements under ISVV,
maturity of the SVF, etc.) that should be properly managed and controlled by the ISVV supplier
through a formalised risk management process.
4.1.5 Non-Disclosure and Security
Spacecraft software is high value intellectual property. It is therefore important that access to
documents and code (both source and executable code) is strictly controlled when handed
over to the ISVV supplier2. The ISVV supplier and other stakeholders involved in the ISVV
process, must fulfil requirements both with respect to non-disclosure and with respect to secure
handling of information.
The ISVV customer must in cooperation with the software suppliers (or the intellectual property
owner) and other stakeholders (system supplier/customer) determine the confidentiality
requirements, including:
• whether there should be different confidentiality classes for documents and what those
classes are;
• requirements for distribution and storage of confidential documents;
• requirements for personnel authorised to handle confidential documents.
The ISVV customer must also identify the documents required for the ISVV process and their
confidentiality class.
The ISVV supplier should have an information security management system in place to ensure
that distribution, storage, and handling of data fulfil the confidentiality requirements (e.g.
based on [BS 7799-2:2002]).
4.1.6 Competence
Independent software verification and validation requires special competence.
Requirements on the competence of the individual should cover formal education, experience,
as well as personal traits. There is still no consensus on what the requirements for ISVV
personnel should be, but we have included an example below (adapted from [DNV
ISVV:1992]):
[2] The software developer will most likely require the ISVV supplier not to be involved in any kind of
competing software development.
An ISVV project is usually carried out by an ISVV team, not a single individual. The ISVV team
should be composed to provide a mix of complementary competencies. The team as such
should be familiar with all methods and tools to be employed for the analyses. In addition, the
ISVV team manager should be experienced with project management, including the
management of ISVV projects. The project manager must also be able to handle the
contractual and human relations aspects of the project, and should also have sufficient
personal authority to defend the findings of the ISVV team.
4.2 Activity Inputs and Prerequisites
There are no particular prerequisites for starting the ISVV Management Activity.
The following subsections provide more detail about the activity inputs.
4.3 Activity Outputs
The outputs of the ISVV Management activity are:
• ISVV Plan
• Requests for Clarification
• ISVV Report (with ISVV Findings)
• Progress Reports
The following subsections provide more detail about the activity outputs.
This is a real dilemma. Both approaches have been tried in real projects and there is no
definite conclusion as to what is best. The recommendation is to allow requests for
clarification, but to ensure that such requests are properly recorded so that the obscurity found
is not just forgotten about. The clarifications made and the responses given should be
included in the ISVV report.
4.3.3 ISVV Report (with ISVV Findings)
The findings of ISVV are reported in an ISVV report. There will usually be several ISVV reports
produced by an ISVV project, e.g. one per ISVV activity per software product. The ISVV report
shall highlight all the potential problems identified by the ISVV activity. The ISVV
supplier shall classify the findings into e.g. ‘major’, ‘minor’, and ‘comment’, based on an
assessment of the potential consequence of the finding. The classification must later be
assessed by the ISVV customer, who may reclassify the findings. The ISVV report
should be presented to the ISVV customer who should review it and accept it. The purpose of
the review meeting is to make sure that the ISVV report is understood by the customer. If
requests for clarification have been allowed, the request and the clarification should also be
included in the verification report. In the end, the ISVV customer must approve the ISVV
report. An example form for reporting individual review item discrepancies is included in
section 11.0.
The cost of correcting the supplied software in response to an ISVV finding is lower the earlier
the correction is made. The ISVV supplier may therefore provide early feedback on major issues to
the ISVV customer. The ISVV customer is not required to respond to these findings.
It is the responsibility of the ISVV customer to filter the ISVV findings as presented in the ISVV
report and consider whether a particular finding warrants the creation of a software problem
report. The software problem report (SPR) is the usual mechanism by which a software
supplier is notified that a problem exists with the software. The status of all SPRs sent to the
software supplier could be reported to the ISVV supplier to optimize the ISVV activities and
tasks.
4.3.4 Progress Reports
For ISVV projects of more than a few weeks' duration (as is likely to be the case for most
projects), the ISVV supplier should provide regular progress reports to the ISVV customer.
The progress report describes the progress of the project with respect to plan and (if not a fixed-
price contract) budget, also notifying the customer of any problem areas. Progress reports will
often be issued in conjunction with progress meetings.
4.4 Activity Management
4.4.1 Initiating and Terminating Events
The ISVV Management activity starts for the ISVV customer when it becomes clear that a
software product for which the customer is responsible (either as developer or integrator) will
require ISVV. This may be at an early stage in the ISVV customer’s process of bidding for the
development of the software or the system containing the software. The ISVV supplier will
start the activity when deciding to prepare a response to a request for ISVV services.
ISVV Management ends with the close of the ISVV contract, i.e. with the acceptance by the
ISVV customer of all deliverables required by the contract and described in the ISVV Plan.
Criticality Analysis provides important input to the ISVV Management activity for budgeting and
planning.
Inputs:
[3] There are of course also other parties influencing this process, e.g. the ISVV suppliers.
Perform the Technical Specification Criticality Analysis to identify the ISVV scope, level, and
critical software requirements list. This may be carried out by the ISVV Customer or the ISVV
Supplier. See also section 5.5.2.
- MAN.PM.T1.S5: Estimate ISVV budget (ISVV Supplier)
The ISVV Supplier should do an independent estimation of the ISVV budget. See section 4.1.2.
- MAN.PM.T1.S6: Develop ISVV plan (ISVV Supplier)
The ISVV Supplier must define an ISVV plan (a draft could be part of the proposal). The plan
should be approved by the ISVV Customer. The developer’s software development plan,
software product assurance plan, and software verification and validation plan should be taken
into account if available (overall coordination planning data is to be provided by the ISVV
Customer). See section 4.3.1. An outline of a sample ISVV plan is found in section 10.0.
- MAN.PM.T1.S7: Approve ISVV Plan (ISVV Customer)
The ISVV Customer should approve the ISVV plan developed by the ISVV Supplier. An outline
of a sample ISVV plan is found in section 10.0.
- MAN.PM.T1.S8: Determine confidentiality issues and prepare NDAs (ISVV Customer)
It is the responsibility of the ISVV Customer to clarify confidentiality requirements and ensure
these are kept throughout the project through the signing of Non-Disclosure Agreements with the
ISVV Supplier and any of its sub-contractors (see section 4.1.5).
- MAN.PM.T1.S9: Approve scope definition resulting from Criticality Analysis (ISVV
Customer)
All criticality analyses must be approved by the ISVV Customer. See also section 4.2.1.
Outputs:
Inputs:
schedule management (see section 4.1.3), budget management, resource management, activity
management, risk management, quality management, document management, and security
management.
- MAN.PM.T2.S2: Submit documentation and code to ISVV Supplier (ISVV Customer)
It is the responsibility of the ISVV Customer to provide all documentation and code necessary for
ISVV planning and for the verification and validation activities to the ISVV Supplier. See also
section 4.2.2.
- MAN.PM.T2.S3: Check received documentation (ISVV Supplier)
Any documentation and code received from the ISVV Customer or other parties of the ISVV
project should be registered and checked by the ISVV Supplier.
- MAN.PM.T2.S4: Perform verification and validation activities (ISVV Supplier)
The ISVV Supplier must carry out the verification and validation activities as described in the
ISVV plan.
- MAN.PM.T2.S5: Request clarifications (ISVV Supplier)
The ISVV Supplier may request clarification from the ISVV Customer. See section 4.3.2.
- MAN.PM.T2.S6: Respond to Requests for Clarification (ISVV Customer)
Whenever the ISVV Supplier issues a Request for Clarification, the ISVV Customer should
provide feedback in a timely manner (see section 4.3.2).
- MAN.PM.T2.S7: Report early ISVV findings (ISVV Supplier)
The ISVV Supplier may provide early feedback on findings to the ISVV Customer.
- MAN.PM.T2.S8: Review early ISVV Findings (ISVV Customer)
The ISVV Customer shall review received early ISVV findings for criticality and impact on the
software/system, and shall take action as appropriate.
- MAN.PM.T2.S9: Produce ISVV verification report (ISVV Supplier)
For each ISVV activity (as defined by the ISVV plan), the ISVV Supplier must produce an ISVV
verification report in which all of the findings are reported. See section 4.3.3.
- MAN.PM.T2.S10: Conduct Review Meeting (ISVV Customer)
The findings and their resolution are discussed during a review meeting with participation of all
related parties. The meeting is the responsibility of the ISVV Customer.
- MAN.PM.T2.S11: Produce ISVV findings resolution report (ISVV Customer)
In response to each ISVV report, the ISVV Customer should produce an ISVV findings resolution
report, describing how each finding is resolved. The reports should be distributed to the ISVV
Supplier and the end customer (see section 4.2.3).
- MAN.PM.T2.S12: Implement resolutions (ISVV Customer)
The ISVV Customer is responsible for ensuring that the resolutions described in the ISVV
findings resolution report are implemented. The ISVV Supplier is not responsible for following up
the findings.
- MAN.PM.T2.S13: Update criticality analyses (ISVV Supplier)
The criticality analysis may be updated throughout the project to further limit the scope of
subsequent verification and validation activities. This is the responsibility of the ISVV Supplier,
although the ISVV Customer may also be involved. See sections 5.5.3 and 5.5.4.
Outputs:
4.6 Methods
Methods used for ISVV Process Management are not different from project management
methods in general and will not be further discussed in this Guide.
The figure also shows the inputs and outputs of each task. Note that the figure shows only the most important inputs and outputs.
If ISVV is to be limited to only a subset of the verification and validation activities, some of the
Software Criticality Analysis tasks may not be included. For example, for the verification
activities, if Code Analysis is not to be carried out there is no need to carry out Software Code
Criticality Analysis. If both Design Analysis and Code Analysis are to be excluded, both
corresponding Software Criticality Analysis activities may be excluded as well.
The System Level Criticality Analysis and the Software Technical Specification Criticality
Analysis will always have to be carried out - also in the case where ISVV consists only of
Independent Validation. In the case where earlier verification activities have been left out (e.g.
there is no Technical Specification Analysis or Design Analysis), but there are still verification
activities included in ISVV, the Criticality Analyses corresponding to the left-out verification activities may still have to be carried out to ensure that the prerequisites for the remaining ISVV activity/activities are fulfilled.
Software Criticality Analysis is carried out using (Software) Failure Modes, Effects and
Criticality Analysis ((S)FMECA), supported by traceability analysis, control flow/call graphs
analysis, and complexity measurements.
It is important to emphasise that the use of these methods would not be as rigorous for the
Software Criticality Analysis as for the Safety and Dependability Analyses to be carried out as
part of the verification activities. The purpose here is not to find all potential problems
(hazards, failures, etc), but to scope the verification and validation activities. Also, the
performance of these specific analyses depends to some degree on what analyses are already
available from the software developer or system integrator.
Items of the lists will usually be grouped by software product. For a given software product, the
list will include all items included in the product with a software criticality category and an ISVV
level assigned to each of them.
Level Description
ISVVL 0 No ISVV activities are required.
ISVVL 1 Basic ISVV is required.
ISVVL 2 Full ISVV is required.
Table 2: ISVV levels
For a given Verification and Validation activity, the ISVV Level provides guidance to the
selection of tasks and the rigour of performing each task within the activity. As a guiding
principle, verification and validation tasks at ISVVL 1 consist of scrutinising analyses already performed by the software developer, whereas at ISVVL 2 the ISVV supplier will perform independent analyses.
When a given verification task is to be applied at ISVVL 2, then the input to the task will be all
items of the critical item list which have been assigned an ISVV Level of 2. Some tasks shall
only be applied at ISVVL 1 whereas other tasks apply to both ISVVL 1 and 2.
In some cases, the verification tasks cannot be applied to individual items, but the entire
specification of a software product should be taken as input. Examples of this are the
verification of readability of a design document (IVE.DA.T2.S5) or the verification of timing and
sizing budgets of software (IVE.DA.T2.S6). To determine whether such a task should be
carried out for a specific software product, one has to consider the ISVV Level of any item
contained in it.
The ISVV Level is derived from the Software Criticality Category (SCC) but may be adjusted
upward if there are other risk factors warranting increased verification and validation (see next
section).
A software criticality scheme, defining software criticality categories, will usually have been
defined for the software development projects, to allow tailoring of the development process to
the criticality of the software. A common scheme may have been defined for all of the software
products embedded in the system or several different schemes may have been in use.
What was deemed critical for the development project may or may not be what is considered
critical for ISVV. Before adopting any existing criticality scheme, it should be carefully
scrutinised to ensure it is aligned with the ISVV objectives.
If the existing software criticality scheme is not appropriate for the purpose of the ISVV, a new
scheme will have to be defined. Inspiration may be taken from the examples included in
section 13.0 of this document (from [ECSS-Q-80-03d:2004, Annex B]). There are two
examples from the domain of space engineering, where the distinction between un-manned
and manned missions is fundamental. The examples relate to safety and dependability. It
has often proved difficult to include mission success criteria (e.g. related to availability) in the
definition of software criticality categories. This could be taken into account when defining
software criticality categories for ISVV.
schemes. For the example criticality categories presented in annex D, the natural mapping is
to make 4 equivalent to A, 3 to B, 2 to C, and 1 to D.
Throughout the various stages of the criticality analysis activity, criticality categories are
assigned to failure modes (of functions/requirements/components), as well as to system
functions, software requirements, software design components and software units.
As a baseline, the ISVV Level for a software item is determined by the software criticality
category as follows:
SCC ISVVL
SCC 4 ISVVL 2
SCC 3 ISVVL 1
SCC 2 ISVVL 0
SCC 1 ISVVL 0
Table 3: Default mapping from Software Criticality Category to ISVV level
5.1.2 Adjusting the ISVV Level
The ISVV Level of a system function, software requirement, component, or unit is primarily
determined by its criticality category.
However, there may be a range of factors associated with a specific software product, which
may lead one to consider intensifying the verification and validation of the software. These
factors are not related to the criticality of the software as determined by safety or dependability
analyses, but to characteristics of the development organisation, the development process or
the software itself which may affect the quality of the software. We will call this the error potential.
This type of adjustment is most appropriate for the initial software criticality analyses, i.e.
System Level Software Criticality Analysis or Software Technical Specification Criticality
Analysis, because the characteristics considered may be different for the different software
products (e.g. because sub-systems with their software are developed by different suppliers).
The factors influencing the error potential are listed as yes/no questions in a questionnaire.
The more ‘yes’ responses, the higher the potentially negative impact on software quality.
Based on a qualitative assessment one may then decide to increase the ISVV level, i.e. from
ISVVL 1 to ISVVL 2 or from ISVVL 0 to ISVVL 1 or 2. It should be noted that when the ISVV
level of an item is raised from 0 to 1, this is effectively equivalent to increasing the number of
items subject to ISVV. The table below shows the mapping from software criticality category
and error potential to ISVV Level:
                          Error Potential
SCC Category              Low          Medium            High
SCC 4                     ISVVL 2      ISVVL 2           ISVVL 2
SCC 3                     ISVVL 1      ISVVL 1 or 2      ISVVL 2
SCC 2                     ISVVL 0      ISVVL 1           ISVVL 1 or 2
SCC 1                     ISVVL 0      ISVVL 0           ISVVL 1
Table 4: Matrix to derive ISVV level from Software Criticality Category and Error Potential
For some software criticality categories and error potential levels, the table provides two
choices of ISVV Level, the decision being left to expert judgement after having assessed the
error potential.
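As an illustration only, the derivation of the ISVV Level can be expressed as a small lookup, shown in the Python sketch below. The sketch assumes the mapping of Table 4 as presented above; where the table offers two choices, both are returned so that the final decision can be left to expert judgement. The function name and data structure are illustrative and are not prescribed by this Guide.

# Illustrative sketch (not prescribed by this Guide): candidate ISVV levels
# per Software Criticality Category (SCC) and error potential, per Table 4.
ISVV_LEVEL_MATRIX = {
    4: {"low": (2,), "medium": (2,),   "high": (2,)},
    3: {"low": (1,), "medium": (1, 2), "high": (2,)},
    2: {"low": (0,), "medium": (1,),   "high": (1, 2)},
    1: {"low": (0,), "medium": (0,),   "high": (1,)},
}

def isvv_level_choices(scc, error_potential):
    """Return the candidate ISVV level(s) for an item; where two values are
    returned, the choice is left to expert judgement."""
    return ISVV_LEVEL_MATRIX[scc][error_potential]

# Example: a requirement of SCC 3 with medium error potential may be kept at
# ISVVL 1 or intensified to ISVVL 2.
print(isvv_level_choices(3, "medium"))   # (1, 2)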
The questionnaire itself is contained in chapter 12.0 (Annex C). There may be a need to fill in
several instances of the questionnaire if the different software products are developed by
different organisations/project groups or there are other reasons for believing the response to
the questions would vary.
One of the questions of the questionnaire is related to complexity. At the early stages of a
software development project, this is not quantifiable and will have to be based on experience
and sound judgement. For Software Code Criticality Analysis, complexity can be measured,
and the measurements will be used as input to error potential determination. A complexity
measure must be defined, with a threshold to distinguish between non-complex and complex
software units. At code level, complexity is given more weight than the other error potential
factors.
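As an illustration of how a complexity measure and threshold could be defined for task MAN.CR.T4.S4, the Python sketch below combines the cyclomatic complexity of the procedures in a unit with the number of other units using it. The weighting and the threshold value are assumptions chosen for the example; they are not values prescribed by this Guide.

# Illustrative sketch: classify software units as complex or non-complex.
# The score combines the highest cyclomatic complexity of the procedures in
# the unit with its fan-in (number of other units using it). The weight and
# threshold below are example values only.

def unit_complexity(cyclomatic_complexities, fan_in):
    """Complexity score of one software unit."""
    return max(cyclomatic_complexities, default=0) + 2 * fan_in

def is_complex(cyclomatic_complexities, fan_in, threshold=15):
    return unit_complexity(cyclomatic_complexities, fan_in) >= threshold

# Example: the most complex procedure has cyclomatic complexity 10 and the
# unit is used by 4 other units, so the unit is flagged as complex.
print(is_complex([3, 10, 6], fan_in=4))   # True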
5.1.3 Treatment of Diverse Criticality Categories
In some instances, system design splits critical software from non-critical, allowing the non-
critical software to follow a less strict development process than the critical software, thereby
potentially saving costs. However, such a split is only viable if it can be demonstrated that a
fault of the non-critical (or less critical) component cannot cause the critical component to fail.
Demonstrating this is a verification task, but for the criticality analysis, it must be ensured that
the functions/requirements/components/units constituting the boundary between the two
components are verified to the same ISVV level as the most critical items of the critical
software.
The boundary will be some sort of communication channel with built in checks to ensure that
fault propagation cannot occur. If the components reside on different processors, the
communication channel is likely to be a communication protocol stack; if they reside on the
same processor, it may be shared buffers, pipes, or files, probably managed by the operating
system and with extra hardware support to ensure that processes are strictly separated except
as managed by the operating system.
Figure 8: Visualisation of a software architecture with diverse category levels assigned (Comp X, criticality 4, and Comp Y, criticality 1, hosted on a common Operating System and separated by the ISVV boundary).
The fact that the boundary must be taken into account should follow from system or software
level safety and dependability analyses, but it is highlighted here as a case deserving special
attention.
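The kind of built-in check mentioned above can be illustrated with a minimal Python sketch of a receive routine on the critical side of the boundary. The message format, identifiers and limits used below are assumptions made for the example only and do not represent any particular on-board protocol.

# Illustrative sketch: the critical component accepts a message from the
# less critical component only if it passes all boundary checks, so that a
# fault on the non-critical side cannot propagate across the boundary.
import zlib

VALID_MESSAGE_IDS = {0x10, 0x11, 0x20}   # example set of accepted messages
MAX_PAYLOAD_LENGTH = 256                 # example payload size limit (bytes)

def receive(message_id, payload, crc):
    """Return the payload if all checks pass, otherwise None (reject)."""
    if message_id not in VALID_MESSAGE_IDS:
        return None                      # unknown message identifier
    if len(payload) > MAX_PAYLOAD_LENGTH:
        return None                      # oversized message
    if zlib.crc32(payload) != crc:
        return None                      # corrupted message
    return payload

# Example: a well-formed message is accepted, a corrupted one is rejected.
data = b"mode change request"
print(receive(0x10, data, zlib.crc32(data)))   # accepted
print(receive(0x10, data, 0))                  # None (rejected)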
5.2 Activity Inputs and Prerequisites
The following work products are input for the criticality analysis activity:
• From ISVV Customer:
− Software Criticality Scheme
− Critical System Functions List
− Mission and System Requirements Specification
− System Architecture
− Requirements Baseline
− System FMECA
− Technical Specification including Interface Control Documents
− SFMECA based on Technical Specification (if existent)
− Design Definition File: Software Architectural Design and Traceability Matrices
− Design Definition File: Software Detailed Design and Traceability Matrices (optional)
− Software safety/dependability analyses based on software architectural design or
software detailed design (if existent)
− Design Definition File: Software code
− Software safety/dependability analyses based on code (if existent)
Unlike the Independent Verification activities, the Criticality Analysis activity is split into four
tasks with different starting points in time. A prerequisite for starting any of these activities is
the availability of the required input at a satisfactory level of maturity. Please refer to the
individual tasks for a more detailed view.
5.3 Activity Outputs
The following work products are produced in the scope of the Criticality Analysis activity:
• Software Criticality Scheme
• Error Potential Questionnaires
• Critical System Functions List
• Critical Software Requirements List
• Critical Software Components List
• Critical Software Unit List
The initial Criticality Analysis (System Level Software Criticality Analysis) will normally be
carried out by the ISVV customer (and reviewed by the ISVV supplier during the tendering process)
as it is an important input for the cost estimation of the ISVV project.
5.4.2 Completion Criteria
The outputs of each of the Software Criticality Analysis tasks shall be reviewed in a joint review
meeting between the ISVV supplier and the ISVV customer to determine whether the output
provides a sufficient basis for the execution of subsequent verification and validation activities.
5.4.3 Relations to Other Activities
The primary relation of the Software Criticality Analysis to other activities is to the Verification and Validation activities, which use the output of the Software Criticality Analysis to limit the scope and guide the performance of the different analyses.
Input to the Software Criticality Analysis activity comes from System and Software Engineering
activities as well as from Independent Verification activities previously carried out (ISVV
findings).
The initial Software Criticality Analysis (System Level Software Criticality Analysis) is also an
important input to the cost estimation task of ISVV management.
These Software Criticality Analyses will normally not provide any feedback to the System or Software Engineering activities; they are only used to scope the ISVV activities.
5.5 Task Descriptions
5.5.1 System Level Software Criticality Analysis
TASK DESCRIPTION
Title: System Level Software Criticality Analysis Task ID: MAN.CR.T1
Activity: MAN.CR – Criticality Analysis
Start event: SRR – System Requirements Review
End event: PDR – Preliminary Design Review
Responsible: The System Level Software Criticality Analysis shall be carried out by the ISVV customer. The
result of the analysis will be reviewed by the ISVV supplier during the tendering process.
Objectives:
Inputs:
- MAN.CR.T1.S1: Identify the software criticality scheme used for the mission.
- MAN.CR.T1.S2: Evaluate whether the defined software criticality scheme is relevant for the ISVV
objective. If it is not, then define a new software criticality scheme for ISVV.
- MAN.CR.T1.S3: If there is a Critical Function List and the criticality scheme it is based on is relevant
for the ISVV objective, then use this CFL.
- MAN.CR.T1.S4: If there is no Critical Function List or the ISVV objective does not match the criteria
used to derive it, perform a simplified system FMECA along the lines described in section 13.1.
- MAN.CR.T1.S5: Identify each software product and its supplier. Fill in the error potential questionnaire
(see section 5.1.2) for each software product.
- MAN.CR.T1.S6: Assign ISVV level to each system function based on the software criticality category
of the function and error potential.
Outputs:
Inputs:
- MAN.CR.T2.S1: For each software product implementing critical system functions, identify any
SFMECA based on the Technical Specification available.
- MAN.CR.T2.S2: If an SFMECA exists and the criticality scheme used as a basis is relevant for the
ISVV objective, then it may be used as a basis for deriving the critical software requirements list.
- MAN.CR.T2.S3: If no such analyses have been carried out, the quality is too poor, or the ISVV
objective differs from the presumptions of the SFMECA, perform a simplified SFMECA based on the
Technical Specification including Interface Control Documents. Another simplified way of doing this
step is described in section 13.2.
- MAN.CR.T2.S4: Verify the consistency of the SFMECA with the Critical System Functions List. If
discrepancies are found, notify the ISVV customer who will have to consider consequences in terms of
re-analysis.
- MAN.CR.T2.S5: For each software requirement, derive the software criticality category by identifying
the highest criticality category of any failure mode associated with it.
- MAN.CR.T2.S6: Assign an ISVV level to each software requirement based on the software criticality
category of the requirement and error potential (there is no need to reassess error potential unless
different answers to the error potential questionnaire are expected at this level).
Outputs:
Inputs:
- MAN.CR.T3.S1: Review the findings of, and the safety/dependability analysis performed as part of, the
Technical Specification Analysis. Evaluate the consistency with the critical function list and the critical
software requirements list produced by the preceding Criticality Analyses. If discrepancies are found,
notify the ISVV customer who will have to consider consequences in terms of re-analysis.
- MAN.CR.T3.S2: If design level safety and dependability analyses exist from the developer, investigate
whether these may be used to assign software criticality categories to design components. The
software criticality scheme should be relevant for ISVV, the analysis should be based on the same
versions of documents as ISVV (or else a delta analysis must be carried out), and the results of any
higher level analyses it is based on should not be in conflict with the results of the Technical
Specification Analysis.
- MAN.CR.T3.S3: If not, trace the software requirements to software architectural design components.
Assign to each software component the highest software criticality category of any requirement tracing
to it.
- MAN.CR.T3.S4: Alternatively, extend the SFMECA carried out at software requirements level by
identifying software components as causes for requirements failure modes. This creates an alternative
trace from requirements to design components. Assign to each software component the highest
software criticality category of any failure mode to which it may contribute.
- MAN.CR.T3.S5: Identify any dependency mechanisms for the design language used (e.g. use or call
relationships).
- MAN.CR.T3.S6: Analyse the dependency of critical components on other components and adjust the software criticality category of these other components to be the same as that of the critical component depending on them. Some components may be used by several critical components; for these, assign the highest criticality category of any dependent component. A sketch of such a propagation is given after this task description.
- MAN.CR.T3.S7: Assign an ISVV level to each software component based on the software criticality
category of the component and error potential (there is no need to reassess error potential unless
different answers to the error potential questionnaire are expected at this level).
- MAN.CR.T3.S8: Software criticality categories and ISVV levels may also be assigned to detailed
design software components. The benefit of going to this level of detail for the criticality analysis
should be balanced against the costs induced.
Outputs:
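A minimal Python sketch of the dependency-based adjustment of MAN.CR.T3.S6 is given below. The component names are hypothetical, and the sketch assumes that criticality categories are represented as integers where a higher value means more critical.

# Illustrative sketch: propagate criticality categories along "uses"
# dependencies so that every component used by a critical component is
# assigned at least the criticality of its most critical user.

def propagate_criticality(criticality, depends_on):
    """criticality: {component: category}; depends_on: {component: set of
    components it uses}. Returns the adjusted criticality mapping."""
    adjusted = dict(criticality)
    changed = True
    while changed:                        # iterate until a fixed point
        changed = False
        for comp, deps in depends_on.items():
            user_cat = adjusted.get(comp, 0)
            for dep in deps:
                if adjusted.get(dep, 0) < user_cat:
                    adjusted[dep] = user_cat
                    changed = True
    return adjusted

# Example: a library used by a category 4 component inherits category 4.
print(propagate_criticality({"AOCS": 4, "MATH_LIB": 1}, {"AOCS": {"MATH_LIB"}}))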
Inputs:
- MAN.CR.T4.S1: Review the findings of, and the safety/dependability analysis performed as part of, the
Design Analysis. Evaluate the consistency with the critical system function list, the critical software
requirements list and the critical software component list produced by earlier criticality analyses. If
discrepancies are found, notify the ISVV customer who will have to consider consequences in terms of
re-analysis.
- MAN.CR.T4.S2: If code level safety and dependability analyses exist from the developer, investigate
whether these may be used to assign software criticality categories to software units. The software
criticality scheme should be relevant for ISVV, the analysis should be based on the same versions of
code as ISVV (or else a delta analysis must be carried out), and the results of any higher level
analyses it is based on should not be in conflict with the results of the Design Analysis.
- MAN.CR.T4.S3: If not, identify mapping rules from software design components to software units. For
each software component (either architectural design component or detailed design component) trace
the software component to source code. Assign to each software unit the software criticality category
of the software component it implements.
- MAN.CR.T4.S4: Define a complexity measure for software units. The complexity measure could be e.g.
based on cyclomatic complexity of procedures contained in the unit as well as the number of other
units using this unit. Define a threshold to distinguish non-complex from complex units.
- MAN.CR.T4.S5: Perform complexity measurements on source code.
- MAN.CR.T4.S6: Fill in the error potential questionnaire (see section 5.1.2) for each software unit,
taking into account the complexity measures.
- MAN.CR.T4.S7: Assign ISVV level to each software unit based on the software criticality category of
the software unit and error potential.
Outputs:
5.6 Methods
Some of the methods supporting the Criticality Analysis are not used for verification and
validation and are thus not listed in chapter 14.0 (Annex F). The comment field provides
information on where further information can be found.
The Technical Specification Analysis activity aims to verify the software requirements against
the following criteria:
• software requirements traceable to system partitioning and system requirements
• software requirements externally and internally consistent (not implying formal proof of consistency)
• software requirements unambiguous and verifiable
• software design feasible
• operations and maintenance feasible
• the software requirements related to safety and criticality correct (as shown by suitably
rigorous methods)
The Activity also aims to identify safety-critical and mission-critical design drivers and potential
test cases which may be given special attention during subsequent activities of the
independent software verification and validation processes.
[Figure: Technical Specification Analysis activity. Inputs: System Requirements allocated to Software (RB), Software Requirements Specification (TS), Software-Hardware Interface Requirements (RB), Interface Control Document (TS), Critical Software Requirements List (ISVV), Software Logical Model (TS) and Software Criticality Analysis Report (PAF). Tasks: Requirements Traceability Verification and Software Requirements Verification. Outputs: Traceability between System Requirements and Software Requirements, Traceability between System Requirements and Interface Requirements, Requirements Verification Report and Contribution to Independent Validation.]
The traceability verification is indicated in Figure 11 below by the relationships between higher-level and lower-level documents. Note that the figure shows only the most important inputs and outputs.
The inputs to the Technical Specification Analysis activity should comprise a mature, stable,
and self-consistent set to ensure that the analysis conducted on them is useful. A set of inputs
which meet these criteria is available for the customer’s Preliminary Design Review.
Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and
inputs for independent validation test cases specification. Traceability matrices might be
provided as annexes of verification reports or as separate documents.
6.4 Activity Management
6.4.1 Initiating and Terminating Events
The activity will be initiated on receipt of the required inputs. A suitable set of input documents
will be contained in the Datapack submitted by the software supplier for the customer’s
Preliminary Design Review. The Datapack is normally submitted some weeks prior to the
Review, but an earlier initiation of the activity could be achieved if a set of mature, stable, and
self-consistent documents can be made available by the software supplier at an earlier date.
The activity will be terminated on completion of the verification tasks which have been selected
by the customer during verification process implementation as identified in the ISVV Plan. The
required outputs will be submitted to the customer’s ISVV Technical Specification Analysis
Review Meeting.
6.4.2 Completion Criteria
The completion of the Requirements Traceability Matrices and Requirements Verification
Report and their submission to the ISVV customer contribute to the completion of the activity.
The customer’s ISVV Technical Specification Analysis Review Meeting, with the participation of
all involved parties, will allocate final dispositions to the findings of the activity.
6.4.3 Relations to other Activities
Safety-critical and mission-critical design drivers may be identified for further analysis in the
Design Analysis Activity. Potential test cases may be identified for the Validation Activity.
6.5 Task Descriptions
6.5.1 Requirements Traceability Verification
TASK DESCRIPTION
Title: Requirements Traceability Verification Task ID: IVE.TA.T1
Activity: IVE.TA - Technical Specification Analysis
Start event: PDR - Preliminary Design Review
End event: TAR - Technical Specification Analysis Review
Responsible: ISVV Supplier
Objectives:
- Identify the two-way relationships between the software requirements and interface specifications on the one hand and the system requirements allocated to software and the interface requirements on the other, and analyse the identified relationships for completeness, correctness, consistency, and accuracy.
Inputs:
Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to
deal with.
- IVE.TA.T2.S5: Verify the readability of the software requirements
Ensure that the software requirements documentation has a clear and consistent structure.
Ensure that the documentation is intelligible for its target readers and that all the required elements for its
understanding are provided (e.g. definition of acronyms, terms, and conventions).
- IVE.TA.T2.S6: Verify the timing and sizing budgets of the software requirements
Ensure that the software requirements for timing and sizing budgets (e.g. memory usage, CPU utilization, etc.) correctly represent the system performance requirements allocated to software.
Ensure that the software requirements for timing and sizing budgets (e.g. memory usage, CPU utilization, etc.) are specified with the accuracy required by the system performance requirements allocated to software.
Ensure that the acceptance criteria for validating the software timing and sizing budget requirements (e.g. memory usage, CPU utilization, etc.) are objective and quantified. A simple numerical cross-check is sketched after this task description.
- IVE.TA.T2.S7: Identify test areas and test cases for Independent Validation
Identify software requirements which cannot be analysed adequately for independent verification and
which, therefore, require execution of independent validation tests. Annotate this information (e.g. requirements, test cases) as a contribution to the Independent Validation activities.
- ISVV Level 2 only:
- IVE.TA.T2.S8: Verify that the software requirements are testable
Ensure that the acceptance criteria for validating the software requirements are objective and quantified.
Ensure that each software requirement is testable to objective acceptance criteria.
Ensure that software requirements are unambiguous.
- IVE.TA.T2.S9: Verify the feasibility of producing an Architectural Design
Ensure that it is possible to produce an architectural design from the defined software requirements.
- IVE.TA.T2.S10: Verify software requirements conformance with applicable standards
Ensure that the software requirements are compliant with applicable standards, references, regulations,
policies, physical laws, and business rules.
Outputs:
- Requirements Verification Report
- Contribution to Independent Validation
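The simple numerical cross-check referred to in IVE.TA.T2.S6 is sketched below in Python. The requirement identifiers, figures and margin are hypothetical example values and are not taken from any project.

# Illustrative sketch: check that the memory budgets specified in the
# software requirements stay within the system-level allocation, including
# an assumed margin. All figures are example values in kilobytes.

SYSTEM_ALLOCATION_KB = 512
REQUIRED_MARGIN = 0.20                   # 20 % margin assumed for the example

requirement_budgets_kb = {
    "SRS-MEM-010": 120,
    "SRS-MEM-020": 200,
    "SRS-MEM-030": 80,
}

allocated = sum(requirement_budgets_kb.values())
usable = SYSTEM_ALLOCATION_KB * (1 - REQUIRED_MARGIN)
print(f"allocated {allocated} kB of {usable:.0f} kB usable:",
      "within budget" if allocated <= usable else "budget exceeded")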
6.6 Methods
The table below identifies which methods can be used for the work to be performed within the
scope of each subtask. For each method it is stated whether it covers the purposes of the
subtask completely or partially.
The Design Analysis consists of the evaluation of the design of each software product, i.e. analysis of the Design Definition File (DDF) and Design Justification File (DJF), focusing on aspects such as:
• reliability, availability and safety, ensuring that sufficient and effective fault detection, isolation and recovery mechanisms are included,
• error handling mechanisms,
• initialisation / termination of software components,
• interfaces between software components and between software and hardware components,
• threads / processes synchronisation and resource sharing, and
• budget analysis, including schedulability analysis.
Design Analysis focuses on two main products, Software Architectural Design and Detailed Design, corresponding to two main phases of the analysis. In addition, Design Analysis should analyse the software user manual (Figure 13).
[Figure 13: Design Analysis activity. Inputs: Design Justification File, Technical Specification, Interface Control Documents, SW Architectural Design, SW Detailed Design and SW Item (application). Tasks: Architectural Design Traceability Verification, Architectural Design Verification, Detailed Design Traceability Verification and Detailed Design Verification. Outputs: Traceability between ICD and SW Architectural Design, Traceability between TS and SW Architectural Design, SW Architectural Design Independent Verification Report, Traceability between SW Architectural and Detailed Design, and Software Detailed Design Independent Verification Report. Note that the figure shows only the most important inputs and outputs.]
• Second, one shall verify the architectural design itself in order to check whether it is
consistent, correct, complete and readable such that it can be effectively tested, is
sufficient to produce a detailed design and is in conformance with the applicable standards.
Figure 14 illustrates the verification subtasks to be performed as part of the software
architectural design independent verification.
[Figure 14: Software Architectural Design verification subtasks, including verification of the traceability with the Technical Specification (PDR).]
Figure 15 illustrates the verification tasks to be performed as part of the software detailed
design independent verification.
[Figure 15: Software Detailed Design verification subtasks, with inputs including the Software User Manual (DDR) and the SW Item (Application) (CDR).]
The prerequisite for starting the design analysis activity is the availability of the listed inputs.
Moreover the design artefacts shall present a satisfactory maturity level.
Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and
inputs for independent validation test cases specification. Traceability matrices might be
provided as annexes of verification reports or as separate documents.
Although several iterations of the design activity may be performed, thus extending it
potentially until the end of the development project, the Design Analysis activity ends with the
Design Analysis Review (DAR) (as defined in section 3.0 above), which in general takes place
before the CDR.
7.4.2 Completion Criteria
Design Analysis becomes complete after the Architectural Design, the Detailed Design, and the Software User Manual have been verified in accordance with tasks IVE.DA.T1 to IVE.DA.T5 (refer to section 7.5).
7.4.3 Relations to other Activities
This section identifies the relations between this activity and the remaining ISVV activities.
The tailoring of the Design Analysis activity is performed as part of the Criticality Analysis
activity. Criticality Analysis may also provide useful inputs to the Design Analysis activity,
namely the subtask “Verify the safety and dependability of the design” (refer to section 7.5).
Strong relations exist between the Technical Specification Analysis and the Software Design
Analysis. The outputs of the Technical Specification Analysis are applicable inputs to the
Design Analysis. In addition Technical Specification Analysis may raise issues to be closed
during Design Analysis.
Design Analysis is also likely to provide inputs to independent validation test cases
specification.
According to ECSS-E-40, the Detailed Design Traceability Matrix is only due at CDR. However, if it is available at the beginning of the Detailed Design Traceability Verification it can be considered as an input.
The verification of the Detailed Design traceability to the Architectural Design is only considered under the condition that the traceability matrices are available at the beginning of the Detailed Design Traceability Verification.
- IVE.DA.T3.S4: Independently construct the traceability matrix with the Technical Specification
By independently constructing the traceability matrices, address the same topics as described in IVE.DA.T3.S1.
- IVE.DA.T3.S5: Independently construct the traceability matrix with the ICDs
By independently constructing the traceability matrices, address the same topics as described in IVE.DA.T3.S2.
- IVE.DA.T3.S6: Independently construct the traceability matrix with the Architectural Design
By independently constructing the traceability matrices, address the same topics as described in IVE.DA.T3.S3.
Outputs:
- Traceability Between TS and SW Detailed Design
- Traceability Between ICD and SW Detailed Design
- Traceability Between SW Architectural Design and SW Detailed Design
For real-time software ensure also that a computational model is provided as part of the software architectural
design.
- IVE.DA.T4.S4: Verify the dependability & safety of the design
Ensure that the software detailed design minimises the number of critical software units without introducing
undesirable software complexity.
Ensure that the software is not contributing to system hazardous events by analysing software failure modes and
their propagation to system level.
Ensure that the software detailed design implements proper features for Fault Detection Isolation And Recovery
(FDIR) in accordance with the technical specification.
Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with.
Ensure that the software correctly handles hardware faults and that the implemented software logic is not harming
the hardware in any way.
Ensure that the detailed design includes proper verification of inputs and consistency checking.
Ensure that software detailed design implements proper error handling mechanisms.
- IVE.DA.T4.S5: Verify the readability of the detailed design
Ensure that the detailed design documentation has a clear and consistent structure.
Ensure that the documentation is intelligible for the target readers and that all the required elements for its
understanding are provided (e.g. acronyms, terms, conventions used, etc.).
- IVE.DA.T4.S6: Verify the timing and sizing budgets of the software
Ensure that software architectural design implements proper allocation of timing and sizing budgets (e.g. memory
usage, CPU utilization, etc.) by reviewing the analysis performed by the software developer.
For real-time software verify developer’s schedulability analysis.
- IVE.DA.T4.S7: Identify test areas and test cases for independent Validation
Identify areas and items that cannot be sufficiently analysed by means of Independent Verification only and that therefore require execution of validation tests. Annotate this information (test areas/items, test cases, etc.) as a
contribution to the Independent Validation activities.
This subtask shall receive, refine and update the contribution to Independent Validation from the Architectural
Design Verification (IVE.DA.T2.S7).
- ISVV Level 2 only:
- IVE.DA.T4.S8: Verify that the software units are testable
Ensure that every single software unit is testable and that a clear and objective criterion for validating it exists.
- IVE.DA.T4.S9: Verify the feasibility of coding
Ensure that the defined software detailed design can be implemented, i.e. translated into source code. The detailed design shall be such that it is possible to fully implement the source code without the need for the Technical Specification (it shall contain all the necessary information).
- IVE.DA.T4.S10: Verify detailed design conformance with applicable standards
Ensure that the detailed design is compliant with applicable standards, references, regulations, policies, physical laws, and business rules.
Outputs:
- Software Detailed Design Independent Verification Report
- Contribution to Independent Validation (updated)
TASK DESCRIPTION
Title: Software User Manual Verification Task ID: IVE.DA.T5
Activity: IVE.DA - Design Analysis
Start event: DDR – Detailed Design Review
End event: DAR – Design Analysis Review
Responsible: ISVV Supplier
Objectives:
- Ensure the User Manual readability, completeness and correctness.
Inputs:
- From ISVV Customer:
- Software User Manual [DDF; DDR]
- Software Technical Specification [TS; DDR]
- Software Architectural Design [DDF; DDR]
- Software Detailed Design [DDF; DDR]
- Software Item (application) [DDF; CDR]
Sub Tasks (per ISVV Level):
- ISVV Level 2 only:
- IVE.DA.T5.S1: Verify the readability of the User Manual
Ensure that the user manual has a clear and consistent structure.
Ensure that the user manual is intelligible for the target software users and that all the required elements for its understanding are provided (e.g. acronyms, terms, conventions used, etc.).
- IVE.DA.T5.S2: Verify the completeness of the User Manual
Ensure that the User Manual describes all the functionalities implemented by the software. Check if all the
necessary information for performing the required operations is provided.
- IVE.DA.T5.S3: Verify the correctness of the User Manual
Ensure that the information provided in the User Manual is consistent with the software implementation i.e. the
software behaves as described.
Outputs:
- Software User Manual Independent Verification Report
Please note that one of the pre-requisites for the verification tasks is that the documentation should be mature. Usually this does not happen for the Software User Manual at DDR.
7.6 Methods
The table below identifies which methods can be used for the work to be performed within the
scope of each subtask. For each method it is stated whether it covers the purposes of the
subtask completely or partially.
The Code Analysis consists of the evaluation of the source code of each selected software product, focusing on aspects such as:
• reliability, availability and safety, ensuring that sufficient and effective fault detection, isolation and recovery mechanisms are included,
• error handling mechanisms,
• initialisation / termination of software components,
• interfaces between software components and between software and hardware components,
• threads / processes synchronisation and resource sharing, and
• budget analysis, including schedulability analysis.
The Code Analysis activity comprises the analysis of the application source code and the test procedures and test data (Figure 18).
[Figure 18: Code Analysis activity. Inputs: Technical Specification, SW Architectural Design, SW Detailed Design, Interface Control Documents, Source Code, Criticality Analysis Report, SW Integration Test Plan and SW Unit Test Plan. Tasks: Source Code Traceability Verification, Source Code Verification, Integration Test Procedures and Test Data Verification, and Unit Test Procedures and Test Data Verification. Outputs: Traceability between TS and Source Code, Traceability between ICD and Source Code, Traceability between SW Architectural Design and Source Code, Traceability between SW Detailed Design and Source Code, Source Code Independent Verification Report, Integration Test Procedures and Test Data Verification Report, Unit Test Procedures and Test Data Verification Report, and Contribution to IVA.]
Figure 19 illustrates the verification tasks to be performed as part of the code analysis.
Note that the figure shows only the most important inputs and outputs.
[Figure 19: Code Analysis verification tasks, with inputs including the Technical Specification (DDR) and the SW Architectural Design (CDR).]
− Requirements Baseline
− Technical Specification
− Interface Control Documents
− Software Architectural Design
− Software Detailed Design
− Software Units Source Code
− Software Integration Test Plan
− Software Unit Test Plan
− Software User Manual
− Software Dependability and Safety Analysis Reports
− Software Code Traceability Matrices
− Schedulability Analysis
− Technical Budgets
− Criticality Analysis
The prerequisite for starting the code analysis activity is the availability of the listed inputs.
Moreover the listed inputs shall present a satisfactory maturity level.
8.3 Activity Outputs
The following work products are produced in the scope of the Code Analysis activity:
• Software Source Code Independent Verification Report
• Integration Test Procedures and Data Independent Verification Report
• Unit Test Procedures and Data Independent Verification Report
• Traceability Between TS and Source Code
• Traceability Between ICD and Source Code
• Traceability Between Software Architectural Design and Source Code
• Traceability Between Software Detailed Design and Source Code
• Contribution to Independent Validation (updated with Code Analysis findings)
Verification reports include at least an overall analysis of the work products analysed, findings, a list of open issues to probe further in subsequent analyses, suggested modifications (if any), and
inputs for independent validation test cases specification. Traceability matrices might be
provided as annexes of verification reports or as separate documents.
Although several iterations of the Code Analysis activity may be performed, thus extending it
potentially until the end of the development project, the Code Analysis activity ends with the
Code Analysis Review (CAR) (as defined in section 3.0 above), which in general takes place
before the QR.
The tailoring of the Code Analysis activity is performed as part of the Criticality Analysis
activity. Criticality Analysis may also provide useful inputs to the Code Analysis activity, namely
the subtask “Verify the safety and dependability of the source code” (refer to section 8.5).
Strong relations exist between the Technical Specification Analysis, the Software Design Analysis and the Code Analysis. The outputs of the Technical Specification Analysis and
Design Analysis are applicable inputs to the Code Analysis. In addition Technical Specification
Analysis and Design Analysis may raise issues to be closed during Code Analysis.
Code Analysis is also likely to provide inputs to independent validation test cases specification.
TASK DESCRIPTION
Title: Source Code Traceability Verification Task ID: IVE.CA.T1
Activity: IVE.CA Code Analysis
Start event: CDR – Critical Design Review
End event: CAR – Code Analysis Review
Responsible: ISVV Supplier
Objectives:
- Verify source code external consistency with Technical Specification, Interface Control Documents, Architectural Design
and Detailed Design.
Inputs:
- From ISVV Customer:
- Software Requirements Specification [TS; DDR]
- Interface Control Documents [ICD; CDR]
- Software Architectural Design [DDF; CDR]
- Software Detailed Design [DDF; CDR]
- Source Code [DDF; CDR]
- Source Code Traceability Matrices [DJF; CDR]
- From ISVV Supplier:
- Traceability Between TS and SW Architectural Design
- Traceability Between ICD and SW Architectural Design
- Traceability Between SW Architectural Design and SW Detailed Design
- Traceability Between TS and SW Detailed Design
- Traceability Between ICD and SW Detailed Design
Implementation:
- ISVV Level 1 only:
- IVE.CA.T1.S1: Verify the traceability matrix with the Technical Specification
By reviewing the traceability matrices produced by the software developer:
Ensure that all software item requirements are traceable to a software unit (source code) and that the functionality
described in the requirement is implemented by the source code unit (forward traceability);
Ensure that all software units (source code) have allocated requirements and that each software unit (source
code) is not implementing more functionalities than the ones described in the requirements allocated to it
(backward traceability)
For each requirement traced to more than one software unit (source code) ensure that implementation of
functionalities is not repeated.
Ensure that the relationships between the software units (source code) and the software requirements are specified in a uniform manner (in terms of level of detail and format). A sketch of the forward and backward traceability checks is given after this task description.
- IVE.CA.T1.S2: Verify the traceability matrix with the Interface Control Documents
By reviewing the traceability matrices produced by the software developer:
Ensure that the interfaces implementation (with other software units, hardware, the user, etc.) is consistent with the
applicable Interface Control Documents.
Ensure that interfaces are designed in a uniform way.
Ensure that each interface provides all the required information from the underlying component.
- IVE.CA.T1.S3: Verify the traceability matrix with the Architectural Design and Detailed Design
By reviewing the traceability matrices produced by the software developer:
Ensure that the static architecture (e.g. software decomposition into software elements such as packages, and
classes or modules) and dynamic architecture (e.g. specification of the software active objects such as thread /
tasks and processes) are implemented according to the design.
Ensure that the software units (source code) implement correctly the internal interfaces described in the software
architectural design.
- ISVV Level 2 only:
- IVE.CA.T1.S4: Independently construct the traceability matrix with the Technical Specification
By independently constructing the traceability matrices, address the same topics as described in IVE.CA.T1.S1.
- IVE.CA.T1.S5: Independently construct the traceability matrix with the ICDs
By independently constructing the traceability matrices, address the same topics as described in IVE.CA.T1.S2.
- IVE.CA.T1.S6: Independently construct the traceability matrix with the Architectural and Detailed design
By independently constructing the traceability matrices, address the same topics as described in IVE.CA.T1.S3.
Outputs:
- Traceability Between TS and Source Code
- Traceability Between ICD and Source Code
- Traceability Between Software Architectural Design and Source Code
- Traceability Between Software Detailed Design and Source Code
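The forward and backward traceability checks of IVE.CA.T1.S1 amount to a comparison of two sets, as the Python sketch below illustrates. The requirement identifiers, file names and data structures are assumptions made for the example and are independent of any particular traceability tool.

# Illustrative sketch: detect broken forward and backward traceability
# between software requirements and software units (source code).

def traceability_gaps(requirements, units, trace):
    """trace maps each requirement to the set of units implementing it.
    Returns requirements with no implementing unit (forward gaps) and
    units with no allocated requirement (backward gaps)."""
    forward_gaps = {req for req in requirements if not trace.get(req)}
    traced_units = set().union(*trace.values()) if trace else set()
    backward_gaps = set(units) - traced_units
    return forward_gaps, backward_gaps

# Example: one requirement is not implemented and one unit has no
# allocated requirement.
requirements = {"SRS-001", "SRS-002"}
units = {"tm_handler.c", "tc_router.c"}
trace = {"SRS-001": {"tm_handler.c"}, "SRS-002": set()}
print(traceability_gaps(requirements, units, trace))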
Ensure that source code complexity and modularity are in accordance with quality requirements.
Ensure that the software source code implements the proper sequence of events, inputs, outputs and interfaces logic flow.
Ensure that the programming language, libraries, system calls, etc. are used correctly.
- IVE.CA.T2.S3: Verify the source code readability, maintainability and conformance with the applicable standards.
Ensure that the source code is written in a clear way and that it is properly documented.
Ensure that all source code files adhere to the same coding style and that the applicable coding conventions, if any,
are followed.
Ensure that applicable coding standards, if any, are followed (e.g. Ada RAVEN, MISRA C, etc.).
Ensure that every single source file has a descriptive header and that the file history was recorded there.
Ensure that a description is provided for every single subprogram.
- IVE.CA.T2.S4: Verify the dependability & safety of the source code
Ensure that the software source code minimises the number of critical software units without introducing undesirable software complexity (e.g. critical software units should not share resources with non-critical software units, thereby increasing the criticality of the latter).
Ensure that the software is not contributing to system hazardous events by analysing software failure modes and
their propagation to system level.
Ensure that the software source code implements proper features for Fault Detection Isolation And Recovery
(FDIR) in accordance with the technical specification.
Ensure that the implemented FDIR mechanisms are independent of the faults that they are supposed to deal with.
Ensure that the software correctly handles hardware faults and that the implemented software logic is not harming
the hardware in any way.
Ensure that defensive programming techniques are used.
Ensure that the source code includes proper verification of inputs and consistency checking.
Ensure that all relevant events are reported by the software using the appropriate channels.
Ensure that the source code does not include any hazardous programming language construct or library function.
Ensure that no dead or deactivated code exists. If deactivated code exists, ensure that its activation will not lead to a hazardous condition.
For concurrent systems, ensure that no deadlock or race conditions exist.
- IVE.CA.T2.S5: Verify the accuracy of the source code
Ensure that the source code implements the required computational precision (e.g. rounding, truncation, etc.).
Ensure that the granularity of the reported error information is sufficient to trigger the necessary corrective actions.
Ensure that the parameter values and the computations made are conformant with the required units (e.g. meters, inches, volts, etc.).
- IVE.CA.T2.S6: Identify test areas and test cases for independent Validation
Identify areas and items that cannot be sufficiently analysed by means of Independent Verification only and that therefore require execution of validation tests. Annotate this information (test areas/items, test cases, etc.) as a
contribution to the Independent Validation activities.
This subtask shall receive, refine and update the contribution to Independent Validation from the Design Analysis.
- ISVV Level 2 only:
- IVE.CA.T2.S7: Verify that the source code is testable
Ensure that the source code can be easily tested (e.g. check if every single subprogram implements a single
function).
- IVE.CA.T2.S8: Verify the timing and sizing budgets of the software
For real-time software, verify the developer’s computation of the Worst Case Execution Time (WCET) of each task and compare the obtained values with those provided in the design and/or technical specification.
Verify the developer’s schedulability analysis of the implemented application (it should be based on the computed WCETs; an illustrative utilisation check is sketched after this task description).
Verify the sizing budgets of the software (e.g. executable image size, stack size, buffers, etc.) and compare them against the design and requirements.
Outputs:
- Software Source Code Independent Verification Report
- Contribution to Independent Validation (updated)
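As an illustration of the kind of schedulability cross-check referred to in IVE.CA.T2.S8, the Python sketch below applies the classical rate-monotonic utilisation bound of Liu and Layland to a hypothetical periodic task set. A real verification would normally rely on response-time analysis using the verified WCETs; the bound shown here is sufficient but not necessary for schedulability.

# Illustrative sketch: check a periodic task set against the rate-monotonic
# utilisation bound U <= n * (2**(1/n) - 1), using the verified WCETs.

def rm_bound(n):
    """Rate-monotonic utilisation bound for n periodic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def utilisation(tasks):
    """tasks: list of (WCET, period) pairs in the same time unit."""
    return sum(wcet / period for wcet, period in tasks)

# Hypothetical task set: (WCET, period) in milliseconds.
tasks = [(2.0, 10.0), (3.0, 25.0), (5.0, 50.0)]
u = utilisation(tasks)
print(f"U = {u:.2f}, bound = {rm_bound(len(tasks)):.2f},",
      "schedulable by the bound" if u <= rm_bound(len(tasks))
      else "bound inconclusive")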
8.6 Methods
The table below identifies which methods can be used for the work to be performed within the
scope of each subtask. For each method it is stated whether it covers the purposes of the
subtask completely or partially.
Notice that the ‘Construction of Test Procedures’ subtask can start when the SVF is delivered.
The ‘Execution of Test Procedures’ requires the object code of the software under test, and
can be started at QR. At QR the first version of the software is delivered and it is expected that
the software has been through development validation.
The identification of test cases is an iterative process, where new test cases might be identified
during the establishment and execution of previously identified test cases.
The Identification of Test Cases task is divided into the subtasks shown in Figure 23; input to
the task is also illustrated. The needed documents are highlighted in the figure. This first task in
the IVA activity will take 20%-40% of the total effort.
Note that the figure shows only the most important inputs and outputs.
The input to this task of the IVA activity originates from the ISVV customer and from the ISVV supplier.
The analysis will reuse as much as possible from the preceding IVE activity. If none or only
part of the IVE activities have been performed, it can be necessary to include elements of the verification analysis in the preparation of the IVA activity.
Table 5 shows when analyses are performed by the IVA activity, depending on the existence of independent verification results and the ISVV level.
If the ISVV supplier uses the test cases and test report from the development validation to identify missing test cases, the ISVV supplier must be careful not to adopt the developer’s way of thinking.
A full independent validation (ISVV Level 2) must always rely on independent analysis and
shall preferably be performed within the IVA activity to ensure that the analysis focus is on the
validation.
At the end of the task, it is important that the ISVV customer reviews and accepts the test plan.
This will make it possible for the ISVV customer and ISVV supplier to discuss the intended behaviour of the software and focus on essential areas.
9.1.2 Construction of Test Procedures
Test procedures are the implementation of the test cases, i.e., test cases expressed in the test
language as provided by the software validation facility.
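To illustrate what expressing a test case in the test language of the SVF may amount to, the Python sketch below writes one hypothetical test case as an executable procedure. The helper functions stand in for the scripting interface of an assumed SVF; a real test procedure would use the commands of the actual facility, and the test case identifier and telemetry parameter shown are invented for the example.

# Illustrative sketch: one independent validation test case expressed as an
# executable test procedure. The helpers below are placeholders for the
# scripting interface of a hypothetical software validation facility.

def send_telecommand(name, **params):
    print(f"TC  {name} {params}")        # placeholder for an SVF command

def read_telemetry(parameter):
    return 1                             # placeholder for SVF telemetry access

def check(description, condition, log):
    log.append((description, "PASS" if condition else "FAIL"))

def test_case_ivtp_042():
    """Hypothetical test case: mode change commands are rejected in SAFE mode."""
    log = []
    send_telecommand("SET_MODE", mode="SAFE")
    send_telecommand("SET_MODE", mode="SCIENCE")
    rejected = read_telemetry("TC_REJECTED_COUNT")
    check("mode change rejected while in SAFE mode", rejected == 1, log)
    return log

print(test_case_ivtp_042())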
The Construction of Test Procedures task is divided into three subtasks. Figure 24 shows the
subtasks and input to each of the subtasks.
The test procedures are part of the Independent Validation Test Report along with the test
results.
9.1.2.3 Updating the Independent Validation Test Plan
When implementing the test cases into test procedures, new test cases might appear; these test cases must be added to the independent validation test plan to be used as documentation for the test execution.
9.1.3 Execution of Test Procedures
The Execution of Test Procedures task is divided into three subtasks as shown in Figure 25.
Investigating the failed tests might lead to additional or modified test cases. If new test cases are identified they must be added to the IVTP, implemented into test procedures and then executed.
9.1.3.3 Produce Test Reports
This task will result in two test reports; the recommended contents are:
• Independent Validation Test Report
− Description of how to execute the tests
− Description of observations and problems
− Test procedures (scripts)
− Test results (log files) including pass/fail status
• Independent Validation Findings
− Summarised findings during IVA
− Lists of failed tests
The inputs to the individual tasks are discussed in section 9.1 and listed in section 9.5.
9.2.2 Activity Prerequisites
To ensure efficient independent validation activity, it is important that the software under test is
in a mature and healthy state:
• The software under test has already been validated by the software supplier. Because the
ISVV supplier is not expected to redo or replace the software supplier’s validation activities,
these must have been performed prior to the independent validation.
• A suitable software validation facility is available. The SVF can either be constructed by the ISVV supplier or be delivered by another supplier.
• If possible, the independent verification analysis should have been performed in order to
support the identification of the test cases. If this is not the case, corresponding activities
must be performed as part of the test case identification.
The Independent Validation Test Plan (IVTP) contains the ISVV supplier’s basic knowledge of
the software and the identified test cases. This test plan is an output of the activity, but also a
document used during the IVA activity. The IVTP is reviewed by the ISVV customer before
continuing with the “Construction of Test Procedures” task.
The test procedures delivered can be used for regression testing of future versions of the
software product, i.e., they might be added to the set of acceptance tests that the software customer will request to be executed as part of the acceptance of a delivery.
The IVA activity will produce a test report holding all results of the test execution and the
findings from test execution. Before the test report is produced, the ISVV supplier must
investigate failed tests to ensure that the problem revealed is located in the software under
test, and not in the software validation facility or in the test procedures. It is the responsibility
of the ISVV customer to investigate the Independent Validation Test Report, and decide if
failed tests should result in problem reports.
9.4 Process Management
9.4.1 Initiating and Terminating Events
The IVA activity can be initiated as soon as sufficient documentation is available. This means
that the independent validation activity can start at CDR or as soon as corresponding
information is available and mature. If independent verification is being performed, the IVA
activity is recommended to start during the independent code analysis.
The completion of the independent software validation activity does not have to be linked with
the development process, but should take place while the software supplier is still available for
maintenance of the software and preferably close to the AR.
9.4.2 Completion Criteria
The independent validation activity closes with the delivery and review of the test report.
It can be an advantage to deliver the applied software validation facility to the operation phase
along with the test procedures, SVF User Guide and the independent validation test plan. This
will enable execution of the independent validation test suite when updates to the software are
to be investigated.
9.4.3 Relations to other Activities
The IVA activity is dependent on the results of the IVE activities. If important analyses do not exist and sufficient knowledge about the software under test has not been achieved, activities to establish these must be executed prior to the formal IVA.
- The purpose of this task is to identify the areas to be subjected to independent validation. The task
relies on the development documentation for the system, including the requirements and design
specifications, and on the output from the IVE analyses. The identified test cases must be described in
the test plan to be used when implementing the tests.
Inputs:
- The purpose of this task is to express the test cases in the test language provided by the software
validation facility
Inputs:
- The purpose of this task is to execute the test procedures and generate a test report
Inputs:
9.6 Methods
The table below identifies which methods can be used for the work to be performed within the
scope of each subtask. For each method it is stated whether it covers the purposes of the
subtask completely or partially.
<1.2> Purpose
This section should describe the purpose of the ISVV plan. Who are the readers of the plan
and how will it be used?
<1.5> Outline
This section should provide an outline of the rest of the plan.
<6.1.5> Metrics
This section should list the metrics that should be collected during the ISVV project, also
describing the purpose and use of the metrics.
The work breakdown structure should reflect the ISVV supplier’s defined process (based on the
ISVV process of this guide) and the defined scope of the process. The process description can
be part of the ISVV plan or be a separate document, which should then be referenced.
Responsibilities and start and end events may also be described here, or in the schedule if that
is more convenient.
The questions have been inspired by [NASA IV&V]
The mapping between the normalised score and the error potential levels is shown in the table
below:
Category Definition
SCC A Software involved in a hazard severity CAT I control function or software failure
causing a hazard leading to consequences of hazard severity category I where
in the event of software failure no immediate hardware or independent software
backup can be activated and no time is available to effectively intervene to
prevent the occurrence of the consequences.
SCC B Software involved in a hazard severity CAT I control function or software failure
causing a hazard leading to consequences of hazard severity category I where
in the event of software failure immediate hardware or independent software
backup can be activated without external intervention or time and means are
available to effectively intervene to prevent the occurrence of the consequences.
or
Software involved in a hazard severity CAT II control function or software failure
causing a hazard leading to consequences of hazard severity category II and software
controlling reliability criticality functions category 1 where in the event of
software failure no immediate hardware backup or independent software can be
activated and no time is available to effectively intervene to prevent the
occurrence of the consequences or failure modes effects.
SCC C All other software implementing a hazard severity CAT III function or a reliability
criticality category 2 or 3 function
or
All other software which is used to control, generate the above and to control
categories A and B software
or
Software used to check-out or qualify system critical equipment/subsystems.
SCC D Any other software.
Table 11: Software criticality categories for manned mission
Category Definition
SCC A Software whose anomalous behaviour would cause or contribute to a failure of
the satellite system resulting in loss of life, personnel injuries, or damage to
other equipment.
SCC B Software whose anomalous behaviour would cause or contribute to a failure of
the satellite system resulting in permanent or non-recoverable loss of the
satellite’s capability to perform its planned mission.
SCC C Software whose anomalous behaviour would cause or contribute to a failure of
the satellite system with negligible or minor effect on the satellite’s mission and
operability.
SCC D Any other software.
Table 12: Software criticality categories for unmanned mission
The following tables provide sample definitions of both the reliability and the safety categories.
Reliability category    Definition
1                       Functions whose failure results in loss of the flight configuration
2                       Functions whose failure results in loss of all operational capability
3                       All others
Table 13: System reliability criticality categories
The existing system FMECA may be used as a starting point, by adding an additional column
where the consequence severity (criticality) in terms of the newly defined scheme may be
annotated for every currently defined failure mode. However, the existing failure modes were
identified with the project defined consequence severity categories in mind. The new scheme
(defined for the purpose of scoping the ISVV activity as defined in section 5.0) may raise the
criticality of previously non-critical failure modes which for this reason were not included in the
initial FMECA but should be in the amended one. Identifying these failure modes is a creative
process requiring system understanding. It is thus best done by the ISVV customer during the
original analysis, possibly with participation by the ISVV supplier.
If no system FMECA has been carried out at all, and there is no Critical Function List, the
system FMECA will have to be prepared from scratch. However, an FMECA carried out for the
purposes of determining (i.e. limiting) the scope of ISVV could be somewhat simplified. The
columns that need to be filled in are:
• FMECA #
• Item
• Function
• Failure mode
• Consequence
• Operational phase/mode
• Severity (criticality)
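Where tool support is desired, such a simplified FMECA worksheet can be captured as a simple
record structure. The sketch below (Python) mirrors the columns listed above; the example item,
failure mode and severity value are purely illustrative assumptions, not taken from any project.

    from dataclasses import dataclass

    @dataclass
    class FmecaEntry:
        """One row of the simplified system FMECA used to scope ISVV."""
        fmeca_id: str      # FMECA #
        item: str          # Item
        function: str      # Function
        failure_mode: str  # Failure mode
        consequence: str   # Consequence
        phase: str         # Operational phase/mode
        severity: str      # Severity (criticality), e.g. "CAT I" .. "CAT IV"

    # Hypothetical example row; real entries come from the system FMECA session.
    entry = FmecaEntry(
        fmeca_id="FM-001",
        item="Reaction wheel driver",
        function="Provide commanded wheel torque",
        failure_mode="No torque output on command",
        consequence="Loss of attitude control",
        phase="Nominal operations",
        severity="CAT II",
    )

    # Entries with the highest severities define the scope of the ISVV activity.
    critical = [e for e in [entry] if e.severity in ("CAT I", "CAT II")]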
13.2 Software Requirements FMECA
If no software requirements FMECA is available, a simplified analysis (e.g. a simplified
SFMECA) must be produced to identify the most critical software requirements. The following
procedure represents the most basic way for identifying the criticality category of the software
requirements. The procedure is based on expert sessions. The traceability matrix from
software requirements to system requirements will aid the process considerably.
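As an illustration of how the traceability matrix can support this procedure, the sketch below
(Python) derives a criticality category for each software requirement as the most severe
category of the system functions it traces to. The requirement identifiers, trace data and
category ordering are illustrative assumptions; the actual assignment remains the outcome of
the expert sessions.

    # Hedged sketch: assign each software requirement the most severe criticality
    # category of the system functions it traces to.
    CATEGORY_ORDER = {"SCC D": 0, "SCC C": 1, "SCC B": 2, "SCC A": 3}

    # Traceability matrix: software requirement -> traced system functions (illustrative).
    trace = {
        "SRS-010": ["SYS-F-01", "SYS-F-07"],
        "SRS-011": ["SYS-F-07"],
    }

    # Criticality of system functions, e.g. taken from the system FMECA (illustrative).
    system_criticality = {"SYS-F-01": "SCC B", "SYS-F-07": "SCC C"}

    def requirement_category(req: str) -> str:
        cats = [system_criticality[f] for f in trace.get(req, [])]
        if not cats:
            return "SCC D"   # untraced requirements default to the lowest category
        return max(cats, key=CATEGORY_ORDER.__getitem__)

    for req in trace:
        print(req, requirement_category(req))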
14.1 Formal Methods
Formal Methods do suffer from certain limitations. In particular, Formal Methods can prove that
an implementation satisfies a formal specification, but they cannot prove that a formal
specification captures a user's intuitive informal understanding of a system. In other words,
Formal Methods can be used to verify a system, but not to validate a system. The extent of this
limitation should not be underestimated - the reduction of informal application knowledge to a
rigorous specification is a key problem area in the development of large systems.
14.2 Hardware Software Interaction Analysis
The area that covers the interfaces between the software (especially the critical software) and
the hardware on which it runs is particularly difficult to analyse and can become problematic or
even forgotten; in fact it tends to stay in no-man’s land. The architecture of embedded systems
has also evolved with computer systems: from simple software controlling and interfacing
directly with hardware, we now have systems with complex operating systems, Java virtual
machines and other SW layers. These layers
increase the distance between the application software and the hardware, and add an extra
complexity that can lead to covert (or not well understood) communication channels between
the SW and the HW. This is the reason why the use of techniques such as HSIA is becoming
necessary.
The HSIA method is defined in ESA standard [ECSS-Q-80-03]. It consists of the systematic
analysis of the HW/SW interfaces, with particular focus on the hardware faulty conditions and
the handling of those faulty conditions by the SW.
Several assessment techniques exist to verify safety and dependability properties of critical
systems. For example, FMECA and SFMECA are used to identify failure modes, causes,
effects and severity in HW and SW respectively. These techniques are not strongly coupled
and are usually performed at early stages of the project lifecycle. HSIA is meant to complement
and be used with FMECA/SFMECA, to analyse how the hardware might be affected by
software failures or stressed by software actions (in both nominal and error conditions).
The core of the HSIA method is a checklist consisting of a set of questions. These questions
are aimed at providing answers to the following fundamental issues:
• Is the software able to detect, recover, compensate and report a specific hardware failure
mode?
• Is the software using the hardware properly, not harming it in any way?
• Are the recovery actions independent from the hardware components that failed?
Question one helps determine whether the HW/SW system has the ability to monitor the
failure modes provoked by hardware faults and provides mechanisms to recover from and limit
the effect of such failures. Question two aims at determining whether the SW, in both nominal
and error recovery cases, avoids stressing the HW. The last question aims at verifying that the
recovery mechanisms do not make use of (or depend on) the component that has failed
(e.g., sending an error report when the failure affects the communication link). These
fundamental questions are a general overview of the original HSIA checklist. The application of
HSIA is expected to yield a list of software actions that may have adverse effects on the
hardware, recommendations to add or improve the HW and SW FDIR mechanisms, and
information to complement the SFMECA and FMECA worksheets.
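A minimal sketch of how HSIA checklist answers could be recorded per hardware failure mode
is given below (Python). The three questions correspond to the fundamental issues listed above;
the failure mode, field names and finding text are illustrative assumptions.

    from dataclasses import dataclass, field

    QUESTIONS = [
        "Can the software detect, recover, compensate and report this failure mode?",
        "Is the software using the hardware properly, not harming it in any way?",
        "Are the recovery actions independent from the failed hardware component?",
    ]

    @dataclass
    class HsiaEntry:
        failure_mode: str
        answers: dict = field(default_factory=dict)   # question -> True/False
        findings: list = field(default_factory=list)  # recommendations, FDIR gaps

    # Illustrative example entry.
    entry = HsiaEntry(failure_mode="Communication link stuck at busy")
    entry.answers = {QUESTIONS[0]: True, QUESTIONS[1]: True, QUESTIONS[2]: False}
    entry.findings.append("Error report must not rely on the failed link itself")

    # A 'no' answer is a candidate input for the SFMECA/FMECA worksheets.
    gaps = [q for q, ok in entry.answers.items() if not ok]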
14.3 Inspection
The basic method of software inspection for design and code was defined by Fagan in 1976
[INSPEC:1976]. The method has subsequently been applied to the verification of software
requirements where it is said to be most effective when individual reviewers are assigned
specific responsibilities and where they use systematic techniques for meeting those
responsibilities [DETECT:1995, §I.A]. Alternatively [ISO 9000:2000] defines Inspection as an
activity such as measuring, examining, testing or gauging for conformity evaluation.
From the ISVV point of view, inspection can be defined as an evaluation technique in which
software requirements, design, code, or other work products are formally examined by a
person or group (the inspection team) to detect faults, violations of development standards,
and other problems. The author of the work product may or may not be part of the inspection
team. The inspection team typically ranges from two to seven members, four being the
commonly recommended number. All of the members are inspectors, but some have special
roles in the team. The typical layout includes a moderator (manages the inspection), a reader
(performs full reading of the review item in the inspection meeting) and a recorder (annotates
all the inconsistencies found in the inspection meeting). An inspection begins with the
distribution of the item to be inspected (e.g., a specification, some code and test data). Each
participant is required to analyse the item on his own. During the inspection, which is the
meeting of all the participants, the item is jointly analysed to find as many errors as possible.
All errors found are recorded, but no attempt is made to correct the errors at that time.
However, at some point in the future, it must be verified that the errors found have actually
been corrected. Inspections may also be performed in the design and implementation phases.
14.4 Modelling
Modelling consists of the elaboration of a model of the system using a modelling tool and/or
language (e.g. UML, SDL, etc.). The primary aim of this method is to aid the understanding of
the system. Modelling can be used to cover a broad range of analyses or sub-tasks, such as
data flow analysis, control flow analysis, state machine diagrams, etc. The method may be
applied to all or specific parts of the system under verification.
UML 2 defines 13 basic diagram types, divided into two general classes:
• Structural diagrams. This class comprises diagrams that define the static architecture of a
model (elements of a specification that are irrespective of time). These diagrams are used
to model the building blocks that make up the full model – classes, objects, interfaces and
physical components. Structure diagrams are also used to model the relationships and
dependencies between elements. This class includes class, component, composite structure,
deployment, object and package diagrams.
• Behavioural diagrams. This class comprises diagrams that depict behavioural features of
a system or business process. It includes activity, state machine, and use case diagrams
as well as the interaction diagrams.
The Behavioural class is further divided into a subclass named Interaction Diagrams. This
subclass is defined as a subset of behaviour diagrams which emphasize object interactions;
it includes communication, interaction overview, sequence, and timing diagrams.
The next table presents the UML 2 diagram types grouped by class.
UML 2 adds four new diagram types to the UML 1.x set and renames one. The next table
presents the UML 2 to UML1.x mapping.
14.4.1 Data Flow Analysis
The purpose is to detect poor and potentially incorrect program structures. Data flow analysis
combines the information obtained from the control flow analysis with information about which
variables are read or written in different portions of code. It may also be used in the design and
implementation phases.
Data flow analysis can be used to support the dependability and safety assessment as regards
the analysis of failures and faults in the product. It complements the engineering activities:
where data flow diagrams are already provided by those activities, they can be reused for the
dependability and safety analyses. Many tools and methods used for the design engineering of
the product already offer the possibility to define data flows, and these same diagrams should
be re-used to analyse dependability and safety aspects of the software product, that is,
potential faults existing in the product.
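The following minimal sketch (Python) illustrates the principle of data flow analysis on a toy
statement model: statement order is combined with read/write sets to flag variables that may
be read before they are written. The statement representation is a deliberate simplification for
illustration only.

    # Hedged sketch: a very small data-flow check over a straight-line sequence of
    # statements, flagging variables that are read before any assignment.
    statements = [
        {"writes": {"x"}, "reads": set()},        # x := 0
        {"writes": {"y"}, "reads": {"x", "z"}},   # y := x + z  (z never written)
        {"writes": set(), "reads": {"y"}},        # use(y)
    ]

    defined: set[str] = set()
    for i, stmt in enumerate(statements):
        undefined = stmt["reads"] - defined
        if undefined:
            print(f"statement {i}: read before write: {sorted(undefined)}")
        defined |= stmt["writes"]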
14.4.2 Control Flow Analysis
Control flow is most applicable to real time and data driven systems. Logic and data
requirements are transformed from text into graphic flows, which are easier to analyze.
Examples of control flow diagrams include, among others, PERT, state transition, and
transaction diagrams. These analyses are intended to identify unreachable code, dead code,
inconsistent/incomplete interface mechanisms between modules, and logic errors inside a
module.
Control flow analysis can be used to support the dependability and safety assessment as
regards the analysis of failures and faults in the product and the analysis of their propagation.
It complements the engineering activities: where control flow diagrams are already provided by
those activities, they can be reused for the dependability and safety analyses. Many tools and
methods used for the design engineering of the product already offer the possibility to define
the control flow (IDEF0, etc.), and these same diagrams should be used to analyse
dependability and safety aspects of the software product, that is, potential faults existing in the
product.
14.5 Real-Time Properties Verification
14.5.1 Schedulability Analysis
Schedulability Analysis aims at determining whether a specific software system is schedulable
or not. In other words, it checks whether the software system meets the deadlines it was
designed for (does each function execute within the required time limit?).
Cyclic models are by their nature deterministic, and the duration and completion of each
function can be determined. On the contrary, pre-emptive models are non-deterministic, since
the functions may be triggered by the asynchronous occurrence of some events. In the case of
hard real-time systems, deadlines are defined and must imperatively be respected when a
service is provided. In this case, if a pre-emptive model is adopted, the application shall be
analysed in order to verify that it meets the deadline requirements.
To this end Scheduling Models have been defined, based on the Rate Monotonic or Deadline
Monotonic Scheduling Algorithms and the Ceiling Priority Inheritance Protocol
[AUDSLEY:1991, HRTOSK:1991]. Such Scheduling Models make it possible to verify that all
critical tasks are schedulable (that is, can be executed within their deadlines) under their worst
case execution time conditions.
The Schedulability Analysis is supported by tools which allow the off-line static verification of
hard real-time systems, the simulation of their run-time behaviour (related to timing aspects)
and the evaluation of the worst case execution time. In addition, based on the Schedulability
Analysis theory, an extension of the HOOD method, named HRT-HOOD has been defined
[BURNS:1993].
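As an illustration of such a Scheduling Model, the sketch below (Python) performs a worst-case
response-time test for a small set of fixed-priority tasks with rate-monotonic priorities. The task
set and its timing figures are illustrative assumptions; real analyses are performed with
dedicated tools.

    import math

    # Illustrative task set: (name, C, T) with C = WCET, T = period = deadline,
    # listed in descending priority (rate-monotonic: shorter period first).
    tasks = [
        ("attitude_control", 2, 10),
        ("telemetry",        4, 20),
        ("housekeeping",     6, 40),
    ]

    def response_time(i: int) -> float:
        """Fixed-point iteration R = C_i + sum over higher-priority tasks of ceil(R/T_j)*C_j."""
        c_i, t_i = tasks[i][1], tasks[i][2]
        r = c_i
        while True:
            interference = sum(math.ceil(r / t_j) * c_j for _, c_j, t_j in tasks[:i])
            r_next = c_i + interference
            if r_next == r:
                return r
            if r_next > t_i:
                return float("inf")   # deadline missed, task set not schedulable
            r = r_next

    for i, (name, _, t) in enumerate(tasks):
        r = response_time(i)
        print(f"{name}: R = {r}, deadline {t}, schedulable: {r <= t}")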
14.5.2 Worst Case Execution Time Computation
Schedulability Analysis serves to exercise the scheduling algorithm in order to check whether
the software system is schedulable or not. However, the method requires a vital piece of input
information: the duration of each task. That information can be obtained by applying another
method, the Worst Case Execution Time calculation (or WCET).
The execution time of a program depends generally on its input data, which determine a
certain execution path within the code according to its control flow instructions. The Worst
Case Execution Time (or WCET) is thus defined as the maximum value of such execution time
for the set of all possible input data. In the following paragraphs, we first review the existing
techniques for calculating the WCET. We then focus on static code analysis-based techniques,
which are today well studied and widely applied in both industry and academia.
There are a number of methods developed for the prediction of the WCET of a function or
program. These methods can be grouped into two main categories, namely, dynamic methods
and static methods. Dynamic methods consist in measuring the execution time of a program
directly on the target system or on a simulator of the target system. The first technique is
referred to as testing, while the second is called simulation. Both techniques require a set of
input data to be found that can lead to the maximum execution time. Conversely, static
methods are based on a static analysis of the code of the program, so no input data is
required. These three different techniques are discussed in more detail hereafter.
Testing: This is a method of evaluating the timing characteristics of code fragments by actually
running them on representative input data. However, since test data may not fully cover the
domain of interest, and measurement may not be possible without setting up an actual
environment, this approach may not be acceptable in the hard real-time domain. Indeed, the
duration of the testing process (i.e., time needed for generating all possible combinations of the
input data and then applying them) is generally too high (however, it can be feasible for very
simple programs defining few input data with a limited value domain, and whose code can be
easily understood and handled). Consider for example a program with a single input consisting
of a 32-bit integer variable. In a general case (i.e., considering no knowledge of the internal
structure of the program), to be sure to find the WCET, it would be necessary to systematically
measure the execution time of the program for all possible input values, i.e., a total of 2^32
measurements assuming that the program does not rely on internal hardware state. Unless the
execution times of the program are extremely short, this process would take years to perform.
Simulation: As already mentioned in section 14.6, this method consists in analysing some
behavioural characteristics of any software system, for example the timing properties of a
program, by simulating in this specific case, the behaviour of the target system. This method
can be used during the design phase of the system, when the hardware platform is not yet
available. The simulator heavily relies on the model of the underlying hardware, which is an
approximation of the actual system, so it may not accurately represent the worst case situation.
Note that the problem related to finding a set of input data that lead to the longest execution
time also applies here.
Static code analysis: This is an analytical method that relies on a static analysis of the
program code so as to find worst case execution paths within the code. The program code is
usually analysed at both high language level and low language level. The former level is based
on the study of the control flow of the program (e.g., loops, conditional branches, etc.) and
aims at isolating basic sequential high-level instruction blocks. The objective of the latter level
is to calculate the execution times of the corresponding basic assembly blocks by considering
the specific architecture of the target system (e.g., instruction set, caches, pipelines, branch
prediction units, etc.).
Unlike static code analysis, techniques based on testing and simulation are unlikely to
characterise accurately the worst case timing properties of a program. Indeed, static code
analysis has become very popular today and has been the object of study of many works in
both industry and academia. The next section describes in more detail the fundamentals of
static code analysis based techniques.
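The structural part of static WCET analysis can be illustrated with the following sketch (Python):
basic-block execution times, assumed to come from the low-level analysis, are combined with
the control structure and user-provided loop bounds to obtain a WCET bound. All figures are
illustrative assumptions.

    # Hedged sketch of the structural combination step of static WCET analysis.
    block_cost = {"init": 10, "loop_body": 25, "branch_a": 40, "branch_b": 15, "exit": 5}

    def seq(*costs):             # sequential composition
        return sum(costs)

    def alt(*costs):             # conditional: take the most expensive branch
        return max(costs)

    def loop(body_cost, bound):  # loop with a known maximum iteration count
        return body_cost * bound

    wcet = seq(block_cost["init"],
               loop(block_cost["loop_body"], bound=100),
               alt(block_cost["branch_a"], block_cost["branch_b"]),
               block_cost["exit"])
    print("WCET bound (cycles):", wcet)   # 10 + 25*100 + 40 + 5 = 2555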
14.7 Software Common Cause Failure Analysis
An important objective of the Software Common Cause Failure Analysis is to ensure real
independence of failures of multiple systems. The effects of failures in the components which
defeat their independence should be analyzed.
Techniques for Software Common Cause/Mode Failure Analysis include SFMECA, quality
control, design review, verification and testing by an independent team, etc.
14.8 Software Failure Modes, Effects and Criticality Analysis (SFMECA)
The software failure modes and effects analysis (SFMEA) is an iterative method intended to
analyse the effects and criticality of failure modes of the software within a system. SFMECA
extends SFMEA by assigning a criticality category to each software failure. SFMEA and
SFMECA are based on FMEA and FMECA respectively, the latter two being targeted at
hardware/equipment analysis.
The main purposes of SFMEA/SFMECA are to reveal weak or missing requirements, to identify
latent software failures, and to assign software criticality categories. SFMEA/SFMECA use
intuitive reasoning to determine the effect on the system of a component failing in a particular
failure mode. For example, if a function of a train crossing system is to turn on warning lights
as a train approaches the crossing and to leave the lights on until the train has passed, some of
its failure modes could be:
• the lights are not turned on when a train approaches;
• the lights are turned off before the train has passed;
• the lights are turned on when no train is approaching.
The effect on the system of each component’s failure in each failure mode would then be
assessed by developing a matrix for each component. The criticality factor, that is, the
seriousness of the effect of the failure, can be used in determining where to apply other
analyses and testing resources.
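For the train crossing example above, a fragment of such a matrix could be sketched as follows
(Python). The failure modes, effects and criticality values are illustrative assumptions used only
to show the structure of the analysis.

    # Hedged sketch of an SFMEA matrix for the "control warning lights" function.
    sfmea = [
        {"failure_mode": "Lights not switched on when a train approaches",
         "effect": "Road traffic not warned; collision hazard",
         "criticality": "catastrophic"},
        {"failure_mode": "Lights switched off before the train has passed",
         "effect": "Road traffic released too early; collision hazard",
         "criticality": "catastrophic"},
        {"failure_mode": "Lights switched on with no train approaching",
         "effect": "Unnecessary traffic stoppage",
         "criticality": "minor"},
    ]

    # The most critical failure modes point to where further analysis and testing
    # effort should be concentrated.
    for row in sorted(sfmea, key=lambda r: r["criticality"] != "catastrophic"):
        print(row["criticality"], "-", row["failure_mode"])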
14.9 Software Fault Tree Analysis (SFTA)
Software Fault Tree Analysis (SFTA) is derived from Fault Tree Analysis (FTA). Fault Tree
Analysis was originally developed in the 1960’s for the safety analysis of a missile system and
it has become one of the most widely used hazard analysis techniques.
SFTA can be used in conjunction with FTA whereby hardware (system) and software fault
trees are combined in order to analyze the entire system. This is significant since many
hazards can be caused by a combination of a software error with a hardware or human failure.
The goal of SFTA is to show that the logic in the software design or in an implementation will
not produce a hazard or a failure. The basic procedure in an SFTA is to assume that the hazard
or the failure has occurred and then to determine its set of possible causes. The produced fault
tree depicts the logical relationship of basic events that lead to the undesired event, which is
the top event of the fault tree. The design or code is modified to compensate for those failure
conditions deemed to be hazardous threats to the system.
System fault trees can be used to calculate the probability of a hazard (the top event)
occurring, if the probabilities of lower events are known. This aids in determining which
parts of the system are the most critical and therefore require more intensive safety analysis.
These probability calculations are not really applicable to the software or the software parts of
the system.
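For the hardware or system part of a fault tree, the probability calculation can be illustrated as
follows (Python): AND gates multiply the probabilities of independent lower events, while OR
gates combine them as the complement of the joint non-occurrence. The tree and all probability
figures are illustrative assumptions.

    from math import prod

    def or_gate(*p):
        return 1 - prod(1 - x for x in p)

    def and_gate(*p):
        return prod(p)

    # Illustrative basic-event probabilities (software figures are of doubtful value, see above).
    p_sensor_fail = 1e-3
    p_sw_miss     = 1e-4
    p_backup_fail = 1e-2

    # Top event: hazard occurs if primary detection fails AND the backup fails.
    p_detection_fails = or_gate(p_sensor_fail, p_sw_miss)
    p_top = and_gate(p_detection_fails, p_backup_fail)
    print(f"P(top event) = {p_top:.2e}")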
14.10 Static Code Analysis
14.10.1 Coding Standard Conformance
Coding Standard Conformance is a method that aims at checking whether the implemented
source code follows a specific coding convention or set of coding rules (this includes checking
coding style, naming conventions, etc.). Coding Standard Conformance verification usually
results in an exhaustive task and therefore a tool for automating it is required.
Coding Conformance Verification may be used to verify user-defined conventions or standards
such as Ada RAVEN, MISRA C, etc.
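The principle of an automated conformance check can be illustrated with the sketch below
(Python), where each coding rule is expressed as a pattern applied line by line. The two rules
and the code fragment are illustrative assumptions; real rule sets such as MISRA C require a
proper language parser.

    import re

    # Illustrative rules: forbid 'goto' and tab indentation.
    RULES = {
        "no-goto":       re.compile(r"\bgoto\b"),
        "no-tab-indent": re.compile(r"^\t"),
    }

    source = [
        "int main(void) {",
        "\tgoto done;",       # violates both illustrative rules
        "done: return 0; }",
    ]

    for lineno, line in enumerate(source, start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                print(f"line {lineno}: violation of rule '{rule}'")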
14.10.2 Bug Pattern Identification
Bug Pattern Identification consists of the identification of known programming language and
library bug patterns. Like Coding Standard Conformance, Bug Pattern Identification is not a
complete method: it is always possible to identify further patterns.
This method can be applied manually, but that is only feasible for very small systems.
Therefore, a tool for automating the method will be required in the majority of cases.
Fortunately, a significant number of high quality tools is available.
14.10.3 Software Metrics Analysis
Software Metrics Analysis consists of evaluating the quality of the software based upon a set
of extracted metrics, such as McCabe’s cyclomatic complexity, the percentage of comments per
statement, the number of subprogram exit points, etc.
While some metrics are widely accepted, McCabe’s cyclomatic complexity probably being the
best example, many others vary or are particular to a specific tool. It is therefore impossible to
say that a tool is complete with respect to the set of metrics it provides. This fact is not critical
because Software Metrics Analysis is to be used as a companion method; it is not intended to
be complete. Software Metrics Analysis may support the safety and dependability subtask
when complexity measures are used to point to complex code areas more likely to contain
software faults.
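As an illustration, McCabe’s cyclomatic complexity can be computed from a control-flow graph
as E - N + 2 for a single connected component, as sketched below (Python). The graph is an
illustrative assumption.

    # Hedged sketch: cyclomatic complexity of a CFG with one decision point.
    cfg = {
        "entry": ["cond"],
        "cond":  ["then", "else"],
        "then":  ["join"],
        "else":  ["join"],
        "join":  ["exit"],
        "exit":  [],
    }

    nodes = len(cfg)
    edges = sum(len(succ) for succ in cfg.values())
    cyclomatic = edges - nodes + 2
    print("cyclomatic complexity:", cyclomatic)   # 6 edges - 6 nodes + 2 = 2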
14.11 Traceability Analysis
The traceability analysis method consists of analysing the tracing of (finding the
correspondence of) specific items of one lifecycle phase to items of another lifecycle phase.
Typically items are traced across adjacent lifecycle phases and the traceability can be done
from inputs to the outputs (forward traceability, e.g. trace software requirements to architectural
elements) or from outputs to the inputs (backwards traceability e.g. trace architectural elements
to technical requirements). The main purpose of traceability analysis is to check the
consistency and completeness of the work products being reviewed.
Traceability analysis is performed by analysing a table with at least two columns, the so-called
traceability matrix. In the case of backwards traceability, the first column is filled with the
outputs of the phase (e.g. for architectural design analysis, all the design elements) and then,
for every output, the analyst checks for the matching inputs (e.g. for architectural design
analysis, all the software requirements).
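The completeness check performed on such a matrix can be illustrated with the sketch below
(Python): design elements without matching requirements and requirements not covered by any
design element are both reported as findings. All identifiers are illustrative assumptions.

    # Hedged sketch of a backwards traceability check.
    design_to_req = {          # backwards traceability matrix (illustrative)
        "DES-COMP-01": ["SRS-010", "SRS-011"],
        "DES-COMP-02": ["SRS-014"],
        "DES-COMP-03": [],      # no matching requirement -> finding
    }
    requirements = {"SRS-010", "SRS-011", "SRS-014", "SRS-020"}

    untraced_design = [d for d, reqs in design_to_req.items() if not reqs]
    orphan_requirements = requirements - {r for reqs in design_to_req.values() for r in reqs}

    print("design elements without requirements:", untraced_design)
    print("requirements not covered by the design:", sorted(orphan_requirements))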
14.12 Walkthrough
The walkthrough technique can be defined as a sort of minimalist review. The main difference
between the walkthrough and review techniques is that in a walkthrough a full reading of the
review item is not performed. Instead, one walks through the items under review, stopping only
whenever one of the reviewers has a question or a comment. The objective of a walkthrough is
to evaluate a specific software element (e.g. document, source module), attempting to identify
defects and consider possible solutions. In contrast with other forms of review, secondary
objectives are to educate and to resolve stylistic problems. Besides that, the preparatory effort
and the number of participants are typically smaller than in a review. The walkthrough relies
more on the discussion between the team members to find inconsistencies in the work product
than on the preparatory work done by each member.
2. For each data object appearing in the external interface section determine the following information:
- Object name:
- Class: (e.g., input port, output port, application variable, abbreviated term, function)
- Data type: (e.g., integer, time, Boolean, enumeration)
- Acceptable values: Are there any constraints, ranges, or limits for the values of this object?
- Failure value: Does the object have a special failure value?
- Units or rates:
- Initial value:
(a) Is the object's specification consistent with its description in the overview?
(b) If the object represents a physical quantity, are its units properly specified?
(c) If the object's value is computed, can that computation generate a non-acceptable value?
3. Develop an invariant for each system mode (i.e. under what conditions must the system exit or remain in a given mode)?
(a) Can the system's initial conditions fail to satisfy the initial mode's invariant?
(b) Identify a sequence of events that allows the system to enter a mode without satisfying the mode's invariant.
(c) Identify a sequence of events that allows the system to enter a mode, but never leave (deadlock).
1. Identify the required precision, response time, etc. for each functional requirement.
(a) Are all required precisions indicated?
2. Documentation Verified
2.1. Is the code clearly and adequately documented with an easy-to-maintain commenting style?
2.2. Are all comments consistent with the code?
3. Variables Verified
3.1. Are all variables properly defined with meaningful, consistent, and clear names?
3.2. Do all assigned variables have proper type consistency or casting?
3.3. Are there any redundant or unused variables?
2. Correctness Verified
2.1. Is the reason for not testing a particular feature acceptable and sufficient?
2.2. Is the pass/fail criterion for each test item correct?
2.3. Is the test plan conformant with the project testing strategy with respect to the types of tests to be performed?
2.4. Are the specified test coverage goals in conformance with the criticality of the project?
2.5. Is the test plan in conformance with the project plan?
2.6. Is the identified staffing and training appropriate for the testing tasks that are to be performed?
2.7. Can the described contingencies overcome the identified risks?
3. Feasibility Verified
3.1. Is the defined testing approach feasible?
3.2. Are the different testing roles and responsibilities correctly defined and assigned?
3.3. Are the specified environment needs and selected tools feasible for executing the necessary testing activities?
3.4. Is the proposed schedule as well as the identified training needs feasible?
3.5. Are the identified contingencies feasible taking into account this test plan and the project schedule?
Reference:
Failure Mode:
Failure Criticality:
Failure Mode Definition:
Questions & Findings                                                              yes/no
1. Is sufficient and reliable information about the failure available?
3. Does the software initiate a corrective action that safe-keeps the system and negates the effects of the failure?
4. Are there fault tolerance characteristics in order to compensate the failure mode?
6. Is the immediate corrective action not built on top of the function that has failed?
9. Is the failure detection and reaction executed within appropriate time limits?
10. Does the implemented software remove any possibility for Single Point of Failure?
Summary:
Software detection:
Risks accepted / identified:
Recommendations:
Update FMECA:
Issues identified:
Comments:
3. Does the software initiate a corrective action that safe-keeps the system and negates the effects
of the failure?
This question shall verify that the software initiates a corrective action upon a hardware failure.
By “corrective action” is meant that the software reaction keeps the hardware failure from
compromising the mission.
4. Are there fault tolerance characteristics in order to compensate the failure mode?
This question aims to determine whether the failure mode was taken into account by the
design, or imposed by the requirements, of both hardware and software. Examples of
characteristics that shall be looked for are the presence of redundant units, alternative
communication links, error correction codes (e.g. CRC), etc.
6. Is the immediate corrective action not built on top of the function that has failed?
An example of a situation that shall be checked is trying to send an error report when the
failure mode affects the communication link. Another example is a situation where the failure
detection is based on the received HK, but the HK may be incorrect due to that failure.
The check should also cover erroneous software actions (e.g. incorrect use of a link may not
damage the link controller but may cause further errors in the SW).
9. Is the failure detection and reaction executed within appropriate time limits?
This question shall verify whether the failure is detected early enough to leave sufficient spare
time to perform the respective corrective action, and whether the corrective action itself is
performed within a time limit that safe-keeps the system.
10. Does the implemented software remove any possibility for Single Point of Failure?
This is a conclusion drawn from the preceding check-points, adapted to the failure
management philosophy. There shall be no possibility that any single failure can bring the
module into an unrecoverable state.