ECSS-E-HB-40A
11 December 2013
Space engineering
Software engineering handbook
ECSS Secretariat
ESA-ESTEC
Requirements & Standards Division
Noordwijk, The Netherlands
Foreword
This Handbook is one document of the series of ECSS Documents intended to be used as supporting
material for ECSS Standards in space projects and applications. ECSS is a cooperative effort of the
European Space Agency, national space agencies and European industry associations for the purpose
of developing and maintaining common standards.
The material in this Handbook is presented in terms of descriptions of, and recommendations on, how to organize and perform the work of space software engineering.
This handbook has been prepared by the ECSS-E-HB-40A Working Group, reviewed by the ECSS
Executive Secretariat and approved by the ECSS Technical Authority.
Disclaimer
ECSS does not provide any warranty whatsoever, whether expressed, implied, or statutory, including,
but not limited to, any warranty of merchantability or fitness for a particular purpose or any warranty
that the contents of the item are error-free. In no respect should ECSS incur any liability for any
damages, including, but not limited to, direct, indirect, special, or consequential damages arising out
of, resulting from, or in any way connected to the use of this document, whether or not based upon
warranty, business agreement, tort, or otherwise; whether or not injury was sustained by persons or
property or otherwise; and whether or not loss was sustained from, or arose out of, the results of, the
item, or any services that may be provided by ECSS.
Change Log
Table of contents
1 Scope
2 References
3 Terms, definitions and abbreviated terms
3.1 Terms from other documents
3.2 Terms specific to the present document
3.3 Abbreviated terms
4 Introduction to space software
4.1 Getting started
4.1.1 Space projects
4.1.2 Space standards: The ECSS System
4.1.3 Key characteristics of the ECSS System
4.1.4 Establishing ECSS Standards for a space project
4.1.5 Software / ECSS Standards relevant for Software
4.1.6 Why are standards a MUST for the software development process?
4.1.7 Executing a space software project
4.1.8 Disciplines in Space Software Projects
4.2 Getting compliant
4.2.1 The ECSS-E-ST-40C roles
4.2.2 Compliance with the ECSS-E-ST-40C
4.2.3 Characterization of space software leading to various interpretations/applications of the standard
4.2.4 Software criticality categories
4.2.5 Tailoring
4.2.6 Contractual and Organizational Special Arrangements
5 Guidelines
5.1 Introduction
5.2 Software related system requirement process
5.2.1 Overview
5.2.2 Software related system requirements analysis
5.2.3 Software related system verification
5.2.4 Software related system integration and control
5.2.5 System requirement review
5.3 Software management process
5.3.1 Overview
5.3.2 Software life cycle management
5.3.3 Software project and technical reviews
5.3.4 Software project reviews description
5.3.5 Software technical reviews description
5.3.6 Review phasing
5.3.7 Interface management
5.3.8 Technical budget and margin management
5.3.9 Compliance to this Standard
5.4 Software requirements and architecture engineering process
5.4.1 Overview
5.4.2 Software requirement analysis
5.4.3 Software architectural design
5.4.4 Conducting a preliminary design review
5.5 Software design and implementation engineering process
Figures
Figure 4-1: ECSS relations for discipline software
Figure 4-2: Role relationships
Figure 4-3: Delivery of warranty and support between companies
Figure 4-4: Closely Coupled Build and Service Support Contract
Figure 5-1: System database
Figure 5-2: Constraints between life cycles
Figure 5-3: Software requirement reviews
Figure 5-4: Phasing between system reviews and flight software reviews
Figure 5-5: Phasing between ground segment reviews and ground software reviews
Figure 5-6: Reuse in case of reference architecture
Figure 5-7: Example ITIL processes
Figure 6-1: The autocoding process
Figure 7-1: Mitigation of theoretical worst case with operational scenarios
Figure 7-2: An example of a complete task table with all timing figures
Figure 7-3: Maintenance Cycle
Tables
Table 5-1: Possible review setup
Table 5-2: Example of review objectives and their applicability to each version
Table 6-1: Choosing a Software life-cycle
Table 6-2: Relation between the testing objectives and the testing strategies
Table 7-1: Schedulability Analysis Checklists
Introduction
The ECSS-E-ST-40C Standard defines the principles and requirements applicable to space software
engineering. This ECSS-E-HB-40A handbook provides guidance on the use of the ECSS-E-ST-40C.
1
Scope
This Handbook provides advice, interpretations, elaborations and software engineering best practices
for the implementation of the requirements specified in ECSS-E-ST-40C. The handbook is intended to
be applicable to both flight and ground software. It has been produced to complement the ECSS-E-ST-40C
Standard, in the area where space project experience has reported issues related to the applicability,
the interpretation or the feasibility of the Standard. It should be read to clarify the spirit of the
Standard, the intention of the authors or the industrial best practices when applying the Standard to a
space project.
The Handbook is not a software engineering book addressing the technical description and respective
merits of software engineering methods and tools.
ECSS-E-HB-40A covers, in particular, the following:
a. In section 4.1, the description of the context in which the software engineering standard
operates, together with the explanation of the importance of following standards to get proper
engineering.
b. In section 4.2, elaboration on key concepts that are essential to get compliance with the
Standard, such as the roles, the software characteristics, the criticality, the tailoring and the
contractual aspects.
c. In section 5, following the table of contents of the ECSS-E-ST-40C Standard, discussion of the topics addressed in the Standard, with a view to addressing the issues that have been reported in projects about the interpretation, the application or the feasibility of the requirements. This includes in particular:
1. Requirement engineering and the relationship between system and software
2. Implementation of the requirements of ECSS-E-ST-40 when different life-cycle paradigms
are applied (e.g., waterfall, incremental, evolutionary, agile) and at different levels of the
Customer-Supplier Network
3. Architecture, design and implementation, including real-time aspects
4. Unit and integration testing considerations, testing coverage
5. Validation and acceptance, including software validation facility and ISVV
implementation
6. Verification techniques, requirements and plan
7. Software operation and maintenance considerations.
d. In sections 6 and 7, more information about selected topics addressed in section 5, such as (in
section 6) use cases, life cycle, model based engineering, testing, automatic code generation, and
(in section 7) technical budget and margin, computational model and schedule analysis.
NOTE In order to improve the readability of the Handbook, the
following logic has been selected for sections 5, 6, and 7:
2
References
3
Terms, definitions and abbreviated terms
Abbreviation Meaning
CPU central processing unit
DDF design definition file
DDR detailed design review
OTS off-the-shelf
SPA software product assurance
SPAMR software product assurance milestone report
SPAP software product assurance plan
4
Introduction to space software
The ECSS-S-ST-00C document provides a general introduction to the ECSS System and to the use of the ECSS documents in all space projects, and therefore also gives background information for software projects. The M standards for project management define phases, processes, reviews and rules for space project organization that apply also to software projects. In particular, ECSS-M-ST-40C Rev1 defines the space configuration and information management requirements also for software projects. In addition, the ECSS-M-ST-10C Rev1 requirements are tailored for software in the ECSS-E-ST-40 Software Management Process.
The context of the space software engineering activities is the overall Space System Engineering process. System engineering is defined as an interdisciplinary approach governing the total technical effort to transform requirements into a system solution. Its framework is defined by the "Space engineering - System Engineering General Requirements" (ECSS-E-ST-10C) Standard, which is intended to apply to all space systems and products, at any level of the system decomposition, including hardware, software, procedures, man-in-the-loop, facilities and services.
The "Space Engineering Software" Standard (ECSS-E-ST-40C) covers the software development of
the “Space system product software”, i.e. all software that is part of a space system product tree and
that is developed as part of a space project needs to apply this standard (for software deliverables and
non-deliverables). It focuses on space software engineering processes requirements and their expected
outputs. A special emphasis is put in the standard on the system-to-software relationship and on the
verification and validation of software items.
This Standard is, to the extent made applicable by project business agreements, to be applied by all the
elements of a space system, including the space segment, the launch service segment and the ground
segment. It covers all aspects of space software engineering including requirements definition, design,
production, verification and validation, transfer, operations and maintenance.
The Standard defines the scope of the space software engineering processes and its interfaces with
management and product assurance, which are addressed in the Management (–M) and Product
assurance (–Q) branches of the ECSS System, and explains how they apply in the software engineering
processes. Together with the requirements found in the other branches of the ECSS Standards, this set
provides a coherent and complete framework for software engineering in a space project. Software
may be either a subsystem of a more complex system or it may also be an independent system.
When developing ground systems, the "Space engineering - System Engineering General Requirements" (ECSS-E-ST-10C) Standard is extended by the "Space Engineering - Ground Systems and Operations" Standard (ECSS-E-ST-70C). Ground systems and operations are key elements of a space system and play an essential role in achieving mission success. Mission success is defined as the achievement of the target mission objectives as expressed in terms of the quantity, quality and availability of delivered mission products and services within a given cost envelope. Mission success requires successful completion of a long and complex process covering the definition, design, production, verification, validation, post-launch operations and post-operational activities, involving both ground segment and space segment elements. It involves technical activities, as well as human and financial resources, and encompasses the full range of space engineering disciplines. Moreover, it necessitates a close link with the design of the space segment in order to ensure proper compatibility between these elements of the complete space system. Another specific link is made between ECSS-E-ST-70-01C and ECSS-E-ST-40C for On-Board Control Procedures (OBCP) / On-Board Application Programs (OBAP) development and validation processes.
The Space Engineering Software Standard is always complemented by the Space Product Assurance Standard (ECSS-Q-ST-80C), which specifies the product assurance aspects and is the entry point for ECSS-E-ST-40C into the Q-series of standards. All the requirements of ECSS-E-ST-40C and ECSS-Q-ST-80C are applicable to software development projects as a principle, unless criticality-driven tailoring (see Annex R of ECSS-E-ST-40C for details) or other tailoring driven by specific characteristics and constraints is applied. Requirements for space configuration and information management are in ECSS-M-ST-40C Rev 1, which includes the software Document
Requirements Definition (DRD) for the software configuration file. However, the DRDs in the annexes
are provided to help customers and suppliers with the project's setup. Together, both these standards
either define or refer to the definition of all relevant processes for space software projects. ECSS-Q-ST-
20C is the reference, through ECSS-Q-ST-80C, for the software acquisition process, and the software
management process tailors ECSS-M-ST-10C for software.
The Space Engineering Software Standard may be adapted, i.e. tailored, for the specific characteristics and constraints of a space project in conformance with ECSS-S-ST-00C. There are several drivers for tailoring, such as dependability and safety aspects, software development constraints, product quality objectives and business objectives. Tailoring for dependability and safety aspects is based on the selection of requirements related to the verification, validation and levels of proof demanded by the criticality of the software. A number of factors therefore have more or less influence on the tailoring: project characteristics, costs for the development and the operations and maintenance phase, number and skills of people required to develop, operate and maintain the software, criticality and complexity of the software, and risk assessments. The type of software development (e.g. database or real-time) and the target system (e.g. embedded processor, host system, programmable device, or application-specific integrated circuit) also need to be taken into account. The tailoring of ECSS standards has a direct influence on the business agreement of the space project (due to the binding clauses).
4.1.7.4 Reviews
The reviews relevant to the software engineering processes are defined in detail in the "Space Project
Management - Project Planning and Implementation" Standard (ECSS-M-ST-10C Rev1) as:
a. System Requirements Review (SRR)
b. Preliminary Design Review (PDR)
c. Critical Design Review (CDR)
d. Qualification Review (QR)
4.1.7.5 Processes
The notion of engineering processes is fundamental in the Space Engineering Software Standard
(ECSS-E-ST-40C). These processes provide the means to describe the overall constraints and interfaces
to the engineering processes at system level. At the same time, they provide the possibility to the
supplier to implement the individual activities and tasks implied by the processes in accordance with
a selected software life cycle.
The Standard is a process model, and does not prescribe a particular software life cycle. Although the sequence of space project reviews may suggest a waterfall model, the software development plan can implement any life cycle, in particular through the use of technical reviews, whose formalism is less strict than that of the project reviews.
Figure 4-2 of ECSS-E-ST-40C ("Overview of the software life cycle processes") identifies the main concurrent software engineering processes and shows how and when they are synchronized by the customer-supplier reviews.
The relations between the software and the system reviews are addressed in chapter 5.2.5 in more
detail.
b. the software customer assumes a supplier role to interface in turn with his customer at the next
higher level, ensuring that higher level system requirements are adequately taken into account.
The customer's main activities are recalled hereafter as a synthesis of the ECSS-E-ST-40C standard; the supplier role is fully defined in the standard and is not summarised in this introduction.
The customer derives the functional, performance, interface and quality requirements for the software, based on system engineering principles and methods. Software items are defined in the system breakdown at different levels. The customer's requirements are specified by this process, and they provide the starting point for the software engineering.
As the customer-supplier relationship is a formal relationship (e.g. contractual when the customer and supplier are not in the same organisation), the customer requirements baseline (RB) should be complemented by a SOW, to make precise the commitment between the two parties and the scope of the activities to be performed by the supplier, such as the phases to be covered (development, operation, maintenance), Customer Furnished Items (CFI), schedule constraints, etc.
The customer activities are defined in ECSS-E-ST-40C clause 5.2.
When the first level SW supplier subcontracts SW activities to a third party, only those requirements of the ECSS-E-ST-40C standard that pertain to the subcontracted activity are applicable to the third party. The first level SW supplier nevertheless remains responsible for ECSS-E-ST-40C conformance towards the Customer.
If the customer considers it necessary, the customer tailors the ECSS standard in the scope of its project.
Non-compliances to the standard or the tailored standard are agreed before the kick-off meeting.
In addition to the formal reviews, the customer and supplier may agree on additional technical reviews to address specific concerns, such as anticipation of formal reviews, a technical focus, etc. (ECSS-E-ST-40C clause 5.3.3).
The customer is in charge of performing the acceptance activities (section 5.7.3).
The customer specifies in the SOW which part of the operation and maintenance activities is to be covered by the SW supplier. Even if not explicitly covered by the C/D phase SOW, the maintenance process is considered by both customer and supplier in order to ensure that the prerequisites for the maintenance phase have been correctly taken into account.
4.2.1.2 User
The ECSS-E-ST-40C introduces the term “user”.
The term "user" is used in ECSS-E-ST-40C section 5.9 with the introduction of the SOS entity in charge of supporting the user during operations. The user is defined in this case as the "final user" of the SW product, and so in most cases it is not the customer of the development phase. The final SW user is the operator of the system in which the software is embedded.
The ECSS-E-ST-40C mentions in other requirements the notions of use case, user need, etc., which are related to all SW users during the different system integration phases.
It is the Customer's responsibility to collect all the users' needs, and in particular the Use Case definitions, which define the operational constraints early. The customer must include the verification of the users' needs in the acceptance process.
The SOS entity main objective is to keep the software operational for the final user. The SOS entity
plays a role only after the system Acceptance Review, and acts as a relay between the final user and
maintainers, in charge of testing the operational software behaviour in the operational context,
tracking user requests, deploying new releases when necessary, ensuring administration activities.
4.2.1.4 Maintainer
The maintenance process contains the activities and tasks of the maintainer. The objective is to modify
an existing software product while preserving its original properties. This process includes the
migration and retirement of the software product. The process ends with the retirement of the
software product.
The activities provided in section 5.9.1.2 are specific to the maintenance process; however, the process utilizes other processes in ECSS-E-ST-40C, and the term supplier is interpreted as maintainer in this case.
The maintainer manages the maintenance process applying the management process.
4.2.1.5 Operator
The term “operator” appears inside the ECSS-E-ST-40C only:
a. in the clause 5.10.6.2 for the establishment of the migration plan in collaboration with the
maintainer
b. in the clause 5.10.7.1 for the establishment of the retirement plan in collaboration with the
maintainer
NOTE The party responsible for these two plans is the maintainer, not the operator.
The term "operator" is used to denote the final user not only of the software product but also of the system which includes the software product.
4.2.1.6 Conductor
The term "conductor" is introduced for the ISVV activities. The conductor is the person or entity that takes charge of the verification and validation tasks of the ISVV. It is the ISVV supplier.
4. the SW final user is in fact the system operator. As a major concern, the customer is in charge of making sure that the needs of the final user are properly captured in the requirements baseline, to avoid late introduction of operational constraints.
b. between the SW AR and the system AR:
1. the SW customer is in charge of integrating the SW product inside the system. It is supported in this phase by the SW maintainer for expertise, analysis and correction or modification if necessary.
2. the SW maintainer is in charge of supporting the SW customer. The SW maintainer performs expertise and analysis on SW customer request, corrects the SW product, and implements modifications or evolutions on SW customer request. In general the SW maintainer is the SW supplier of the previous phase, but this responsibility can be transferred to a third party.
3. ISVV activities can be required according to modification or evolution requests and their impact on the critical parts of the software.
4. the SW final user is in fact the system operator. There is no change with regard to the previous phase.
c. After the system AR:
1. the SW final user, who is in fact the system operator, is in charge of operating the system.
2. The SOS entity role appears in this phase and covers in fact two roles: assuring the administration and maintenance of the operations infrastructure, and assuming the role of system maintainer. The SOS entity is in direct relationship with the SW final user (system operator).
3. The SW maintainer is in charge of supporting the SOS entity. The SW maintainer performs expertise and analysis on SOS entity request, corrects the SW product, and implements modifications or evolutions on SOS entity request.
The role relationships are synthesised in Figure 4-2.
Figure 4-2: Role relationships
1. The complete list of ECSS requirements applicable to the project (e.g. in the EARM).
2. For each requirement, the actual indication of compliance. When deviation is identified the
justification is provided.
An example of template for an EARM for the requirements of ECSS-S-ST-00C is the following table:
where:
applicable without change (A)
applicable with modification (M)
not applicable (D)
new generated requirements (N) including identification of origin if existing
According to ECSS-S-ST-00C, the customer provides an ECSS Applicability Requirements Matrix and the Supplier replies with an ECSS Compliance Matrix. It is important that the ECSS Compliance Matrix is produced at the level of each requirement (as opposed to a global statement of compliance), in order to allow the Customer to detect Non- or Partial Compliances early enough in the project. It is essential to discuss them at the beginning of the project rather than discovering them at the end.
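As an illustration of this requirement-level granularity, the sketch below shows one possible machine-readable representation of such a matrix, together with a simple check that surfaces every Partial or Non-Compliance for early discussion. The field names, compliance codes and example entries are hypothetical and are not taken from ECSS:

from dataclasses import dataclass

@dataclass
class MatrixEntry:
    requirement_id: str      # ECSS clause identifier, e.g. "ECSS-E-ST-40C 5.8.3.5b"
    applicability: str       # "A", "M", "D" or "N", as in the legend above
    compliance: str          # "C" (compliant), "PC" (partial), "NC" (non-compliant)
    justification: str = ""  # expected for every PC or NC entry

def open_issues(matrix):
    # Entries that Customer and Supplier should discuss at the start of the project.
    return [e for e in matrix if e.compliance in ("PC", "NC")]

matrix = [
    MatrixEntry("ECSS-E-ST-40C 5.8.3.5b", "A", "PC",
                "structural coverage demonstrated over the whole test campaign"),
    MatrixEntry("ECSS-E-ST-40C 5.3.9.2", "A", "C"),
]

for entry in open_issues(matrix):
    print(entry.requirement_id, entry.compliance, entry.justification or "missing justification")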
The attention of the Supplier is drawn to the risk taken by systematically declaring (global) compliance in the proposal. Although this might look like a way to increase the chances of being selected, it is likely to backfire on the Supplier and the project when it is actually discovered that this global compliance cannot be achieved.
A Partial Compliance needs to be detailed such that the Customer can assess the extent to which the
objective of the ECSS requirement is covered, and whether a different way to achieve the objective
might be acceptable.
A Non-Compliance needs also to be investigated in terms of feasibility and acceptability in the scope
of the project.
There are historical records of PC/NC that ended up being acknowledged in an update of the ECSS-E-ST-40C Standard, such as the structural coverage of code achieved only by unit tests (version B) or through the whole test campaign (version C, Note in 5.8.3.5b), or delivery of the detailed design not as a document (version B) but as an electronic file (version C, Ye in Annex R; accounting of models in 5.3.2.4).
When a Supplier has his own internal software engineering standard (e.g. as part of a company
quality model), he can negotiate with the Customer the approval of a compliance matrix between the
ECSS-E-ST-40C standard and the company standard, provided that all the information required by
ECSS is actually delivered. This is in particular the objective of ECSS-E-ST-40C clause 5.3.9.2
(Documentation compliance) to allow a mapping of the ECSS DRDs on company documentation.
This is done today at each project level. However, it would be beneficial to put in place a customer/supplier company-level agreement, although there is today no formalized organisational structure to do so.
Various application families have been identified in the space flight software domain:
a. command and control software (on board of satellites, probes, spacecraft, but also command
and control for payloads, control software of vehicles, hardware command and control,
equipment, sensor, actuator software, up to the control software of robots and the central
computers of space stations). They generally take care of control and data handling, and feature
a high level of criticality. The focus is put on the integrity of data, which impacts on the design
constraints and the technology used.
b. data processing software (on board satellites, probes, spacecraft and robots). The software
processes the payload data and can be less critical. The focus is put on the performance (e.g.
image processing).
c. micro-gravity facility and experiment software. It is generally a self-contained (software) system
which includes command, control and data handling software for the check-out, error, FDIR,
time, mode, TM/TC and resources management and local data processing. Its failure, by design,
does not affect the whole space infrastructure.
d. control MMI software generally runs on a laptop for developing and running the experiment procedures and the various subsystem controllers, and for managing concurrency and resource access. Although operating in space, its characteristics are close to those of commercial desktop software, and similar technologies and criteria can be used.
Launcher software (flight control software and equipment software) has a short operational life
combined with high availability and reliability. Furthermore it runs once and cannot be tested in the
real environment.
Flight software includes low level software such as boot, initialisation, board support package, that
often cannot be modified during operation.
Flight software runs in a hostile environment. A failure of the flight software that results in the non-acceptance of commands can lead to the loss of the spacecraft. Software maintenance is more difficult on board, as the software is difficult to access and is usually required to continue operating. Also, when a fault remains in the flight software, investigations are difficult to conduct due to lower observability than on ground. This leads to specific dependability requirements.
Furthermore, during operation the spacecraft visibility can be limited and the spacecraft can be hidden from the ground control (by the Earth itself or by a planet). This limits the access time and can lead to specific requirements for the autonomy of the spacecraft.
The operational lifetime of a spacecraft is generally long (in the range of 2 to 15 years). This puts specific constraints on the maintenance of the flight software development environment, as in most cases the on-board software is the only spacecraft component that can be modified during in-orbit life. This allows correcting malfunctions in the software, but also circumventing HW problems or adapting to unpredicted in-orbit situations. The software development environment also needs to remain operational during the whole operational lifetime of the spacecraft; this may require specific measures such as hardware support provisions or emulation on new hardware.
independent level (for a group of similar space missions with common characteristics). In fact the
utilisation of the infrastructure software is not limited to reuse of software functionality but more
importantly to re-use of a system-design at ground segment level. In other words the subject
infrastructure can be seen in many cases as a reference ground segment system architecture, which
defines the composing components of the ground segment software and establishes standard
interfaces among them. Hence when developing the components of the ground segment for a
particular mission (software systems such as the Mission Control System, Flight Dynamics software
systems, operational simulator or the ground station software), the software development does not
need to start from the system related software requirements engineering phase (the system context is
established through the reuse of system design provided by the infrastructure).
The ground segment infrastructure is generic and covers the common requirements of many missions.
For specific families of missions such as EO or deep-space missions, which share additional common
requirements, special profiles of the infrastructure may be developed to increase the reuse even more.
From this infrastructure perspective, many developments are not driven by the requirements of a single space mission and are not implemented as part of a particular space programme, but have their own, more or less independent and parallel, lifecycle. These infrastructure developments are planned ahead of the missions (e.g. new versions of the mission control system or simulator infrastructure). Therefore some system requirements are not fully stable; performance requirements, in particular, need to be forecasted or maximized.
The lifecycle of ground segment software elements is often more complicated than that of the flight software. It is not unusual that the different components of the ground infrastructure are developed by different Suppliers, maintained by yet other Suppliers and provided as Customer Furnished Items to the Suppliers who are responsible for the development of the mission-specific software components (which again may be maintained by other Suppliers). The fact that more than one mission uses components of the common infrastructure software at the same time adds additional complexity to the governance of the infrastructure software lifecycle. For example, the management of software defects in the infrastructure software can only be governed by configuration change boards involving representatives of all software projects reusing the subject software. These characteristics directly impact many aspects of the software development life cycle, including the design, implementation, validation, maintenance and operation.
Components of ground software undergo an operational validation, which may be performed, depending on the schedule, before or after the individual software development project is finally accepted (AR), e.g. after the launch of the satellite or in the context of simulation campaigns.
The overall Ground Segment typically comprises a significantly larger number of complex systems than the flight software (from the ground station to the mission control system, the data distribution system and the payload operations planning system, internet connectivity units such as routers and firewalls, server and PC operating systems, databases, etc.). Each of these systems is typically deployed in a geographically different location and operated by different entities. The end-to-end validation of such a distributed system can therefore usually be achieved only at system level and on a scenario basis.
The Ground Segment (GS) often relies on commercial software such as operating systems and COTS. Moreover, with emerging software development technologies such as web-based applications, Java EE, .NET, SMP-2 based simulator model developments or the principles of Service Oriented Architectures, GS software is increasingly developed on top of frameworks related to these technologies. The validation of these frameworks and COTS is often not a realistic approach, in particular as the COTS and frameworks are often not available at source code level and limited information regarding their test coverage is available.
It is worth noting that, for ground software components developed on such frameworks, it is not unusual that a significant percentage of the overall functionality of the "system" is provided by the framework (this can reach 90% in some cases). Flight software, by contrast, is often a custom development fully available at source code level.
The development of parts of the ground segment infrastructure (e.g. routers and gateways) is not directly managed by the ground data system developer or the operating entity (e.g. the network provider); the level of quality of these units can only be assessed at contractual level (service level agreements, SLA) or via system testing.
The human factor is important in Ground Segment software, since it is very often manually or semi-automatically operated. A typical characteristic of Ground Segment software is the provision of Man-Machine Interfaces. Hence, validating the software against all possible combinations of human interactions with it through the Man-Machine Interfaces (the flow of all possible inputs) is often unfeasible.
Ground Segment software error control mechanisms operate above the level of the software alone. For example: critical commands must be sent twice; CRC codes ensure that commands are not corrupted; in case of major problems the spacecraft enters safe mode; and alerts are verified by humans conducting flight operation procedures.
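As a purely illustrative sketch of the CRC mechanism, the following fragment shows how a receiver can verify a command frame against an appended checksum. The frame layout and the use of CRC-32 are assumptions made for the example only; in practice the checksum is the one defined by the TM/TC standard applicable to the project:

import zlib

def append_crc(command: bytes) -> bytes:
    # Append a 4-byte big-endian CRC-32 computed over the command body.
    return command + zlib.crc32(command).to_bytes(4, "big")

def is_uncorrupted(frame: bytes) -> bool:
    # Recompute the CRC over the body and compare it with the transmitted one.
    body, received = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == received

frame = append_crc(b"SWITCH_HEATER_ON")
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]
print(is_uncorrupted(frame))      # True
print(is_uncorrupted(corrupted))  # False: the single-byte corruption is detected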
Most ground segment software problems impact, in the worst case, the availability of the system. Ground segment software can however usually be corrected and redeployed much more easily than flight software. This is often related, for instance, to much easier access to the system for debugging and redeployment (no need for a satellite pass; hardware components can be changed and resources such as memory or storage extended). If the maintainability is high (i.e. a short time is required to correct failures), there are margins to increase the availability of the software, i.e. the probability that the software system is available.
Suitability of Ground Segment software for a specific project may be claimed based on product service history, provided that the conditions for applying product service history are complied with. Most software routines are taken from previous versions of the system. This is particularly the case for complex systems such as flight dynamics software, which may have been developed over the years and validated operationally (at system level) for a number of missions, but not at unit and integration test level, and may therefore provide limited test coverage information.
Ground software is normally
a. data driven (e.g. the behaviour depends on the specific TM/TC data being received/generated
and the input entered by the user via the MMI);
b. configuration driven (IP addresses and ports, database adapters, WS endpoints, all options
existing in the TM/TC database definitions);
c. human driven (with multiple users, multiple applications, multiple operational modes, etc.).
4.2.5 Tailoring
The contract perimeter defines the extent to which ECSS-E-ST-40C is applicable, because some activities such as operation, maintenance, retirement, or ISVV (for criticality categories A and B) may not be included in the contract. But this is more related to the "applicability" of the Standard than to its "tailoring".
In the same spirit, if activities of ECSS-E-ST-40C are performed by a role other than the one specified, this also leads to tailoring, for example when system/customer tasks are delegated to the software/supplier level: not all of clause 5.2 may be done by the Customer, the Supplier may be responsible for the acceptance test plan, or the Supplier may not be responsible for the definition of the external interfaces.
ECSS-E-ST-40C introduces in Annex R a pre-tailoring based on software criticality.
This pre-tailoring may be further tailored according to criteria other than criticality, mainly according to the level of risk that is taken by not performing given engineering activities, or by doing them differently from what is specified in the Standard.
It should be considered that taking a risk on highly critical software may have much more serious consequences than taking a risk on non-critical software.
On the other hand, activities not required for non-critical software (e.g. higher code coverage, ISVV) may be performed for other reasons, such as intended reuse in a critical context. For example (non-exhaustive):
a. merging the requirements baseline and the technical specification increases the risk of losing the customer or supplier standpoint, i.e. of missing a use case or an implementation requirement;
b. skipping joint reviews increases the risk of discovering disagreements between customer and supplier on the product capability or quality late, causing substantial reengineering;
c. not managing technical budgets and margins increases the risk of discovering unfeasibility late in the project;
d. not using design methods increases the risk of developing weak architectures and inconsistent designs;
e. not defining interfaces increases the risk of integration issues;
f. not documenting the detailed design increases the risk of losing control of the software development, such as the capability to anticipate implementation errors, to debug, to integrate, to maintain, to master safety and dependability, etc.;
g. skipping unit tests increases the risk of discovering faults late in the process and of jeopardizing the schedule;
h. not rerunning the full validation tests on the last version of the software increases the risk of leaving faults in the product;
i. not performing full verification activities increases the risk of affecting the quality of the product;
j. not complying with the DRD content increases the risk of incomplete documentation and of missing information for maintenance;
k. not complying with the DRD structure increases the effort of the reviewers.
It is emphasized that the risk taken does not affect only the project management, but also the
dependability and safety of the product, i.e. adequacy to its criticality level.
Each risk taken by tailoring out a software engineering requirement is analysed in the scope of the
project, and its consequences are assessed to decide if it is acceptable or not.
Any further tailoring is therefore associated with risk management, which is specific to each project. For this reason, it is not possible to propose other generic tailoring tables.
However, there are activities that can never be tailored out, e.g.:
a. agreeing on a development approach
b. specifying software and reviewing the specification
c. producing, validating and accepting software
d. managing the configuration
Examples of tailoring:
a. Prototypes and study software projects may come with minimal requirements for
documentation and validation. These types of application are typically not re-used after having
demonstrated the concept or their feasibility. Only a small proportion of prototypes are later
used either for integration into other software products or as stand-alone applications. In the
case where it is desirable to re-use a prototype, its reusability needs to be evaluated and the
appropriate re-engineering will be performed.
b. The incremental development of a technology - following an improvement of its Technology
readiness level (TRL) - could also lead to tailoring of the software engineering activities. The full
list of activities that should be performed for a software "product" [TRL 6] will be performed
incrementally along several contracts and development steps. This takes into account the
foreseen criticality of the product in its intended operational environment. The tailoring
addresses the extent of functions implemented in the software, the extent of its validation,
verification and documentation, and the criticality level of each increment (see also 4.2.4 on
software criticality).
NOTE Interpretation of the technology readiness levels for software is
given in the chapter 7 of the ESA document “Guidelines for the
use of TRLs in ESA programmes”.
c. For criticality category D software, the level of granularity of the detailed design, its depth, or in other words the concept of unit (raised to the level of a function or even an application), can be tuned to a level which takes into account both the needs of maintainability and traceability, and a realistic engineering approach related to the type of software (e.g. ground software). The internal interfaces [5.5.2.2] are defined at the same level of granularity, as is the unit test strategy.
d. For criticality category D software, the level of definition of the computational model can be
adapted to the type of software and the criticality of its real-time aspects.
e. When ECSS-E-ST-40C requirements are tailored out, the associated Expected Output in the DRD is also tailored out. The DRDs propose a way to deliver the requested information; however, clause 5.3.9.2 allows a different documentation structure, provided that all the needed information is there. This can be seen as a sort of tailoring of the DRDs.
f. Also, some documentation can remain available in Supplier premises without delivery, or can
even be waived by the Customer.
Figure 4-3: Delivery of warranty and support between companies
It is now common for larger ground system projects to be procured with the build and user support contracts linked very closely together (see Figure 4-4). This means that the contractor is asked to build a system, then maintain it and, in some cases, operate the system (including the SOS entity role). In this case the service contract may imply that a team of people is employed during normal working hours, or on a 24/7 basis.
For all types of contract it is usual to provision for the fixing of defects, incidents or problems within an agreed period. Usually this period differs with the criticality of the defect. The contract may specify a Service Level Agreement (SLA) that defines the response times and couples them to contractual penalties or incentives.
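A minimal sketch of how such an SLA might be expressed in machine-readable form is given below; the severity names, durations and breach rule are invented for illustration and are not taken from any ECSS requirement:

from datetime import timedelta

SLA = {
    # severity: (maximum time to first response, maximum time to deliver a fix)
    "critical": (timedelta(hours=4), timedelta(days=2)),
    "major":    (timedelta(days=1),  timedelta(days=10)),
    "minor":    (timedelta(days=5),  timedelta(days=60)),
}

def breaches_sla(severity: str, response_time: timedelta, fix_time: timedelta) -> bool:
    # True if either the response or the fix exceeded the agreed maximum for this severity.
    max_response, max_fix = SLA[severity]
    return response_time > max_response or fix_time > max_fix

# A major defect answered after two days breaches the response-time commitment.
print(breaches_sla("major", timedelta(days=2), timedelta(days=7)))   # True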
DELI-020 All deliverable documents shall be provided in Microsoft Word format in the version in use at the Agency and in PDF format.
At the time of writing, the Agency uses Microsoft Office Suite 2003. The latest version to be used shall be specified at KoM.
DELI-030 All deliverable documents shall be produced based on the official document templates [RD1].
DELI-040 The deliverables for which the Customer owns the IPR shall not bear any
information identifying the Contractor’s Company or staff who produced the
deliverable (e.g. Contractor’s logo).
DELI-050 All deliverables shall be delivered in their original source format.
Comment: For example in MS Word format for a document or in the source
format of the UML tool for a UML document.
DELI-060 All deliverable documents shall in addition be delivered in PDF format.
DELI-090 All documents delivered to the Customer for approval shall be signed
electronically by the Contractor technical responsible.
DELI-100 All deliverable documents shall be named according to the rules defined in the
Configuration Management Plan.
DELI-110 The deliverables shall be delivered to the Customer on duly labelled media as
defined by the Configuration Management Plan.
DELI-120 The organisation of the files and directories on the media shall be approved by
the Customer Technical representative.
DELI-130 Deliveries shall be done on DVD.
4.2.6.6.1 Overview
The Acceptance testing period, also referred to as the check-out period starts with the delivery of the
software by the Supplier to the Customer, and ends with the formal acceptance of the software by the
Customer.
The duration of the acceptance period, i.e. the time granted to the Customer for performing its own validation of the delivered software, is subject to contractual agreement and not specified by ECSS-E-ST-40C. It is therefore specified through dedicated requirements as part of the Contract (often in the SoW). The conditions for acceptance are also clearly specified as part of the contractual agreement between the Customer and Supplier.
The Customer may perform additional testing and validation on top of the SVS specified by the Supplier during the acceptance period, and may report all identified software defects and non-compliances against the RB/TS to the Supplier. It is irrelevant how the Customer identifies software defects/non-compliances, as long as they can be traced back to a clear non-compliance against the RB/TS. Which tests the Customer executes during the acceptance period is not subject to any contractual agreement between the Customer and the Supplier, since it is purely the Customer's decision.
Since acceptances are often linked to payment milestones, the Customer and the Supplier may agree
contractually on a stepwise acceptance approach, in which case the steps, their duration and the exact
criteria for successful completion are contractually specified and agreed upon by both parties. This can
for instance be realized through introduction of preliminary and final acceptance milestones.
In special cases, where the target operational environment of software is not easily accessible, e.g. in
case of remote ground stations, the Customer and the Supplier may contractually arrange for
additional acceptance milestones such as site and operational site acceptances.
Another case of additional contractually arranged acceptances is the so-called Factory Acceptance, which is performed in the Supplier environment and aims at serving as a confidence-building step for the Customer before the software is delivered by the Supplier into the Customer's environment (see section 5.7).
4.2.6.6.2 Warranty
The start and duration of the warranty are governed by the contractual agreement between the Customer and the Supplier. They are also governed by the national, European or international legal terms applicable to the contract.
The following requirements are examples of contractual requirements for regulating the warranty
start and duration:
The Contractor shall offer a rolling warranty starting with the first delivery and expiring 3
months after the Final Acceptance of the last delivery.
Comment: “Rolling warranty” means that, at any point in time inside the warranty period
defined in this requirement, the warranty covers the whole system.
During the warranty period, the Contractor shall be responsible to investigate and fix all errors
found in the system and deliver corresponding corrections.
The Contractor shall perform an End of Warranty delivery, which shall reflect the latest status
of the System both in terms of software and in terms of documentation.
The original Warranty period, which is provisioned for in the contractual agreement between the
supplier and customer, often ends before the end of operational usage of the software (e.g. end of the
space mission).
Software maintenance is in such cases established via contractual agreements (typically in the form of service level agreements) between the customer and the supplier (which can be the same supplier who developed the software or a different party who provides only the maintenance), in accordance with the operational needs of the customer for the subject software. The difference between warranty and maintenance is sometimes a source of confusion. The nature of the work to be performed under warranty and maintenance is similar (software defect management and removal). See also 5.10.2.1.2 for more details.
5
Guidelines
5.1 Introduction
This chapter includes guidelines for the application of ECSS-E-ST-40C to space projects.
For the convenience of the reader, the note of section 1 is repeated here:
NOTE In order to improve the readability of the Handbook, the following logic has been selected for sections 5, 6, and 7:
• section 5 follows the table of contents of ECSS-E-ST-40C at least up to level 3 and generally up to level 4. For each sub-clause of ECSS-E-ST-40C:
+ either information is given fully in section 5,
+ or there is a pointer into section 6 or section 7,
+ or the paragraph has been left intentionally empty for consistency with the ECSS-E-ST-40C table of contents; in this case, only "–" is mentioned.
• section 6 expands selected parts of section 5 when:
+ either the volume of information was considered too large to stay in section 5,
+ or the topic is addressed in several places of section 5.
In any case, there is a pointer from section 5 to section 6, and section 6 mentions the various places in ECSS-E-ST-40C where the topic is addressed.
• section 7 follows the same principles as section 6, but gathers the topics related to margins and to real-time.
recommendations, in particular on system engineering. Some of them are quoted below from the
Executive Summary of the Final report:
a. “Engineers and scientists often do not realize the downstream complexity and cost-driving factors
entailed by their local decisions. Overly stringent requirements and simplistic hardware interfaces can
complicate software; flight software de-scoping decisions and ill-conceived autonomy can complicate
operations; and a lack of consideration for testability can complicate verification efforts. It is therefore
recommended to look at educational materials, such as a “complexity primer” and the addition of
“complexity lessons” to NASA Lessons Learned. 1
b. Unsubstantiated requirements have caused unnecessary complexity in flight software, either
because the requirement was unnecessary or overly stringent. Rationale statements have often
been omitted or misused in spite of best practices that call for a rationale for every requirement.
The NASA Systems Engineering Handbook 2 states that rationale is important, and it provides
guidance on how to write a good rationale and check it. In situations where well-substantiated
requirements entail significant software effort, software managers should proactively inform
the project of the impact. In some cases this might stimulate new discussion to relax hard-to-
achieve requirements.
c. Engineering trade studies involving multiple stakeholders (flight, ground, hardware, software,
testing, and operations) can reduce overall complexity, but we found that trade studies were
often not done or only done superficially. Whether due to schedule pressure or unclear
ownership, the result is lost opportunities to reduce complexity. Project managers need to
understand the value of multi-disciplinary trade studies in reducing downstream complexity,
and project engineers should raise complexity concerns as they become apparent.”
1 http://llis.nasa.gov/llis/search/home.jsp
2 NASA/SP-2007-6105 Rev1
The system engineering process uses the results of these lower level verification activities to build
bottom-up multi-layered evidence that the customer requirements have been met.
The system engineering process is applied with various degrees of depth depending on the level of
maturity of the product (e.g. new development or off-the-shelf).
The system engineering process can be applied with a different level of tailoring as agreed between
customer and supplier in their business agreement.
The system engineering organization has interfaces with organizations in charge of management,
product assurance, engineering disciplines, production, and operations and logistics.
NOTE This might also work for a software supplier from a different
organisation, if a dedicated business agreement can be
established early enough, e.g. in phase B.
5.2.4.4.1 Introduction
This part aims at providing some details about the System Data Base (SDB), its interface with the other
space software (particularly the central on-board software) and some of its development constraints.
All engineering processes involved in the space system life cycle are based on data management
processes and tools. The SDB is the reference repository of all the system level data, e.g.:
a. System configuration data,
b. System element physical properties,
c. TM/TC data definitions,
d. On-board software parameters / monitoring / Safe Guard Memory / OBCP definitions,
e. On-board communication protocol data,
f. AIT specific data,
g. Ground calibration/decalibration functions,
h. Operation / FDIR specific data,
i. Electrical I/F data (at functional interface level),
j. Simulator data (modelling input).
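As an illustration only (not part of ECSS-E-ST-40C, and with purely hypothetical field names), the following Python sketch shows what one SDB entry for a telemetry parameter definition could look like; each project defines its own conceptual data model.

    from dataclasses import dataclass

    @dataclass
    class TmParameterDefinition:
        """Hypothetical SDB entry for one telemetry parameter."""
        name: str          # unique system-wide name, e.g. "PWR_BUS_VOLTAGE"
        apid: int          # application process identifier of the source
        raw_type: str      # raw encoding, e.g. "uint16"
        calibration: str   # reference to a ground calibration/decalibration function
        unit: str          # engineering unit after calibration, e.g. "V"
        subsystem: str     # owning subsystem, e.g. "EPS"

    # The same definition is exported to the SDB users (ground segment, simulators,
    # on-board software generation, AIT) so that all of them stay consistent.
    bus_voltage = TmParameterDefinition(
        name="PWR_BUS_VOLTAGE", apid=102, raw_type="uint16",
        calibration="CAL_PWR_BUS_V", unit="V", subsystem="EPS")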
The System team is responsible for defining, configuring, distributing and validating these system level data (see ECSS-E-ST-40C clause 5.2.4.4), with the support of the SDB.
The SDB is thus at the heart of the industrial process (i.e. development, validation, operations) of a complete system (including ground and flight segments). It interfaces with almost all engineering space subsystems (named SDB users below), e.g. the ground segment (including the control centre), the validation facilities, the simulators, the on-board software, the operation tools, etc.
The objective is to configure all software(s) and subsystems in a consistent way (see ECSS-E-ST-40C
clause 5.4.3.6c).
One key point in order to ensure a proper exchange of information is to define, when possible, a well-
defined name for all the described entities. It is of high added value for the whole project to ensure a
consistent naming of e.g. telecommands, telemetry parameters, system parameters, from the
Requirement Baseline to the software implementation and the SDB.
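The following sketch is illustrative only; the naming convention and the names themselves are hypothetical examples, since each project agrees on its own rules.

    import re

    # Hypothetical convention: <PREFIX>_<NAME>, upper case letters, digits and underscores.
    NAME_PATTERN = re.compile(r"^[A-Z][A-Z0-9]*(_[A-Z0-9]+)+$")

    def non_compliant_names(names):
        """Return the exported entity names that do not follow the agreed convention."""
        return [n for n in names if not NAME_PATTERN.match(n)]

    # Example: telecommand and telemetry parameter names exported from the SDB.
    exported = ["PWR_BUS_VOLTAGE", "AOCS_SET_MODE", "tempSensor1"]
    print(non_compliant_names(exported))   # -> ['tempSensor1']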
The SDB may be distributed or centralized. In any case, the important point is to ensure the
consistency of the structure and of the provided data.
Data structures are defined at the SDB user (e.g. on-board software, Ground segments) PDR. Final
content and values are defined at CDR.
The SDB tool itself is a software product designed around a conceptual Data Model that defines all data structures and external interfaces needed by SDB users, for software generation (including documentation) and/or operations. The SDB tool needs to be available in accordance with the subsystems' schedule; this creates strong dependencies among the life cycles of these subsystems and the SDB tool. Hence clause 5.2.8.4 provides some recommendations to properly tailor ECSS-E-ST-40C.
[Figure showing the SDB-related data at each documentation level: Requirement Baseline (SSS, IRD, TM/TC, CSW data exported to SDB), Technical Specification (SW SRS, ICD, SDB), Production (SW DDD, SDB), Code (binary, SDB)]
[Figure showing the SDB life cycle: requirements on the SDB tool (SDB SSS, SDB TS), SDB PDR, SDB development/validation, late inputs, and SDB delivery against the need dates of the SDB users]
To address all these constraints, the approach is to take benefit of already existing SDB applications, so as to be able to provide intermediate versions of the SDB as early as possible, and to put emphasis on the validation phase.
The proposed life cycle and reviews for the SDB are as follows:
a. System SRR: high level requirements on SDB are specified and reviewed. It is recommended to
take benefit of already existing SDB applications: when applicable, a first version of the Reuse
File is issued, identifying which application could be reused and the expected changes on the
existing software(s).
b. The specification (SSS) then details the data management processes, SDB users and their roles,
the scope of data to be managed, external interfaces for import and export, expected data
volumes and performances, schedule of SDB tool itself (with incremental approach when
needed) and preliminary schedule of SDB data deliveries.
c. This specification is reviewed during System PDR.
When the reuse of an existing SDB tool, justified by a Reuse File, is agreed, a first version of the SDB tool can be delivered immediately for initial population.
a. Each change needed to comply with the mission requirements is then implemented in new SDB tool versions,
b. Each SDB tool delivery is subject to an acceptance review, based on an acceptance plan
prepared by the System team.
In case a complete or significant development is initiated, the following reviews take place:
a. SDB PDR: the TS specifies the complete SDB tool, including the SDB conceptual data model and the mapping with external interfaces. The reuse of existing components (e.g. data models, data management tools) is identified,
b. SDB CDR: the SDB is validated against its TS and can be delivered early to users,
c. SDB TRR: recommended to get the agreement of the SDB users on the operational representativeness of the datasets that are used for the SDB QR,
d. SDB QR: SDB qualification against the RB. The qualified SDB is delivered,
e. SDB AR: the SDB is accepted by the SDB users, represented by the SDB tool responsible in the
System team.
[Figure showing the consolidation of the preliminary system technical specification and of the system architecture between the (System) SRR and the System PDR, phased with the Software SRR and the SWRR or Software PDR]
The (System) SRR is a system review, held during the phase B of the project. The review main
objectives are the assessment of the preliminary system design definition, the assessment of the
preliminary verification program and the release of the technical requirement specifications for the
subsystems. The system requirements allocated to the software subsystem are thus identified at this
point.
The Software SRR is a review of the requirements baseline (RB) that is the expression of the user needs
for the software system – and which is customer documentation. It is thus a review focused on the
customer activities related to the software system, as per ECSS-E-ST-40C clause 5.2. The objectives of
this review are to ensure consistency with the system requirements (including interfaces) and system
architecture, to check the feasibility of the software development and to reach the approval of the
software requirements baseline by all stakeholders. When possible, the software supplier should be
involved in the process in the frame of co-engineering activities, particularly since the requirements
baseline is the main technical document initiating the software subsystem project.
The SWRR is a review of the technical specification – which is the supplier’s reply to the requirement
baseline. The objective of this review is to ensure that the requirement baseline is correctly understood
and covered. The SWRR is a non-mandatory review anticipating the Software PDR for the part of the
PDR addressing the software requirements specification and the interfaces specification.
As stated in ECSS-E-ST-40C sub clause 5.3.6, software reviews are synchronized with the system ones, which are mainly sequential because the system life cycle follows the V-cycle model (see ECSS-M-ST-10C figure 4.4). Therefore, and whatever the selected software lifecycle model (see §5.3.1), there should be at minimum the set of ECSS-E-ST-40C project reviews (i.e. SRR, PDR, CDR) synchronized with the system level. They can then be completed by technical reviews where appropriate to map to the software lifecycle.
a. of all the information used to decide about the reuse (or not) of existing component/software,
b. of the quality level of the existing component/software,
c. that the reused software meets the project requirements and the intended domain of use,
d. of the specific proposed actions, if they are required.
This review may be merged with SRR or SWRR.
TRR/TRB (Test Readiness Review and Test Review Board)
Two Technical Reviews are specifically mentioned in the software standard, the Test Readiness
Review [TRR] and the Test Review Board [TRB]. They go together, one before a test execution (to
ensure that the test specification and the test means are ready) and the other after the test activities.
Requirements 5.3.5.1a and 5.3.5.2a make clear that these reviews need to be organized in relation to test activities, i.e. unit tests, integration tests and the various validation and acceptance testing. The test activities subject to TRR/TRB are identified in the Software Development Plan.
The phrasing “as defined in the Software Development Plan” (actually duplicated in ECSS-Q-ST-80C
6.3.5.4) suggests that some testing activities might be considered as not subject to TRR/TRB. The
common practice for unit testing or integration testing is indeed not to have specific TRR/TRB.
However, the TRR for Validation can be anticipated before the integration tests. It becomes a TRR for
integration and validation where both Integration Test Plan and SVS are reviewed. Integration Test
Report and Validation report are checked at the TRB for Validation (or at the CDR).
ECSS-Q-ST-80C requires (see requirement 6.1.5) that TRR/TRB are mandatory for any validation campaign, therefore before CDR, QR and AR. Common practice is to reflect this in a way consistent with ECSS-E-ST-40C (where requirement 5.3.3.3.a-b allows TRR/TRB to be defined in the SDP), according to the project needs. Typical examples are the following:
a. the TRB is very often handled together with/in the scope of formal reviews: TRB for validation
together with the CDR, TRB for qualification together with the QR and TRB for acceptance
together with the AR.
b. the TRR for qualification is sometimes handled together with/in the scope of the CDR
c. the TRR for acceptance is sometimes handled together with/in the scope of QR.
d. for larger software development, several TRR/TRB may be handled (e.g. per function) for
validation w.r.t. TS.
e. for smaller software developments, TRR/TRB may be omitted.
Typical objectives of the TRR are introduced in annex P.2.1<6>a. Note 5.
It should be highlighted that, as the TRR is supposed to confirm the readiness of test activities, this
implies that the relevant SVS (against TS or RB) is available at the TRR, as well as sufficient
documentation to achieve the objectives of the review stated in the Review Plan, in particular
documentation of the test environment.
TRR Deliverables are summarized in annex Q.4
Typical objectives of the TRB are introduced in ECSS-E-ST-40C annex P.2.1<6>a. Note 6.
TRB Deliverables are summarized in annex Q.5.
DRB (Delivery Review Board)
The DRB is an additional Technical Review not mentioned in the software standard, but actually necessary to authorize the delivery of a software version to an external team aside from the project
reviews (usually software versions are released at CDR, QR and possibly AR). This review aims at
clarifying the state of the version (according to the previous agreement between the provider and the
user of the version) intended to be delivered in terms of perimeter, validation, verification and
configuration (whereas project reviews aim at acknowledging completeness of the activities
demonstrating the fulfilment of the requirements).
When validation has been run on the version, the DRB can be merged with the Test Review Board
[TRB].
DRB deliverables depend on the agreement between provider and user, but include at least the software release document (see 5.7.2).
5.3.5.2.1 Introduction
Technical Reviews are a useful tool for the implementation of iterative lifecycles (and not only
incremental as mentioned in the ECSS-E-ST-40C sub clause 5.3.3.3.c), such as incremental,
evolutionary, spiral or agile (see clause 5.3.1). These life cycles spread the development over smaller waterfall-like steps. Running all the needed reviews as Project Reviews potentially overloads the schedule and cost. Running some of them as Technical Reviews may allow the objectives to be achieved while remaining feasible within programmatic constraints.
There are many practices in projects. The whole review cycle may be applied for each version, or one
review cycle may be spread over the versions, or the review cycle is applied only on the last version.
In addition, the objectives of the reviews are not seen the same way by all reviewers in all the projects.
However, each practice intends to save cost and schedule by tailoring the full formal process, while
maintaining efficiency with respect to the review objectives and ensuring that the final quality is met.
Here is an example of versions definition and review setup in an evolutionary life cycle:
a. V1 is intended to allow the integration of all electrical units on the Electrical Functional Model
(EFM). Therefore, V1 contains the basic Data Handling functions (typically a limited number of
the functions). The architecture is defined, as well as some part of the detailed design. The
technical budgets are still estimated. V1 must be tested enough such that it is usable on the
hardware.
b. V2 is intended to allow most closed-loop tests to be run on the EFM using most of the equipment units involved in the AOCS. Therefore V2 contains the AOCS and the Safe Mode (V2 now typically contains a large number of the functions). The detailed design is more complete. Technical budgets are confirmed with a high level of confidence. V2 is unit tested and validated on the stable functions.
c. V3 is intended to allow the satellite qualification. Therefore V3 contains the complete software.
The technical budgets are measured and proven. V3 is fully verified and validated with 100%
coverage of requirements.
d. V4 is the final flight software. V4 contains corrections of anomalies from system qualification.
V4 undergoes regression tests and representative mission simulation.
NOTE The software located in non-erasable memory (PROM) should
be ready at equipment CDR. Therefore its lifecycle should be
shorter. The boot software should have at least passed a CDR
for V1 delivery and should have its QR before the SW V2 TRR,
or before the starting of the qualification of the first flight
model.
A possible review setup is the following (this example assumes a single TRR before CDR):

              V1    V2    V3    V4
SRR           X     X     X
PDR           X     X     X
DDR           X     X     X
TRR                 X     X
TRB/DRB       X     X
CDR                       X
QR                        X
AR                              X
In an iterative/evolutionary life cycle, a way to reduce the overall review effort is to merge some
reviews or to run some Project reviews as Technical reviews. The above scheme allows running half of
the reviews as technical reviews. The rationale of their selection is:
a. Requirements Baselines are updated for the first three versions; they are important in the customer-supplier relationship as they define the baseline for the validation and for the system interaction. Therefore they deserve three formal SRRs (project reviews).
b. Technical Specifications are relatively complete for V2 (if V2 includes 80% of the functions). The
Architecture is baselined for V2; no changes are expected for V3. Therefore formal PDRs apply
to V1 and V2, V3 PDR is a technical review.
c. Detailed design is not mature in V1, and quite complete in V2; therefore a single formal DDR is done for V2, with technical reviews only for V1 and V3.
d. TRR importance increases from V1 (no review) to V2 and V3 (technical review).
e. There is no CDR for V1 and V2, but simply a TRB to review the results of the tests before delivery; there is a single formal CDR (including the TRB objectives) and QR for V3.
f. The flight version V4 undergoes the formal acceptance.
NOTE All the documents are made consistent for the QR and AR. All
the documents are available for QR (in the final DP) and
updated for the AR.
Another possibility to inject flexibility in the review scheme is to balance the review objectives according to the version. When the development is incremental, the objective of the reviews can also
be incremental. For example, some objectives are reached in the scope of a particular version (denoted
V in Table 5-2). Some objectives are fully reached for the complete software development, even if they
are achieved at a preliminary version delivery (denoted F in Table 5-2). Some objectives, once
achieved for the whole project at a preliminary version, do not need to be modified in terms of
deliverable in the next versions (denoted NM in Table 5-2).
Table 5-2 is an example of review objectives and their applicability to each version. This table is an
example only that is subject to review and adaptation for each project.
Table 5-2: Example of review objectives and their applicability to each version

SW-SRR Objectives                                                              V1   V2   V3

SW-PDR Objectives                                                              V1   V2   V3
Verify the assumptions taken for the budget and margins estimations in
order to ensure schedulability and performances achievements                   V    V    F

SW-DDR Objectives                                                              V1   V2   V3
Confirm the detailed design is clear and correctly captures the technical
requirements and will allow an eased maintenance of the product thanks to
the application of the design standards or if none at least to a
homogeneous way of designing                                                   V    V    F
Confirm the Software behaviour through the Code analysis, the Unit Test
and Integration Results and other verification performed for quality
assurance with regard to the software criticality - justifications that
can be agreed are provided when the results are not as expected*                    V    F
Confirm the way to operate the software is correctly described and does
not have any unexpected side effect on the operations                                    F*

SW-CDR Objectives                                                              V1   V2   V3
Confirm that all the agreed verification and tests have been successfully
performed and duly reported in order to ensure the required quality level
and the technical requirements coverage                                                  F
Confirm the way to operate the software is correctly described and does
not have any unexpected side effect on the operations                                    F

SW-QR Objectives                                                               V1   V2   V3
Verify that the complete set of test is run on the same software version
otherwise justified                                                                       F

SW-AR Objectives                                                                          V4
Identical to the SW-QR objectives after application of the resolution plan
and correction/validation of any other NCR raised on the software during
satellite test campaign and successful run of the mission representative
test                                                                                      F
Figure 5-4: Phasing between system reviews and flight software reviews
Figure 5-5: Phasing between ground segment reviews and ground software reviews
5.4.2.1.1 Techniques
The Use Case technique can support this activity and is described in 6.1.
5.4.2.1.2 Tools
In the last ten years, requirements have increasingly been specified and maintained in dedicated requirements engineering tools.
Requirement management tools usually contain some kind of database management facility. Expected
capabilities of these tools are:
a. ensuring unique identifiers,
b. attaching attributes (i.e. tool-defined or user-defined database fields) to requirements, such as status, importance, delivery version, verification method, creator, risk, requirement category, creation date, last modification date, etc.,
c. capturing, modifying, deleting and searching requirements and their attributes,
d. supporting cross-referencing and traceability,
e. version control and revision history,
f. multi-user concurrent access, with access control
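As an illustration of the kind of record such tools manage (the field names below are hypothetical examples, not prescribed by ECSS-E-ST-40C), a minimal sketch of a requirement entry with attributes and upward traceability links:

    from dataclasses import dataclass, field

    @dataclass
    class Requirement:
        """Hypothetical record as stored in a requirements engineering tool."""
        identifier: str                       # unique identifier, e.g. "TS-0120"
        text: str
        status: str = "draft"                 # examples of attributes: status,
        verification_method: str = "test"     # verification method,
        delivery_version: str = "V1"          # delivery version, ...
        traces_to: list = field(default_factory=list)   # upward traceability links

    req = Requirement(
        identifier="TS-0120",
        text="The software shall report the bus voltage every second.",
        traces_to=["RB-0045"])   # link towards the requirements baseline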
The model should be a communication bridge between the software and the system teams, enabling
them to understand and agree requirements, and to anticipate the software requirements early within
the system definition in the systems engineering processes related to software. This is of primary
importance for software. Preserving the adopted system modelling method for software modelling, or
assuring the continuity through appropriate model transformations with preservation of properties,
represents an asset for the efficiency and correctness of the flow from requirements to implementation.
A model is simpler to understand and maintain when it is hierarchical, with consistent decomposition criteria, progressing through levels of abstraction. It should be organized around a defined software component model approach (see 6.3.2.4).
The model shows the software functionalities and includes behavioural views. State transition diagrams support the behavioural view representation and can also be used during the design activities. Alternatively, Petri nets, SDL or other methods can be used.
The logical model can be built using different methods depending on the software functions (e.g.
FDIR) or other characteristics (e.g. automata, modes). Possible methods and/or languages are:
a. functional decomposition, structured analysis, mathematics
b. object-oriented analysis (OOA);
c. formal methods;
d. use case and scenarios, e.g. as introduced in UML
See Annex B for detailed descriptions of such languages and methods.
Although the authors of any particular method will argue for its general applicability, all of the methods appear to have been developed with a particular type of system in mind. Looking at the examples and case histories helps to decide whether a method is suitable.
Experience gives an indication of which functions benefit most from which method:
a. the control and guidance of satellites, producing actuation data cyclically from sensor data, are best modelled with mathematics, data flows or functional models
b. the mode management of a spacecraft, or the reconfiguration after failure, or some parts of autonomy management, are best modelled with state transition diagrams (see the sketch after this list).
c. data management systems, in their parts where data flow or state machines alone cannot
naturally represent them, are better modelled with UML.
d. the high criticality of the software under development may suggest the application of formal
methods.
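The following is a purely illustrative sketch of the state-transition approach mentioned in item b. above; the modes and events are hypothetical and not taken from any particular mission.

    # Minimal state-transition model of a hypothetical spacecraft mode management.
    TRANSITIONS = {
        ("STANDBY", "SEPARATION_DONE"): "SAFE",
        ("SAFE", "ATTITUDE_ACQUIRED"): "NOMINAL",
        ("NOMINAL", "CRITICAL_FAILURE"): "SAFE",
        ("NOMINAL", "START_MANOEUVRE"): "ORBIT_CONTROL",
        ("ORBIT_CONTROL", "MANOEUVRE_DONE"): "NOMINAL",
    }

    def next_mode(current_mode, event):
        """Return the new mode; unexpected events leave the mode unchanged."""
        return TRANSITIONS.get((current_mode, event), current_mode)

    assert next_mode("SAFE", "ATTITUDE_ACQUIRED") == "NOMINAL"
    assert next_mode("NOMINAL", "CRITICAL_FAILURE") == "SAFE"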
NOTE Model based engineering is discussed in 6.3.
describing the architecture, static decomposition into software elements such as packages, classes or
units, describes the dynamic architecture, which involves the identification of active objects such as
threads, tasks and processes, describes the software behaviour (see ECSS-E-ST-40C clause 5.4.3.1).
The architecture describes the solution in concrete implementation terms. As the logical model,
produced in the requirements analysis, structures the problem (what) and makes it manageable, the
architecture defines the solution (how).
The architecture defines a structured set of software component specifications that are consistent,
coherent and complete. A design method provides a systematic way of defining the software
components.
The logical model is the starting point for the construction of the architecture. The design method
sometimes provides a technique for transforming the logical model into the architectural design
model. The designer's goal is to produce an efficient design that meets all the software requirements,
as well as specific design requirements such as portability, degree of reuse, RAMS, etc.
In particular, the Use Case technique can support this activity to help identify software architectural
components and trace to them. It is described in 6.1. More generally, model based engineering is
discussed in 6.3.
The software architectural design describes:
a. the static architecture,
b. the dynamic architecture including the software behaviour.
d. the way active components interact to implement behaviour and the evolution of each of them
as it responds to events based on its current state (e. g. protocols, state transitions).
The dynamic design is based on the selected computational model and also provides the temporal attributes for real-time active objects, which enable schedulability analysis to be undertaken. Some temporal attributes are concerned with mapping timing budget and performance requirements onto the design (e.g. the deadline), others are attributes that are defined as design parameters (e.g. the period of execution) or set as schedulability analysis parameters (e.g. the worst case execution time).
Further details are provided in section 7.4.3.
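As an illustration only, the sketch below attaches such temporal attributes to hypothetical tasks and computes the processor utilisation, a simple necessary condition (utilisation not exceeding 1) that can be checked before a proper schedulability analysis such as response time analysis.

    from dataclasses import dataclass

    @dataclass
    class PeriodicTask:
        """Hypothetical temporal attributes of a real-time active object."""
        name: str
        period_ms: float     # design parameter
        deadline_ms: float   # mapped from the timing budget / performance requirements
        wcet_ms: float       # worst case execution time (analysis parameter)

    def utilisation(tasks):
        """Processor utilisation of a periodic task set."""
        return sum(t.wcet_ms / t.period_ms for t in tasks)

    tasks = [PeriodicTask("AOCS_CTRL", 125.0, 125.0, 20.0),
             PeriodicTask("TM_TC", 250.0, 250.0, 30.0),
             PeriodicTask("THERMAL", 1000.0, 1000.0, 50.0)]
    assert utilisation(tasks) <= 1.0   # 0.33 for this hypothetical task set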
The software behaviour is intended as the (sequential) description of the internal functions or
operations that are provided by the software components or units, identified in the static architecture.
To describe the software behaviour, it is possible to use the same behavioural description techniques
that are used in the analysis phase for the logical model, such as automata, Petri-Nets, State Charts or
interaction scenarios.
The term “behaviour” can also be intended to describe the observable behaviour of the components that execute concurrently in the dynamic design. This meaning is part of the dynamic architecture.
5.4.3.6 Definition of methods and tools for software intended for reuse
A source of complexity is the fact that software can be changed easily; it therefore absorbs complexity and acts as a “complexity sponge”.
Complex systems have been defined succinctly as follows: a system is classified as complex if its design is unsuitable for the application of exhaustive simulation and test, and therefore its behaviour cannot be verified by exhaustive testing (Defence Standard 00-54, Requirements for safety related electronic hardware in defence equipment, UK Ministry of Defence, 1999).
The avionics hardware can be a source of complexity, in particular:
a. Interfaces that do not provide an interrupt-driven option, forcing polling. You can always turn off the interrupts if you want polling, but if they’re not there in the first place the bridge is burned.
b. Interfaces that do not provide atomicity, that is to say several ports need to be manipulated separately to get the desired result. Especially bad if timing is tight (forcing critical sections) or there is no way to ensure the operation actually worked.
c. Distribute functionality to remote controllers to reduce real-time demands on the central
processor.
Second, some quotes about the importance of architecture in complexity:
Architecture is about managing complexity. Good architecture—for both software and hardware—
provides helpful abstractions and patterns that promote understanding, solve general domain
problems, and thereby reduce design defects.
Unnecessary growth in complexity can be curtailed in particular by:
a. Modular software architecture to isolate concerns and allow composition of individually tested elements.
b. Good architecture rooted in the fundamental mission objectives.
c. Design fault protection early, otherwise it is added on piecemeal, distorting the architecture a
little more with each addition.
d. Use design patterns to capture sound solutions to recurring engineering needs, so that less time is spent reinventing the wheel, freeing resources to work on more important things.
Necessary growth in complexity can be better engineered and managed through architecture:
a. Build a flight software architecture that minimizes the incremental cost of adding functionality, or
modifying functionality.
b. Spend up-front time getting the architecture right. The key properties of the architecture are modularity (isolating pieces of the system from other pieces of the system), layering (to provide common ways to do common things), and abstractions (to provide simple ways to use capabilities with complex implementations).
c. Do architecture up front. The architect needs real authority to protect the architecture,
otherwise short-term cost and schedule pressure will lead to decisions that corrupt the
architecture, to the detriment of lifecycle costs. According to one definition, architecture is the
set of design decisions that are made early in a project, and for which the consequences of a bad
decision only appear much later.
The report includes several pieces of advice related to architecture:
a. Use a component-based architecture, with components as the element of reuse, and connectors. Configuration-manage the component specifications and APIs to control the costs of evolution, testing, etc.
b. Model the behaviour of key aspects of the system (state transitions, timing) to avoid finding out
what doesn’t work after it’s built.
c. Don’t dictate architecture to designers. Focus on the bottom line: delivery of an independently
testable, verifiable product.
d. Have one architectural leader.
e. Establish and stick to patterns.
f. Invest in reference architecture
g. Separation of concerns
As a conclusion:
"Good software architecture is the most important defence against incidental complexity in software
designs, but good architecting skills are not common. From this observation we made three
recommendations:
(1) allocate a larger percentage of project funds to up-front architectural analysis in order to save in
downstream efforts;
(2) create a professional architecture review board to provide constructive early feedback to projects;
and
(3) increase the ranks of software architects and put them in positions of authority. "
Cohesion can also increase portability if, for example, all I/O handling is done in common components. On the contrary, low cohesion implies that components perform tasks which are not very related to each other and hence can create problems as the component becomes large. The utility of such a component could be reconsidered.
Coupling is usually paired with cohesion: low coupling often correlates with high cohesion, and vice
versa. Low coupling, when combined with high cohesion, supports the general goals of high
readability and maintainability. Such principles are reinforced by object-oriented principles.
domain of reuse. They are important design drivers for the architecture, as the impact of their change should be minimized and localized in the architecture.
Reference specifications (at the level of a requirement baseline) are derived within this domain. They
include the variability that the products should feature to be reusable in the domain. They constitute
the Generic RB of the system. They address the architecture and the generic functions of the system.
From the architectural part of the generic RB, the architecture is established, which also takes into
consideration the variability of the domain of reuse. The variability is reflected in the system data base
(see 5.2.4.4). This architecture becomes the reference for the domain. It includes in particular reusable
elements (within the domain) called building blocks. The building blocks implement the functional
part of the generic RB.
A specific part of the reference architecture addresses system functionalities which are common to all
the systems belonging to the domain of reuse, for example the communications, the scheduling or the
operating system. Therefore this part will be subject to a common part of the reference RB extracted
from the existing RBs of the domain of reuse.
For example, a potential flight software reference architecture would implement, in a part named “execution platform”, the avionics system communications and the real-time scheduling. The mission specific application software would be expressed as components calling the services of the execution platform.
As a conclusion, the major design drivers of architecture with respect to complexity are:
a. containment of the effect of a change of one of the variability factors of the domain of reuse,
b. systematic implementation of the "separation of concerns", in view of decoupling as well the
verification and testing concerns,
c. identification of design patterns allowing the definition of design properties that can be verified
early, aiming at a "correct by construction" design.
Reducing complexity may have higher priority than system performance. Seen from a pure software engineering or project management standpoint, it is beneficial to trade performance against complexity, and to suggest an increase of hardware resources in order to allow a decrease of software complexity.
5.4.3.7.1 Reference
Reuse of existing software is addressed in the ECSS-Q-HB-80-01A handbook.
Context
The notion of reference architecture has been introduced in 5.4.3.3, with the viewpoint of architecture
complexity reduction. It is addressed here with the viewpoint of the reuse process and the adaptation
of the review process to reused architecture.
One of the underlying examples of Reference Architectures addressed in this section is the ground data systems of many missions; these are typically composed of the same set of components with the same standardized interfaces.
Another example is the reference architecture for an operational simulator; this defines what simulation models an operational space mission simulator is composed of, and which interfaces each
model provides/consumes. The experience in building software from a flight reference architecture is more limited than for ground.
A Reference Architecture, in short RA, is a reusable system design pattern, which assigns the required
functionalities of a complex system to a predefined set of composing components.
In other words the RA specifies which components a system is composed of, and specifies
standardized interfaces between those components. The implementation aspects of the functionalities
assigned to each component of the composed system are however not in the scope of a RA.
It is highlighted that Reference Architecture can define so called RA profiles, which allow choosing
from more than one fixed set of components depending on the profiles of the shared system
requirements.
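As a purely illustrative sketch (the component names, interfaces and profiles below are hypothetical), a RA can be seen as a set of component specifications with standardized interfaces, together with profiles selecting fixed subsets of those components:

    # Hypothetical reference architecture for a ground data system.
    REFERENCE_ARCHITECTURE = {
        "components": {
            "MCS_CORE":   {"provides": ["TM_STREAM", "TC_UPLINK"], "requires": []},
            "TM_ARCHIVE": {"provides": ["ARCHIVE_API"], "requires": ["TM_STREAM"]},
            "AUTOMATION": {"provides": [], "requires": ["ARCHIVE_API", "TC_UPLINK"]},
        },
        # RA profiles: fixed sets of components chosen according to the shared
        # system requirements of a given group of missions.
        "profiles": {
            "MINIMAL": ["MCS_CORE", "TM_ARCHIVE"],
            "FULL":    ["MCS_CORE", "TM_ARCHIVE", "AUTOMATION"],
        },
    }

    def instantiate(profile):
        """Return the component specifications selected by a RA profile."""
        selected = REFERENCE_ARCHITECTURE["profiles"][profile]
        return {name: REFERENCE_ARCHITECTURE["components"][name] for name in selected}

    print(sorted(instantiate("MINIMAL")))   # -> ['MCS_CORE', 'TM_ARCHIVE']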
Impact on ECSS-E-ST-40C
In case of instantiation of a system from a Reference Architecture, the ECSS-E-ST-40C RB and TS elicitation processes are translated in the following way:
The RB of the new system will be composed of the reference Generic RB, plus a mission Specific RB.
The mission Specific RB will be implemented either as instantiation of existing building blocks
(further described in a TS), or as new elements.
There are a number of architectural activities that are performed once in the scope of the generic
reference architecture elaboration. These activities are reused at the time of mission deployment.
Therefore part of the RB, TS and architectural process are also reused.
The life cycle of the reference architecture and of the specific instances can ultimately be separated,
and there may even be different development teams. The reference architecture team delivers an
architecture baseline to the specific development team.
[Figure showing the instantiation of the Generic TS of the reference architecture into a mission Specific TS, in the relationship between the Customer and the Software Supplier, supported by the Reuse File]
The ECSS-E-ST-40C process applies both for the generic and the specific parts. The Customer of the
specific system is expected to place reuse constraints on the design and development coming from the
reference architecture.
Considering that the software development restarts from existing documentation for the RB, TS and ADD, the SRF (ECSS-E-ST-40C Annex N) is important to explain the overall reuse approach at SRR and PDR. Moreover, the
concept of SQSR (Software Qualification Status Review) can be used to support the specific review of
the SRF.
Specific tailoring
It is highlighted however that in some cases, the Specific RB could be relatively limited, and could be
merged with the Specific TS. In this case, it is important to handle a SWRR. The SWRR is merged with
the SRR. As a consequence of merging Specific RB and Specific TS, the QR is merged with the CDR.
This is in particular the case for some mission control systems of the ground segment development.
The system design can be fixed once (in the form of a RA) for a group of similar systems which share a set of system requirements.
Actually, an infrastructure for the ground segment is developed from the common parts of the Ground System Requirement Documents (considered as the RB). A separate team implements the mission specific features, which are added to the configured infrastructure directly from a dedicated SRS (TS).
Several methodologies enforce the consideration of real-time aspects within the architecture (see
annex B).
Real-time software is discussed in section 7.
Unit test is sometimes the only way to perform some specific validations, e.g.:
a. testing interfaces with the computer or the BIOS (e.g. HW drivers) on a facility representative of
the HW/SW interfaces;
b. validating some TS requirements that are not reachable with a complete SW. In this case, the
unit test facility includes an OBC emulator. Such requirements are tested on a specific test
executable needing the knowledge of the design;
c. validating a SW function with a very large number of data combinations, which is hardly feasible on the complete software. In this case, the unit tests are performed on internal mechanisms in a simpler way.
or some low level verifications, e.g.:
a. testing some mechanisms that contribute to the real-time behaviour of the software (e.g.
semaphore, interrupts, buffer management);
b. testing the boundaries of configuration / mission data: when the definition of some data is not
included in the source code (e.g. for mission data, family of spacecraft), a database is used but
the database values are not used by the tests. Instead, test values are defined to cover the
functional range of these mission data.
Overview
The systematic application of unit tests or the very deep unit tests (fine grain) require a huge effort,
potentially not compatible with the project class, the project schedule, or project needs. Sometimes, the
level of the unit tests needs to be balanced according to the criticality of a function, of a component or
of the software itself: the granularity of the unit tests contributes to the expected software reliability
level. In other cases, tests at unit level are fully redundant with other validation tests, or it can be considered more efficient to reduce the unit test effort and to start validation earlier, while accepting that faults are discovered during the validation phase. Hence, the standard Unit Testing approach needs to be adapted or optimized; the overall objectives of Unit Testing introduced above can be achieved through various combinations according to the project context and the software characteristics.
This part presents different areas for adapting or optimising the systematic Unit Testing approach, highlighting their strengths and weaknesses in order to help assess the risks with regard to the project context. The adaptation or optimization of the Unit Testing approach needs to be assessed early in the project according to the criticality level, the development schedule, or the delivery expectations (i.e. the validation or reliability level of a delivered version). On the basis of this risk analysis, the Unit Testing strategy can be built and hence the way to achieve the Unit Testing objectives can be optimised. The defined strategy, as well as the appropriate rationale, needs to be documented (e.g. in the Software Development Plan or in the SUITP) and agreed with the Customer and Quality representatives.
Nevertheless, tricky behaviours are easier to test out of the functional context and typically, unit tests
remain particularly efficient to check robustness code or error cases, which are difficult to exercise
through functional tests. Unit tests are also performed for units having high measured complexity
metrics.
A second example is to improve the maturity of the source code by early source code peer reviews, in order to limit the risk of discovering faults during the validation.
As a third example, it is also recommended to use powerful simulators during validation, including
high level investigation capabilities at software implementation level.
Another example consists in applying the tests on a group of software units. The principle is to
perform unit testing with regard to the software design elements (i.e. components or set of
components) on the produced software units or a set of software units. Unit test cases are not
systematically defined at each software unit level. This optimization level is generally reached
by combining the functional view of the TS and the design view of the SDD. The functional view helps to define test scenarios and the design view helps to define the set of units to be tested together. Both views (functional and design) also allow integration test objectives to be covered (see next chapter) by checking that the information exchanged between the design items exercised through the functional tests is correct. The 'function' defined in this testing approach is often close to functions defined in the SRS,
e.g. for an on-board central software a PUS service, an equipment management, an AOCS attitude
estimation function or a thermal regulation loop.
With this approach, the external interfaces of the function are tested in order to ease the further
integration with the rest of the software. The internal operations of the function are implicitly tested
during the functional tests contributing to the integration tests objectives.
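As an illustrative sketch only (the units and the thermal regulation function below are hypothetical), a single functional scenario exercising a small group of units through their function-level entry point could look as follows:

    # Hypothetical software units grouped under one function of the TS.
    def read_temperature(raw):                    # unit 1: acquisition and calibration
        return raw * 0.5 - 40.0

    def regulation_law(temperature, setpoint):    # unit 2: control law
        return max(0.0, min(1.0, (setpoint - temperature) * 0.1))

    def thermal_regulation(raw, setpoint):        # function-level entry point
        return regulation_law(read_temperature(raw), setpoint)

    def test_thermal_regulation_scenario():
        """One functional scenario covering both units and the data exchanged between them."""
        heater_command = thermal_regulation(raw=70, setpoint=0.0)   # 70 raw counts = -5 degC
        assert 0.0 <= heater_command <= 1.0
        assert abs(heater_command - 0.5) < 1e-9

    test_thermal_regulation_scenario()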
5.5.4 Integration
5.5.4.1 Software integration test plan development
NOTE Testing methods and techniques are also addressed in 6.4.
In a life-cycle, the integration is the stage in which individual software items are combined and tested
as a group. In principle, this stage occurs after unit testing and before validation testing. Integration
takes as input the software items that have been unit tested, groups them in larger assemblies, applies
tests to those assemblies, and delivers as output the integrated software ready for validation testing.
The integration strategy is made consistent with the static architecture of the software. When the software is made of items with “use” dependencies from user items to used items, a “bottom-up” assembly logic can be applied, from the lowest level items up to the top of the hierarchy. Each of the intermediate items can be tested as an item with stubs for the used items when necessary (e.g. for error cases) or with the actual item when the actual item behaviour is needed. Generally, all the upper level items are integrated with the actual intermediate ones.
Moreover, some tools are able to automatically check the consistency of the interfaces (functions, parameter types, names or number), which also contributes to this objective.
When the definition of the interfaces is done by means of tools (e.g. centralised interfaces/data allowing the generation of both the interface code and the ICD), interface inconsistencies are very limited and the interface verification is easier, or even completely proven when using formal languages (e.g. ASN.1, CORBA) for data description (see Annex B). In these cases, it is recommended to use editors, checkers and document producers (a simple sketch of such a check is given after this list):
a. A specific editor to support the user describing the interfaces without any specific knowledge of
the syntax required by the language. The result consists in a formal ICD.
b. A document producer that translates the formal definition into a readable format, e.g. HTML,
Word or Excel. The result consists in a readable ICD.
c. A data checker that controls the data compliance with the formal ICD. It is used both by the
data producer software and by the data consumer software.
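As a purely illustrative sketch, not tied to any particular tool or formal language, the following check compares a consumer's expected interface with the producer's formal ICD (the operation names and parameter types are hypothetical):

    # Hypothetical formal ICD: operation name -> ordered list of parameter types.
    ICD = {
        "SET_MODE": ["uint8"],
        "GET_HK_REPORT": ["uint16", "uint16"],
    }

    def check_interface(expected):
        """Return the inconsistencies between an expected interface and the formal ICD."""
        issues = []
        for operation, parameters in expected.items():
            if operation not in ICD:
                issues.append("unknown operation " + operation)
            elif ICD[operation] != parameters:
                issues.append(operation + ": expected " + str(ICD[operation]) +
                              ", got " + str(parameters))
        return issues

    consumer_view = {"SET_MODE": ["uint16"], "GET_HK_REPORT": ["uint16", "uint16"]}
    print(check_interface(consumer_view))   # -> one inconsistency reported for SET_MODE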
Finally, according to the software item's purpose, integration testing can be performed through various means (including inspection) and, when testing is chosen, various facilities. For example, for a component managing the input/output of the software, a hardware bench can be used together with the lower level software (i.e. BIOS, O/S).
technology the system uses. This expertise is needed if non-functional requirements are to be met.
Since exhaustive testing is usually not tractable on complex software, it is practically almost
impossible to demonstrate that a piece of software is fault-free, unless huge resources are used to
perform, for instance, formal proofs.
The validation activities are split between the validation against the RB and the one against the TS. This chapter focuses on the one against the RB, highlighting the specific aspects of this validation compared to the one against the TS.
5.6.2.1.1 Process
Software validation against TS and RB is made up of the 3 main steps:
a. Software validation tests specification,
b. Software validation procedures implementation,
c. Software validation tests execution and reports.
The software validation test specification with respect to the TS and to the RB is written by the responsible for the software validation and is provided respectively at a dedicated TRR and at the critical design review (CDR), see also 5.3.5. Due to supplier's environment limitations, it is possible that validation comprises a set of tests that check the software product against only a subset of the RB requirements.
For each test case, the software validation tests specification provides the identification of the test case,
the aim of the test, and the list of TS or RB requirements covered by the test. It indicates the
environmental needs: configuration of the facility, configuration of the support software (e.g.
simulation) and special equipment needed (e.g. bus analyser). It lists the test inputs or describes its
initial context, and gives the test steps and their associated expected outputs or success criteria.
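As an illustration of this content only (the structure and identifiers below are hypothetical examples, not a required template):

    from dataclasses import dataclass

    @dataclass
    class ValidationTestCase:
        """Hypothetical structure mirroring the content of one validation test case."""
        identifier: str
        aim: str
        covered_requirements: list   # TS or RB requirement identifiers
        environment: dict            # facility, support software, special equipment
        inputs: list                 # test inputs or initial context
        steps: list                  # (action, expected output / success criterion)

    tc = ValidationTestCase(
        identifier="SVS-TS-042",
        aim="Verify the housekeeping report generation period",
        covered_requirements=["TS-0310", "TS-0311"],
        environment={"facility": "SVF", "support_sw": "simulator",
                     "equipment": "bus analyser"},
        inputs=["housekeeping report period commanded to 1 s"],
        steps=[("enable housekeeping reporting",
                "one housekeeping report received every 1 s +/- 10 ms")])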
For the critical design review (CDR) and the qualification review (QR), the responsible for the software validation provides the software validation test procedures, which describe how the test cases are implemented in one or several test programs. They describe the processing performed by these tests and how to launch them, drive their execution and collect their results.
Even if the simplest layout, one test case per procedure, is often chosen, it is possible to have several test cases implemented in a single procedure or a test case distributed among several procedures. Test means or product configuration constraints can lead to such distributions.
The responsible for the software validation provides a software verification report (SVR) containing the synthesis of the validation test campaign and the reference to the analysis, if some requirements are not covered by testing.
The VCD (Verification Control Document) is a system document. It often uses the traceability between
the SVS, the TS and the RB.
Introduction
The test environment addressed in this chapter concerns flight software and ground segment.
Flight software
In order to perform testing, there is a need for adequate validation support, dedicated to each validation phase, in terms of representativeness, in particular w.r.t. the real execution environment, but also in terms of capabilities. The name SVF (software validation facility) is typically used for the facilities for on-board software validation. Sometimes, the SVF is incrementally developed and enriched across the validation phases in order to meet the validation objectives described below.
Representativeness
Several levels of representativeness may be considered for flight software at:
a. processor level: from real processor module, processor emulators, up to host machine executing
native code,
b. IO level: from full physical-electrical ISO layers to pure functional protocols,
c. equipment: from physical models, simulated ones, to restricted interface data buffering,
d. in-orbit environmental conditions: from real operational conditions with hardware in the loop
[HIL, HWIL] to simulated equipment and subsystem.
The expected level of representativeness is increasing throughout the different test phases:
a. for unit and integration testing: in principle, at that stage, the representativeness is expected
only at processor level. The execution of native code on a host machine may be sufficient,
possibly complemented by processor emulators, depending on the expectations in terms of
object code coverage.
b. for software validation testing: at that stage, the full representativeness is expected at processor
and IO levels. For the equipment it may be restricted to interface data buffering without
functional simulation. This representativeness at processor and IO levels may be refined
depending on the two categories of validation tests:
c. inspection of parts of code that cannot be exercised in the configuration required by the
requirements,
d. review of unit tests.
Otherwise, clause 5.8.3.9 of ECSS-E-ST-40C is applicable. This clause specifies that the supplier has to identify the requirements that cannot be validated in the supplier's environment and forward them to the customer so that they can be validated in the customer's environment, such as avionics, platform or satellite tests.
5.6.2.2.1 Introduction
The key technical objective of ISVV is to find faults and to increase the confidence in the software,
which should therefore reduce the development risks. This objective can be reached by performing
additional and complementary verification and validation of a software product (i.e. the software and
all corresponding documentation) by an organisation that is independent from the software
developer.
The “independence” notion of ISVV is valuable and should provide a “fresh viewpoint” on the product and on the applied process. However, having a different view does not imply that faults or defects are detected if the activities and the methods used are the same as the ones used by the SW supplier within the nominal life cycle.
In order to be efficient, the ISVV activities should not start too early, so that their inputs are mature enough. Experience shows that starting ISVV activities too early results in a long and inefficient “questions / answers” loop between the SW supplier and the ISVV supplier via the customer (since there is no direct bridge between the SW supplier and the ISVV supplier, in order to preserve the independence), which can quickly result in parallel activities (both nominal and ISVV) on the same items (most discrepancies being raised in parallel by the V&V and ISVV suppliers).
Currently, the SW V&V process is being improved in order to be more and more integrated in the development process, at the earliest possible moment, in particular for verification activities (e.g. introducing verification tools inside the development environment to enable each developer to verify daily the quality of the produced code). This coupling between development and verification activities will make it more difficult to schedule ISVV tasks in the future.
In the frame of ESA projects, the ISVV process is supported by a dedicated ESA ISVV Guide [ESA ISVV Guide]. The following chapters are written in accordance and consistency with this ESA ISVV Guide to avoid duplication.
The ISVV should be requested, and the detailed objectives defined, by the customer. The ISVV guidelines should therefore be tailored according to the criteria defined in the guide, such as the supplier knowledge/method level, or even the customer knowledge/method level. In case of cost limitation, the activities should be tailored according to the expected benefits, taking into account the level of risk the project is ready to take.
4. take into account synchronisation constraints between the V&V supplier and ISVV
supplier
5. define the adequate level of independence (separation of concern).
c. or when operational data is not available for the TS validation, and the set of operational data requires a rerun of a subset of the validation tests,
d. or when the TS validation test environment is not representative enough w.r.t. the qualification objectives (computer target representativeness or open/closed loop tests).
Except for case (b), which should be clearly identified inside the RB, the two other cases should be specified inside the SOW as a validation logic scheme.
For case (b), the objective will be reached in exercising operational scenarios, elaborated jointly
between the supplier and the customer with the aim of anticipating system level tests in the supplier’s
environment. The operational scenarios should be defined in the RB.
For case (d), ECSS-E-ST-40C §5.8.3.9 specifies that the supplier is in charge of alerting the customer. In this case, the activity will be performed on another test environment, either by the supplier or by another team (avionics integration team, system team, etc.) under the responsibility of the customer.
According to the alternative chosen, the validation w.r.t. the RB can be based on a subset of the TS validation tests (same validation team using the same test specifications, updating the test program according to the test means configuration) or can be a completely new set of tests (new validation team introducing new test scenarios).
NOTE If the customer performs these tests, there could be a potential overlap with the acceptance tests that are described in 5.7.3 Software acceptance.
In the case of ground software developments, e.g. Mission Control System MCS and Operational
Simulator developments, it is typical that the software is additionally validated by the software
operator (Flight Control Team, FCT) in the context of so called Simulation Campaigns, using the
Ground Operation Procedures and the Flight Operation Procedures. During these simulation
campaigns all elements of the ground segment are either present as real software/hardware or
simulated as part of the operational simulator, including the ground stations network and the
constituting elements of each ground station as well as the communication between the Mission
Control System and the ground stations. The Space to Ground link as well as the complete Space
Segment are replaced (simulated) by the operational simulator, respecting the applicable space to
ground ICD.
c. update the user’s manual whenever necessary in order to reflect all the constraints, “features”
and associated behaviour induced by the SW design choices observed during the acceptance tests
on the run scenarios,
d. depending on the contractual agreement, the supplier may run the acceptance tests on behalf of
the customer.
The execution of the acceptance tests is logged and archived. A logbook is established in real time during the
execution of tests, based on the acceptance test plan, identifying the OK/NOK status at each step of the
procedure.
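As a purely illustrative sketch (the record fields and identifiers below are hypothetical, not mandated by ECSS), such a logbook can be captured as one structured record per procedure step, e.g. in Python:

    # Minimal sketch of an acceptance test logbook entry (hypothetical fields).
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LogbookEntry:
        step_id: str        # step of the acceptance test procedure
        action: str         # action performed at this step
        status: str         # "OK" or "NOK"
        remark: str = ""    # e.g. reference to a non-conformance report
        time: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    logbook = [
        LogbookEntry("ATP-STEP-010", "Load software image", "OK"),
        LogbookEntry("ATP-STEP-020", "Check version telemetry", "NOK", remark="NCR-123 raised"),
    ]

Such records are then archived together with the other test execution evidence.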
5.8.2.1.1 Overview
The ECSS-E-ST-40C requirement 5.8.2.1.c asks for the methods and tools to support verification
activities. This section addresses (i) the traceability, which is common to several verification activities,
and which is supported by tools, and (ii) requirement engineering, which is common to RB and TS
elaboration.
5.8.2.1.2 Traceability
Introduction
The ultimate aim of traceability is to support or facilitate activities such as:
a. verification, e.g. completeness, consistency,
b. change analysis, i.e. the so-called impact analysis,
at a fine level of the software product.
To this aim, the traceability activity consists of establishing and maintaining the necessary links between
detailed elements of the software product (e.g. a requirement, a test case) that allow “navigation”
between these detailed elements. Links may be of different types (e.g. implementation, refinement,
verification, reference) according to the traceability needs.
However, because traceability complexity and effort increase quickly, the basic traceability need
consists of establishing links that allow checking that the upper level requirements are taken into
account (or justified) by the lower level software elements throughout the life-cycle.
This is the approach of the ECSS-E-ST-40C which identifies the following detailed elements:
a. system requirements of the RB,
b. software requirements of the TS,
c. software components of the SDD,
d. software units of the SDD,
e. software code,
f. test/analysis/inspection cases of the SUITP, SVS-TS and SVS-RB.
The ECSS-E-ST-40C requires the following coverage verification (See clause 5.8.3):
a. between system requirements and software requirements,
b. between the software requirements and the software components,
c. between the software components and the software units,
d. between the software code and the software units,
e. between the test cases (TS and RB) and the software requirements and the system
requirements, respectively.
Moreover, ECSS-E-ST-40C also requires an overall traceability:
a. code traceable to design and requirements,
b. unit tests traceable to code, design and requirements,
c. integration test traceable to architectural design.
Fortunately, direct links between all detailed elements are not necessary, and some techniques allow
the traceability effort to be minimised:
a. Establish a direct link between two types of detailed elements only when the coverage
measurement is required: e.g. between a system requirement and a software requirement. Such
links are usually established between detailed elements of "adjacent" steps of the life-cycle (e.g.
requirements analysis and architecture). For the others, transitivity is used: for example, if an
element A is traced to B, and B to C, transitivity can be used to assess the traceability between A
and C. Complete the direct links accordingly,
b. Moreover, even if most of the time explicit links (i.e. manually established and often explicitly
formalised within documents or tables) are necessary, the use of implicit links (i.e. through clear
criteria or rules) is recommended as much as possible for efficiency reasons. Naming rules (e.g.
a software unit has the same name as the related ADD component) are commonly used for
implicit traceability.
Generally, for the ECSS-E-ST-40C, direct links are established for each pair of elements hereafter
(most of them corresponding to the ECSS coverage needs):
a. between the system requirements and the software requirements,
b. between the software requirements and the software components (this link could be implicit when
the software requirements are written together with a generic architecture: the reference of the
requirement may include the component name),
c. between the software components and the software units (could be implicit through dotted
naming rules),
d. between the software code and the software units (usually implicit through naming rules),
e. between unit tests and the software units (usually implicit through naming rules),
f. between the integration tests and the software components (usually implicit through naming
rules),
g. between the TS test cases and the software requirements,
h. between the RB test cases and the system requirements.
Other ECSS-E-ST-40C traceability needs do not require any additional effort since they can be
obtained by transitivity.
Once the traceability is established, the traceability links are assessed in both directions. For completeness
verification, the covered and uncovered elements are identified and appropriate actions are taken to
improve the coverage (i.e. modification of the traceability, modification of the elements, or
justification). The most efficient way is to use a traceability tool. A matrix may also be used to support
this verification.
When coverage proofs are requested, the most popular way is to provide a traceability matrix
showing the related elements (next upper level elements on the left part of the table) and the
potential justifications for lack of coverage. Bi-directional matrices are generally not needed.
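As a minimal illustration only (all identifiers and link tables below are hypothetical), the following Python sketch shows how direct links can be recorded, how transitivity derives the indirect links, and how uncovered elements are listed for the traceability matrix:

    # Minimal sketch of coverage verification using direct links and transitivity.
    # All element identifiers are hypothetical examples.
    rb_to_ts = {"RB-001": ["TS-010", "TS-011"], "RB-002": []}     # system req -> software reqs
    ts_to_cmp = {"TS-010": ["CMP-A"], "TS-011": ["CMP-B"]}        # software req -> components

    def uncovered(links):
        # Upper level elements with no lower level element tracing to them.
        return [up for up, downs in links.items() if not downs]

    def transitive(links_ab, links_bc):
        # Derive A -> C links through B (e.g. system reqs -> components).
        return {a: sorted({c for b in bs for c in links_bc.get(b, [])})
                for a, bs in links_ab.items()}

    print(uncovered(rb_to_ts))                 # ['RB-002'] -> action or justification needed
    print(transitive(rb_to_ts, ts_to_cmp))     # {'RB-001': ['CMP-A', 'CMP-B'], 'RB-002': []}

In practice a traceability tool provides these checks directly; the sketch only illustrates the principle of direct links, transitivity and coverage reporting.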
The effort on justification of completeness and the granularity of tracing could be decreased according to
the software criticality level, e.g.:
a. for C-level software criticality, justification could be limited to the links between RB and TS requirements,
and between TS requirements and test cases,
b. for D-level software criticality, justifications could be skipped.
c. Links enable the verification of test completeness: each requirement has at least one associated
test case, and no test case is redundant. The links also help determine the regression test set.
As an extensive example, the links supported by the SysML language are provided in Annex B.
and to perform a cross-check through two separate requirements documents (i.e. RB and TS) and
reviews (i.e. the SRR to check the completeness of the needs, and the PDR (potentially anticipated by the SWRR)
to allow the customer to check that the proposed software solution answers the needs, and the
software supplier to check that his understanding of the customer needs is correct, complete and
results in a feasible software solution).
This handbook also recommends an early integration of the software engineering process in the
system engineering phase (co-engineering) to facilitate the common understanding of the
requirements: the supplier can understand the software needs early and check their feasibility, before
the SRR.
In the situation where the SW supplier is not yet selected when the SRR is conducted, the SVerP may
only be available at the PDR, consequently after the SRR. This would prevent the co-engineering process
from being set up and would therefore delay the verification activities to be performed at system-software
engineering level.
In order to verify the completeness of the Requirements Baseline (RB), the following list provides a set
of topics which could strongly affect the software during its development process or which are shared
by different elements (e.g. interface, failure mechanism, and timing performance). These topics need
to be checked carefully at system level (for completeness) and at software level (for feasibility):
a. the RB describes clearly the environment in which the software will operate. The description
focuses on the operational context and the interfaces with external systems,
b. the RB specifies the characteristics of all external systems (e.g. bus, computer, ground interface)
interacting with the software product. The above-mentioned characteristics include, but are not
limited to, the communication protocol, the concurrency and real-time model,
c. the RB specifies the control and observation points for each function,
d. the RB specifies the fault detection, isolation, and recovery, in relation with the dependability
and safety analysis outputs,
e. the RB specifies the modes/sub modes and transition between modes (modes automaton). For
each mode/sub mode and transition, the associated activities (functional description) are
specified,
f. the RB specifies how telemetry packets (TM) are dated and the precision required,
g. the RB identifies the requested configurable data of the software, which are usually defined
through a software database,
h. the RB identifies and justifies the margins policy in terms of memory and CPU allocation,
i. the RB defines the operational scenarios (e.g. traffic scenarios),
j. the RB specifies the list of external events that are produced or used for execution by the
software product, as well as their occurrence model (e.g. periodic, spontaneous),
k. the RB specifies, for each application function or hardware component, the type of failure (no
failure, accidental failure on value or time, Byzantine or not, intentional or not [security]) and
their occurrence model (e.g. periodic, aperiodic),
l. the RB specifies the timeliness properties (i.e. constraints on time to start or terminate
application functions), periodicity, and jitter when they are mandatory with their associated
justification (e.g. AOCS commanding loop between sensors acquisitions and actuations),
m. the upper level project development plan (system, subsystem, equipment, instrument, platform
development plan) specifies the schedule allocation for the software development, including
margins consistent with the development effort,
n. the upper level project development plan defines all inputs necessary for the development
activities, such as the test facilities, input documentation, CFI (Customer Furnished Items),
environmental models if necessary, and their availability,
o. the upper level project development plan identifies all needs in terms of software product
deliveries and their dates.
f. training of skilled people, e.g. involvement of the maintenance team before the end of the software
development activities,
g. the completeness of the maintenance plan.
All three categories of software are operated by the entity combining the operations organizations and
associated ground systems, for which ECSS-E-ST-70C is applicable. For bullet a), flight software, the
software operation process described in ECSS-E-ST-40C (chapter 5.9) is to be considered in the wider
scope of space vehicle operation.
The software operation process is linked together with the software maintenance process.
It covers the following activities (as far as applicable for a space project):
a. Operational management, encompassing activities such as:
1. management and coordination of Ground Segment service requirements,
2. provision and support for all mission activities,
3. coordination of mission operations management and engineering,
4. management guidance,
5. conflict resolution, and
6. on-console support during critical phases.
b. Operational planning, involving production of the procedures for operating the product, training
of operators and users, operational testing, problem reporting, system operation and user support.
This encompasses tasks such as:
1. planning of Ground Segment resources for all preparation activities and mission
activities (also with other projects and missions where applicable) with respect to Ground
Segment usage,
2. technical and operational inputs for Simulation and Mission Timeline,
3. review of Mission Requirements Documentation,
4. resources scheduling / conflict resolution for all activities,
5. resource usage tracking,
6. Ground Communications Schedule generation,
7. IGS Work Plan generation,
8. Coordination of all participating facilities for On-board System and PL operation,
9. Preparation of Ground Segment operations (including simulation and testing activities),
and
10. Voice Loop Management.
c. Operational testing of new releases: the software product is released for operational use when
the operational testing criteria have been satisfied, in accordance with the operational plan.
NOTE The team responsible for software maintenance process is not
always in a position to carry out operational testing, since this
cannot be done without the operational environment and
possibly specific operations expertise.
d. User support, implemented in practice using the organisation set up for the software
maintenance process, including
1. assistance and consultation to the users as requested,
2. provision of workaround solutions,
[Figure: support processes – incident fixes and improvements for obsolescent SW & HW are handled through Change Management and Release Management towards the operational services; residual problems are routed to Problem Management.]
5.10.2.1.1 Introduction
The software maintenance process is activated when the software product undergoes any
modification to code or associated documentation as a result of correcting an error or a problem, or of
implementing an improvement or adaptation. The objective is to modify an existing software product
while preserving its integrity. This process includes the migration and ends with the retirement of the
software product or of the system itself.
Its starting point depends on the contractual set-up of the project (e.g. warranty period of the
supplier, which organization is later on in charge of performing the maintenance), but in most cases
the start is coupled with a project review (such as the QR, AR, or FAR).
This section provides guidance in particular about:
a. How the maintenance is usually structured and how it relates to the other phases;
b. Best practice processes for managing software maintenance;
c. The different types of maintenance activities and how these can be handled;
d. The Software Maintenance Plan.
Preventive Maintenance
Preventive maintenance means planned and scheduled activities that are performed to keep the
infrastructure in operational condition. It is performed to ensure the long-term availability of the
facilities' hardware and software to support the day-to-day operation. Typical tasks are:
a. SW Back-Ups;
b. Periodic inspections to monitor the condition of the equipment;
c. Inspection of the HW with the emphasis on the detection of faulty mechanical / electrical parts;
d. Calibration of equipment;
e. Replacement of consumable items (toner, paper, light bulbs etc.);
f. General cleaning and maintenance;
g. Maintenance of the logs and error statistics.
Adaptive Maintenance
Adaptive maintenance means that the software can be upgraded in such a manner that it gets through a
maintenance phase without changing the specified hardware or software functionality. Commercial
software licenses need to be available for the complete duration of the maintenance phase, i.e.
planning has to take into account that during a maintenance phase no sudden hardware and/or
commercial software upgrades are necessary due to obsolescence of the items.
Adaptive maintenance might also be induced by hardware obsolescence. The operations facilities
are usually in operation for several years, and most of these facilities, or parts of them, are used day by
day without major technology upgrades since development ended. On the other hand, technology is
moving fast in the commercial markets, in particular in the computer, communications and video areas,
so that it is becoming very difficult, time consuming and costly to maintain outdated equipment.
As a countermeasure, the Ground Segment service introduces obsolescence management, a
controlled process to replace equipment in due time before it becomes outdated, and thereby avoid
future additional unforeseen cost. In the past, replacement of equipment and systems was mostly
driven by external events, i.e. future operations cost, commercial maintenance no longer available, etc. In
an operational environment with an expected lifetime possibly reaching beyond the current decade,
the replacement of ground equipment is planned and controlled to keep the equipment and systems
operational in a cost effective manner.
Continuous monitoring of the existing equipment and review of the commercial technology
developments and trends as part of the ground segment engineering is one input to the obsolescence
management. Another important contribution is the planned yearly configuration review. Out of both
information sources a replacement strategy is established and maintained for the individual systems
and equipment. The overall obsolescence strategy will be to replace equipment as needed dependent
on technology improvements and commercial developments but as seldom as possible before the
equipment becomes obsolete and causes negative impact to operations.
Evolutionary Maintenance
Evolutionary maintenance is initiated by external demands to improve the functionality or the system
reaction. It is initiated by end user problem reporting, but only implemented on the basis of a
customer change request, or it is covered in the business agreement as an explicit system enhancement.
The objective is to prevent failures and optimize the system for operational use. This might be
done, for example, by redesigning a component that has a high problem rate, or by modifying a component in
order to improve its operability.
The evolutionary software maintenance process starts when a need is identified to change and
allocate requirements, i.e. operational requirements, quality requirements, design requirements,
documentation requirements, and implementation requirements.
These maintenance actions might include a major re-design of the software.
Improvement maintenance is considered in the overall maintenance concept; however, improvement
maintenance is always triggered by a Change Request process, or as part of the proposal for the next
phase.
The analysis is performed on the basis of problems that have been found during the recent phase,
but not implemented for several reasons. Problems whose resolution would improve the usability or
maintainability are proposed as subjects of improvement maintenance before entering the next
regular maintenance phase.
Obsolescence of development equipment, test environments and software needs to be considered
as part of the maintenance plan. This is just as important as for the operational software and hardware, as
without these items the software becomes unmaintainable.
Throughout the duration of the maintenance activity, the test environment and simulators may need to
be modified in order to keep them realistic. This may mean simulating degradations in spacecraft
performance.
c. If the flight software is used to cope with degraded or malfunctioning hardware, a patch can be
used as a long-term solution that only becomes unnecessary after the hardware has been replaced
in the flight unit in orbit (when accessible, e.g. in a manned space segment). The patch then needs to
be de-installed to assure correct system handling.
d. Patch cluster (best practice of COLUMBUS): up to 15 critical Non-Conformance Reports (NCRs)
/ System Problem Reports are to be resolved by patching under the following conditions: one
single on-board computer reboot, no data pool changes, no planned database updates, and only
critical problems or urgent Operations product updates covered by the maximum number
of 15 NCRs / System Problem Reports.
e. Operations Patch cluster (best practice of COLUMBUS): up to 50 NCRs / System Problem
Reports of Operations Data updates are to be fixed temporarily by patching. The following
conditions apply: one single on-board computer reboot only, no data pool changes, database
upgrade as needed, and limited regression testing tailored to the OPS products' needs in order to
validate them.
f. OBCP load/reload: when the OBCP capability is supported on board, loading or reloading one
or several OBCPs enables the expected changes to be performed in a safe and flexible manner
without the drawbacks of patches or of a complete software reload.
NOTE OBCPs are further discussed in 5.2.4.6.
Different patch capabilities can be used, such as a patch that directly replaces a section of code (fixed-size
sections are implemented and patched as a whole), or a new section that is uploaded to a specific part
of the memory, with a branch to this new part so that it is executed in replacement of the area
under modification. Other capabilities may also be proposed, directly in RAM or in EEPROM, that can
be loaded on ground command.
There is no standard approach today, but the following criteria are used to define the appropriate
capabilities in the context of the project. The trade-offs are conducted during the early phases of the
project in order to derive the subsequent requirements applicable to the software (some of them
having a potential major impact on the software design itself, e.g. OBCP engine implementation):
a. availability of the system: the acceptable mission interruption duration is compared with the
time taken to load a patch (or load a whole memory image), apply it and exercise it before the
system is back in mission mode,
b. TM/TC bandwidth: in conjunction with the previous criterion, this parameter influences the
duration of the patching / reloading operations,
c. available memory size.
The first two criteria drive whether it is acceptable to reload the complete binary or whether only the
specific modified code is reloaded, in conjunction with the third one (storage of complete binaries needs an
enhanced memory size).
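As a purely illustrative order-of-magnitude check (all figures below are assumptions, not project values), the first two criteria can be compared as follows:

    # Rough comparison of patch upload versus full image reload (assumed figures only).
    tc_uplink_rate_bps = 4000            # assumed effective TC uplink rate, in bit/s
    patch_size_bytes   = 20 * 1024       # assumed size of the modified code sections
    image_size_bytes   = 2 * 1024 * 1024 # assumed size of the complete software binary

    def upload_seconds(size_bytes, rate_bps, overhead=1.2):
        # overhead factor accounts for packetisation and verification dumps (assumption)
        return size_bytes * 8 * overhead / rate_bps

    print(f"patch upload      : {upload_seconds(patch_size_bytes, tc_uplink_rate_bps):.0f} s")
    print(f"full image upload : {upload_seconds(image_size_bytes, tc_uplink_rate_bps) / 3600:.1f} h")

Confronted with the acceptable mission interruption duration and the available memory, such an estimate supports the trade-off between patch and full reload capabilities.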
According to those criteria, temporary and urgent workarounds to identified in-flight problems are
often implemented as patches, whereas planned sets of “clean” modifications are implemented as a full
software binary load. Therefore both capabilities are very likely to be implemented on board for the
central flight software (high availability). In between these two capabilities, OBCPs offer an
intermediate, flexible way, having the advantages of a local and limited patch without the induced
drawbacks and risks in terms of complexity of patch design and patch operations.
Furthermore, in the case of high availability, it is worthwhile to manage several binaries on board (at least
two), with the associated capabilities:
a. to reload one of the binaries of the software while another one is under execution,
b. to switch the execution from one binary to another with a minimum interruption of the
mission.
In case partial portions of the binary are required, dedicated mechanisms may be implemented in order
to safely load, store, activate, and inhibit patches in dedicated memory areas.
In case of low availability (e.g. secondary payload or experiment of lower importance), it may be
acceptable to completely interrupt the software, completely reload it and restart it afterwards.
[Figure: example of a patch memory layout, showing code patches implemented as branches from the original code sections to dedicated patch areas (with a return to the original code at the end of each patch), and a separate data patch area.]
In the case there is only one binary image of the OBSW (On-Board Software = BOOT software plus
Application Software / ASW) on board, the patch can obviously not be applied to a part of the OBSW
which is currently under execution. It is therefore applied in a dedicated mode where the piece
of code to be modified is not being executed (some processes may be inhibited specifically for that
purpose). Specific operational care (if no dedicated protection mechanism is available) should be
taken in order to prevent any corruption / modification of the patch / dump code itself. In particular,
dumps may be performed to ensure that the memory content corresponds to the expected software
binary code. Then, after the modifications have been performed, the Application Software is switched
back to the normal operational modes.
It is highly recommended during validation of the modification to exercise the patch procedure itself
to verify its robustness and efficiency.
The patch mechanism design and development is part of the initial software development process and
can be validated as such during the qualification process.
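A minimal sketch of the dump verification mentioned above is given hereafter; it assumes (hypothetically) that the memory area is dumped and compared to the expected content through a checksum, whereas a real project uses the memory management and dump services actually implemented on board:

    # Minimal sketch: check that a dumped memory area matches the expected content
    # before and after applying a patch (all data below is illustrative).
    import zlib

    def verify_dump(dump: bytes, expected: bytes) -> bool:
        return zlib.crc32(dump) == zlib.crc32(expected)

    expected_before = bytes.fromhex("01020304")   # expected content of the area to patch
    patch_content   = bytes.fromhex("0a0b0c0d")   # new content uplinked as the patch

    memory = bytearray(expected_before)           # stands in for the dumped on-board area
    assert verify_dump(bytes(memory), expected_before)   # dump check before patching
    memory[:] = patch_content                            # apply the patch
    assert verify_dump(bytes(memory), patch_content)     # dump check after patching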
The problematic part of the patch itself is to ensure the correct memory mapping when the patch is applied,
i.e. the non-patched zones need to be demonstrated to keep an identical mapping. The patch needs to be
correctly linked into the hosting application.
6 Selected topics
The intention of this whole chapter is to address several selected subjects, as described in the Note of clause 1.
They are referenced in chapter 5 from the relevant ECSS-E-ST-40C requirements.
c. The properties expressed in the use cases are then used to derive the requirements of the
software.
For example, the requirements producer of a “Monitoring & Control” system may identify the
following use cases:
a. TM acquisition and packet extraction: Reading of the telemetry data frames received from the
station; packet extraction from the received frames and their distribution
b. Decommutation: Decommutation and supervision of the received packets; distribution of the
results
c. Communication with the station: Dialog with the station for TM/RM acquisition and TC/RC
sending
d. TC encoding: Elaboration of command binary profiles
e. TC sending: Commands sending to the satellite
f. COP1 management: Implementation of COP1 protocols and BC directives sending.
g. Station commanding: RC elaboration and sending
h. PUS services management: specific PUS services implementation.
A textual specification is provided to express the use cases. The following template is proposed as an
example, with specific sections as a guideline to the requirements capture. The part “Description” and
“Non-functional constraints” can be used to derive functional and non-functional requirements.
SUMMARY
Short abstract describing the purpose of use cases: brief description of the service rendered by the
system to the actors.
CONTEXT
Activation frequency, operational mode.
Actor or event that triggers the use case.
PRE-CONDITIONS
Pre-conditions necessary for the execution of use cases.
DESCRIPTION
Description of the sequences (scenarios) and identification of properties (requirements). Errors are
described in the “Exceptions” section. Complex scenarios can be illustrated using sequence
diagrams, activity diagrams or state transition diagrams (see Annex B for more details on the Use
Cases and UML techniques).
POST-CONDITIONS
Conditions to be met by the system after executing the use case.
EXCEPTIONS
Known errors that lead to certain actions.
DATA
Input and output data of a use case.
Plain text associated with the identification of the use cases, describing the behaviour of the use case, can be
considered as an informal way to describe scenarios. Scenarios are described in UML or SysML
using interaction diagrams or activity diagrams, or even state-charts. Other languages can be used to
express scenarios, such as the Message Sequence Charts (MSC) of the Specification and Description
Language (SDL). Data flow or control flow diagrams can also be used.
Use cases and scenarios target operational and functional requirements, so they are independent of
any possible implementation detail; in fact they should avoid designing a final solution or establishing
unjustified constraints on the solution.
Use cases and scenarios are simple ways to describe a system’s behaviour, which may be defined and
approved in collaboration with the stakeholders and system engineers, so that, once they have been
approved, the related requirements may be formalized.
6.2.2 Introduction
The ECSS-E-ST-40C requires the selection of a life cycle for space software projects.
The lifetime of software goes from its initial feasibility study up to its retirement. Complex space
systems are initially studied in the early project system phases (0, A, and B) (see ECSS-M-ST-10C) in
order to get a better assessment of what the final system should be capable of. At this point, the requirements are
not yet stable enough to issue a business agreement (contract) for development. Space software
experts are already involved at this point as part of the system activities (co-engineering).
The system phase 0 (and very often phase A) serves to investigate complex technological challenges
for feasibility. The goal is to find combinations of techniques and methods to turn the initial ideas
and dreams into reality, and to exclude wrong tracks very early in order not to waste time, money and
effort.
The system phase A (which can even be split into several system phases, e.g. A1, A2) contributes to
clarifying customer and system requirements and to investigating plausible implementation possibilities,
taking state-of-the-art or best practice methodologies and techniques into account. The findings and
the results are discussed between the customer and the (intended) system supplier representatives at least
at a mid-term and a final review milestone (depending on the business agreement there can be more
coordination meetings as needed by the project). At the end of system phase A there is an agreed
status that represents the feasibility of the future complex space system. Very often, alternative layouts
w.r.t. implementation methods and system capabilities are provided to the customer. In most cases,
accompanying global cost estimations for the alternatives are prepared to enable the customer to select
the variant that serves his needs best within his environment / situation.
System phase B (or system phases B1 and B2) further clarifies the system needs, with the goal of
finally obtaining a reliable set of requirements for the development system phase C. The set does not only
consist of customer requirements, but also of system and initial software requirements enabling the
developments to start. This may include the selection of the software life-cycle.
The software life cycle is a methodology used to guide and to organise the progress of software
throughout its development and maintenance, in a structured way. It provides the framework
supporting the software project planning, organization, staffing, budgeting, controlling, analysis,
design and implementation. A software life cycle explains how the software engineering processes are
mapped over time and project phases.
Processes within a software life-cycle can run in different ways:
a. sequential, from requirements analysis to delivery,
b. in parallel or partially overlapping: to reduce time to deployment,
c. iterative, meaning run in several decoupled and partial steps: to progressively reduce project
risks.
Moreover, a process can be activated and started using a draft work product as input (i.e. incomplete,
not verified). There is a risk in working on drafts, but there is also an opportunity to run parallel work
and to identify problems with the drafts themselves early. The project schedule shortens as more
parallel work is performed. To some extent this is “concurrent engineering” applied to software
development. However, when a process is working on a draft as input, it may only produce a draft as
output (draft-in/draft-out concept). Therefore, once the final version of the input is available, the
process is re-activated to “reconcile” the work done with the final input.
Life-cycle models are reference frameworks, and many software life-cycle models exist. A well-known
life-cycle model is the Waterfall model. Models such as the V-model, incremental, evolutionary or spiral
models are derived from the Waterfall life-cycle. Other models take a perspective within the multiple levels
of systems and software development, aiming at modelling more globally the software life cycle
within the relevant customer-supplier network.
In general terms, life-cycles can be broken down into successive stages. Each of them uses the results
of the previous stage to advance the life cycle from one baseline to another, after successful
completion of the relevant activities. During these stages, usually at the end, milestones are planned
(potentially through project reviews).
Even if not considered as a life-cycle model, the Agile methodology is also addressed in this chapter,
as a succession of stages with dedicated criteria for successful completion.
Each review in the software life cycle is a major assessment performed by a designated team. This
includes:
a. assessment of the validity of process output elements in relation with the requirements or the
predictions;
b. decision to start the next stage of the project.
Whatever the selected software life-cycle model, there should be at minimum the set of ECSS-E-ST-40C
reviews (i.e. SRR, PDR, CDR, QR and AR) synchronised with system level (see ECSS-E-ST-40C
sub clause 5.3.6). Then, they can be completed by technical reviews where appropriate to map to the
software life-cycle (see ECSS-E-ST-40C 5.3.3.3). The sequence of software project reviews starts with the
software requirements and definition reviews, i.e. the System Requirement Review (SRR) and the Preliminary
Design Review (PDR), and continues with the justification and verification reviews, i.e. the Critical Design
Review (CDR), the Qualification Review (QR) and the Acceptance Review (AR). Although each stage is usually
part of a sequential logic, the start of the next stage can be decided before all the tasks of the current stage are
fully completed (e.g. starting validation of some components although coding of others is not
finished). Starting the work for the next phase in parallel can be considered if the induced risks are
identified, accepted and monitored by the project.
Each software project has its own life cycle (taking benefit from existing models and experience),
fitting the needs of that project and its context (e.g. internal organization, customer needs). Frequently,
several life-cycle models are combined to fit the various project needs and constraints.
The ECSS-E-ST-40C defines a set of processes. Each process is defined as a set of interrelated activities
that transform inputs into outputs. However, ECSS-E-ST-40C does not prescribe any specific order of
execution of these processes over time, nor does it assume a rigid sequential execution where only the
end of one process triggers the start of the processes using its output. Nevertheless, the ECSS Standards include
some requirements relevant to the software life cycle definition:
a. ECSS-E-ST-40C, Clause 4, in terms of system and software life cycles and related phasing, and
sub clause 5.3.2 in terms of specific requirements;
b. ECSS-Q-ST-80C, sub clause 6.1, in terms of the characteristics of the life-cycle that shall be
identified.
With ECSS-E-ST-40C, any life-cycle model can be applied, provided that the process and output
requirements are satisfied. This chapter provides some recommendations to implement the
requirements of ECSS-E-ST-40C related to life-cycles, i.e. choosing and implementing a life-cycle
appropriate to the software development constraints. In addition, other recommendations are also
provided for specific technologies such as database development (see clause 5.2.4.4) and autocoding
(clause 6.5).
6.2.3.1.1 Advantages
The Waterfall model enforces discipline in the life-cycle process and progress control. Verification
is inherent in every stage of the life-cycle. Moreover, this model is easy to put under contract.
6.2.3.1.2 Disadvantages
The Waterfall model requires a complete and mature specification before starting, and the expected
delivery is available only at the end of the life-cycle. Moreover, this model presents programmatic and
technical risks for large software products, or when new technologies are applied.
6.2.3.1.3 Utilization
According to experience, the Waterfall model is only considered as a reference, because changes often
occur during the development which affect the outputs of the previous stages, even up to the RB. The
consequence is to re-run a sequence of delta Waterfall stages, which is not cost and schedule effective.
This is the reason why other models are regarded as more suitable.
The Waterfall model is more efficient for software products with well-known customer needs. It is
also appropriate when no technological risks are involved, e.g. when the target system (e.g. spacecraft,
or a ground segment facility) is either already available or its design does not imply constraints on the
software design to be verified early in the system life-cycle.
Advantages
In general, iterative life-cycles allow early delivery of partial software (e.g. implementing only a part
of the Requirements Baseline, or only partially validated). In addition, these life-cycles enable the
software development and the system integration to start earlier, to mitigate the risk of detecting
potential issues too late (e.g. system - software co-engineering, software design). This is to be balanced with
disadvantage a).
Disadvantages
The following disadvantages are shared by iterative life-cycles:
a. at supplier level, there may be full or partial rework, such as additional non-regression test
effort in case of too much coupling between several iterations, e.g. when there is a change on a
part of a previous iteration,
b. the support and maintenance processes of already delivered releases start earlier and hence
several versions are managed in parallel,
c. there is additional effort for the increased number of required technical reviews (although there
may be a balance between the extra number of technical reviews required and formal reviews
with less effort).
Utilization
Iterative life-cycles are particularly appropriate for software projects where the customer needs early
software releases (e.g. to perform early system tests or test facility set-up) or when the system needs are
not mature enough and require consolidation.
Consolidation of needs should be prioritized according to maturity (if there are no other specific
constraints).
Moreover, to favour the overall consistency between the system and software life-cycles, it is required
(ECSS-E-ST-40C 5.2.4.1) to detail the perimeter of each planned software release (i.e. the included
functions) as early as possible and before starting the development.
The overall assembly logic as well as the overall testing logic (e.g. HW/SW integration, function
validation, non-regression) should be adapted to be consistent with the selected life-cycle (see also
section 5.5.4). To this aim, automation of tests is strongly recommended in order to ease non-regression
tests.
Finally, to mitigate the risk related to the management of several versions in parallel, particular
attention is given to the configuration and change control process and associated tools.
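In relation with the recommendation above on test automation and non-regression testing, the traceability links discussed in 5.8.2.1.2 can be reused to select the non-regression test subset impacted by a change. A minimal sketch (with hypothetical identifiers) follows:

    # Minimal sketch: select non-regression tests impacted by changed requirements,
    # reusing requirement-to-test traceability links (identifiers are hypothetical).
    req_to_tests = {
        "TS-010": ["TC-100", "TC-101"],
        "TS-011": ["TC-102"],
        "TS-012": ["TC-103"],
    }

    def regression_subset(changed_reqs):
        return sorted({t for r in changed_reqs for t in req_to_tests.get(r, [])})

    print(regression_subset(["TS-010", "TS-012"]))   # ['TC-100', 'TC-101', 'TC-103']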
on whatever has changed). In other cases it is more efficient to hold an informal technical review for
the first iterations and then a formal review for the last and complete iteration. These principles apply
to software having an overall consistency, and not to several software products or several parts of software
without any coupling.
As a minimum, the last iteration completes the full adherence to what is required by ECSS-E-ST-40C,
so as to assure full conformance of the delivered software product.
Several iterative life-cycle models are presented in the subsequent chapters: Incremental,
Evolutionary, Spiral and Agile.
6.2.3.2.2 Incremental
This iterative life-cycle model takes into account the progressive enhancement of the software and
supports developing software by successive increments. Each increment corresponds to a delivered
release of the software, the deliveries occurring at pre-defined times. Each release provides an
opportunity for the customer and the supplier to assess the technical baseline and the schedule. In this
model, the Requirements Baseline, then the Technical Specification and the top level design are established
once for all planned increments.
The software is designed in detail, implemented, integrated, and tested as a series of incremental
builds, where a build consists of a set of components interacting to provide a specific functional
capability. At each stage a new build is implemented and then integrated into the architecture, which is
tested as a whole.
The detailed definition of the perimeter of each increment is directly derived from the customer needs
with respect to deliveries. As an example, the initial increment could implement agreed basic
functionalities of the Requirements Baseline and provide the basis for the additional components
(implementing more advanced functions) of the architecture that are added in the following
increments (e.g. for an on-board software, only DHS, then DHS+AOCS, and finally DHS+AOCS+FDIR
being the complete software).
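A minimal sketch of such an increment perimeter definition is given below; the functional groupings follow the DHS/AOCS/FDIR example above, and the structure itself is purely illustrative:

    # Illustrative definition of the perimeter of each planned increment.
    increments = {
        1: {"DHS"},                   # basic data handling only
        2: {"DHS", "AOCS"},           # adds attitude and orbit control
        3: {"DHS", "AOCS", "FDIR"},   # complete on-board software
    }

    def delta(n):
        # Functions newly introduced by increment n.
        return increments[n] - increments.get(n - 1, set())

    for n in sorted(increments):
        print(f"increment {n}: new functions = {sorted(delta(n))}")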
In this model, the development of the next increment can start at the end of the previous one or can
partially overlap leading to different variations of the model. However, even in a sequential
versioning approach, some processes of the life-cycle for delivered versions are performed while the
current version is still under development, including the maintenance, and sometimes, the operations
processes.
Advantages
In addition to the Waterfall model advantages, the Incremental life-cycle enables the specific delivery
needs and associated schedule to be accommodated. It secures the system schedule, enabling a feasible
software schedule. It allows conducting more efficient technical reviews focused on limited
functional topics, anticipating parts of the formal project reviews.
Disadvantages
The Waterfall model drawbacks remain, except that deliveries are available earlier with the Incremental
life-cycle. In addition, the Incremental model has the drawbacks of an Iterative life-cycle.
Utilization
According to experience, changes may occur during the development that affect the outputs of
the previous stages, even up to the RB. The consequence is to re-run a sequence of complete stages,
which is not cost and schedule effective. This is the reason why other models are regarded as more
suitable.
This life-cycle is used when the software size is significant with respect to the allocated time frame. It is
also used when the target spacecraft system integration, verification, and qualification justify an
incremental approach to progressively reduce manufacturing risks and minimise needs for hardware
re-design. Finally, it is used when the software is to support the system needs (e.g. integration and
qualification testing).
To be efficient, this approach should be based on a limited number of increments (e.g. 2 to 5), as
decoupled as possible (see 5.3.5.2.2).
This model is justified for several reasons, including:
a. the identification of a subset of the software product capabilities needed earlier than the complete
product (for instance for use at upper level by the customer in conjunction with other software
or hardware items);
b. the size of the software project is demanding and leads to a division of the effort;
c. the project budget needs to be spread over a longer time than the one required for a Waterfall
approach.
6.2.3.2.3 Evolutionary
The Evolutionary model recognises that the evolution of software takes place according to
prescriptive steps, while accommodating all the changes that are required during the life of the project.
The Evolutionary model plans for re-establishing the software Requirements Baseline at several
defined milestones. In this view, the final software is developed through a controlled sequence of
versions accounting for the evolution of requirements in major steps.
This Iterative life-cycle is an extension of the Incremental life-cycle with successive evolutions of the
Requirements Baseline. Each iteration takes into account the RB changes.
Advantages
In addition to the Incremental model advantages, this life-cycle is mainly a way to start the
development based on a frozen set of requirements, corresponding to the perimeter of the iteration,
even when some requirements are not mature enough. It allows the anticipation of problems,
feasibility checks, etc., and enables a final accommodation of schedule needs.
The planned versions can in fact benefit from the user’s feedback, to correct anomalies and to improve
the software functionality and performance over the lifetime.
Disadvantages
The disadvantages are the same as for the Incremental model.
In the Evolutionary model, there is an additional activity to establish different Requirements Baselines
and Technical Specifications for each version and to consolidate them. This flexibility needs to be
balanced against the induced risk that the overall software design might not be robust enough to anticipate
the further evolutions of requirements. This is particularly important for the real-time design.
Utilization
This life-cycle is used when, even with mature enough customer needs, the Requirements Baseline
consolidation cannot be fully achieved early in the life-cycle.
The Evolutionary life-cycle is effectively suitable for long-living software products. A limited and
planned number of specification evolutions should be managed, leading to a limited number of
product versions.
In addition, in case there is an overlap between the different iterations, the software processes need to
be managed carefully. This implies, for example, that the specification of the next version is completed
before the validation of the previous version has been accomplished, or even before its validation
starts. The commitment of the customer and supplier on the new version requirements can in fact be
invalidated by the results (e.g. non-conformances) of the previous life cycle iteration.
The key principle of such a model is to establish the overall software design and interfaces during the
first iteration. In order to mitigate the risk on the initial software design robustness, it is beneficial to
inherit from former knowledge of system and software architecture, or to use a generic architecture,
or to reuse an existing and proven architecture.
6.2.3.2.4 Spiral
This is a risk-driven model for software processes based on iterative development cycles in a spiral
manner:
a. inner cycles are devoted to minimising project risks, through approaches such as early
analysis, prototyping, simulation, expert advice, or benchmarking, and associated progress in
the development, and
b. outer cycles implement the software, once the risks are resolved, with an appropriate life-cycle
model.
Whilst the radial dimension accounts for cumulative development maturity, the angular dimension
represents the progress made in accomplishing each development cycle of the spiral. Each cycle is
accompanied by a risk analysis to determine how the development spirals out and evolves, depending
on the confidence achieved in the previous cycle. Each cycle therefore includes the planning, the search
for alternatives in design and development, the evaluation of these alternatives, and risk analysis, as
well as the software development processes to accomplish the planned development cycle.
Each cycle of the spiral is completed by a review of the work products of that cycle, including the plan for the
next cycle, to assess the level of mitigation of the initial risks and the level of maturity to continue.
Advantages
The Spiral model favours consolidation of insufficiently mature inputs, i.e. needs, requirements, design.
The emphasis of the spiral model on alternatives and constraints supports the reuse of existing
software.
Disadvantages
The Spiral model has the drawbacks of an Iterative life-cycle.
The main disadvantage of the Spiral model, however, is that too much risk analysis can be costly
compared to the expected benefits. The development team can be weak at assessing risk.
Moreover, early development of a low quality prototype can lead to a low overall quality of the final
product.
The flexibility of this model needs to be balanced against the effort associated with possible
refactoring of the design and code for accommodating evolving requirements.
Utilization
The Spiral model is particularly appropriate for software projects with insufficiently mature upper level
needs or interfaces. It is used when project risks are evaluated to be relevant at both customer and
supplier level. It is also used when the system life-cycle spans a duration supportive of the multi-cycle
approach of this model.
When lower level suppliers provide software, both customer and suppliers perform all risk analyses
before the supply contract is signed.
The adoption of this model impacts the planning and activities in the organizational, primary, and
supporting processes. The definition of the review logic and associated milestones, as well as the
project schedule and the project status determination and reporting, are modified.
All the process elements are tailored and optimised throughout the project life-cycle. Those include:
a. project phasing;
b. planning, for the precise determination of objectives, alternatives, and constraints to be considered;
c. risk management, for analysis of alternatives and definition of risk mitigation or avoidance
strategies;
d. development process, for definition of the spiral cycle activities and reviews;
e. quality assurance process, in all aspects.
Generally, the Spiral life-cycle outputs and reviews are made in line with the life-cycle model assumed
for each cycle.
For example, a project with immature requirements starts a first cycle aiming at analysing the needs
through prototyping; once the requirements are mature enough, an SRR can conclude this cycle. Then, the
project continues with the next cycle using a classical incremental model, with the associated reviews.
6.2.3.2.5 Agile
The concept of iteration is taken to an extreme, with a software baseline produced in a very short
period (several days or weeks rather than months). Several methods (not life-cycles strictly speaking)
implement this principle, such as Extreme Programming, Crystal Clear and Scrum, all of them being
based on the Agile approach. These methods are developed with the intent of accepting changing
requirements and delivering software that satisfies customer needs in the most efficient way possible.
Agile is a software development approach based on four unifying fundamental values:
a. Individuals and interactions over processes and tools;
b. Working software over comprehensive documentation;
c. Customer collaboration over contract negotiation;
d. Responding to change over following a plan.
In this approach, the development is split into very small increments (named “sprints” in the Scrum
methodology). Each increment involves a team working through a full software development cycle
(i.e. Waterfall) including planning, requirements analysis, design, coding, unit testing, and acceptance
testing, at the end of which a working product is demonstrated to the customer or the end-user. This intends to
minimize the overall risk and allows the project to adapt to changes quickly.
A specific aspect of the Agile method iterations is that they have functional expansions and are time-
boxed: in case of problems, deliveries always occur on time but with reduced functionality (instead of
postponing delivery waiting for the full functionality).
Agile methods focus only on essential development efforts in the creation of working software that
satisfies the customer’s needs. An on-site customer provides feedback on working portions of the
software, thereby reducing the dependence on written documentation alone and increasing the
possibility of delivering a satisfactory product.
Agile development has been widely seen as being more suitable for certain types of environment,
including small teams of actors. Face-to-face communication and team members’ ingenuity are
encouraged over documentation and prescriptive processes.
Requirements are defined and prioritized in a collaborative effort of both customer and supplier (value-driven
approach).
Advantages
Even if these methods are particularly useful in domains where requirements are not mature at all, or
where frequent requirements changes and speedy development are prevalent, they can also be used
with a stabilised Requirements Baseline.
A working product is delivered at each stage, tested and evaluated by the customer and/or end-user.
This testing is a way to define or refine the next requirements that will be implemented during the
following stages.
Disadvantages
These methods need a strong and continuous involvement and commitment of the customer
throughout the development, and even recommend co-location of the customer and supplier teams. They are
mainly dedicated to small software teams.
NOTE This flexibility needs to be balanced against the effort
associated with possible refactoring of the design and code for
accommodating evolving requirements.
These methods can increase the pressure on the team due to the high development rhythm.
Utilization
This approach is used when the Requirements Baseline cannot be fully defined at the beginning of the
software development (either because the customer has no time to specify the software in a detailed
manner, or because the end-user does not know his needs relative to the software product). In any case, it is
used only if the customer is ready to get involved during the development.
one sub-system be tested using only an incomplete prototype version of another sub-system, as long as
those tests are repeated on the final version of the other sub-system, once available.
The system life-cycle process provides the framework in which the software life-cycle is defined,
taking into account a set of constraints including:
a. the policy adopted at system level for the procurement, development and manufacturing in one
or more versions (or models, in the case of the space segment) of the system configuration, with
impact in terms of integration of hardware and software and the associated testing;
b. the system review logic, as constraining the definition of the software life-cycle schedule and
associated reviews and milestones;
c. the system needs for technology demonstrations or pre-development activities that can imply
the need for early software versions to be engineered and integrated with the hardware
configuration;
d. the planned evolution of the system requirements baseline and interface requirements/design,
which implies a pre-planned evolution of the software requirements and its interface definitions
across the software life-cycle;
e. the number of system configurations that are expected. For instance, a spacecraft can exist in
different flight units with the same or with different hardware and software configurations. The
software development takes this into account to support all configurations.
For each system, a careful analysis is performed in order to define the proper framework for the
software life-cycle and to determine the set of constraints to be applied to the software development,
delivery, and acceptance. The definition of a generic process model capable of expressing all the
potential system constraints is not feasible, due to the specificity of each spacecraft, space and ground
segment conception, mission, and expected lifetime. However, general elements can be provided for a
large variety of systems and applications.
Moreover, due to the need to reduce the overall spacecraft development schedule, it becomes
more and more difficult to have all the system requirements for software in time before the software
development can start. The software life-cycle is constrained to accommodate the late
consolidation (or even gathering) of some requirements. Also, a requirement change process can be
defined and applied to cope with changing customer constraints or mission/spacecraft requirements.
For these reasons, the software reviews are tailored to the selected life-cycle. In the
tailoring, the major criterion is to aim at developing the software at the same speed as the
corresponding subsystem. Starting the software development too early leads to major redesign when
the system becomes mature. Waiting for all subsystems to be mature before starting the software can
cause the software to arrive too late.
efficiency of the supplier. In any case, reviews aiming at technically checking the correctness and
completeness of the outputs and meetings aiming at monitoring the project progress are clearly
distinguished.
The purpose of the software reviews is for the customer to accept the output folders specified in
ECSS-E-ST-40C and ECSS-Q-ST-80C (e.g. TS) and to get the software in a given status (e.g. validated,
qualified, and accepted). The ECSS-E-ST-40C (sub clause 5.3.6) provides a logical relationship between
the software reviews and the spacecraft reviews.
At supplier level, it is recognized that rigid process sequencing is usually unsuitable for real industrial
projects taking into account e.g. product-line or reuse policy, and different approaches are often more
suitable at least for:
a. Reducing the time to market by executing processes in parallel as much as possible (i.e.
concurrent development);
b. Reducing risk by executing processes repeatedly and progressively (i.e. iterative and
incremental development).
The supplier can also consider specific project technical reviews to help properly schedule the
project progress.
d. Clause 5.4.3.3 mentions the computational model (discussed in the following Real-time section
of this document)
e. Clause 5.5.2.3 introduces the detailed design model with its static, dynamic and behavioural
views (that must be verified in 5.8.3.4.a.9),
f. Clause 5.8.3.13 addresses the verification of the behaviour models.
These concepts are mapped on the appropriate modelling technology in the Software Development
Plan. Each modelling technology is better suited to (i) a given level of abstraction, (ii) a particular
viewpoint on the software and (iii) a particular verification objective. The development plan takes care
to associate the right modelling technology with the right development step. In addition, the
development plan analyses the relationship between the various models, their interaction in the life
cycle, and the various roles associated with their development, verification and validation.
As a non-exhaustive example:
The software logical model has the level of abstraction of software requirements. It uses technologies
such as use cases, class diagrams, and state machines. The architectural design and the static view of
the detailed design model are described with components, or architectural languages. The detailed
design is also based on specific targeted modelling languages, such as data flow diagrams. The
dynamic view is expressed with the real-time features offered by the component model. The
behaviour is expressed with state machines. Interfaces are expressed with data modelling languages.
the past and to influence the future practice of systems engineering by being fully integrated into the
definition of systems engineering processes.”
Applying MBSE is expected to provide significant benefits over the document centric approach by
enhancing productivity and quality, reducing risk, and providing improved communications among
the system development team.
6.4.2 Introduction
The testing methods and techniques are presented by first defining different test objectives, each
emphasizing specific aspects of the software product, such as robustness, performance or interfaces. Secondly, some
specific testing strategies are presented which can be used for the test approach and to address some of
the test objectives presented above. These techniques can be either white box or black box techniques
(this is specified when each of them is introduced).
The relation between the process or activities and the criticality categories is provided in two different tables: one
relating the criticality categories to the testing techniques, and a second one detailing the overall
testing activities such as planning, execution and reporting.
6.4.3 Definitions
6.4.3.1 Black box test
Test of the software without considering the internal logic.
NOTE Without the use of a CASE tool for the production of the
design, or code, manual searching for interface parameters in
all design or code modules can be time consuming.
Several levels of detail or completeness of testing are feasible. The most important levels are tests for:
a. all interface variables at their extreme values;
b. all interface variables individually at their extreme values with other interface variables at
normal values;
c. all values of the domain of each interface variable with other interface variables at normal
values;
d. all values of all variables in combination (this is only feasible for small interfaces);
These tests are particularly important if the interfaces do not contain assertions that detect incorrect
parameter values.
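Purely as an illustration of levels (a) and (b) above (not a prescription of the Standard), such test cases can be enumerated mechanically once the interface variables, their valid ranges and a "normal" value are known; the variable names and ranges in the following Python sketch are hypothetical.

import itertools

# Hypothetical interface variables with their valid ranges and a "normal" value.
INTERFACE = {
    "torque_cmd":  {"min": -0.1, "max": 0.1,    "normal": 0.0},     # N*m
    "wheel_speed": {"min": 0.0,  "max": 6000.0, "normal": 1500.0},  # rpm
}

def level_a_cases():
    """Level (a): all interface variables at their extreme values, in combination."""
    extremes = [[(name, spec["min"]), (name, spec["max"])] for name, spec in INTERFACE.items()]
    return [dict(combo) for combo in itertools.product(*extremes)]

def level_b_cases():
    """Level (b): each variable individually at an extreme, the others at normal values."""
    cases = []
    for name, spec in INTERFACE.items():
        for extreme in (spec["min"], spec["max"]):
            case = {n: s["normal"] for n, s in INTERFACE.items()}
            case[name] = extreme
            cases.append(case)
    return cases

if __name__ == "__main__":
    print(len(level_a_cases()), "level (a) cases,", len(level_b_cases()), "level (b) cases")

For larger interfaces the number of level (a) combinations grows exponentially, which is why the handbook restricts full combinations (level d) to small interfaces.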
One example of interface testing is the so-called ‘Requirements Based Hardware-Software Integration
Testing’. This testing method should concentrate on error sources associated with the software
operating within the target computer environment, and on the high-level functionality. The objective
of requirements-based hardware-software integration testing is to ensure that the software in the
target computer satisfies the high-level requirements. Typical errors revealed by this testing method
include:
a. Incorrect interrupts handling.
b. Failure to satisfy execution time requirements.
c. Incorrect software response to hardware transients or hardware failures, for example an
uncorrectable EDAC error or an FPU exception.
d. Inability of built-in test to detect failures.
e. Errors in hardware-software interfaces.
f. Incorrect behaviour of feedback loops.
g. Incorrect control of memory management hardware or other hardware devices under software
control.
h. Stack overflow.
i. Incorrect operation of mechanism(s) used to confirm the correctness and compatibility of field-
loadable software.
j. Violations of software partitioning.
Another example of interface testing is the so-called ‘Requirements-Based Software Integration
Testing’, which concentrates on exercising the code with respect to the components’ inter-relationships. The objective of
requirements-based software integration testing is to ensure that the software components interact
correctly with each other and satisfy the software requirements and software architecture. This
method can be performed by expanding the scope of requirements through successive integration of
code components with a corresponding expansion of the scope of the test cases. Typical errors
revealed by this testing method include:
a. Incorrect initialization of variables and constants.
b. Parameter passing errors.
c. Data corruption, especially global data.
systems to use some fraction (for example 50%) of the total resources so that the probability of
resource starvation is reduced.
Performance testing objectives are to ensure that the system meets its timing and memory
requirements, and to acquire measurements as soon as the target operational equipment is
available. Stress testing is the technique most commonly used for this type of test.
For dependability and safety testing, there are more specific techniques to be highlighted, being either
white box or black box. Among these techniques, the following ones can be highlighted:
a. Fault injection
b. Stress testing
c. Equivalence class and input partitioning
d. Boundary value testing
Nevertheless, fault injection is recognized as a key element in the assessment and validation of critical
software systems, as it is a practical approach to perform stress testing, i.e., raising conditions to
trigger rarely executed software operations such as error handling and recovery. Fault injection is
used to assess the FDIR mechanisms of any software system. Fault injection can be seen as a special
case of testing, with faults becoming an additional input of the test. Fault injection is quite
effective to spot the “interesting” faults, i.e. software faults that have a high probability of escaping test
case design. It is this inherent ability to trigger unexpected system behaviour that places fault injection
technology as a definitive “should have” for achieving more confidence in the accomplishment of the
dependability and safety requirements of critical software.
Fault injection should be used in a more comprehensive way where the fault models to be used
address both hardware and software faults.
Hardware faults or software faults can be supported by three fault injection techniques:
1. Use of hardware injection support is a unique way to assess the containment and error
propagation into low-level (software directly controlling hardware) critical software, and
to achieve evident confidence that dependability and safety requirements are fulfilled.
2. Use of compile-time software injection support, e.g. by use of mutation or fault seeding
techniques. These two white box techniques are intrusive, which can be a
disadvantage.
These two techniques are performed by inserting or changing the original source code. They
evaluate the software product from two different perspectives: the first one intended to
locate faults and the second one intended to evaluate the effect of single changes to the
original code. Since these techniques modify the source code, they are meant to complement black box
testing, and should be used carefully since, when inserting or changing the original code,
other non-intended faults can be introduced. To be effective, these two techniques need
good automated tools, a significant amount of human analyst time and good insight into the
software.
3. Use of run-time software injection support, e.g. by modifying the content of the memory,
register, etc.
The use of fault injection is based on skilled personnel and good automation tools, which can be a
disadvantage.
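As a hedged illustration of the third (run-time) technique, the following Python sketch wraps a reader function so that faults become an additional input of the test and the FDIR reaction can be observed; the sensor interface, the fault model (a NaN measurement) and the FDIR fallback are hypothetical.

import random

def read_gyro_rate():
    """Placeholder for the real driver call (hypothetical interface)."""
    return 0.01  # rad/s

def with_fault_injection(reader, fault_rate=0.05, seed=1234):
    """Wrap a reader so that, with a given probability, it returns a corrupted value."""
    rng = random.Random(seed)  # fixed seed: the injection campaign stays repeatable
    def injected_reader():
        value = reader()
        if rng.random() < fault_rate:
            return float("nan")  # injected fault standing in for e.g. a corrupted register
        return value
    return injected_reader

def fdir_filter(value, last_good):
    """Very simplified FDIR reaction: detect the corrupted value and fall back."""
    if value != value:            # NaN never equals itself
        return last_good, True    # (value actually used, fault detected)
    return value, False

if __name__ == "__main__":
    reader = with_fault_injection(read_gyro_rate)
    last_good, detected_count = 0.0, 0
    for _ in range(1000):
        used, detected = fdir_filter(reader(), last_good)
        detected_count += detected
        last_good = used
    print(f"{detected_count} injected faults detected and masked out of 1000 readings")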
e. for the extreme cases, all influential factors, as far as is possible, are put to the boundary
conditions at the same time.
Under these test conditions the time behaviour of the test object can be evaluated. The influence of
load changes can be observed (these tests are sometimes called volume testing). Throughput analysis
is considered part of this kind of stress test.
Requirements for the usage of resources such as CPU time, storage space and memory can be the subject
of these resource stress tests. The best way to verify these kinds of requirements is to allocate these
resources and no more, so that a failure occurs if a resource is exhausted. If this is not suitable (e.g. it is
not always possible to specify the maximum size of a particular file), alternative approaches are to:
a. use a system monitoring tool to collect statistics on resource consumption;
b. check directories for file space used. The correct dimension of internal buffers or dynamic
variables, stacks, etc. can be checked.
One of the main advantages of stress testing is that it is often the only method a) to determine that
certain kinds of systems are robust when the maximum number of users is using the system at the fastest
possible rate (e.g. transaction processing); and b) to verify the contingency actions planned for when
more than the maximum allowable number of users attempts to use the system, when the volume is greater than
the allowable amount, etc.
One of the disadvantages is that it requires large resources.
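A minimal sketch of alternative (a) above, i.e. collecting resource-consumption statistics around a stress workload; it uses the Unix-only resource module of the Python standard library and only illustrates the kind of figures such monitoring produces.

import resource
import time

def run_with_resource_report(workload, *args, **kwargs):
    """Run a stress workload and report elapsed time, CPU time and peak memory."""
    t0 = time.perf_counter()
    result = workload(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    usage = resource.getrusage(resource.RUSAGE_SELF)
    cpu_s = usage.ru_utime + usage.ru_stime
    print(f"elapsed: {elapsed:.3f} s, CPU: {cpu_s:.3f} s, "
          f"peak RSS: {usage.ru_maxrss} (kB on Linux, bytes on macOS)")
    return result

if __name__ == "__main__":
    # Hypothetical stress workload: allocate and sort a large list.
    run_with_resource_report(lambda n: sorted(range(n, 0, -1)), 2_000_000)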
Test cases from equivalence partitioning are complemented with test cases from Boundary Value
Analysis and applied in conjunction with other test practices to ensure the specified test coverage.
Equivalence partitioning used to test dependability and safety of software focuses the test cases on the
limit values or ‘out of bound’ values of each class identified.
The advantage of Test Cases from Equivalence Partitioning is to verify that the program behaves
correctly for any class of input by selecting a representative value, thus reducing the total number of test
cases that are developed.
No significant disadvantages are identified in the application of Test Cases from Equivalence
Partitioning.
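As an illustrative sketch only, equivalence partitioning and boundary value analysis can be combined for a single input; the telecommand parameter and its valid range below are hypothetical.

# Hypothetical input: a telecommand parameter specified as an integer in [0, 255].
VALID_MIN, VALID_MAX = 0, 255

def equivalence_class_values():
    """One representative per class: below range (invalid), in range (valid), above range (invalid)."""
    return {"invalid_low": VALID_MIN - 10, "valid": 100, "invalid_high": VALID_MAX + 10}

def boundary_values():
    """Boundary value analysis: values at and immediately around the class limits."""
    return [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1, VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

if __name__ == "__main__":
    print("equivalence class representatives:", equivalence_class_values())
    print("boundary values:", boundary_values())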
and therefore problems resulting from unexpected relationships between input types cannot be
identified.
Table 6-2 summarizes the suitability of each of the above testing techniques to cover the different
testing objectives described at the beginning of this annex.
Table 6-2: Relation between the testing objectives and the testing strategies

Testing technique         | Interface | Robustness | Performance
Fault injection           |           |     X      |
Stress testing            |           |            |     X
Equivalence partitioning  |           |     X      |
Boundary value testing    |           |     X      |
A marked cell in the table means that the testing technique is best suited to achieve the respective
testing objective. This does not mean that other techniques cannot contribute to achieving the other
objectives.
Design of load testing for real-time systems requires additional analysis compared to normal load testing.
Realistic typical functional worst-case scenarios should be elaborated and tested in order to exercise the
system under realistic worst-case constraints. However, it may be simply impossible or impractical to
enforce the worst-case condition at hardware and software level for a measurement-based observation
to take place. Not only does the amount of inputs and quantity of data need to be represented in a
test, but the timing of this data should also be considered. The individual loads of each task/process
also depend on some parameters (mode, configuration). Failure conditions of input systems and
devices and the timing tolerance may lead to a race condition or timing issue in the software. This
needs to be explored during the performance testing campaign by designing tests that provide
different input timing constraints.
Real-time testing cannot be exhaustive. It needs to be complemented with schedulability analysis
based on WCET. Real-time software testing is necessary to increase the confidence and provide real
individual load figures (to feed the real-time analysis). It should provide evidence of comfortable
margins w.r.t. the schedulability analysis, which tends to be overly theoretical/pessimistic.
6.5 Autocode
6.5.1 Relation to the Standard
Automatic code generation is addressed in the ECSS-E-ST-40C Standard clause 5.3.2.4 about the
software development plan (and therefore also in Annex O the software development plan DRD), as
well as in 5.4.2.2 for the Technical Specification requirements related to autocode.
However, the use of autocode techniques impacts all the development life cycle and is expected to be
reflected in many places during the project development.
6.5.2 Introduction
Automatic code generation is primarily assumed (due to the current experience in projects) in the
scope of a functional system design that has defined a set of subsystems (like AOCS, thermal control
or power management) that can be subject to modelling and autocode. The subsystem team is in
charge of the functional aspect of the subsystem, using a functional model. This team produces a high
level model representing the expected behaviour of the subsystem level component. Then it is refined
together with the software team to become autocodable.
The functional model contributes to the subsystem RB, complemented with a textual RB covering the
parts which are not addressed by the functional model (e.g. data handling, FDIR, non-functional
aspects). The functional model is actually part of the software, as it carries information on technical
specification, architecture, and design. As such, it follows some software related rules such as
modelling standards, structural coverage measure, code generation process, etc.
The functional test suite used to validate the model contributes to software validation.
The software team is in charge of the actual autocode generation process, of the autocode integration
with the rest of the software, and of the software validation.
Another use case for autocode is when the software team decides to use an autocodable modelling
language instead of (or in addition to) a classical coding language like C or Ada. This is fully internal
to the software development process.
6.5.3.2 Roles
Elements of the software AOCS Requirements Baseline are produced, including non-functional
requirements, requirements which are not part of the model, and potentially the model.
Step 2: At the beginning of the C/D phase the AOCS and software teams can work separately. The
software team works first on the global needs analysis and the definition of the software architecture.
The AOCS team refines the preliminary design and enriches the AOCS models according to the
modelling standards defined for auto-coding. It is supported by the software team to help the
definition of the software architecture and interface of the AOCS models with the rest of the software.
The AOCS team performs the functional validation of the AOCS model. The AOCS “Critical Design
Review” is held once all increments are designed and after performance tests.
In this step, there is a co-engineering phase between the AOCS and software teams to evaluate the
auto-code-ability of the AOCS model and to complete it with respect to non AOCS requirements such
as operability, FDIR, and software constraints.
Step 3: The validation of the AOCS model can lead to modifications that may require restarting the
previous step.
Step 4: The software team develops the software that cannot be generated from the models, in a
standalone way. The classical unit test, integration tests, validation tests and code coverage are
performed.
Step 5a: This is the genuine autocode approach that maximizes the gain of autocode, but does not
ensure full code coverage. It is acceptable for software that is not of criticality category A. It is also
acceptable for software of criticality category A if the code generator is qualified. Once the AOCS
needs are validated using simulations at model level (e.g. Simulink), the unit test concept is applied to
the model. Model units (elements of the model) are tested. The model structural coverage is measured
(tools exist for the most common modelling environments, e.g. Simulink). Then the code is generated
automatically and can now proceed to integration.
Step 5b: This (alternative) approach ensures full code coverage. It is mandatory for software of
criticality category A and B, except if the code generator is qualified at the same level as the generated
code (see ECSS-Q-ST-80C 6.2.8.3). In this approach, the code is automatically generated from the
functionally validated model. It is then unit tested in a classical way, and its structural coverage starts
to be measured.
Step 6: The complete software, whether manually or automatically generated, is integrated and
validated. If automatable, the model tests (i.e. unit tests, validation) may be rerun on the modelling
environment with the generated code in order to check the code generator. This is not necessary if the
code generator is qualified.
After code generation of the software, the subsystem team continues to use the model to execute the
performance tests of the sub-system.
The software maintenance is performed at model level in order to ensure the consistency of the
modification.
This is illustrated in Figure 6-1.
[Figure 6-1: AOCS and software development flow – phase B AOCS model and AOCS SW RB (PDR, SRR); phase C AOCS model refinement against the modelling standards together with the SW specification, interfaces and architecture; AOCS model validation (PDR, DDR, CDR); manual SW with unit test, integration test and partial validation test; path 5a: model unit test and model structural coverage, then autocode and code structural coverage; path 5b: autocode, then code unit test and code structural coverage; step 6: integration test and validation test.]
The model also needs to be built in such a way as to satisfy the overall model/software interfaces and
to ensure that the software code is properly generated (i.e. partitioned in accordance with the
software architecture). The action performed by the software when a numerical error occurs or a robustness
check is triggered is agreed between the subsystem modelling and SW teams.
7 Real-time software
7.2.1 Introduction
The software product margins apply in particular to:
a. load and real-time figures including:
1. CPU load,
2. deadline for a processing with constraints (e.g. period, rendez-vous, reaction constraint,
timing accuracy constraint);
b. memory capacity;
c. numerical accuracy;
d. external (including hardware) interface/event synchronisation and timing.
Margin definitions are agreed between the customer and the supplier. Guidance on margin definition
is given in this section. Margin definition and management are likely to attract more attention for
embedded or real-time software, as it is more difficult to extend the hardware resources, or to access
the system once launched. Ground hardware, in contrast, can be scaled up vertically and horizontally.
However, ground software may face other concerns at system level, e.g. in terms of ergonomics (for
instance response time), network load, the definition of the environment such as the version of the
hardware and software, which are not addressed here.
In order to allow the Supplier to manage the margins in an appropriate way (e.g. design flexibility),
they should be defined by the Customer in the context of their need, e.g. for already known growth
capability, or for risk reduction.
In particular, the margin philosophy should be specified in such a way that it does not over-specify
the design, unless the customer has explicit requirements regarding the software architecture,
scheduling and interfacing.
Criteria for the selection of all the margins can be:
a. The processor module capability. The processing budget management and reporting presume
that basic hardware choices, such as processor type, clock frequency, wait-states have been
made and clearly described.
b. Equipment, communication and performance aspects, e.g. buses, protocols, acceptable errors,
capacity bus usage by other sources.
c. System design from which timing constraints are derived (e.g. constraints on state transitions,
especially when recovery from a faulty state is concerned)
d. Expected future changes in the requirements baseline (e.g. due to iterations, requirements on
reprogramming of the system during operational use, required budget for temporary copies of
software images)
e. Expected phasing/versioning of the software development and its impact on produced code
and CPU load
f. Accuracy aspects, such as conversion to/from analogue signals, and accuracy of timing signals
g. Allocation of responsibilities regarding the characteristics of Service Access Point interfaces, if
not handled by one contractual party.
Reuse of existing software and software maturity (level of uncertainty on the figures) are also criteria,
in particular during the early phases where the figures are estimated.
Alternatively, the notion of worst case can be refined functionally, by selecting (realistic)
operational scenarios (nominal or degraded, per mission phase or mode) that maximize the
execution time of each function (operational worst case).
The former (theoretical worst case) is more pessimistic than the latter (operational worst case), which is
based on operational scenarios.
The theoretical worst case is established on the basis of a discussion between customer (or system
level) and supplier (or software level). The context of the occurrence of this worst case can be more or
less critical, more or less realistic, more or less likely to happen. For example, CPU utilisation may go
over 100 % in case of simultaneous occurrence of failures, or maximum frequency of all telemetry,
which are generally not the system level assumptions.
In this context, the use of operational scenarios to derive an operational worst case and realistic
operational sequences is a mitigation of the theoretical worst case (see Figure 7-1), as the difference
between the two times could be substantial. It is nevertheless interesting to know the theoretical worst
case, because it allows understanding the conditions when it can happen and to relate to the
operational context.
NOTE ECSS-E-ST-40C clause 5.2.3.2 requires the customer to define
representative scenarios.
[Figure 7-1: operational scenarios used to derive the operational worst case from the theoretical worst case]
7.2.2.3 Margins
Margins can therefore be proposed:
a. per task in isolation, by provision of a margin of execution time (margin_WCET)
b. per task in context, by provision of a growth capacity before it reaches its deadline
(margin_slack),
c. globally, by provision of a growth capacity before the CPU saturation (margin_utilisation).
The margins can be based on either theoretical worst case or operational worst case.
1) Margins can be defined on the deadline of a particular task taken in isolation using its Worst Case
Execution Time:
margin_WCET(i) = [deadline(i) - WCET(i) ] / deadline(i)
This can be used to force a margin on the estimation of the WCET if the customer needs to have
confidence in the estimated execution time, even if WCETs are already pessimistic in nature. In
addition, this does not account for the context of execution of the task (i.e. the interference by higher-
priority tasks). The main use of this margin is to understand the time window during which the
task of interest completes its execution before its relative deadline occurs.
Typical values of this margin should be at most 10% at PDR, where the WCET is estimated, and
then they should not be required later on.
The margin_WCET is generally based on the (theoretical) worst case execution time, because the focus
is typically put on the execution time of the particular algorithm implemented in the task that is
assessed against the envisaged deadline.
2) To take into account the context of execution of the task, a margin on the slack time can be defined.
The slack time is based on the response time of the task, i.e. the time after which it actually completes
its execution in the worst-case scenario, taking into account all the potential pre-emptions by higher-
priority tasks, the blocking time due to the access to protected objects, the execution time of interrupt
handlers, and interference by mechanisms of the RTOS / real-time kernel.
response_time(i) = WCET(i) + interference(i) + blocking_time(i)
slack_time(i) = deadline(i) - response_time(i)
margin_slack(i) = slack_time(i) / deadline(i)
The typical value of this margin very much depends on the nature of the task. The customer may wish to
ensure a computation margin for a particular computation-intensive control loop with a short deadline,
or instead to minimize the margin for a background task. It is likely to be difficult to define a global margin such
as 10% for all tasks. This could be a target at PDR, but it should be refined later on.
The margin_slack uses the response time computed by the schedulability analysis, therefore based on
the theoretical worst case (execution time and events). If the result is found pessimistic, the reasons are
analysed, and the operational scenarios are used to assess the extent to which the operational
assumptions are valid.
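The relations above can be turned into a small computation. The following Python sketch applies the classical fixed-priority response-time recurrence (ignoring RTOS overheads, jitter and cache effects) and derives margin_WCET and margin_slack per task; the task set and its figures are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float       # worst-case execution time (consistent unit, e.g. ms)
    period: float     # period, or minimum inter-arrival time for sporadic tasks
    deadline: float
    blocking: float = 0.0   # worst-case blocking time from shared resources

def response_time(task, higher_priority):
    """Classical recurrence: response time = WCET + blocking + interference from higher-priority tasks."""
    r = task.wcet + task.blocking
    while True:
        interference = sum(math.ceil(r / hp.period) * hp.wcet for hp in higher_priority)
        r_next = task.wcet + task.blocking + interference
        if r_next == r or r_next > task.deadline:   # converged, or already missing the deadline
            return r_next
        r = r_next

def margins(tasks_by_decreasing_priority):
    rows = []
    for i, t in enumerate(tasks_by_decreasing_priority):
        rt = response_time(t, tasks_by_decreasing_priority[:i])
        rows.append((t.name,
                     (t.deadline - t.wcet) / t.deadline,   # margin_WCET
                     (t.deadline - rt) / t.deadline,       # margin_slack
                     rt <= t.deadline))                    # schedulable for this task
    return rows

if __name__ == "__main__":
    task_set = [Task("AOCS", 3.0, 10.0, 10.0, blocking=0.5),   # hypothetical figures
                Task("TM/TC", 5.0, 25.0, 25.0),
                Task("Housekeeping", 10.0, 100.0, 100.0)]
    for name, m_wcet, m_slack, ok in margins(task_set):
        print(f"{name}: margin_WCET={m_wcet:.0%}, margin_slack={m_slack:.0%}, schedulable={ok}")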
3) The theoretical CPU utilisation given by the schedulability analysis (in the absence of memory
cache) is:
U = Σi [ WCET(i) / T(i) ]
where:
WCET(i) is the worst case execution time of the task i (estimated in the early phase, measured at the
end of the project)
T(i) is the period of the task i if this task is periodic, or the Minimum Inter-Arrival Time (MIAT) of the
event triggering the task if it is sporadic.
The CPU utilisation of a schedulable system is less than or equal to 1 (this is a necessary condition, but
not sufficient to declare the system as schedulable).
The margin is:
margin_utilisation = 1 - U, expressed as a percentage
As said in ECSS-E-ST-40C, typical values of this margin are 50% [PDR], 35% [DDR/TRR] and 25%
[CDR and after].
The margin_utilisation uses execution time. Therefore, it is based on the theoretical worst case
execution time. If the result is found pessimistic, the reasons are analysed, and the operational
scenarios are used to assess the extent to which the operational assumptions are valid.
In addition, it is convenient to assess the margin_utilisation by measuring the execution time of the
background task (the task in which the software is idle). However, the time during which it is
measured must be longer than the largest period or Minimum Inter-Arrival Time (MIAT) of the system.
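A minimal sketch of the margin_utilisation check against the typical milestone values quoted above; the (WCET, period) figures are hypothetical.

MILESTONE_MARGIN = {"PDR": 0.50, "DDR/TRR": 0.35, "CDR": 0.25}   # typical values quoted above

def utilisation_margin(wcet_and_period):
    """wcet_and_period: list of (WCET(i), T(i)) pairs in the same time unit. Returns (U, margin)."""
    u = sum(c / t for c, t in wcet_and_period)
    return u, 1.0 - u

if __name__ == "__main__":
    task_figures = [(3.0, 10.0), (5.0, 25.0), (10.0, 100.0)]   # hypothetical (WCET, period) pairs in ms
    u, margin = utilisation_margin(task_figures)
    for milestone, required in MILESTONE_MARGIN.items():
        status = "OK" if margin >= required else "NOT MET"
        print(f"{milestone}: U={u:.0%}, margin={margin:.0%}, required >= {required:.0%} -> {status}")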
For analogue-to-digital and digital-to-analogue interfaces, the choice of the converter must fit the
required signal resolution. If the software does not use the full resolution provided by the hardware,
then the software supplier should analyse if the software resolution fits the requirements for the signal
processing.
The verification is intended:
a. at PDR, to verify that the (numerical) accuracy requirements are clearly specified in the RB and TS
b. at CDR, to verify that the numerical accuracy requirement is reached according to the data type
used and propagation effects. This verification is based on analytic analysis or a measurement
performed on a representative numerical environment.
their deadlines, are often classified by the consequence of missing a deadline: a system is considered
hard real-time when missing a deadline incurs a system failure. It is considered a soft real-time system
when a late response degrades the system's quality of service while still delivering an acceptable
outcome.
Real-time software is therefore software characterized by timing properties (e.g. execution
period, duration, periodic or sporadic event occurrence) and that satisfies timing requirements
(e.g. deadlines). The applicable timing properties should be captured and properly traced through the
design and code. More specifically, these properties are treated in the dynamic design that presents
the information required to understand the flow of information, the flow of processing and related
timing issues in the software.
The time requirements are verified through static analysis and by testing. Testing the real-time
behaviour of software in fact requires specialized techniques, and since exhaustive testing is
impossible in real-scale systems, there is usually no guarantee that the time requirements are
eventually verified by testing alone. Static analysis, which seeks, computes and considers worst-case
bounds, is instead capable of providing absolute guarantees if the conditions for the analysis are met.
In order for static analysis to be usable, a computational (and therefore analysable) model of the real-
time software is identified at design time so that the timing requirements can be verified analytically.
Timing requirements can be analytically verified, i.e. proven, for deterministic or predictable execution models,
through schedulability analysis. The elements of the computational model should in this case
be captured by the software dynamic design.
Real-time software is usually intended for use in embedded/flight software, but it can also apply to
other software, such as simulators (e.g. in case the software simulator interfaces with real hardware, or
when the simulator time needs to be real-time accurate) and, generally speaking, to ground software
(e.g. minimum load of data to be treated per second, maximum handling time for a telecommand or
telemetry, …). In the latter cases, the real-time aspects to consider are the same as for embedded real-
time software, and the process is similar. However, depending on the software environment and the
CPU resources, meeting the timing requirements potentially requires less engineering effort than
for flight software.
Absolute deadline: Absolute time at which the deadline of a task's job occurs, which is computed by adding the task's relative deadline to the absolute release time.
Aperiodic task: A task whose execution is triggered by an activation event which can repeat at irregular intervals. Aperiodic tasks are either idle waiting for the next activation event, or they are executable after being triggered by an event occurrence.
Asynchronous: Independent of execution. For example, asynchronous events are events that occur independently of the execution of the application.
Asynchronous I/O (communication, data exchange): An I/O (communication, data exchange) operation that does not cause the task requesting the I/O (communication, data exchange) to be blocked waiting for the end of the call. This implies that the task and the I/O (communication, data exchange) operation may be running concurrently.
Blocking: State of a task caused by the mechanisms that enforce mutual exclusion of resource use by multiple tasks.
Deadlock: Situation whereby two or more tasks are prevented from proceeding, each waiting for a needed shared resource to be released.
Determinism: Property of a computational model such that the response time of all tasks is statically known and always the same.
Elapsed time: The time measured on a clock between two events (thus including the execution time spent by other tasks in case pre-emption occurs).
Execution time: A task's execution time is the time its execution takes, i.e. the time spent by the CPU executing the task.
Execution time monitoring: ET monitoring; operating system feature allowing the monitoring of the task execution time.
Hard real-time: Denoting an entity (e.g. a task, a system) where missing a deadline is a system failure.
Interference: Interference of a task Ti with higher priority task Tj occurs when Ti is pre-empted by Tj.
Jitter: Variation in time of an event, e.g. task release jitter, task response time jitter, sampling jitter (variation in the input instant) and input-output jitter (variation in the delay from input to output).
Latency: Delay between the activation of a task and its start of execution.
Minimum inter-arrival time: The worst-case minimum time between two activation events of a sporadic task.
Missed deadline: Situation when an activity is not completed at the time when it should have been finished. A missed deadline does not necessarily provoke a task overrun.
Period: If not used in a different context, period refers to the task's period, i.e. the time between two activations of a periodic task. Sporadic tasks are often analysed the same way as periodic ones and the term “period” is used in the sense of the minimum inter-arrival time.
Periodic task: A task whose execution is repeated based on a fixed period of time. Periodic tasks are either idle waiting for the next period or they are executable after being triggered at pre-defined regular intervals by a timer.
Predictability: Property of a computational model such that the response time of all tasks is guaranteed to always be between a statically known best case and a statically known worst case.
Priority: Precedence; for tasks, the attribute allowing the selection of the task eligible for execution by the scheduler.
Process: Run-time entity recognised by an operating system; an address space with one or more threads executing within that address space, and the required system resources for those threads.
Race condition: Condition where the update to shared resources depends on the interleaving of the tasks accessing them. This can be avoided by non-pre-emptible mutually-exclusive access to shared resources.
Relative deadline: The deadline for the completion of a task's job relative to the release instant of the task. If not explicitly stated, all deadlines are to be intended as relative deadlines.
Release: Task release, i.e. the moment when a task's activation/job starts after an arrival of its activation event. After its release the task is executable.
Response time: The worst-case elapsed time between the release of a task (in fact, of any of its jobs) and its subsequent completion. Response time is used in response time analysis in the way that if all tasks in a system have their response times lower than or equal to their deadlines, then the system is said to be schedulable. In case system overheads are ignored, the response time of a task is equal to the sum of its worst-case execution time, interference and blocking times.
Soft real-time: Denoting an entity (e.g. a task, a system) where missing a deadline degrades the result, thereby degrading the system's quality of service.
Sporadic task: An aperiodic task for which a minimum inter-arrival time between two activation events is defined and guaranteed.
Task overrun: Situation when a task's execution time exceeds the task period, i.e. when the task is activated while its previous activation has not yet finished.
Worst-case execution time: The longest execution time of a sequential code under worst possible circumstances. Usually referred to as WCET.
A periodic clock (that signals the boundary of minor cycles) is usually used to check that the periodic
functions are accomplished within their minor cycles (this mechanism is also known as watchdog).
Sporadic tasks and interrupt handlers could be accommodated by dedicating particular minor cycles
to handling asynchronous events.
NOTE Assuming that the interrupt/asynchronous event frequency is
lower than the frequency of the minor cycle dedicated to their
handling
Interrupt handlers only raise a flag which is detected in the dedicated minor cycle, where the bulk
of the asynchronous event handling is performed.
A special case of using a cyclic executive combined with the use of multiple RTOS processes and pre-
emption by asynchronous tasks is described in Section [Pre-emptive System without Cyclic Pre-
emptions].
The cyclic executive approach could be implemented in virtually any programming language.
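A deliberately simplified sketch of such a cyclic executive, here in Python purely for illustration: periodic jobs are placed in a static minor-cycle schedule, the interrupt handler only raises a flag, and a dedicated minor cycle performs the bulk of the asynchronous handling; the cycle durations and job names are hypothetical.

import time

MINOR_CYCLE_S = 0.125            # hypothetical: 8 minor cycles of 125 ms per 1 s major cycle
MINOR_CYCLES_PER_MAJOR = 8
event_flag = False               # set by the interrupt handler, polled in a dedicated minor cycle

def interrupt_handler():
    """Only raise a flag; the bulk of the work is deferred to the dedicated minor cycle."""
    global event_flag
    event_flag = True

def serve_asynchronous_event():
    global event_flag
    if event_flag:
        event_flag = False
        print("asynchronous event served")   # bulk of the asynchronous handling goes here

def attitude_control():
    pass                                      # hypothetical periodic job running every minor cycle

SCHEDULE = {slot: [attitude_control] for slot in range(MINOR_CYCLES_PER_MAJOR)}
SCHEDULE[3].append(serve_asynchronous_event)  # minor cycle dedicated to asynchronous events

def cyclic_executive(major_cycles=1):
    next_tick = time.monotonic()
    for _ in range(major_cycles):
        for slot in range(MINOR_CYCLES_PER_MAJOR):
            for job in SCHEDULE[slot]:
                job()   # on target, a watchdog checks completion within the minor cycle
            next_tick += MINOR_CYCLE_S
            time.sleep(max(0.0, next_tick - time.monotonic()))

if __name__ == "__main__":
    interrupt_handler()      # simulate an asynchronous event occurring before the major cycle starts
    cyclic_executive(major_cycles=1)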
c. Suspended waiting for a software- or hardware-produced release event (for sporadic tasks);
d. Blocked on a shared resource protected by a mutual exclusion mechanism.
The scheduler based on fixed priorities guarantees that, at any point in time, amongst all runnable
tasks, the one with the highest priority is executing. Pre-emptive systems guarantee that the moment a
new task with a higher priority becomes runnable, the task currently executing is pre-empted and the
higher priority task is run. Priorities are fixed, i.e. known at design time after an initial assessment,
statically assigned to tasks and never changed at runtime. They are only temporarily changed, for
short periods of time, when the RTOS applies one of the synchronisation protocols to avoid
priority inversion.
NOTE The priority assignment could change at later stages of software
development, during coding and testing, when actual
measurements are available and replace execution time
estimates.
A pre-emptive system based on fixed priority scheduling could be implemented in virtually any
programming language, but some concurrent programming languages already have FPS built in.
activities can be opportunistically served by RCM servers, for example following the sporadic server
model. Tasks have a single suspension point per activation – either waiting for a time clock event or a
sporadic event. Synchronous communication between tasks (i.e. task rendezvous) is disallowed;
instead tasks use data-oriented asynchronous communication mediated by shared resources equipped
with the Priority Ceiling Protocol (see Section [Shared Resources]). The RCM allows task declaration
and creation only at the system start-up time – dynamic declaration of tasks is not permitted.
In general, an RCM-based system can be implemented in virtually any programming language.
However, only Ada language compilers are equipped with static analysis tools for the pragma
Ravenscar construct, which makes it possible to check that the code is compliant with the RCM rules.
mode of hardware interaction is not directly amenable to scheduling analysis, which causes
undesirable inaccuracy in the predicted results.
7.5.2.1.1 Overview
This section summarises all necessary inputs to perform schedulability analysis. Minor deviations
from this list are indeed possible; however in general the constituents identified below form a
necessary pre-cursor to any reasonable schedulability analysis.
Some of the information listed in this section should already be a part of the Technical Budget of the
SVR, often in a separate document called Software Timing and Sizing Budget document (STSB) or
Software Budget Report (SBR).
The following information should be provided for the purpose of performing the schedulability
analysis (an illustrative data-structure sketch capturing these inputs is given after the list):
a. Selection of the computational model: usually a reference to the Software SDD where the
computational model is described as a part of the architectural design.
b. Selection of the associated programming model: A reference to the Software SDD or coding
standard definition where the use of programming language constructs and library calls
compliant to the selected computational model is specified (e.g. restricted system calls or
options of the OS that can be used for tasks, semaphores…).
c. Major cycle, or hyper-period of the system, representing the repetition period of the whole
software system. It is the least common multiple of all task periods.
NOTE In flight software, this is usually linked to the driving frequency
of the AOCS control loop. It could also be linked with the main
communication bus.
d. Tasks: the list of tasks in the software, as per design/implementation, together with their time
properties (cyclic/aperiodic/asynchronous, frequency).
e. Priorities: Based on the RTOS used, a priority range is given and the method of assigning these
priorities to tasks should be described (e.g. deadline monotonic assignment gives a higher priority
to tasks with a shorter deadline). It is very important to clearly state what the priority
numbering is, i.e. whether the higher the number the higher the priority (e.g. in 1 – 255 priority
range the priority of 255 is the highest), or vice versa (e.g. priority 1 is the highest). Priorities can
be assigned by scheduling analysis – the preferred option when possible.
f. Deadlines: Specification of which kind of deadlines the tasks in the system have. It is assumed
that most deadlines are hard real-time but it should be specified which deadlines are hard and
which deadlines are not. In addition, it should be specified what the relation of task’s deadlines
is with respect to their periods. Typically the task's deadline is less than or equal to its period, but in
some cases it can always be equal to the period or even greater than the period.
g. Task worst case execution time measurement.
h. Measurement methods: All measurement methods used. More specifically:
i. The method of measuring worst-case execution times (see Section 7.5.2.3.2 for more details);
j. Method of measuring system overheads (see Section 7.5.2.1.2 for more details)
k. Clear indication of whether CPU execution time or elapsed time is measured;
l. Detailed description of measurement setup: software and hardware tools used, whether the
measurements are performed on simulator or real target hardware;
m. Precision of the measurements.
n. Status of measurements: estimates vs. measured numbers. A clear way of distinguishing
estimates from real measurements and evolution by milestones/time.
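An illustrative (and purely hypothetical) way to capture the inputs listed above is a single record per task, including the estimated/measured status and the scenario under which each figure was obtained; the major cycle then follows as the least common multiple of the periodic tasks' periods.

import math
from dataclasses import dataclass
from enum import Enum

class TaskKind(Enum):
    PERIODIC = "periodic"
    SPORADIC = "sporadic"

@dataclass
class TaskRecord:
    name: str
    kind: TaskKind
    period_or_miat_ms: float     # period, or minimum inter-arrival time for sporadic tasks
    deadline_ms: float
    priority: int                # convention stated explicitly: the higher the number, the higher the priority
    wcet_ms: float
    wcet_is_measured: bool       # False while the figure is still an estimate (e.g. at PDR)
    scenario: str = ""           # worst-case scenario under which the figure was obtained

def hyper_period_ms(tasks):
    """Major cycle: least common multiple of the periodic tasks' periods (assumed integral in ms)."""
    periods = [int(t.period_or_miat_ms) for t in tasks if t.kind is TaskKind.PERIODIC]
    return math.lcm(*periods) if periods else 0

if __name__ == "__main__":
    tasks = [TaskRecord("AOCS", TaskKind.PERIODIC, 125, 125, 10, 30.0, False, "safe mode"),
             TaskRecord("TC handler", TaskKind.SPORADIC, 250, 250, 8, 12.0, False, "max TC rate")]
    print("hyper-period:", hyper_period_ms(tasks), "ms")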
At code level:
a. mutual exclusion using protected objects (lock), including locking protocol (such as FIFO,
priority inheritance or immediate priority ceiling protocols)
b. interrupt lock (disable task pre-emption and interrupts)
c. task lock (disable task pre-emption)
The schedulability analysis should also indicate whether it is allowed for a task to access two shared
resources simultaneously (e.g. nested critical sections) – which is one of the preconditions potentially
leading to deadlock.
The interrupt handler execution times should also be provided.
Then, a table is provided with each resource represented by a column and tasks in rows ordered by
their priority, so that cell(i,j) indicates how much time task i uses resource j. Also, the resource
access types are indicated, as well as the possible locking protocol, so that it is possible to deduce the worst-
case blocking time for all tasks.
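The table described above can be exploited directly. The following sketch derives, for each task, the worst-case blocking bound under the (immediate) priority ceiling protocol, i.e. at most one critical section of a lower-priority task whose resource ceiling is at least the task's priority; task names, priorities and critical-section durations are hypothetical.

def blocking_bounds(priorities, usage):
    """priorities: {task: priority}, higher number = higher priority (assumed convention).
    usage: {(task, resource): longest critical section of that task on that resource (ms)}."""
    ceilings = {}
    for (task, res), _cs in usage.items():
        ceilings[res] = max(ceilings.get(res, 0), priorities[task])
    bounds = {}
    for task, prio in priorities.items():
        candidates = [cs for (other, res), cs in usage.items()
                      if priorities[other] < prio and ceilings[res] >= prio]
        bounds[task] = max(candidates, default=0.0)
    return bounds

if __name__ == "__main__":
    priorities = {"AOCS": 10, "TM/TC": 8, "Housekeeping": 2}
    usage = {("AOCS", "datapool"): 0.2, ("TM/TC", "datapool"): 0.4, ("Housekeeping", "datapool"): 1.5}
    print(blocking_bounds(priorities, usage))   # e.g. AOCS is blocked at most 1.5 ms by Housekeeping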
b. A subsequent iteration could enrich the model with the interrupts and asynchronous tasks.
c. The model issued from a third iteration could consider the shared resources and critical
sections, with their worst-case blocking time and their relationships with the tasks.
7.5.2.3.1 Introduction
This section provides important guidelines to ensure that the input data (in particular all time
measurements) are adequate and precise for the purpose of schedulability analysis.
2. Worst-case condition for mutual exclusion locks (based on the chosen locking protocol, e.g.
priority ceiling or priority inheritance).
NOTE Priority ceiling is superior to basic priority inheritance in terms
of reducing worst-case blocking time as well as in avoiding
deadlock
d. Selection of worst-case scenarios. The specification of this information should be provided at
least to discriminate the different execution scenarios in the various operational modes of the
software. The description of the scenario should be both convenient (easy to use) and accurate
(analysable).
Finally, the provisions for scenario-based analysis should also include means to specify the
telecommands received from ground (in particular, which telecommands and how many), as software
execution of various components is certainly triggered or conditioned by them.
pathological cache behaviour, would allow the final timing analysis to confirm the assumptions made
in the early stages of development.
The measurements of execution time taking into account cache should preferably be performed on the
target hardware, as emulators may not be fully representative of real caches.
Figure 7-2: An example of a complete task table with all timing figures [table not reproduced; columns include type, frequency, WCET, maximum blocking, maximum interference and response time]
The task (static) priority, type, frequency, period and deadline are static information.
NOTE As assigned by the scheduling algorithm assumed by the
computational model of choice and not arbitrarily chosen by
the user
The task worst-case execution time is either estimated or measured.
The maximum blocking time, maximum interference time, CPU utilisation, response time and
deadline margin are values calculated by the schedulability analysis.
The task table should respect the following guidelines:
a. Tasks, represented by rows, are ordered in priority order (from the highest-priority to the
lowest priority);
b. Measurement units should be indicated.
c. Highlight (e.g. by choosing a different background colour) which of the numbers are
measurements and which of them are estimates.
d. The worst-case scenario description is used for the description of the conditions under which the
estimated/measured numbers were obtained.
e. It is possible to add a column to account for interrupt handler routines (e.g. by having a margin
per activation or per time period). Alternatively, the interrupt handler routines can be modelled
as tasks with top priority.
NOTE In this case, it is ensured that interrupt context switch overhead
is used rather than task context switch overhead when
performing the schedulability analysis
f. Highlight response times with a small margin compared to their deadline. The software is
schedulable when all tasks have a response time not greater than their respective deadline.
2. Does the SSS include requirements on the total computer resource utilization per
milestone, in particular processor capacity and memory capacity available for all
the software items? This usually means CPU, EEPROM, PROM and RAM
utilization.
3. Does the SSS include requirements on a specific real-time operating system?
4. Does the SSS include requirements on other software items to be used or
incorporated into the system, having impact on timing or concurrency?
SRS resource requirements Verified
1. Are there any software design requirements for the computational model?
2. Are there any software design requirements for the programming model?
1. Has the computational model been selected as a part of the architectural design? (See note)
2. Has the computational model been described in sufficient detail in the architectural design?
1. Has the programming model been defined?
2. Does the programming model enforce the compliance with the computational
model?
3. Has a method to check compliance with the programming model been specified?
Detailed design of real-time software Verified
1. Have all timing and synchronization mechanisms been documented and
justified?
2. Have all the design mutual exclusion mechanisms to manage access to the
shared resources been documented and justified?
3. Has the use of dynamic allocation of resources been documented and justified?
4. Has protection been ensured against problems that can be induced by the use of
dynamic allocation of resources, e.g. memory leaks?
5. Has it been verified that testing is feasible, by assessing that computational
invariant properties and temporal properties are added within the design?
NOTE: The architectural design is expected to highlight the API for activities such as task
activation, deactivation and deadline checking.
Annex A
Documentation Requirement List
Although Annex A of the ECSS-E-ST-40C software standard is only informative, project feedback
considers it useful to complete the DRL table with the missing reviews such as the SWRR, SQSR, DRR,
TRR and TRB.
NOTE 1 A document which has been reviewed in the anticipated part of the review will be
reviewed again in this review only if it has been updated.
NOTE 2 Due to the nature of the SQSR, where only the reuse file is
reviewed, it is not mentioned in a specific column in the table,
but only in the row of the reuse file, as alternative to SWRR.
NOTE 3 For the TRR and the TRB, the documents to be provided are
either the one against TS in the TRR/TRB before CDR, or
against RB in the TRR/TRB before QR or AR.
Related file | DRL item (e.g. Plan, document, file, report, form, matrix) | DRL item having a DRD | SWRR | DDR | TRR | TRB
Installation report -
Training plan -
Procurement data -
MF Maintenance plan -
Maintenance records -
Annex B
Generic Techniques
This annex reviews a number of generic techniques that are useful when applying the ECSS-E-ST-40C
Standard. Within these techniques, some specific methods or languages, which are being used in
space software, are briefly described with a reference. However, SysML and UML are not described
here, as they can be found easily elsewhere.
b. Model-checking techniques allow for the automatic verification that a specific model of the
overall behaviour of a system, or a critical component, satisfies a set of requirements, or
properties. Model checkers use a variety of underlying technologies, but essentially work by
checking if a property holds for every reachable state.
The model is specified by means of a formal language, which can be textual or graphical. Typically,
behaviours are modelled as state-transition systems. Each requirement is usually specified as a
temporal logic formula.
A primary advantage of the model-checking approach, when compared with other techniques (e.g.,
semi-automatic theorem proving), is the user-friendliness of the tools, which support the verification
activity.
They fall in the "push-button" category: once a model has been developed and a temporal logic
formula characterizing a desired property of the model has been formulated, the verification of
whether the model satisfies the formula is performed by the tool in a completely automatic way, i.e.
without any further action required from the user. If the formula is not satisfied (i.e. the desired
property is not true), the tool usually provides a counter-example in the form of a sequence of
computation steps, which lead to the violation of the property.
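The "check every reachable state" idea can be illustrated with a toy explicit-state sketch; the model (a retry counter) and the invariant are hypothetical, and real model checkers are vastly more capable, but the counter-example returned on violation has exactly the shape described above.

from collections import deque

def check_invariant(initial, transitions, invariant):
    """Breadth-first exploration of reachable states.
    Returns (True, None) if the invariant holds everywhere, else (False, counterexample path)."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            path, s = [], state
            while s is not None:
                path.append(s)
                s = parent[s]
            return False, list(reversed(path))   # sequence of computation steps leading to the violation
        for nxt in transitions(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None

if __name__ == "__main__":
    # Toy model: a retry counter that can be incremented or reset; the desired property is "counter <= 3".
    ok, trace = check_invariant(initial=0,
                                transitions=lambda n: [n + 1, 0] if n < 4 else [0],
                                invariant=lambda n: n <= 3)
    print("property holds" if ok else f"violated, counter-example: {trace}")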
The main disadvantage of this technique is that large models can exceed the capacity of the model
checker.
However recent breakthroughs have made it possible for symbolic model checkers to explore very
large spaces of reachable states. Bounded model checkers use a combination of state space exploration
and induction to deal with even larger state spaces. Model checkers today include symbolic and
probabilistic variants.
While theorem provers do not have the state space limitation of model checkers, they generally
require more mathematical skill and labour to prove the desired properties.
Formal methods provide the techniques, methodologies and tools for producing proofs and
consequently for designing provably correct systems. Of course, the use of formal methods introduces
costs, in terms of additional training, specific tool support, formal specification development time, and
related verification effort. Such costs can be justified when assessed in relation to the criticality of the
components to which formal methods are applied.
Recent studies have demonstrated the applicability of state-of-the-art model checking techniques to
support a variety of V&V activities such as consistency analysis, simulation, correctness verification,
performability evaluation, dynamic fault tree generation, FMEA table generation, FDIR, and
diagnosability analysis.
B.2.1 Introduction
Functional decomposition is a traditional method of analysis. The functional breakdown is
constructed top-down, producing a set of functions, sub-functions and functional interfaces.
The functional decomposition method was incorporated into the Structured Analysis method in the
late 1970's. A drawback of the method is that factorisations are not easily possible; therefore the
same sub-function may appear multiple times in the tree.
Structured analysis is a name for a class of methods that analyse a problem by constructing data and
control flow models. Relevant members of this class are:
a. Yourdon methods (Tom DeMarco and Ward/Mellor);
b. Structured Analysis and Design Technique (SADT).
Structured analysis includes all the concepts of functional decomposition, but produces a better
functional specification by rigorously defining the functional interfaces, i.e. the data and control flows
between the processes that perform the required functions. The ubiquitous ‘Data Flow Diagrams’
(DFD) and ‘Control Flow Diagrams’ are characteristic of structured analysis methods. In any form,
they apply well to data-intensive or data-driven systems, where a large part of the processing is
ultimately represented by a sequence of functions in a pipeline.
Yourdon methods were widely used in the USA and Europe. SADT has been used within space
projects for some time.
According to its early operational definition by DeMarco, structured analysis is the use of the
following techniques to produce a specification of the system:
a. Data Flow diagrams (DFD);
b. Control Flow Diagrams (CFD);
c. Data Dictionary;
d. Structured English;
e. Decision Tables;
f. Decision Trees.
These techniques are better suited to the analysis of data-centred information systems. Today it is
clear that SA methods are not suitable for much more complex software.
and output from the system, nor where the data will come from and go to, nor where the data will be
stored (all of which are shown on a DFD).
DFDs are used in the requirements analysis, or in the detailed design and implementation phases.
Data flow analysis may support checking the behaviour of program variables as they are initialized,
modified or referenced when the program executes. Data flow diagrams are used to facilitate this
analysis.
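A minimal sketch (in Python, with an invented statement encoding) of the kind of check that data flow analysis supports, here the detection of variables referenced before they are initialized in a straight-line sequence of statements:

    # Illustrative data flow check: detect variables that are referenced
    # before they have been initialized, over a straight-line program given
    # as (defined_variables, used_variables) pairs per statement.

    def use_before_init(statements):
        initialized = set()
        findings = []
        for lineno, (defines, uses) in enumerate(statements, start=1):
            for var in uses:
                if var not in initialized:
                    findings.append((lineno, var))
            initialized.update(defines)
        return findings

    program = [
        ({"x"}, set()),        # x := 1
        ({"y"}, {"x", "z"}),   # y := x + z   (z never initialized)
        (set(),  {"y"}),       # output(y)
    ]
    print(use_before_init(program))   # [(2, 'z')]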
Advantages of the use of DFDs include, but are not limited to, the following:
a. Readily automated: there are many tools on the market that support the representation of data
flow diagrams and the performance of these analyses.
b. Easy to apply: mainly due to their graphical representation, especially at later development
stages when all information is available.
Disadvantages of DFDs include, but are not limited to, the following:
a. Software modularity is not always ensured, which can make the diagram difficult to define;
b. They require some interpretation;
c. Designing the system always requires support from the tool used for the data flow definition;
d. They can require a lot of effort.
DFDs can be complemented by control flow diagrams. DFDs can be embedded in control flow
diagrams, from which the analyses can be performed. In addition, further classes of software faults
can be detected through their use.
OOA methods evolved and today the different techniques are combined. OMT provided a first
answer to this demand, and UML is today the de-facto standard.
b. HRT-HOOD selects a scheduling model and incorporates the existing HOOD method into the
definition and development of a design method suitable for Hard Real-Time software based on
that theory. HRT-HOOD aims at providing practitioners with a structured design method that
allows the timing analysis of real-time systems. Thus, HRT-HOOD extends the HOOD method
by explicitly taking into account both the functional and the non-functional (timing)
requirements which constrain a real-time system.
Reference: HRT-HOOD: A Structured Design Method for Hard Real-Time Ada Systems (A.
Burns, A. Wellings), Elsevier, April 1995, 313 pages
http://books.google.nl/books/about/Hrt_Hood.html?id=Aoch3hJhFC4C&redir_esc=y
c. The Hard Real-Time Unified Modelling Language (HRT-UML) method provides a
comprehensive solution to the modelling and analysis of hard real-time software systems, by
upgrading the HRT-HOOD design concepts to a more powerful and expressive method based
on UML.
HRT-UML provides a methodological process derived from the HRT-HOOD principles, and a
UML extension profile endowed with a formal, specific semantic framework suitable for
modelling and analysing real-time applications. HRT-UML entities are specifically customized
component types that represent real-time behaviours and take into account the non-functional
properties whose fulfilment is of primary importance in a real-time design.
Reference: HRT-UML: Taking HRT-HOOD onto UML (Reliable Software Technologies — Ada-
Europe 2003 ; Lecture Notes in Computer Science Volume 2655, 2003, pp 405-416) Springer,
Silvia Mazzini, Massimo D’Alessandro, Marco Di Natale, Andrea Domenici, Giuseppe Lipari,
Tullio Vardanega.
http://www.springerlink.com/content/1gwrgt651ap3rg3k/
d. AADL (Architecture Analysis and Design Language) is a standardized notation used to
represent a computer-based physical architecture. Both a textual and a graphical language, AADL
makes it possible to design and analyse the software and hardware architecture of real-time systems.
The language contains precise semantics to describe software tasks, data inputs and outputs, and
hardware components such as buses, memories, and processors. The language can also be
used to model dynamic aspects of the system such as operational modes and mode transitions.
Each AADL component comes with a set of predefined properties that can be used by tools for
system analysis (e.g. schedulability analysis) and this set can be extended for specific purposes.
AADL is generally well suited to design and verification of the avionics architecture and to be
complemented by other modelling languages and tools that focus on functional analysis or
logical models in general, such as SCADE, SDL, Simulink or ASN.1. It supports annexes that
extend the language, among which the predefined Error Model Annex is suitable for
dependability modelling and analysis.
Reference: www.aadl.info
mathematical expressions to specify the number of items composing a table. Formal languages are also
able to carry semantics.
The main advantage in using a data description language to specify interfaces is that the interface
definition (Interface Control Document) is clear and unambiguous (there is no reason to have a
different or diverging understanding of the specification). The consistency of the data w.r.t. the formal
definition can be proven. Indeed, the formal data definition can be interpreted in an automated
fashion so that the data can be automatically checked.
There are different data description languages available, e.g.:
a. XTCE (a CCSDS standard for the XML Telemetric and Command Exchange)
b. XML Schema (a W3C recommendation for XML data Exchange)
c. ASN.1: The Abstract Syntax Notation One is a standardized notation to represent data types
that was developed as ISO and ITU-T standards by the telecommunication industry. It is a
simple text notation that allows precise definition of data types, and it is supported by many
tools. It is widely used in many areas such as air traffic control, telecommunications, and the
space domain at ESA and NASA.
Reference: http://www.itu.int/ITU-T/asn1/introduction/index.htm .
d. EAST (a CCSDS standard for any kind of data, incl. binary data)
Whatever the data description language used to specify the interface, it is recommended to use a
dedicated editor to help the user in describing the interface without requiring any specific knowledge
of the chosen syntax.
Since data description languages are sometimes unintelligible to users, it is also necessary to translate
the formal definition automatically into a human-readable but faithful description. The formal
definition remains the contractual specification of the interface but a more readable definition is also
provided.
Data description languages allow the use of code generators that make the reading and the writing of
data a smooth activity (code generators use the formal definition in order to provide a library
dedicated to I/O). The update of data descriptions has less impact on the software because the I/O
library is the result of an automatic process.
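As a simplified illustration (in Python, with an invented record layout; a real ASN.1, XTCE or EAST tool chain generates considerably more), the sketch below shows what such a generated I/O library amounts to: the formal field definition drives both the encoding/decoding of the data and a basic consistency check against the interface definition.

    # Illustrative only: a tiny "generated" I/O library driven by a formal
    # field definition (name, field format). A real data description tool
    # chain derives this automatically from the interface definition.
    import struct

    # Hypothetical interface definition: big-endian fields of a small record.
    FIELDS = [("apid", "H"), ("sequence_count", "H"), ("temperature", "h")]
    FORMAT = ">" + "".join(fmt for _, fmt in FIELDS)

    def encode(record):
        """Write a record to its binary representation."""
        return struct.pack(FORMAT, *(record[name] for name, _ in FIELDS))

    def decode(data):
        """Read binary data back into a record, checking its length
        against the formal definition."""
        if len(data) != struct.calcsize(FORMAT):
            raise ValueError("data inconsistent with the interface definition")
        values = struct.unpack(FORMAT, data)
        return dict(zip((name for name, _ in FIELDS), values))

    packet = encode({"apid": 100, "sequence_count": 7, "temperature": -12})
    print(decode(packet))  # {'apid': 100, 'sequence_count': 7, 'temperature': -12}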
nondeterministic, and/or stochastic. PN are mathematically defined but can be used as a visual
communication aid, similar to data and control flow charts, interaction, state and activity
diagrams. It is also an executable technique, and allows analysis methods to prove properties
about the specifications. Petri Nets were studied and applied for safety critical software (Class
A and B) in European human space flight infrastructure projects; a minimal sketch of the firing
rule is given after this list.
Reference: http://en.wikipedia.org/wiki/Petri_net
d. LTSA is a verification tool for concurrent systems. It mechanically checks that the specification
of a concurrent system satisfies the properties required of its behaviour. In addition, LTSA
supports specification animation to facilitate interactive exploration of system behaviour. A
system in LTSA is modelled as a set of interacting finite state machines. The properties required
of the system are also modelled as state machines. LTSA performs compositional reachability
analysis to exhaustively search for violations of the desired properties. More formally, each
component of a specification is described as a Labelled Transition System (LTS), which contains
all the states a component may reach and all the transitions it may perform. However, explicit
description of an LTS in terms of its states, set of action labels and transition relation is
cumbersome for all but small systems. Consequently, LTSA supports a process algebra
notation (FSP) for concise description of component behaviour. The tool allows the LTS
corresponding to an FSP specification to be viewed graphically. LTSA has an extensible
architecture which allows extra features to be added by means of plugins.
Reference: http://www.doc.ic.ac.uk/ltsa/
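The minimal sketch announced in the Petri net item above (in Python, with an invented net) illustrates the firing rule: a marking assigns tokens to places, a transition is enabled when every one of its input places holds a token, and firing it moves tokens from the input places to the output places.

    # Illustrative Petri net: places hold tokens, a transition is enabled when
    # every input place holds at least one token, and firing it moves tokens
    # from the input places to the output places. The net below is invented.

    marking = {"idle": 1, "request": 1, "busy": 0}

    TRANSITIONS = {
        "start": ({"idle", "request"}, {"busy"}),   # (input places, output places)
        "finish": ({"busy"}, {"idle"}),
    }

    def enabled(name):
        inputs, _ = TRANSITIONS[name]
        return all(marking[p] >= 1 for p in inputs)

    def fire(name):
        if not enabled(name):
            raise RuntimeError("transition %s is not enabled" % name)
        inputs, outputs = TRANSITIONS[name]
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] += 1

    fire("start")
    print(marking)             # {'idle': 0, 'request': 0, 'busy': 1}
    print(enabled("finish"))   # True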
B.7 ITIL ®
To maintain software systems, the current best practice is to use the Information Technology
Infrastructure Library (ITIL) Version 3 as a framework for the processes to be used. ITIL is the most
widely accepted approach to IT service management in the world. ITIL provides a cohesive set of best
practices, drawn from the public and private sectors internationally. ITIL is being widely applied to
software projects as a way of managing and maintaining software.
For ground systems ITIL is extensively used as a framework. For on-board flight software,
maintenance is mainly focused on providing new releases of the software. This usually means that the
processes are based around an extension of the Software Development Plan and the ITIL process
wrapper is not required.
There are three main phases that are applicable to ECSS-E-ST-40C. In order to resolve defects, or to
improve the software through change, a new version first needs to be designed and developed. This
new version is then transitioned into service. The software is then used within operations. Operations
may identify incidents and problems with the software that need fixing by developing a new version
of the software, and so the cycle continues.
(Figure: the ITIL cycle of the Design, Transition and Operations phases)
Further details of how the ITIL processes can be implemented using best practice can be found on the
ITIL web site.
Reference: www.itil-officialsite.com/
Annex C (normative)
"Software Maintenance Plan (SMP) – DRD"
This annex provides an example of the contents of a Software Maintenance Plan (SMP), which is not
currently covered in the standard ECSS-E-ST-40C.
<1> Introduction
a. The SMP shall contain a description of the purpose, objective, content and the reason prompting
its preparation.
<6.2> System
a. The SMP shall describe the mission of the system including mission need and employment,
identification of interoperability requirements and system functions description.
b. The SMP shall describe the system architecture, components and interfaces, hardware and
software.
<6.3> Status
a. The SMP shall identify the initial status of the system and the complete identification of the
system with formal and common names, nomenclature, identification number and system
abbreviations.
<6.4> Support
a. The SMP shall describe why support is needed.
NOTE During the projected life of the system, corrections
and enhancements will be required. Corrective maintenance
accommodates latent defects as reported by users.
Enhancements or improvements are submitted in order to
<6.6> Contracts
a. The SMP shall describe any contractual protocols between next-higher-level Contractor and
Contractor.
<7.1> Concept
a. The SMP shall describe the maintenance concepts, including:
1. the scope of software maintenance;
2. the tailoring of the post-delivery process;
3. the designation of who will provide maintenance;
4. an estimate of life-cycle costs;
5. the activities of post-delivery software maintenance.
NOTE 1 The contractor develops it early in the development effort with
help from the maintainer. Defining the scope of maintenance
helps the contractor determine exactly how much support the
maintainer will give to the next-higher-level Contractor. Scope
relates to how responsive the maintainer will be to the users.
NOTE 2 Different organisations often perform different activities in the
post-delivery process. An early attempt is made to identify
these organisations and to document them in the maintenance
concept. In many cases, a separate maintenance organisation
performs the maintenance functions.
NOTE 3 Responsiveness to the user community is the primary
consideration in determining the scope of software
maintenance. The scope of software maintenance is tailored to
satisfy operational response requirements. Scope relates to how
responsive the Maintainer will be to proposed changes. For
example, a full-scope software maintenance concept suggests
that the Maintainer will provide full support for the entire
deployment phase. This includes responding to all approved
software change categories (i.e. corrections and enhancements)
within a reasonable period. Software maintenance concepts that
limit the scope of software maintenance are referred to as
limited scope concepts. Limited scope concepts limit the
support period, the support level or both.
<9.1> Resources
a. The SMP shall analyse the hardware and software most appropriate to support the
organisation’s needs, including:
1. the definition of the development, maintenance and target platforms;
2. the description of the differences between the environments;
3. the identification and provision of the tool sets that enhance productivity, of the way the
tools are made accessible, and of a sufficient level of training for users;
4. the description of the planning of design, implementation and testing (including associated
documentation).
b. The SMP shall describe the Software Configuration Control Board (i.e. participants, roles,
activities).
c. The SMP shall describe the Maintenance process phases, including:
1. Analysis phase;
2. Design phase;
3. Implementation phase;
4. Acceptance test phase;
5. Delivery phase.