Bobby Nejad
Introduction
to Satellite Ground
Segment Systems
Engineering
Principles and Operational Aspects
Space Technology Library
Volume 41
Editor-in-Chief
James R. Wertz, Microcosm, Inc., El Segundo, CA, USA
The Space Technology Library is a series of high-level research books treating a
variety of important issues related to space missions. A wide range of space-related
topics is covered starting from mission analysis and design, through a description
of spacecraft structure to spacecraft attitude determination and control. A number
of excellent volumes in the Space Technology Library were provided through the
US Air Force Academy’s Space Technology Series. The quality of the book series
is guaranteed through the efforts of its managing editor and well-respected editorial
board. Books in the Space Technology Library are sponsored by ESA, NASA and
the United States Department of Defense.
Bobby Nejad, Ph.D.
European Space Agency
Noordwijk, The Netherlands
[email protected]
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my daughter Julia, may she find her way
to reach the stars.
Foreword by David H. Atkinson
Although the value of naked eye and telescopic observations cannot be overstated, the enormity of our Solar System requires sophisticated and increasingly
complex vehicles designed to survive and properly function after many years in the
treacherous environment of space, and for probes and landers to survive the extreme
environments of other worlds.
Whether exploring the distant universe or remaining closer to home to study the
Earth from orbit, all spacecraft require continuous support from ground operators to
remain healthy and capable of performing the tasks for which they were designed.
The maintenance of spacecraft in flight is only possible with the support of a
highly complex and reliable ground infrastructure that provides operators the ability
to reliably monitor, command, and control them, and gives them the means to
detect anomalies early enough to be able to take corrective actions and avoid any
degradation or even loss of the mission.
In his book Introduction to Satellite Ground Segment Systems Engineering, Dr. Bobby Nejad presents a concise, complete, and easily understandable overview of the
main architecture, working principles, and operational aspects of that essential
ground infrastructure. In this treatise, Dr. Nejad covers the design, development,
and deployment of ground systems that are vital to safely and efficiently operate
modern spacecraft, be it Earth satellites, planetary orbiters, entry probes, landers, or
manned space vehicles that all support humankind’s continued quest to discover.
Foreword by Sonia Toribio
Whenever a brand new satellite has made it to the launch pad, ready for liftoff,
all the attention first focuses on the impressive launch and later shifts to the first
images that are transmitted from space. Not much thought is ever given to the
flawless functioning of the complex ground infrastructure that had to be put in place
beforehand, in order to ensure that operators can monitor and command the space
segment throughout all the phases of the mission with high reliability, 24 hours a
day, 7 days a week.
Ground segment engineering is not a new domain; it has existed since humanity's earliest space endeavors. The advent of key new technologies in recent years, however, has clearly reshaped many areas of ground segment systems engineering.
Novel IT technologies have reached a degree of maturity that makes
them a viable option for space engineering, which by its nature is a rather
traditional and conservative field. Hardware virtualization as one example has
clearly revolutionized ground segment design as it allows a significant reduction
of the physical footprint through a more optimized use of resources and provides
new ways to implement redundancy concepts in a seamless way for the operator.
On the other hand, new challenges have emerged. One of them is the need for scalable ground operations in view of the mega-constellations comprising several hundred space assets that have been placed into orbit in recent years. Another challenge is the growing frequency and threat of cyberattacks and the enormous damage they already cause today. This places a very high responsibility on the ground segment designer to put a cyber-resilient system into place.
Ground segment engineering is a highly multidisciplinary domain that demands expertise from a number of different technical fields: hardware and software development, database management, physics, mathematics, radio frequency, mechanical engineering, network design and deployment, interface engineering, cybersecurity, and server room technology, to name only a few. With this in mind,
the timing of the publication of Introduction to Satellite Ground Segment Systems
Engineering is highly welcome, as it tries to cover a broad range of these topics in a
very understandable, consistent, and up-to-date manner. It allows both beginners and
experts to gain new insights into a very exciting field which so far is only scarcely
addressed by existing literature.
1 The Hubble Space Telescope servicing missions between 1993 and 2009 with astronauts fixing
or replacing items in-orbit should be considered as an exceptional case here, as the high cost and
complexity of such an undertaking would in most cases not justify the benefit.
Preface
This development has a clear impact on the ground segment design. Its various
subsystems (or elements) need to become more robust and able to deal with (at least
minor) irregularities without human help. The amount and type of communication
between the various elements need to increase significantly to allow the various
subsystems to "know" more about each other and cooperate in a coordinated manner, like an artificial neural network. This clearly requires more effort in the proper definition of interfaces to ensure that the right balance is kept: highly complex interfaces
require more work in their design, implementation, and verification phases, whereas
too simplistic ones might prohibit the exchange of relevant information and favor
misunderstandings and wrong decisions.
Another important aspect is that the system design should never lose the
connection to the operators who are human beings. This puts requirements on
operability like a clear and intuitive design of the Man Machine Interfaces (MMI),
and the ability to keep the system sufficiently configurable in order to support
unforeseen contingency situations or assist in anomaly investigations.
No other technological field has developed at such a rapid pace as computing. Processor speeds, memory sizes, and storage capacities that were simply
inconceivable just a decade ago are standard today, and this process is far from
slowing down. But there are also entirely new technologies entering the stage, like
the application of quantum mechanical concepts in computers with the quantum bit
replacing the well-known binary bit, or the implementation of algorithms able to
use artificial intelligence and machine learning when solving complex problems.
Despite the fact that these new concepts are still in their infancy, they have already
shown a tremendous benefit in their application.
For the ground segment designer, this fast development is a blessing and a curse at the same time. A blessing, as the new computing capabilities allow more complex software to be developed and operated, and more data to be stored and archived. At the same time, fewer resources such as temperature-regulated server room space are needed to operate the entire system. A curse, as the fast development of new hardware and software makes it more difficult to maintain a constant and stable
configuration for a longer time period, as the transition to newer software (e.g., a
newer version of an operating system) or hardware (e.g., a new server model on
the market) is always a risk for the ongoing operational activities. Well-tested and
qualified application software might not run equally smoothly on a newer version
of an operating system, but keeping the same OS version might only work for as
long as it is being supported by the hardware available on the market and software
updates to fix vulnerabilities are available. Therefore, the ground segment designer
has to consider obsolescence upgrades from the early design phase onward. One elegant
approach is to use hardware virtualization, which is explained in more detail in this
book.
Today's ground segment designs have little in common with their early predecessors, like the ones used for the Gemini or Apollo missions. What remains
however unchanged is that efficient and reliable satellite operations can only be
achieved with a well-designed ground control segment. To achieve this, ground
segment design requires a high level of operational understanding from the involved
Preface xiii
systems engineers. Conversely, satellite operators can only
develop sound operational concepts if they have a profound understanding of the
ground segment architecture with all its capabilities and limitations. The different
types of expertise required in the ground segment development phases are not
always easy to gather in one spot. Very often the system design and assembly,
integration, and verification (AIV) are performed by a different group of engineers than the one that ultimately operates the system. Most of the time, only limited
interaction between these teams takes place, which can lead to operator-driven
change requests late in the project life cycle with significant cost and schedule
impact. This book aims to close this gap and to promote the interdisciplinary
understanding and learning in this exciting field. With such an ambitious objective
in mind, it is difficult to cover all important subject matters to the level of detail
the reader might seek. This book should therefore be considered as an introductory
text that aims to provide the reader with a first basic overview and adequate guidance to
more specific literature for advanced study.
This book targets two groups of readers equally: newcomers to space projects aiming
to gain a first insight into systems engineering and its application in space projects,
and more experienced engineers from space agencies or industry working in small,
medium, and larger size projects who seek to deepen their knowledge on the ground
segment architecture and functionality.
Despite the book’s focus on ground segment infrastructure, this text can also
serve readers who primarily work in the satellite design and development domain,
considering the growing tendency to reduce ground-based spacecraft operations
through an increase in onboard autonomy. A deeper understanding of the ground
segment tasks and its overall architecture will allow an easier transfer of its
functionality to the satellite’s onboard computer.
The book will also serve project managers who need to closely interact with
systems engineers in order to generate more realistic schedules, monitor progress,
assess risks, and understand the need for changes, all typical activities of a ground
segment procurement process.
Written with an introductory textbook in mind, this book also encourages university students to consult it when looking for a source to learn about systems engineering and its practical application, or simply to gain a first insight into space projects, one of the most demanding but also most exciting domains of engineering.
Contents
1 Introduction
    Reference
2 Systems Engineering
    2.1 Project Planning
        2.1.1 SOW, WBS, and SOC
        2.1.2 Schedule and Critical Path
        2.1.3 Project Risk
    2.2 System Hierarchy
    2.3 Life-Cycle Stages
    2.4 Life-Cycle Models
        2.4.1 Sequential or Waterfall
        2.4.2 Incremental Models: Agile, Lean and SAFe®
    2.5 Model Based Systems Engineering (MBSE)
    2.6 Quality Assurance
        2.6.1 Software Standards
        2.6.2 Test Campaign Planning
    2.7 Summary
    References
3 The Space Segment
    3.1 System Design
        3.1.1 Propulsion
        3.1.2 Attitude Control
        3.1.3 Transceiver
        3.1.4 Onboard Computer and Data Handling
        3.1.5 Power
        3.1.6 Thermal Control
    3.2 Spacecraft Modes
    3.3 The Satellite Life Cycle
    3.4 Ground to Space Interface
    References
Acronyms
About the Author
Dr. Bobby Nejad is a ground segment systems engineer working at the European
Space Agency (ESA), where he has been involved in the design, development,
and qualification of the ground control segment of the European global navigation
satellite system Galileo for the last 14 years. He received his master’s degree in
technical physics from the Technische Universität Graz and in astronomy from the
Karl-Franzens Universität in Graz. He conducted his academic research stays at
the Université Joseph Fourier in Grenoble, the Bureau des longitudes of the Paris
Observatory, and the Department of Electrical & Computer Engineering of the
University of Idaho. Bobby started at ESA as a young graduate trainee in 2001 to
work on the joint NASA/ESA Cassini-Huygens mission to explore Saturn and
its moons in the outer planetary system. As a member of the ESA team co-located
at the NASA Jet Propulsion Laboratory, he earned his Ph.D. on the reconstruction
of the entry, descent, and landing trajectory of the Huygens probe on Titan in
2005. In the following year, Bobby joined the operational flight dynamics team
of the German Space Operations Centre (GSOC), where he was responsible for
the maneuver planning and commanding of the synthetic aperture radar satellites
TerraSAR-X, TanDEM-X, and the SAR-Lupe constellation, the latter one being the
first military reconnaissance satellite system of the German armed forces. Bobby
joined the Galileo project at the European Space Technology and Research Centre
of ESA in 2008 and has since then contributed to the definition and development of
the ground segment’s flight dynamics, mission planning, and TT&C ground station
systems. Since 2012, he has worked at the Galileo Control Centre, where he supervises
the assembly, integration, and qualification of the Galileo ground control segment.
Chapter 1
Introduction
The successful realisation of a space mission ranks among the most challenging achievements of human beings. Even though space projects are, historically speaking, very young,1 they still have many things in common with some of the oldest engineering
projects of mankind like the erection of the pyramids in Egypt, the design and
production of the first ships or cars, or the construction of aqueducts, bridges or
tunnels, to name a few examples. All engineering projects have a defined start
and end time, a clear objective and expected outcome or deliverable, very project
specific technical challenges to overcome, and limited resources in time, funding,
and people. To prevent technical challenges and limited resources from jeopardising the successful realisation of a project, it was soon understood that certain practices help to achieve this goal better than others. All the engineering processes
and methodologies used to enable the realisation of a successful system can
be summarised as systems engineering (SE). In every technical project one can
therefore find a systems engineer who closely cooperates with the project manager
and must be familiar with the relevant SE processes and their correct application.
A dedicated study by the International Council on Systems Engineering (INCOSE) showed that cost and schedule overruns as well as their variances2 in a project decrease with increasing SE effort [1]. To provide a basic introduction to the
main SE principles and their application in space related projects is therefore one of
the main objectives of this book (see Chap. 2).
Following the typical systems engineering approach to structure a complex
system into smaller parts called subsystems, components, or elements, a space
project is typically organised into the space segment, the ground segment, the
launch segment, and the operations segment as shown in Fig. 1.1. This structure
1 The first artificial satellite Sputnik has only been put into orbit in the year 1957.
2 The variance of cost and schedule overrun refers to the accuracy of their prediction.
is visible throughout the entire project life cycle and influences many aspects of the
project management. It is reflected in the structure of the organisational units of the
involved companies, the distribution of human resources, the allocation of financial
resources, the development and consolidation of requirements and their subsequent
verification, to only name a few.
At the very top sits the project office (PO) being responsible for the overall
project management and systems engineering processes. The PO also deals with all
the interactions and reporting activities with the project customer, who defines the high-
level mission objectives and in most cases also provides the funding. The customer
can be either a single entity like a commercial company or in some cases a user
community (e.g., researchers, general public or military users) represented by a
space or defence agency. The PO needs to have a clear and precise understanding of
the mission objectives, requested deliverables or services, and the required system
performance and limitations (time and cost). These must be defined and documented
as mission requirements and the PO is responsible for their fulfilment.
The launch segment comprises the actual launcher that has to lift the satellite into
its predefined target orbit and the launch centre infrastructure required to operate
the launcher during all its phases. The launch infrastructure comprises the launcher
control centre, the launch pad, and the assembly and fuelling infrastructure. The
organisational entity responsible for all the technical and commercial aspects of the
launch segment is often referred to as the launch service provider (LSP). The LSP
has to manage the correct assembly and integration of the launcher and the proper
operations from its lift-off from the launch pad up to the separation of the satellite
from the last rocket stage. The LSP must also ensure the launch safety for both
the launch site environment (including the intentional self-destruction of the launcher in case of a catastrophic launch failure) and the satellite (e.g., disposal of the upper stage to prevent a collision with the satellite during its lifetime).
The term space segment (SS) comprises all assets that are actually placed into space and operated remotely by means of the ground segment. It is obvious that
there must be a well defined interface between these two segments which is called
the ground to space interface. This interface must undergo a crucial verification and
3 The exact definition of the terms verification and validation and its difference is explained in
Chap. 2.
4 In certain projects the ground segment might be divided into smaller segments with different
responsibilities, e.g., one part to operate the satellite platform and another one specialised for
payload operations. This is usually the case when payload operations is quite involved and requires
specialised infrastructure, expertise, or planning work (e.g., coordination of observation time on
scientific satellite payloads).
part. Due to the different domain specialities, the various application software products might differ significantly in their architecture, complexity, operability, and maintainability. They are therefore often developed, tested, and deployed by different entities or companies that are highly specialised in very specific domains. As an example, a company might be specialised in mission control systems but have no expertise in flight dynamics or mission planning software. This diversity places additional effort on the system integration and verification part and on the team responsible for this activity.
The integration success will strongly depend on the existence of precisely defined interface control documents (ICDs). In order to ensure that interfaces remain operational even if changes need to be introduced, proper configuration control needs to be applied to every interface. After all the elements have been properly assembled and tested in isolation, the segment integration activity can start. The objective is to ensure that all the elements can communicate and interact correctly with each other and are able to exchange all the signals as per the design documentation.
Only at this stage can the ground segment be considered as fully integrated and
ready to undergo the various verification and validation campaigns. A more detailed
description of the assembly, integration, and verification (AIV) activities is provided
in Chap. 2.
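Interface configuration control of this kind can be sketched as a simple version check; the element names, ICD identifiers, and issue numbers below are invented for illustration and do not reflect any real project baseline:

```python
# Hypothetical sketch: each ground segment element declares the issue of every
# interface control document (ICD) it was built against. Integration between
# two elements over an interface is only allowed when both sides reference the
# same ICD issue.
ELEMENTS = {
    "MCS": {"TM/TC-ICD": "2.1"},                      # mission control system
    "FDS": {"TM/TC-ICD": "2.1", "ORBIT-ICD": "1.3"},  # flight dynamics system
    "MPS": {"ORBIT-ICD": "1.2"},                      # mission planning, one issue behind
}

def check_interface(element_a, element_b, icd):
    """Return True if both elements implement the same issue of the given ICD."""
    issue_a = ELEMENTS[element_a].get(icd)
    issue_b = ELEMENTS[element_b].get(icd)
    return issue_a is not None and issue_a == issue_b

print(check_interface("MCS", "FDS", "TM/TC-ICD"))  # True: both at issue 2.1
print(check_interface("FDS", "MPS", "ORBIT-ICD"))  # False: issue 1.3 vs 1.2
```

In a real project the declared ICD issues would come from the configuration management database rather than a hard-coded table, but the gating logic is the same.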
The ground segment systems engineer requires both knowledge of how all the components have to interact with each other and a profound understanding
of the functionalities and architecture of each single element. Chap. 4 intends to
provide a first overview of the ground segment which is then followed by dedicated
chapters for each of the most common elements.
The work of the operations segment is not so much centred on the development of
hardware or software but rather based on processes, procedures, people, and skills.
The main task is to keep the satellite alive and ensure that the active payload time to
provide its service to the end user is maximised.5 After the satellite has been ejected
from the last stage of the launcher it needs to achieve its first contact with a TT&C
station on ground as soon as possible. The establishment of this first contact at the
predicted location and time is proof that the launcher has delivered its payload to
the foreseen location with the correct injection velocity. The important next steps
are the achievement of a stable attitude, the deployment of the solar panels, and the
activation of vital platform subsystems. This is very often followed by a series of
orbit correction manoeuvres to deliver the satellite into its operational target orbit.
All these activities are performed in the so called launch and early orbit phase
(LEOP).
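The LEOP milestone sequence described above can be modelled as a simple ordered checklist; this is an illustrative simplification only, as real LEOP timelines are mission specific and the milestone names below are paraphrased from the text:

```python
# Illustrative LEOP model: each milestone must be reached in order before the
# satellite can be declared ready for commissioning.
LEOP_MILESTONES = [
    "separation from last launcher stage",
    "first TT&C ground station contact",
    "stable attitude achieved",
    "solar panels deployed",
    "vital platform subsystems activated",
    "operational target orbit reached",
]

def next_milestone(completed):
    """Return the next pending LEOP milestone, or None when LEOP is complete."""
    for milestone in LEOP_MILESTONES:
        if milestone not in completed:
            return milestone
    return None  # LEOP done: the commissioning phase can start

done = LEOP_MILESTONES[:3]
print(next_milestone(done))  # -> solar panels deployed
```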
This phase is usually followed by the commissioning phase in which the payload
is activated and undergoes an intense in-orbit test campaign. If this campaign is
successful then the satellite can start its routine operations phase and provide its
service to the end user. In each of these phases, operators must make intense use
5 Especially for commercial satellites providing a paid service to the end user, any out of service
time due to technical problems can have a significant financial impact for the company owning the
satellite.
of the ground segment and the better it is designed and tailored to their specific
needs, the more reliable satellite operations can be performed. It is therefore vital
that the ground segment engineer considers the needs of the operators already in
the early segment design phase as this will avoid the need for potential changes
in later project phases, which always come at a considerably higher cost. A close
cooperation between these two segments is therefore beneficial for both sides. The
main operational phases, processes, and products are introduced in Chap. 15 and
should help the ground segment engineer to gain a deeper understanding of this
subject matter, allowing them to provide a more suitable and efficient segment design.
Every technical field comes with its own set of terms and acronyms which can
pose a challenge to a newcomer starting in a project. The aerospace field is notorious
for its extensive use of acronyms which might get even worse once project specific
terms are added on top. Care has been taken to only use the most common acronyms
but to further facilitate the reading, a dedicated section with a summary of the most
relevant terms and acronyms has been added.
Reference
Fig. 2.1 Project life cycle cost versus time, reproduced Fig. 2.5-1 of SP-2016-6105 [3] with
permission of NASA HQ
start of every project and should be clearly documented and justified in the systems
engineering management plan (SEMP).
Project planning by itself is quite a complex field and belongs to the project management (PM) discipline. It is therefore typically run by specially trained personnel
in a company or project referred to as project managers. Successful project planning
is not an isolated activity but requires intense and regular cooperation with the SE
domain, otherwise unrealistic assumptions in project schedules will be unavoidable
and render them unachievable fairly soon. Especially at the beginning of a project, a set of important planning activities needs to be performed; these are briefly outlined here, as the project's systems engineer will most likely be consulted on them.
Fig. 2.2 Example of a deliverable-based work breakdown structure (WBS) for the case of the construction of a TT&C antenna. WP = Work Package, LLI = Long Lead Item, IF = Intermediate Frequency, Cal = Calibration
A project can only succeed if the exact project scope is clearly understood,
defined, verified, and controlled throughout the life cycle. This is defined as the
project scope management activity by the Project Management Institute (PMI) (refer
to PMBOK [6]). An important output of this activity is a statement of the project
scope which is documented and referred to as the statement of work (SOW) and
should describe in detail the project’s expected deliverables and the work required
to be performed to generate them. The SOW is a contractually binding document; therefore, special attention must be given to reaching a common understanding and documenting all the tasks in an unambiguous way, with an adequate amount of detail that allows one to verify whether the obligation has been fulfilled at the end of the contract. Having each task identified with an alphanumeric label (similar to a requirement) allows the contractor to explicitly express its level of contractual commitment for each task. The terms compliant (C), partially compliant (PC), and non-compliant (NC) are frequently used in this context; the contractor states one of them for each task in the SOW as part of a so-called statement of compliance (SOC).
Especially in case of a PC, additional information needs to be added that details
the limitations and reasons for the partial compliance in order to better manage the
customer’s expectations.
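As a sketch of how a statement of compliance might be checked mechanically, one could require every PC or NC entry to carry a justification. The task labels, statuses, and texts below are invented for illustration:

```python
# Illustrative SOC check: each SOW task, identified by an alphanumeric label,
# receives a compliance statement plus an optional justification note.
COMPLIANCE = {"C", "PC", "NC"}

def validate_soc(soc):
    """Return a list of problems: PC/NC entries must carry a justification."""
    problems = []
    for task_id, (status, note) in soc.items():
        if status not in COMPLIANCE:
            problems.append(f"{task_id}: unknown status '{status}'")
        elif status in {"PC", "NC"} and not note:
            problems.append(f"{task_id}: '{status}' requires a justification")
    return problems

soc = {
    "SOW-0010": ("C",  ""),
    "SOW-0020": ("PC", "Only X-band supported; Ka-band needs a new feed."),
    "SOW-0030": ("NC", ""),  # missing justification -> flagged below
}
print(validate_soc(soc))
```

Such a check helps manage the customer's expectations by forcing the limitations behind every partial or non-compliance to be written down.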
Using the SOW definition of the project scope and all expected deliverables, the
work breakdown structure can be generated which is a hierarchical decomposition
of the overall scope into smaller and more manageable pieces [6]. There are two
generic types of WBS in use, the deliverable based (or product-oriented) WBS
and the phase-based WBS. An example of each type for the development and
deployment of a TT&C station is shown in Figs. 2.2 and 2.3, respectively.

Fig. 2.3 Example of a phase-based work breakdown structure (WBS) for the same case of the construction of a TT&C antenna

For the
deliverable-based WBS, the first level of decomposition shows a set of products
or services, whereas in the phase-based WBS the first layer defines a temporal
sequencing of the activities during the project. In both cases, each descending level
of the WBS provides an increasing level of detail of the project's work. The lowest
level (i.e., level 3 in the presented examples) is referred to as the work package (WP)
level; this is the point at which the work can be defined as a set of work
packages whose complexity, required manpower, duration, and cost can be reliably
estimated. A WP should always have a defined (latest) start time and an expected
end time, and identify any dependencies on external factors or inputs (e.g., the
output of another WP providing relevant results or parts). The scope of work
contained in a single WP should also be defined such that it can be assigned to
a dedicated WP responsible with a defined skill set. This can be either a company
specialised in a specific product or subject matter, a specialised team in a matrix
organisation, or even a single person.
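The hierarchical decomposition described above can be sketched as a simple nested data structure. The following Python fragment is an illustration only: the WP names and effort figures are invented, not taken from the book's TT&C example.

```python
# Hypothetical deliverable-based WBS for a TT&C station, modelled as a
# nested dictionary. Leaves are work packages (WPs) carrying an effort
# estimate in person-months (all numbers invented for illustration).
WBS = {
    "1 Antenna System": {
        "1.1 Dish and Feed": 6,
        "1.2 Servo and ACU": 4,
    },
    "2 Infrastructure": {
        "2.1 Foundations": 3,
        "2.2 Antenna Building": 8,
        "2.3 Site Security": 2,
    },
}

def total_effort(node):
    """Roll up the effort estimates from the WP level to any higher level."""
    if isinstance(node, dict):
        return sum(total_effort(child) for child in node.values())
    return node

print(total_effort(WBS))                      # 23
print(total_effort(WBS["2 Infrastructure"]))  # 13
```

Rolling estimates up from the WP level in this way is exactly why the WBS is required to bottom out in packages that can be reliably estimated: any aggregate figure is only as good as its leaves.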
Once all the WPs have been identified as part of a WBS and contracts have
been placed with the WP responsibles, the organisational breakdown structure (OBS) can
be generated. This is a hierarchically organised depiction of the overall
project organisation, as shown in Fig. 2.4 for a typical
space project with three contractual layers. The OBS is an important piece of
information for monitoring overall project progress, which requires the
regular reception of progress reports, the organisation of progress meetings, and
the assessment of achieved milestones. The OBS is also an important input for
financial project control, as it clearly identifies the entities with which a
formal contractual relationship needs to be established and maintained throughout
the project. This requires the regular initiation of payments when milestones have
been achieved, or other measures in case of partial achievements or delays.
Fig. 2.4 Example of an organisational breakdown structure (OBS) showing three contractual
levels below the customer
One of the most fundamental aspects of a project is its schedule, which defines the
start and predicted end time. An initial schedule is usually expected prior to the
kick-off of a project and serves as an important input for obtaining approval
and funding from the responsible stakeholders. Such an early
schedule must obviously be based on the best estimate of the expected duration of all
WPs, considering all internal and external dependencies, constraints, and realistic
margins. Throughout the project execution, a number of schedule adjustments might
be needed and have to be reflected in regular updates of the working schedule. These
need to be carefully monitored by the project responsibles, both to take appropriate
actions and to inform the customer. It is important to keep in mind that schedule
and budget are always interconnected, as problems in either of these domains will
always impact the other one. A schedule delay will always imply a cost overrun,
whereas budget problems (e.g., cuts or delays due to approval processes) will delay
the schedule.2
In order to generate a project schedule, the overall activities defined in the SOW
and WBS need to be put into the correct time sequence. This is done using the
precedence diagramming method (PDM), a graphical representation that
puts all activities into the sequence in which they have to be executed. For
a more accurate and complete definition as well as tools and methods to generate
2 Especially in larger sized projects the latency in cash flow between contractual layers can
introduce additional delays and could even cause existential problems for small size companies
that have limited liquidity.
[Figure: simplified one-year (Jan–Dec) precedence diagram with the tasks Foundations and LLI Procurement, Building Construction, Antenna System, and Site Security (fire alarm, fences, IR detectors), their finish-to-start (FS) dependencies, the critical path, and the slack of the site security task. EST: Earliest Start Time; LST: Latest Start Time; ECT: Earliest Completion Time; LCT: Latest Completion Time; KO: Contract Kick-Off; EoC: End of Contract]
such diagrams, the reader should consult more specialised literature on project
management (e.g., [6] or [7]). An example of a precedence diagram is depicted
in Fig. 2.5, which shows in a very simplified manner the construction of a TT&C
antenna with an overall project duration of only one year and four main tasks to
be executed between kick-off (KO) and end of contract (EoC). To allow proper
scheduling, each task needs to be characterised by an earliest start time (EST),
latest start time (LST), earliest completion time (ECT) and latest completion time
(LCT), which are marked on the first task in Fig. 2.5. The most prevalent relationship
between tasks in a precedence diagram is the finish-to-start relationship, which
demands that the successor activity can only start once its designated predecessor
has been completed. The difference between the earliest and latest start or
completion times is called slack and defines the amount of delay that an activity
can build up (relative to its earliest defined time) without causing the depending
successor to be late.3
In the presented example, a finish-to-start dependency is indicated by a red arrow
that points from the end of one task to the start of its dependent successor. This
applies to the preparation of the foundations, which obviously need to be completed
before the construction of the antenna building can start. The same is the case for
the building construction, which has to be finalised in order to accommodate the
antenna dish on top and allow the start of the interior furnishing with the various
antenna components (e.g., feed, wave guides, BBM, HPA, RF components, ACU,
computers, etc.).
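The scheduling logic of such a precedence diagram can be reproduced with a forward and a backward pass over the task network. The sketch below uses the four tasks of Fig. 2.5 with invented durations (in months); it illustrates the method, not the book's actual schedule.

```python
# Invented durations (months) for the tasks of the precedence diagram.
durations = {"Foundations": 2, "Building": 4, "Antenna": 5, "SiteSecurity": 3}
predecessors = {"Foundations": [], "Building": ["Foundations"],
                "Antenna": ["Building"], "SiteSecurity": []}

# Forward pass: earliest start (EST) and earliest completion (ECT) times.
est, ect = {}, {}
for task in durations:  # dict order already respects the dependencies here
    est[task] = max((ect[p] for p in predecessors[task]), default=0)
    ect[task] = est[task] + durations[task]

project_end = max(ect.values())  # earliest possible end of contract (EoC)

# Backward pass: latest completion (LCT) and latest start (LST) times.
lct, lst = {}, {}
for task in reversed(list(durations)):
    successors = [s for s, preds in predecessors.items() if task in preds]
    lct[task] = min((lst[s] for s in successors), default=project_end)
    lst[task] = lct[task] - durations[task]

# Slack = LST - EST; zero-slack tasks form the critical path.
slack = {t: lst[t] - est[t] for t in durations}
critical_path = [t for t in durations if slack[t] == 0]
print(slack)          # {'Foundations': 0, 'Building': 0, 'Antenna': 0, 'SiteSecurity': 8}
print(critical_path)  # ['Foundations', 'Building', 'Antenna']
```

With these numbers the site security task carries all the slack, mirroring the situation in Fig. 2.5 where it can run in parallel and only has to be finished by the end of the project.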
3 Schedule slack is an important feature that can improve schedule robustness by absorbing delays
caused by unpredictable factors like severe weather conditions, logistics problems (e.g., strikes or
delay in high sea shipping), or problems with LLI procurement.
Project management also needs to deal with the identification of project risks, their
categorisation, and their monitoring throughout the entire life cycle. Project risks (also
referred to as threats) can be seen as any event that can have a negative or
undesirable5 impact on the overall project execution and, in a worst case scenario,
even cause project failure. A first analysis of potential risks should be performed at
the very start of a project and is usually documented in the so-called risk register.
Such a register usually categorises the identified risks into four types: technical,
cost, schedule, and programmatic (cf. [2]). Technical risks can affect the ability
of the system to meet its requirements (e.g., functional, performance, operational,
environmental, etc.). Cost and schedule risks imply an overrun in budget or a delay in
the achievement of project milestones. Programmatic risks are external events that
usually lie beyond the control of the project manager, as they depend on decisions
4 The reason is that the implementation of the site security related installations (e.g., fire detection
and alarm, fences) can be performed in parallel to the other activities and need to be finalised only
at the end of the project.
5 If the impact of an event is desirable, it is termed opportunity.
Fig. 2.6 Risk severity assessment based on the product of the likelihood of
occurrence and the projected consequence of the event
taken at a higher level. Examples are legal authorisation processes or funding cuts
due to a changing political landscape.6
Once all possible risk events have been identified and categorised, they need to be
rated as high, medium, or low project risks. The risk level is usually measured
as the product of two factors: (1) the likelihood or probability that the event
occurs, and (2) the consequence or damage it can cause. This is usually presented
in a two-dimensional diagram as shown in Fig. 2.6. Both the number of identified
risks and their respective severity levels can change during the project life cycle,
which requires a regular update and delivery of the risk register at major project
milestones. The ultimate goal of risk management is to move risks from the high
into the medium or even low risk area of Fig. 2.6. This can only be achieved if the
source and background of each risk item are well understood, as this allows the
right measures to be put in place at the right time.
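A minimal implementation of such a risk matrix might look as follows; the 1-to-5 scoring and the thresholds between low, medium, and high are invented for illustration, as every project defines its own matrix.

```python
def risk_level(likelihood: int, consequence: int) -> str:
    """Classify a risk by the product of likelihood and consequence,
    each scored from 1 (low) to 5 (high). Thresholds are illustrative."""
    score = likelihood * consequence
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

print(risk_level(5, 5))  # high   -> mitigation measures needed
print(risk_level(3, 3))  # medium -> monitor in the risk register
print(risk_level(1, 4))  # low
```

Mitigation then amounts to reducing one of the two factors until the product crosses a threshold, i.e., moving the risk towards the lower-left area of Fig. 2.6.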
6 Larger scale NASA space projects with development times spanning several years can be
adversely affected by the annual (fiscal year) budget approval by the United States Congress.
process as it might support the definition of the WBS and OBS as discussed earlier
in the text.
In a system hierarchy, each subsystem is supposed to perform a very specific
set of functions and fulfil the set of requirements that has been made applicable to it
(or traced to it). Any subsystem can be further decomposed to the component or
parts level if needed. Basic examples of such hierarchies are given for the Space
Segment and the Ground Control Segment in Chaps. 3 and 4 respectively. To find
the right balance in terms of decomposition size and level, one can use the 7 ± 2
elements rule, which is frequently cited as a guideline in SE literature (cf. [1]).7
Once an adequate decomposition has been found and agreed, it needs to be formally
documented in the so-called product tree, which serves as an important input for the
development of the technical requirements specification (TS). In this process, top
level requirements are decomposed into more specific and detailed ones that address
lower level parts of the system. The outcome is reflected in the specification
tree, which defines the hierarchical relationships and ensures forward and backward
traceability between the various specification levels and the parts of the system they
apply to (refer to [8]).
7 This heuristic rule suggests that at each level any component in a system should not have more
than 7 ± 2 subordinates.
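The heuristic is easy to check mechanically on a product tree. The sketch below (own illustration; the element names are invented) flags every element whose direct fan-out exceeds the upper bound of 9.

```python
def oversized_nodes(tree, limit=9, path="system"):
    """Yield the path of every node with more than `limit` direct subordinates."""
    if isinstance(tree, dict):
        if len(tree) > limit:
            yield path
        for name, subtree in tree.items():
            yield from oversized_nodes(subtree, limit, f"{path}/{name}")

product_tree = {
    "Ground Segment": {f"Station {i}": {} for i in range(12)},  # 12 > 9
    "Space Segment": {"AOCS": {}, "Power": {}, "Payload": {}},
}
print(list(oversized_nodes(product_tree)))  # ['system/Ground Segment']
```

A flagged node suggests introducing an intermediate decomposition level rather than blindly splitting: the rule is a guideline, not a hard constraint.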
[Figure: three parallel timelines; the ESA panel (ECSS-M-ST-10C) spans Phase 0 (mission analysis and needs identification), Phase A (feasibility), Phase B (preliminary definition), Phase C (detailed definition), Phase D (qualification and production), Phase E (operations and utilization), and Phase F (disposal), with the review milestones MCR, SRR, SDR, PDR, CDR, SIR, SAR, FRR, ORR, LRR, DR, and DRR]
Fig. 2.7 Comparison of life cycle stage definitions and review milestones of three different entities:
the International Council on Systems Engineering [2] (upper panel), NASA [3] (middle panel), and the
European Space Agency [9] (lower panel). The 3-letter acronyms refer to review milestones (refer to
the Acronyms list)
in the subsequent stages, i.e., a preliminary design in phase B and the detailed
design in phase C. Once the design is agreed and baselined,8 the production
and qualification can start (phase D). The qualification process itself comprises a
verification, validation, and acceptance activity. The operations phase (Phase E) in
space projects usually starts with the launch of the space vehicle and comprises a
LEOP and in-orbit commissioning phase prior to the start of routine operations. The
final stage starts with the formal declaration of the end-of-life of the product and
deals with the disposal of the product. In a space project this usually refers to the
manoeuvring of the satellite into a disposal orbit or its de-orbiting.
Figure 2.7 also shows the various review milestones (refer to 3-letter acronyms)
that are scheduled throughout the life cycle. These reviews can be considered as
decision gates at which the completion of all required activities of the current phase
are being thoroughly assessed. This is followed by a board decision on whether the
project is ready to proceed to the subsequent stage, or some rework in the current
one is still needed. As such reviews might involve external entities and experts, it
is very useful to provide a review organisation note that outlines the overall review
objectives, the list of documents subject to the review, the review schedule, and
any other important aspects that need to be considered in the review process (e.g., the
8 This implies that any further change needs to follow the applicable configuration control process.
Fig. 2.8 The Vee Systems Engineering model with typical project milestones as defined in [11]
method for raising a concern or potential error as a review item discrepancy (RID),
important deadlines, etc.).9
Figure 2.8 shows the life-cycle stages in the so-called Vee model [11, 12],
named after the V-shaped representation of the various SE activities. Starting
the SE process from the top of the left branch, one moves through the concept
and development stages, in which the requirements baseline must be established
and a consistent design derived from it. In the bottom (horizontal) bar,
the production, assembly, and integration activities take place, which deliver the
system of interest at the end. Entering the right side of the “V” (from the bottom
moving upwards), the various qualification stages are performed, starting with the
verification (“Did we build it right?”), followed by the validation (“Did we build the
right thing?”), and finally the product acceptance (“Did any workmanship errors occur?”).
Representing these activities in a “V” shape emphasises the need for interactions
between the left and the right sides. The elicitation and wording of the high level
requirements (top left of the “V”) should always be done with the utilisation and
validation of the system in mind. The utilisation or operational concept of a system
will impact both the high level user requirements and the validation test cases.
The specification stages that aim to define the requirements also need to define the
verification methods and validation plans. This will avoid the generation of either
9 Guidance for the organisation and conduct of reviews can be found in existing standards (see e.g.,
ECSS-M-ST-10-01C [10]).
10 An example for this is the failure of the Ariane 5 maiden flight on 4 June 1996 which could
be traced to the malfunctioning of the launcher’s inertial reference system (SRI) around 36 s after
lift-off. The reason for the SRI software exception was an incorrect porting of heritage avionic
software from its predecessor Ariane 4 which was designed for flight conditions that were not fully
representative for Ariane 5 (see ESA Inquiry Board Report [13]).
methods are also preferred for the development of safety critical products that need
to meet certification standards before going to market (e.g., medical equipment).
Incremental and Iterative Development (IID) methods are suitable when an initial
system capability is to be made accessible to the end user, followed
by successive deliveries to reach the desired level of maturity. Each of these
increments11 then follows its own mini-Vee development cycle, which
can be tailored to the size of the delivery. This method not
only accelerates the “time to market” but also allows end user feedback to be considered
when “rolling out” new increments, changing direction in order to better
meet the stakeholder's needs or to insert the latest technology developments. This could
also be seen as optimising the product's time to market in terms of the vectorial
quantity velocity rather than its scalar counterpart speed, with the advantage that
the vectorial quantity also defines a direction that can be adapted in the course
of reaching its goal. Care must however be taken to properly manage the level
of change introduced in subsequent increments, in order to avoid the risk
of an unstable design that is continuously adapted to account for new
requirements that could contradict or even be incompatible with previous ones.
IID methods have been used extensively in pure software development,
where agile methods have been applied for a much longer time. This is mainly due
to the fact that purely software based systems can be modified, corrected,
or even re-engineered more easily than systems involving hardware components. The
latter usually require longer production times and might run into technical
problems if physical dimensions or interfaces change too drastically between
consecutive iterations. More recently, agile development methods have also gained
significance in the SE domain, mainly through the implementation of the
following practices (cf. [14]):
• Perform short term but dynamic planning rather than long term planning.
Detailed short term plans tend to be more reliable as they can be based on a
more realistic set of information, whereas long term plans usually become unrealistic
soon, especially if they are based on inaccurate or even wrong assumptions taken
at the beginning of the planning process. This approach of course demands the
readiness of project managers to refine the planning on a regular basis throughout
the project, typically whenever more information on schedule critical matters
becomes available.
• Create a verifiable specification that can be tested right at the time it is
formulated. In order to verify a requirement or specification, it has to first be
that given increment. The SAFe® framework furthermore integrates and combines
concepts from Scrum, Extreme Programming, and Kanban as key technologies.
Scrum is a “lightweight framework” that originates from the software development
world and was defined by Jeff Sutherland and Ken Schwaber [20]. A Scrum
development builds on small teams (typically ten or fewer members) that
comprise the product owner (i.e., the product's stakeholder), the scrum master (i.e.,
the leader and organiser of the scrum team), and the developers. The incremental
delivery is referred to as a sprint and is characterised by a predefined sprint goal
(selected from a product backlog12) and a fixed duration of at most one month.
The outcome of a sprint is inspected in a so-called sprint review.
Extreme programming is an agile methodology originally formulated by Kent
Beck as a set of rules and good practices for software development which have
subsequently been published in open literature [21]. The extreme programming
rules address the areas of both coding and testing and introduce practices like pair
programming,13 test-driven development, continuous integration, strict unit testing
rules,14 simplicity in design, and a sustainable pace to control the programmer's
workload.
The Kanban method is a scheduling system that has its origins in the car industry
(e.g., Toyota) and defines a production system that is controlled and steered by
requests which actively trigger the resupply of stock or inventory and the production
of new items. Such requests can be communicated by physical means (e.g., Kanban cards) or
an electronic signal and are usually initiated by the consumption of a product, part, or
service. This allows an accurate alignment of the inventory resupply with the actual
consumption or demand of an item. SAFe® suggests Team Kanban as a working
method for the so-called System Teams [23], which are responsible for the build-up
of the agile development environment as well as the system and solution demos. The
latter is a demonstration of the new features of the most recent increment to all
relevant stakeholders, allowing immediate feedback to be received. Important working
methods in the Kanban context are the continuous monitoring and visualisation of
workflows as well as the establishment of work in process (WIP) limits and their
adaptation if necessary. A graphical technique to visualise workflows is shown in
the Kanban board depicted in Fig. 2.9. Work items, shown as green squares, are
pulled from the team backlog on the left, from where they move through the various
workflow states: “Analyse”, “Review”, “Build”, and “Integrate/Test”. The Analyse
and Build states have buffers, which allow a completed work item to be “parked”
until it is ready to enter the next state. This is beneficial in case the next
12 The product backlog is defined in [20] as an emergent, ordered list of what is needed to improve
the product and is the single source of work undertaken by the scrum team.
13 Pair programming (also known as collaborative programming) is a coding methodology that
involves two developers collaborating side-by-side as a single individual on the design, coding and
testing of a piece of software: one controls the keyboard/mouse and the other is an observer aiming
to identify tactical defects and providing strategic planning [22].
14 E.g., all code must undergo successful unit testing prior to release.
Fig. 2.9 Kanban board example, reproduced from [24] with permission of Scaled Agile Inc
step in the process depends on external factors that are not directly under the control
of the project, for example external reviewers or the availability of a required
test infrastructure. If maintained on a regular basis, the Kanban board supports
continuous progress monitoring, which is referred to as measuring the flow in
SAFe® terminology.
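The WIP-limit mechanism can be sketched in a few lines; the state names follow Fig. 2.9, while the limit values and the pull rule are invented for illustration.

```python
class KanbanBoard:
    """Minimal Kanban board that refuses a pull when the target state is full."""
    def __init__(self):
        self.wip_limits = {"Analyse": 3, "Review": 2, "Build": 3, "Integrate/Test": 2}
        self.columns = {state: [] for state in ["Backlog", *self.wip_limits, "Done"]}

    def pull(self, item, target):
        limit = self.wip_limits.get(target)   # Backlog and Done are unlimited
        if limit is not None and len(self.columns[target]) >= limit:
            return False                      # pull refused: WIP limit reached
        for column in self.columns.values():  # remove item from its current state
            if item in column:
                column.remove(item)
        self.columns[target].append(item)
        return True

board = KanbanBoard()
board.columns["Backlog"] = ["A", "B", "C", "D"]
print(board.pull("A", "Analyse"))  # True
```

Refusing the pull, rather than queueing silently, is the point of the WIP limit: the blocked flow becomes visible on the board and can be measured.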
the Object Management Group [26]. The use of models allows engineering data to be
created, managed, and verified that is less ambiguous, more precise, and, most importantly,
consistent. In contrast to the multitude of documents used in text-based
SE, the engineering data is stored in a single model repository which is used to
construct detailed views of the contained information in order to address specific
engineering needs or to perform a detailed analysis. Even if different views need
to be generated to provide the required inputs to the various engineering
domains, their consistency can always be easily guaranteed as they are derived
from the same source of data. This methodology also facilitates a fast and more
accurate impact analysis of a technical decision or design modification on the overall
system and its performance, which otherwise would be considerably more laborious
to generate and potentially less accurate.
The SysML language provides a set of diagrams that allow the capture of model
characteristics and requirements, but most importantly of specific behavioural aspects like
activities (i.e., the transition of inputs to outputs), interactions in terms of time-ordered
exchanges of information (sequence diagrams), state transitions, or constraints on
certain system property values (e.g., performance or mass budget). The advantage
of using a well defined (and standardised) modelling language is the compatibility
between various commercially available tools, allowing modelling data to be
exchanged more easily between projects or companies.
The last section of this chapter describes necessary administrative activities and
practices which are grouped under the terms quality assurance (QA) or product
assurance (PA) and aim to ensure that the end product of a developing entity fulfils
the stakeholder's requirements with the specified performance and the required
quality. Quality in this context refers to the avoidance of workmanship defects or
the use of poor quality raw materials. In order to achieve this, the developing entity
usually needs to agree to a set of procedures, policies, and standards at the project
start, which should be described in a dedicated document like the Quality Assurance
Management Plan (see e.g., [27]) so it can be easily accessed and followed during
the various project phases.
Table 2.1 Software design assurance level (DAL) as specified by DO-178B [28]

Catastrophic (level A): A failure may cause multiple fatalities, usually with loss of the airplane. P < 10−9
Hazardous/severe-major (level B): Failure has a large negative impact on the safety or performance, reduces the ability of the crew to operate the aircraft, or has an adverse impact upon the occupants. 10−9 < P < 10−7
Major (level C): Failure significantly reduces the safety margin or increases the crew's workload and causes discomfort for the occupants. 10−7 < P < 10−5
Minor (level D): Failure slightly reduces the safety margin or increases the crew's workload and causes an inconvenience for the occupants. 10−5 < P
No effect (level E): Failure has no effect on the safe operation of the aircraft. Failure probability: N/A
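The probability column of Table 2.1 can be expressed as a simple mapping from the failure probability of the associated failure condition to the required software level. This is an illustration of the table only; the boundary handling is simplified and level E (no safety effect) is left out since no probability applies to it.

```python
def assurance_level(failure_probability: float) -> str:
    """Map a failure probability (per flight hour) to the DAL of Table 2.1."""
    if failure_probability < 1e-9:
        return "A"  # catastrophic failure condition
    if failure_probability < 1e-7:
        return "B"  # hazardous / severe-major
    if failure_probability < 1e-5:
        return "C"  # major
    return "D"      # minor

print(assurance_level(1e-10))  # A
print(assurance_level(1e-6))   # C
```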
16 The DO-178B standard was actually developed by the Radio Technical Commission for
Aeronautics (RTCA) in the 1980s and has more recently been revised (refer to DO-178C [29])
in order to consider modern development and verification techniques used in model-based, formal,
and object-oriented development [30].
Fig. 2.10 Software testing is performed at unit, application, and interface level. Traceability
between code and the applicable documentation needs to be demonstrated in order to avoid dead
code. DDD = Detailed Design Document, SWICD = Software Interface Control Document, SRD
= Software Requirements Document
system.17 Unit testing is the lowest level of software testing and is supposed to
demonstrate that the SU performs correctly under all given circumstances. This is
usually achieved via automated parameterised testing, in which the SU is called
multiple times with different sets of input parameters in order to demonstrate
its correct behaviour under a broad range of calling conditions. Higher level
testing needs to be performed at application level and has to demonstrate the
proper functionality, which must be traceable to the relevant design document and
the applicable software requirements (SRD). Executable code in reused software
components that is never executed and cannot be traced to any requirement or
design description has to be identified as “dead code” and is subject to removal
unless its existence can be justified.18 Another important testing activity is
interface testing, which has to demonstrate the correct exchange of all interface
signals (in terms of format, size, file name, protocol, latency, etc.) as per the
applicable ICDs.
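Parameterised unit testing as described above can be sketched with Python's standard unittest module; the software unit here (a telemetry limit checker) is hypothetical.

```python
import unittest

def within_limits(value, low, high):
    """Hypothetical software unit under test: is a telemetry value in range?"""
    return low <= value <= high

class TestWithinLimits(unittest.TestCase):
    """One logical test executed over multiple sets of input parameters."""
    def test_parameterised(self):
        cases = [
            (5, 0, 10, True),    # nominal value
            (0, 0, 10, True),    # lower boundary
            (10, 0, 10, True),   # upper boundary
            (-1, 0, 10, False),  # below range
            (11, 0, 10, False),  # above range
        ]
        for value, low, high, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(within_limits(value, low, high), expected)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithinLimits)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

Each tuple exercises one calling condition, including the boundaries, which is where range-checking units typically fail.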
17 This can either be the OS of the host computer or the one of a Virtual Machine (VM).
18 The detection of dead code can be achieved either by source code inspection or as the outcome
of a structural coverage measurement campaign.
Fig. 2.11 Test Campaign Planning: PSR = Pre-shipment Review, TRR = Test Readiness Review,
TRB = Test Review Board, CCR = Configuration Change Request, VCB = Verification Control
Board
Quality Assurance standards also play an important role for the proper planning
and execution of testing activities during the verification and validation phases
of a project. The correct execution and documentation of a test campaign can
maximise its efficiency and outcome and avoid the need for potential test repetitions
at a later stage. The schematic in Fig. 2.11 depicts an example of a test campaign
planning scheme in which the various stages are depicted on a horizontal timeline.
A prerequisite of any testing activity is the finalisation of the development of a
certain part, software component, or sub-system which is ready to be handed over
to a higher level recipient. An example could be something simple like the ground
segment receiving a software maintenance release or a patch or something more
complex like a remote site receiving a brand new antenna dish to be mounted
on the antenna building structure. Any such transfer needs to fulfil a certain
set of prerequisites which need to be verified in a dedicated review referred to
as the pre-shipment review (see e.g., [31]). Such a PSR is held to verify the
necessary conditions for the delivery (“shipment”)19 from the developing to the
recipient entity. The conditions to authorise this shipment usually comprise the
19 The term shipping here refers to any kind of transfer, be it an electronic file transfer, postal transfer
of media, transfer via road or railway, or even a container ship crossing the ocean.
readiness of the destination site to receive the item,20 the readiness of the foreseen
transportation arrangements and transportation conditions (e.g., mounting of shock
sensors for sensitive hardware, security related aspects that impact the delivery
method, the exact address and contact person at destination site, etc.), the availability
of installation procedures, and the completeness of installation media.
After the successful shipment the installation and integration of the new compo-
nent can be performed. In case this implies a modification of an existing baseline,
the proper configuration control process needs to be followed. In other words,
the installation of the element is only allowed if a corresponding configuration
change request or CCR has been approved by the responsible configuration control
board (CCB). The installation itself needs to follow the exact steps defined in the
installation procedures and any deviations or additional changes required need to be
clearly documented in the installation report.
Once the installation has been finalised, a test readiness review (TRR) should be
held prior to the start of test campaign. In the TRR the following points should be
addressed and documented as a minimum:
• Review of the installation report to discuss any major deviations and their
potential impact.
• Define the exact scope of the test campaign, outlining the set of requirements or
the operational scenario that should be validated.
• The readiness of all required test procedures (TPs) should be confirmed and
the procedures should be put under configuration control. The TPs should also
contain the traces of the requirements to the test steps where the verification event
occurs. If needed, a test procedure can be corrected or improved during the test
run, but any performed change should be clearly identified for incorporation into
future versions of the TP (and re-used in future campaigns if needed).
• The test plan should identify the detailed planning of the campaign. The plan
should detail the number of tests that are scheduled for each day and identify
the required resources. Resources should be considered from a human point of
view (test operators and witnesses) and machine point of view (test facilities and
required configuration).
• In order to be reproducible, the exact configuration baseline of the items under
test as well as the used test facilities need to be identified and documented.
Either the configuration is specified in detail in the minutes of meetings that are
generated during this review or a reference to another document is provided.21
During the testing activity, the detailed test results need to be properly documented.
Each performed test step needs a clear indication of whether the
system has performed as expected (pass) or any problems have been observed (fail).
20 For dust sensitive components like a satellite certain clean room facilities at the receiving end
If a FAIL occurs at a test step to which the verification of a requirement has been
traced, that requirement cannot be considered verified in this run. It is highly
recommended to clearly define the criteria for a pass, a fail, and any intermediate
state22 as part of the TRR, to avoid lengthy discussions during the test execution.
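The verdict aggregation described above can be sketched as follows. The rule shown (any fail blocks verification, a qualified pass is flagged) is an assumption for illustration, not a mandated standard:

```python
# Hedged sketch of deriving a requirement's verification status from the
# verdicts of its traced test steps (PASS / FAIL / QP as discussed in the
# text). The aggregation rule is an illustrative assumption.

def requirement_status(step_verdicts):
    """FAIL in any traced step blocks verification; QP is flagged."""
    if "FAIL" in step_verdicts:
        return "NOT VERIFIED"
    if "QP" in step_verdicts:
        return "VERIFIED (qualified pass)"
    return "VERIFIED"

print(requirement_status(["PASS", "QP", "PASS"]))  # VERIFIED (qualified pass)
print(requirement_status(["PASS", "FAIL"]))        # NOT VERIFIED
```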
In case an anomaly in the system is observed, it needs to be documented with a
clear reference and description. The description needs to comprise all the details
that allow the anomaly to be reproduced later on, which is a prerequisite for its
correction. Depending on the nature of the anomaly and its impact, it can be termed
a software problem report (SPR), anomaly report, or non-conformance report
(NCR). The term NCR is usually reserved for anomalies that have a wider impact
(e.g., affecting functionality at system level) and are therefore subject to a
standardised processing flow (see e.g., [32]).
It is highly recommended to hold a short informal meeting at the end of every day
throughout the campaign, in order to review that day's test results and ensure
proper documentation while memories are fresh. These daily reports are very useful
for the generation of the overall test report at the end of the campaign, which is
one of the main deliverables at the test review board (TRB). It is important to
always keep in mind that test evidence is the main input to the verification control
board (VCB), which is where the verification of a requirement is formally approved
and accepted.
2.7 Summary
The objective of this chapter was to introduce the basic and fundamental principles
of systems engineering (SE) that need to be applied in any project aiming to deliver
a product that fulfils the end user's needs. SE activities must be closely coordinated
with those of the project management domain, and the roles of the systems engineer
and the project manager should therefore preferably be occupied by good friends
in order to guarantee project success. Especially at the project kick-off, such
close interaction will favour the generation of a more realistic working schedule
and enable the setup of more meaningful contractual hierarchies and collaborations,
allowing project cost and schedule to be controlled more efficiently.
The importance of developing a good understanding of the existing risks in a
project, and the ability to properly categorise and continuously monitor them
throughout the entire project life cycle, has been briefly discussed. A risk register
is a good practice to document this, but it should be considered a living document
that requires continuous updates and reviews, so it can help the project to take
adequate risk mitigation measures in time.
22 A possible intermediate state is the so-called qualified pass (QP), which can be used if an
unexpected behaviour or anomaly has occurred during the test execution which however does not
degrade the actual functionality being tested.
The two basic life cycle models of sequential and incremental development have
been explained in more detail. A profound understanding of their main characteristics
(in terms of commonalities and differences) is an important prerequisite for
engaging with the newer lean and agile methods. The latter can be considered
different flavours of the incremental approach23; they originally emerged in the
software development domain but in recent years have become more present at SE
level. The modern lean and agile techniques should not be seen as practices that
contradict the old proven methods, but rather as a new way to apply SE principles
in a more tailored and efficient manner, with the sole intention to avoid rework at a
later project stage.
Model based systems engineering (MBSE) has been briefly introduced as an
approach to improve the generation of requirements (specification) by expressing
them in executable models that are easier to visualise and to test. MBSE can improve
the accuracy, consistency, and quality (e.g., less ambiguity) of a requirements
baseline, which in return improves the system design and implementation. At the
same time, the risk of having to introduce change requests to correct or complement
functionality at later project stages can be reduced significantly.
Finally, some aspects of quality assurance (QA)24 have been explained that affect
both the development and verification stage of a project. In the development phase,
the application of software standards has been mentioned as an important means to
reduce the likelihood of software bugs. The verification stage will benefit from an
optimised planning and documentation of the test campaigns, with clearly defined
review milestones at the beginning and the end. It is worth mentioning that the need
for and benefit of QA can sometimes be underestimated and easily considered an
additional burden on already high workloads (e.g., additional documentation that is
considered unnecessary). Such a view should always be questioned, as insufficient
QA can lead to design or implementation errors, and inefficient test campaigns can
lead to poor documentation and missed requirements in the verification process. In
both cases, additional and potentially even higher workload will be required, most
likely implying schedule delays and cost overruns.
References
3. National Aeronautics and Space Administration. (2016). NASA systems engineering handbook.
NASA SP-2016-6105, Rev.2.
4. European Cooperation for Space Standardization. (2017). System engineering general require-
ments. ECSS-E-ST-10C, Rev.1.
5. Defense Acquisition University. (1993). Committed life cycle cost against time. Fort Belvoir,
VA: Defense Acquisition University.
6. Project Management Institute. (2021). A guide to the project management body of knowledge
(7th edn.). ANSI/PMI 99-001-2021, ISBN: 978-1-62825-664-2.
7. Ruskin, A. M., & Eugene Estes, W. (1995). What every engineer should know about Project
Management (2nd edn.). New York: Marcel Dekker. ISBN: 0-8247-8953-9.
8. European Cooperation for Space Standardization. (2009). Space engineering, Technical
requirements specification. ECSS-E-ST-10-06C.
9. European Cooperation for Space Standardization. (2009). Space project management, project
planning and implementation. ECSS-M-ST-10C.
10. European Cooperation for Space Standardization. (2008). Space management, Organization
and conduct of reviews. ECSS-M-ST-10-01C.
11. Forsberg, K., & Mooz, H. (1991). The relationship of systems engineering to the project
cycle. In Proceedings of the National Council for Systems Engineering (INCOSE) Conference,
Chattanooga, pp. 57–65.
12. Forsberg, K., Mooz, H., & Cotterman, H. (2005). Visualizing project management (3rd edn.).
Hoboken, NJ: Wiley.
13. European Space Agency. (1996). Ariane 5, Flight 501 Failure. Inquiry Board Report. Retrieved
March 5, 2022 from https://esamultimedia.esa.int/docs/esa-x-1819eng.pdf
14. Douglas, B. P. (2016). Agile systems engineering. Burlington: Morgan Kaufmann. ISBN: 978-
0-12-802120-0.
15. Scaled Agile Incorporation. (2022). Achieving business agility with SAFe 5.0. White Paper.
Retrieved March 5, 2022 from https://www.scaledagileframework.com/safe-5-0-white-paper/
16. Womack, J. P., & Jones, D. T. (1996). Lean thinking. New York, NY: Simon & Schuster.
17. Toyota Motor Corporation. (2009). Just-in-time - Productivity improvement. Retrieved March
5, 2022 from www.toyota-global.com
18. Scaled Agile Incorporation. (2022). Agile teams. Retrieved March 5, 2022 from https://www.
scaledagileframework.com/agile-teams/
19. Scaled Agile Incorporation. (2022). Agile release train. Retrieved March 5, 2022 from https://
www.scaledagileframework.com/agile-release-train/
20. Schwaber, K., & Sutherland, J. (2022). The definitive guide to scrum: The rules of the game.
Retrieved March 5, 2022 from https://scrumguides.org/docs/scrumguide/v2020/2020-Scrum-
Guide-US.pdf
21. Beck, K. (2004). Extreme programming explained: Embrace change (2nd edn.). Boston:
Addison-Wesley.
22. Lui, K. M., & Chan, K. (2006). Pair programming productivity: Novice-novice vs. expert-
expert. International Journal of Human-Computer Studies, 64, 915–925.
23. Scaled Agile Incorporation. (2022). System team. Retrieved March 5, 2022 from https://www.
scaledagileframework.com/system-team/
24. Scaled Agile Incorporation. (2022). Team Kanban. Retrieved March 5, 2022 from https://www.
scaledagileframework.com/team-kanban/
25. Object Management Group®. (2019). OMG Systems Modeling Language (OMG SysML®),
Version 1.6. Doc Number: formal/19-11-01. Retrieved March 5, 2022 from https://www.omg.
org/spec/SysML/1.6/
26. Object Management Group® . (2022). Official website. Retrieved March 5, 2022 from https://
www.omg.org/
27. European Cooperation for Space Standardization. (2018). Space product assurance: Quality
assurance, Rev. 2. ECSS-Q-ST-20C.
28. Radio Technical Commission for Aeronautics (RTCA). (1992). Software considerations in
airborne systems and equipment certification. Doc DO-178B.
29. Radio Technical Commission for Aeronautics (RTCA). (2011). Software considerations in
airborne systems and equipment certification. Document DO-178C.
30. Youn, W. K., Hong, S. B., Oh, K. R., & Ahn, O. S. (2015). Software certification of safety-
critical avionic systems and its impacts. IEEE A&E Systems Magazine, 30(4), 4–13. https://
doi.org/10.1109/MAES.2014.140109.
31. Galileo Project Office. (2016). Galileo software standard for ground (GSWS-G). GAL-REQ-
ESA-GCS-X/0002, Issue 1.0.
32. European Cooperation for Space Standardization. (2018). Space product assurance, noncon-
formance control system. ECSS-Q-ST-10-09C.
Chapter 3
The Space Segment
Fig. 3.1 Overview of main satellite subsystems and their respective interface points to the ground
segment (refer to text). SRDB = Satellite Reference Database, [MoI] = Moments of Inertia matrix,
[CoM] = Center of Mass interpolation table, OOP = Onboard Orbit Propagator
platform that carries a scientific remote sensing payload (e.g., a space telescope)
will usually have to meet quite stringent requirements on its pointing accuracy and
stability (jitter) which will have an impact on the design of its attitude control
system. A telecommunication satellite in contrast usually has a lower demand
for accurate pointing but a much higher one on the power subsystem to feed its
transponder so it can meet the required RF transmission characteristics and the link
budget. Interplanetary spacecraft that travel to the inner Solar System have to face
very high thermal loads, whereas a voyage to the Jovian system implies extreme
radiation doses that require extensive radiation shielding to protect the satellite's
electronic circuits and ensure its survival. With this in mind, one can state that
satellite platforms will only resemble each other in projects with similar mission
objectives, payloads, and operational environments. Independent of this, one can
however find a similar set of standard subsystems in all satellite platforms, which
are briefly summarised below.
3.1.1 Propulsion
The propulsion system enables the satellite to exert a thrust and change its linear
momentum. This is needed to insert a satellite into its operational orbit position and
to perform any correction of its orbital elements during its nominal lifetime (station
keeping). Propulsion systems can be categorised into chemical, nuclear, or electrical
systems. Nuclear-based systems have never reached a wider deployment in space
and will therefore not be discussed in more detail here.
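The chapter does not quote it, but the propellant cost of such orbit insertion and station keeping manoeuvres is governed by the Tsiolkovsky rocket equation. A minimal sketch, with purely illustrative dry mass, delta-v, and specific impulse values:

```python
import math

# Hedged sketch: propellant mass for a velocity change delta_v, from the
# Tsiolkovsky rocket equation. All numbers are illustrative assumptions,
# not mission values from the text.

def propellant_mass(m_dry, delta_v, isp, g0=9.80665):
    """m_p = m_dry * (exp(delta_v / (Isp * g0)) - 1)."""
    return m_dry * (math.exp(delta_v / (isp * g0)) - 1.0)

# e.g. 50 m/s of annual station keeping, 2000 kg dry mass, Isp = 300 s
mp = propellant_mass(2000.0, 50.0, 300.0)
print(f"{mp:.1f} kg")  # ≈ 34.3 kg of propellant per year
```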
3.1.2 Attitude Control
The attitude control system (ACS) enables a satellite to fulfil its pointing require-
ments which are defined in terms of pointing orientation, accuracy, and stability.
The orientation will be primarily dictated by the payload and the mission objectives.
Fig. 3.2 Overview of the satellite attitude control system’s (ACS) main components. OOP =
Onboard Orbit Propagator (see Sect. 6.6)
This is a vector quantity that can be derived by summing up the moments of the
momenta of all particles at their location r inside the rigid body:

H_O = Σ (r × m v)     (3.1)
The subscript 'O' in Eq. (3.1) denotes the common origin or reference point from
which r and v of each mass particle are defined. In the context of rotational motion,
the angular momentum vector H has a meaning equivalent to that of the linear
momentum vector L = m v for translational movement. An external torque T has,
equivalent to a force in linear motion, the ability to change both magnitude and
direction of the angular momentum vector, i.e., δH = T δt. A torque component
parallel to H will only modify its magnitude, whereas an orthogonal torque
component T_N will rotate H towards the direction of T, which causes the well
known precession of a gyroscope (see Fig. 3.3). It is worth remembering the
following basic properties of the angular momentum vector H:
• In contrast to external torques, no change of angular momentum can be achieved
via internal torques (e.g., fuel movement or movement of mechanisms inside the
satellite).
• A flying satellite is always subject to natural external disturbance torques (e.g.,
solar radiation, gravity gradient, aerodynamic), which imply a progressive
build-up of angular momentum over the satellite's lifetime. Therefore the satellite
designer will need to fit the satellite with external torquers (e.g., thrusters or
magnetic coils) in order to control (or dump) this momentum build-up.
• Activation of thrusters that do not point precisely through the centre-of-mass
can change the orientation of the angular momentum vector and destabilise the
satellite's attitude. To avoid this unwanted circumstance, satellites are sometimes
H_C = [I_C] ω     (3.2)

where the subscript C refers to the CoM, [I_C] is the inertia tensor, and ω is the
angular velocity vector relative to an inertial reference frame. The inertia tensor in
matrix notation is defined by

        ⎡  Ixx  −Ixy  −Izx ⎤
[I_C] = ⎢ −Ixy   Iyy  −Iyz ⎥     (3.3)
        ⎣ −Izx  −Iyz   Izz ⎦
where Ixx, Iyy, and Izz are the moments of inertia, and Ixy, Iyz, and Izx the
products of inertia. The latter are a measure of the mass symmetry of the rigid
body and are zero if the rotation axis is chosen to be one of the body's principal
axes, which are the eigenvectors of the inertia matrix. For a spacecraft that
contains moving parts like momentum or reaction wheels, the additional angular
momentum of each wheel, e.g., H_wh = I_wh ω_wh, needs to be added to the overall
angular momentum, yielding:

      ⎡ Ixx ωx − Ixy ωy − Izx ωz + Hx,wh ⎤
H_C = ⎢ Iyy ωy − Iyz ωz − Ixy ωx + Hy,wh ⎥     (3.4)
      ⎣ Izz ωz − Izx ωx − Iyz ωy + Hz,wh ⎦
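Equation (3.4) can be checked numerically. The following sketch multiplies an inertia tensor with a body rate vector and adds a wheel momentum bias; all numbers are illustrative assumptions, not values from the text:

```python
# Hedged numeric sketch of Eq. (3.4): total angular momentum of a satellite
# with a full inertia tensor plus a momentum wheel contribution.

I_C = [[120.0,  -2.0,  -1.5],   # inertia tensor [kg m^2]; off-diagonal
       [ -2.0, 140.0,  -3.0],   # entries are the negated products of inertia
       [ -1.5,  -3.0, 100.0]]
omega = [0.001, -0.002, 0.0005]    # body angular rates [rad/s]
H_wheel = [0.0, 12.0, 0.0]         # momentum wheel bias [N m s]

# H_C = [I_C] * omega + H_wheel, evaluated row by row
H_C = [sum(I_C[i][j] * omega[j] for j in range(3)) + H_wheel[i]
       for i in range(3)]
print(H_C)
```

Note how the wheel bias dominates the second component, illustrating the gyroscopic rigidity discussed next.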
Depending on the overall attitude motion, one can distinguish between spinning
and three-axis-stabilised satellites. Whereas the first will always have a momentum
bias, both options are possible for the latter. The main advantage of a momentum
bias is the gained gyroscopic rigidity, which can be applied either temporarily (e.g.,
during rocket firing) or permanently, as done for spinners. In the latter case, the
rotation axis will usually be chosen in a direction that does not have to change
during satellite operations. A spinning spacecraft is schematically depicted in
Fig. 3.3, for which the rotation axis ω is aligned with the angular momentum axis
H. This alignment will only be stable if the rotation axis is a principal axis (i.e., an
eigenvector of the body's inertia matrix).
Attitude determination depends on the input provided by specialised sensors of
which the most common ones are summarised in the upper part of Fig. 3.4. An
Earth sensor is designed to detect the Earth’s horizon and is usually operated in
the infrared spectrum where additional optimisation effects can be exploited (e.g.,
less variation between peak radiation, no terminator line, presence of the Earth in IR
even during eclipses). Static Earth-horizon sensors are usually used for high altitude
orbits (MEO and GEO), for which the apparent radius of the Earth1 is small enough
to locate transition detectors in a static arrangement that ensures their overlap
with the border of the apparent Earth disc. This allows the sensor to measure
a displacement of the detectors and derive the deviation of roll and pitch angles
from a reference. Due to the circular shape of the Earth, the measurement of the
yaw angle (rotation around the local vertical) is however not possible with that
same device.
As the apparent Earth diameter is considerably larger for a LEO type orbit,2
scanning Earth-horizon sensors are used to determine the nadir direction. As
implied by their name, such units implement a scanning mechanism that moves a
detector with a small aperture (pencil beam) on a conical scan path. When crossing
the horizon, the sensor generates a square-wave shaped pulse which is compared to
a reference pulse; the offset between the two is used as a measure of the satellite's
deviation from the reference attitude (see e.g. [6] for more details).
Sun sensors are widely used on satellites because the Sun's angular radius is fairly
insensitive to the chosen orbit geometry (i.e., 0.267 deg at 1 AU), which allows the
Sun to be treated as a point source in the sensor software. Furthermore, the Sun's
brightness makes the use of simple equipment with minimal power requirements
possible. There is a wide range of designs, which differ both in their field of view
(FOV) and resolution specification, leading to the terms Fine Sun Sensor (FSS) and
Coarse Sun Sensor (CSS). Based on their basic design principle, the following main
categories are distinguished (cf., e.g. [4]):
1 The apparent Earth diameter at an orbit altitude h is given by α = 2 arcsin[RE /(RE + h)],
which yields around 17.5 deg at GEO altitude.
2 As an example, at an orbit altitude of 500 km the apparent Earth has a diameter of ca. 135 deg.
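The footnote formula can be evaluated directly. This hedged sketch reproduces the two quoted figures, using an illustrative Earth equatorial radius of 6378 km:

```python
import math

# Sketch of the footnote formula: apparent Earth diameter seen from an
# orbit altitude h, alpha = 2 * arcsin(R_E / (R_E + h)).

R_E = 6378.0  # km, Earth equatorial radius (illustrative value)

def apparent_diameter_deg(h_km):
    return math.degrees(2.0 * math.asin(R_E / (R_E + h_km)))

print(round(apparent_diameter_deg(35786.0), 1))  # GEO: ≈ 17.4 deg
print(round(apparent_diameter_deg(500.0), 1))    # 500 km LEO: ≈ 136.0 deg
```

The two results match the footnote values of roughly 17.5 deg and 135 deg.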
Fig. 3.5 Principle of a GNSS derived attitude measurements using two patch antennas mounted
on a satellite surface at distance L
Whereas the ACS sensors explained above are used to determine the satellite's
attitude, actuators are used to change it, either to correct the current attitude or to
achieve a new one. Actuators can be categorised into two groups: external torquers
affecting the total angular momentum (and necessary to counteract external torques)
and internal torquers that only redistribute the momentum between the satellite's
moving parts (see lower panel of Fig. 3.4).
Thrusters are usually mounted in clusters pointing in different directions and are
mainly applied for orbit corrections, but they can also assist the attitude control
system. Thrusters have the advantage of providing altitude-independent torque
magnitudes (as opposed to magnetic torquers), but come with the drawback that
their accumulated thrust and exerted torque are determined via their switch-on
duration. If thrusters are used as the prime attitude control means of a satellite, a
high number of activation cycles might be needed, which will reduce the lifetime
of the entire subsystem and with it even that of the satellite. Reaction and
momentum wheels are therefore the preferred choice, with thrusters serving as
momentum dumping devices.
Cold gas systems are a specific type of thruster designed for small thrust levels,
typically in the order of 10 mN. Their design uses an inert gas (e.g., Nitrogen,
Argon, or Freon) which is stored at high pressure and released into a nozzle on
activation, where it simply expands without any combustion process involved. The
thrust force stems from the pressure release of the inert gas into space. The low
thrust level and the ability to operate with very short switch-on times allow the
achievement of small impulse bits in the order of only 10⁻⁴ Ns, which is needed
to meet high-precision pointing requirements.
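A quick arithmetic check of the quoted impulse-bit order of magnitude; the 10 ms pulse duration is an illustrative assumption, not a figure from the text:

```python
# Hedged check: impulse bit = thrust * switch-on time. A ~10 mN cold gas
# thruster fired for a ~10 ms pulse (assumed) gives the 10^-4 Ns order
# of magnitude quoted in the text.

thrust_N = 10e-3        # 10 mN cold gas thrust level
pulse_s = 10e-3         # 10 ms switch-on time (illustrative assumption)
impulse_bit = thrust_N * pulse_s
print(impulse_bit)      # 1e-4 Ns
```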
Magnetic torquers are rod-like electromagnets whose generated magnetic field
depends on the current I that flows through them. The resulting torque T stems
from the interaction with the local Earth magnetic field vector B via the following
relation

T = n I A (e × B)     (3.5)

where n is the number of wire turns on the coil, A the cross-sectional area of the
magnetic coil, and e the unit vector of the coil's axis.
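Equation (3.5) can be evaluated numerically. Coil and field values in the sketch below are illustrative assumptions (the field magnitude is merely of LEO order):

```python
# Hedged numeric sketch of Eq. (3.5): T = n * I * A * (e x B).

def cross(a, b):
    """Vector cross product a x b."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

n_turns = 500                    # number of wire turns on the coil
current = 0.1                    # coil current [A]
area = 0.01                      # coil cross-sectional area [m^2]
e_axis = [0.0, 0.0, 1.0]         # unit vector along the coil axis
B = [20e-6, 0.0, 30e-6]          # local geomagnetic field [T], LEO order

scale = n_turns * current * area          # magnetic moment magnitude n*I*A
T = [scale * c for c in cross(e_axis, B)] # torque [N m]
print(T)
```

The torque is perpendicular to both the coil axis and B, which is why the achievable control authority depends on the orbit position within the geomagnetic field.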
Reaction wheels (RWs) and momentum wheels (MWs) are precision wheels that
rotate about a fixed axis, driven by an integrated motor. They need to implement a
reliable bearing unit which, despite high rotation rates, can guarantee a lifetime
commensurate with the overall satellite lifetime (usually 15 years). The difference
between RW and MW is mainly the way they are operated. RWs are kept at zero or
very low rotation and, through their mounting in orthogonal directions, are used to
provide three-axis control of the satellite. MWs are operated at high rotation rates
(usually in the range between 5000 and 10000 rpm) and used to provide a
momentum bias to the satellite.
3.1.3 Transceiver
The satellite transceiver subsystem has to establish the RF link to the ground
segment's TT&C station network (refer to Sect. 5). The main components are
depicted in Fig. 3.6 and are briefly described here. The diplexer separates the
transmit and receive path and comprises low-pass and band-pass filters in order to
prevent out-of-band signals from reaching the receiver front-end's low noise
amplifiers (LNAs). The received signal is routed to the receiver front-end (RX-FE)
where it is amplified and down-converted to the intermediate frequency (IF) of
70 MHz. This conversion requires the transceiver to accommodate its own
frequency generator, which usually is a temperature-compensated crystal oscillator
(TCXO). The IF signal then enters the modem interface unit (MIU), where it is
converted into the digital domain for further signal processing (bit synchronisation,
demodulation, and modulation).
On the transmit chain, the TM frames are modulated by the MIU onto the
baseband signal and then transformed from the digital to the analogue domain.
Following up-conversion (UC) to high frequency, the signal is amplified to the
required down-link power level by the high power amplifier (HPA) unit. The
adequate amplification level depends on the link budget, which needs to consider
an entire range of parameters in the overall transmission chain. These comprise the
antenna gain, the propagation distance, atmospheric losses, receiver noise level,
and transmitter and receiver losses (refer to Sect. 5.5). After amplification, the
signal is routed to the diplexer for transmission to ground.
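One term of the link budget mentioned above, the free-space path loss over the propagation distance, can be sketched as follows. Frequency and distance are illustrative assumptions; the full budget with gains, losses, and noise is treated in Sect. 5.5:

```python
import math

# Hedged sketch of the free-space path loss (FSPL) term of a link budget.

def fspl_db(distance_m, freq_hz):
    """FSPL [dB] = 20 * log10(4 * pi * d * f / c)."""
    c = 299_792_458.0  # speed of light [m/s]
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# e.g. an S-band TT&C link (2.2 GHz, assumed) over GEO distance (~35786 km)
print(round(fspl_db(35_786_000.0, 2.2e9), 1))  # ≈ 190.4 dB
```

Doubling the propagation distance adds about 6 dB of loss, which is why the required HPA output power depends so strongly on the orbit altitude.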
Fig. 3.6 Main components of a typical satellite transceiver. LNA = Low Noise Amplifier, DC =
Down Converter, MIU = Modem Interface Unit, TX-BE SSPA = Transmitter Backend Solid State
Power Amplifier
The overview provided here is quite generic; the main architectural differences
among various transceiver models will usually be found at the level of the MIU
(e.g., the need to support different modulation schemes) and in the specification of
the HPA.3
3.1.4 Onboard Computer
The main components of a satellite onboard computer (OBC) and its interfaces to
other satellite subsystems are shown in Fig. 3.7. The core component of the OBC
are the two redundant central processing units (CPUs) which provide the necessary
processing power and memory to run the onboard software (OBSW) and to process
all the incoming data from the satellite's subsystems. In contrast to computer
systems deployed on ground, CPUs in space need to withstand much harsher
environmental conditions, which comprise high energy radiation (e.g., in the
Earth's Van Allen belts or the Jovian system), extreme thermal conditions (e.g.,
missions to the inner part of the Solar System), aggressive chemical environments
(e.g., atomic oxygen in Earth's outer atmosphere), or launcher induced vibrations
[8]. Due to the critical importance of the CPU for the overall functioning of the
3 The required amplification level will drive the energy consumption of the satellite and mainly
depend on the orbit altitude (propagation path) and the required signal level on ground.
Fig. 3.7 Satellite onboard computer (OBC) components. Solid lines represent data interfaces
whereas dashed lines power connections. OBSW = Onboard Software, PCDU = Power Control
and Distribution Unit, CLTU = Command Link Transfer Unit (CCSDS TC format), CADU =
Channel Access Data Unit, CAN = Controller Area Network
OBC, redundant processor architectures are the standard design. The interface with
other satellite subsystems is realised via the so-called spacecraft bus, which can
be based on a variety of existing standards. The most frequently used one is the
US military standard MIL-STD-1553B [9], which consists of a bus controller able
to serially connect up to 31 "Remote Terminals". Alternative standards in use
are SpaceWire, defined by ESA ECSS [10], and the Controller Area Network (CAN)
interface (cf. [11]), which was originally developed by the automotive industry and
only later adapted for use in other areas, including space applications.
The communication of the satellite to the ground segment's TT&C station is
mainly performed using the CCSDS standardised Space Link Extension or SLE
protocol, which is described in more detail in Sect. 7.2.1. This standard describes
the layout of packets and the segmentation of these packets into frames and higher
order structures to build the Command Link Transfer Units (CLTUs) for TC uplink
and the Channel Access Data Units (CADUs) for TM downlink. The TC decoding
and TM encoding is performed by a dedicated CCSDS TM/TC processor board
which interfaces to the modem of the satellite transceiver subsystem. These boards
are usually entirely hardware based (ASICs or FPGAs) and by design explicitly
avoid the need for higher level software that would otherwise need to be
implemented as part of the OBSW. This has the important advantage that the
CCSDS processor board can still interpret emergency commands from ground in
case the OBSW is down. Such emergency commands are referred to as high priority
commands (HPCs) and are identified by a dedicated command packet field code.
This allows them to be immediately identified by the CCSDS processor board,
which can then
route them directly to the command pulse decoding unit (CPDU) and bypass the
OBSW. Following the directives in the HPC, the CPDU sends analogue pulses of
different lengths over dedicated pulse lines in order to steer the power control and
distribution unit (PCDU). Possible scenarios could be the emergency deactivation
of certain power consumers or a power cycle (reboot) of the entire OBC.
In case of a component failure, the reconfiguration unit has to initiate the
necessary "rewiring" of a redundant component (e.g., one of the main processor
boards) with a non-redundant unit (e.g., the CCSDS TM/TC board) inside the
OBC.4 The switchover can either be commanded from ground or initiated via the
satellite internal FDIR sequence. Furthermore, the reconfiguration unit needs to
track the health status of each OBC component and update this state in case it is
reconfigured. This information is kept in a dedicated memory location referred to
as the spacecraft configuration vector, and any switchover activity is communicated
to ground via dedicated high priority telemetry (HPTM).
Another important OBC component is the DC/DC converter which converts the
satellite bus standard voltage supply of 28 V to the much lower 3.3 V that is usually
required by the various OBC components.
From a mechanical design point of view, OBCs are usually composed of a set of
printed circuit boards, each of them mounted in a dedicated aluminium frame.
These frames are then accommodated in an overall OBC housing. As each of these
circuit boards dissipates heat, they are monitored via thermistors located on the
board surfaces, which is shown via the Thermal Control box in Fig. 3.7.
3.1.5 Power
The power system has to ensure that sufficient electrical power is generated, stored,
and distributed to the consumers so they are able to properly function. The power
subsystem is usually structured into a primary and secondary block and a power
control and distribution unit (PCDU) which is schematically shown in Fig. 3.8. The
primary block is responsible for generating power using solar arrays, fuel cells, or
nuclear generators referred to as Radioisotope Thermoelectric Generators or RTGs.5
The secondary block is responsible for the storage of energy and therefore comprises
a set of batteries. This storage capacity needs to be correctly dimensioned, based on
the outcome of a dedicated analysis taking into account the maximum power
demand (load) of all the satellite's subsystems, the available times the satellite is
exposed to sunlight, and the maximum duration of encountered eclipses. Another
important aspect for the power budget is the solar array degradation rate due to
5 RTGs are typically used for missions to the outer Solar System where the power received from
the Sun is too weak to satisfy the consumers' needs.
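The eclipse-driven sizing analysis mentioned above can be reduced to a minimal calculation. The load, eclipse duration, depth-of-discharge, and efficiency figures below are illustrative assumptions:

```python
# Hedged sketch of eclipse-driven battery sizing: required capacity from the
# eclipse load, eclipse duration, allowed depth of discharge (DoD), and
# discharge efficiency. All numbers are illustrative assumptions.

def battery_capacity_wh(p_load_w, t_eclipse_h, dod=0.8, efficiency=0.95):
    """Minimum battery capacity in Wh for one eclipse pass."""
    return p_load_w * t_eclipse_h / (dod * efficiency)

# e.g. a 1500 W eclipse load for a 1.2 h GEO eclipse
print(round(battery_capacity_wh(1500.0, 1.2)))  # ≈ 2368 Wh
```

In a real sizing analysis, the battery degradation over the mission lifetime and the worst-case eclipse season would be folded into the same calculation.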
Fig. 3.8 Simplified view of a satellite’s Electrical Power Subsystem (EPS). SPA = Solar Power
Assembly, S3R = Sequential Switching Shunt Regulation, MCU = Mode Control Unit, BCR =
Battery Charge Regulator, BDR = Battery Discharge Regulator, BMU = Battery Monitoring Unit
radiation, which implies a reduction of generated power as the array ages, reaching
its lowest value at the end of the satellite's lifetime.
As both the generated and consumed power levels usually vary during an orbit,
the solar array segments (also referred to as solar power assemblies or SPAs) can be
activated or deactivated in a staggered way. This is done by the shunt dump module
(SDM), which steers the grounding of each SPA as shown in Fig. 3.8. The voltage
sensing is done by the mode control unit or MCU.
The secondary block comprises the battery pack, a heater, and the battery
management unit. The latter monitors the battery temperature, the cell voltage, and
the cell pressure. The battery charge regulator (BCR) ensures a constant current
charge of the battery during sunlight operations, and the battery discharge regulator
(BDR) a constant supply to the main electrical power bus during eclipse times. Two
different types of direct current power bus systems can usually be found in satellites:
a regulated and an unregulated bus type. The regulated one provides only one single
voltage (e.g., 28 or 50 V), whereas an unregulated one provides an entire range of
voltage values. The satellite's subsystems (consumers) are connected to the outlets
of the PCDU.
3.1.6 Thermal Control
The prime task of a satellite's thermal control system is to monitor and regulate
the environmental temperature conditions of the entire satellite platform. Electronic
components are the most sensitive parts onboard a satellite, and are nominally
qualified to operate within a very narrow range around room temperature. Due to
extreme temperature gradients in space, the thermal control system must implement
active and passive measures to limit these in order to avoid excessive thermal
expansion of the satellite structure which could lead to distortions or even structural
instabilities.
Due to the absence of an atmosphere in space, radiation and conductance are the
dominant heat transfer mechanisms, and convection can be neglected. The main
heat sources are external ones that stem from solar radiation reaching the satellite
either directly or through reflection, and from the thermal infrared radiation of
nearby planets. There is also an internal source, namely the heat generated by the
various electronic devices and batteries. The satellite can dissipate heat through its
radiators that point to cold space. Thermal equilibrium is therefore reached when
the amount of heat received from outside plus any thermal dissipation inside the
spacecraft equals the amount of energy that is radiated to deep space. This thermal
balance can be expressed as (cf., Sect. 11 of [2])
A_α α J_incident = A_ε ε σ T^4    (3.6)

where the left-hand side represents the absorbed energy, determined by the absorbing
area A_α, the incident radiation intensity J_incident (W/m^2), and the surface radiation
absorptance α, and the right-hand side represents the energy radiated to space with the
emitting surface A_ε, the emittance ε, the Stefan-Boltzmann constant σ, and the equilibrium
temperature of the satellite body T. If a spacecraft behaved like a perfect black body, the
coefficients α and ε would be one. As both coefficients are material constants, the
satellite manufacturer can influence the equilibrium temperature through the proper
choice of surface materials based on their thermal coefficients which are known and
readily available from open literature (see, e.g., [12] or [13]).
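Equation (3.6) can be solved for the equilibrium temperature T directly. A minimal numerical sketch (the function name and the example surface values are illustrative assumptions, not data from this chapter):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def equilibrium_temperature(alpha, eps, j_incident, area_abs, area_emit):
    """Solve Eq. (3.6), A_a * alpha * J = A_e * eps * sigma * T^4, for T [K]."""
    return (area_abs * alpha * j_incident / (area_emit * eps * SIGMA)) ** 0.25

# Example: flat plate with equal absorbing and emitting areas of 1 m^2,
# solar flux at 1 AU (~1361 W/m^2), alpha = 0.3 and eps = 0.8 (white-paint-like)
t_eq = equilibrium_temperature(0.3, 0.8, 1361.0, 1.0, 1.0)
```

Lowering the α/ε ratio (e.g., by choosing white paint over bare metal) lowers T, which is exactly the design lever discussed above.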
A detailed thermal design of a satellite cannot be purely based on the thermal
equilibrium equation and a more refined formulation is required that also takes into
account the thermodynamics inside the satellite specific structure. This formulation
is referred to as the thermal mathematical model or TMM and needs to be
formulated for each satellite during its design phase. The fundamental concept of a
TMM is to discretise the satellite body into a number n of isothermal nodes as shown
in Fig. 3.9. Each node is characterised by its own temperature Ti , thermal capacity
Ci , and radiative and conductive heat exchange mechanisms to its surrounding
nodes. If a node is located at the satellite’s surface, dissipation to outer space also
needs to be considered. The conductive heat transfer between nodes i and j can
therefore be written as (cf. [14])

Q_c,ij = h_ij (T_i − T_j)    (3.7)

where T_i and T_j are the respective node temperatures and h_ij is the effective thermal conductance
Fig. 3.9 The thermal mathematical model of a satellite showing the heat exchange of a single
node i (external heat inputs Q_ext,i: solar, albedo, and planetary radiation)
between the two nodes. The radiative heat transfer between nodes i and j is given
by

Q_r,ij = A_i ε_ij F_ij σ (T_i^4 − T_j^4)    (3.8)

where A_i is the emitting surface and ε_ij the effective emittance. The term F_ij is the so-called
radiative view factor that quantifies the fraction of the radiation leaving one
surface that is intercepted by the other.6
With the aforementioned definitions in place, the TMM can now be formulated
as a set of n non-linear differential equations, where each equation defines the
thermodynamics of one single node i:

m_i C_i dT_i/dt = Q_ext,i + Q_int,i − Q_space,i − Σ_{j=1..n} Q_c,ij − Σ_{j=1..n} Q_r,ij.    (3.9)
Q_int is the internal heat dissipation of the node, Q_space the heat radiated to space
given by

Q_space,i = A_space,i ε_i σ T_i^4

where A_space,i is the surface area of node i facing cold space. Q_ext is the external
heat input comprising contributions from the Sun, the Earth's albedo, and any visible
planet according to

Q_ext,i = α_i A_i (J_solar + J_albedo + J_planetary).

6 For a more detailed treatment of the view factor, the reader is referred to more specialised
literature.
J defines the flux densities (W/m^2) of each heat source. The terms Q_c,ij and
Q_r,ij represent the thermal contributions of heat conduction and radiation as defined
in Eqs. (3.7) and (3.8), respectively. With the TMM in place to accurately model the
thermal dynamics inside the satellite, the designer can now take appropriate measures to
influence them by means of passive or active thermal control techniques.
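The system of Eq. (3.9) is typically integrated numerically. A minimal explicit-Euler sketch for n nodes (function name, matrix layout, and time step are illustrative assumptions; real TMM tools use dedicated solvers):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def step_tmm(T, mC, h, rad, q_ext, q_int, a_space, eps, dt):
    """Advance node temperatures T [K] by one Euler step of Eq. (3.9).

    mC      : m_i * C_i, node heat capacities [J/K]
    h       : conductance matrix h_ij [W/K]             (Eq. 3.7 couplings)
    rad     : radiative couplings A_i * eps_ij * F_ij   (Eq. 3.8 couplings)
    a_space : node area facing cold space [m^2], eps: node emittance
    """
    q = q_ext + q_int - a_space * eps * SIGMA * T**4  # external, internal, space loss
    for i in range(len(T)):
        q[i] -= np.sum(h[i] * (T[i] - T))                  # conduction to neighbours
        q[i] -= np.sum(rad[i] * SIGMA * (T[i]**4 - T**4))  # radiation to neighbours
    return T + dt * q / mC
```

With only a conductive coupling between two nodes, the temperatures relax towards a common value while the total stored energy is conserved, a useful sanity check for any TMM implementation.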
Examples of passive techniques comprise the already mentioned selection of
surface properties that will influence the α/ ratio in Eq. (3.6) or the laying of
conducting paths inside the satellite structure which influence the heat conduction
term Qcij defined in Eq. (3.7). More elaborate and widely used passive technologies
are two-phase heat transport systems like heat pipes, loop heat pipes, or capillary-
pumped loops which implement sealed tubes that contain a working liquid which
is kept in thermodynamic equilibrium with its vapour. A porous structure ensures
that the liquid and the vapour remain separated but can still interact. The exposure
of the pipe to high temperature on one side (incoming heat) will lead to evaporation
of the liquid which causes a pressure gradient between the two sides of the pipe
resulting in a transport of vapour to the opposite side. Once the vapour arrives at
the cooler end, it condenses and releases heat there. Another much
simpler passive technique is the use of insulation materials designed to minimise
radiative heat exchange in vacuum. The most prominent example is the multi-layer
insulation (MLI) blanket used as radiation shields for satellite external surfaces that
are exposed to the Sun.
Active control techniques comprise heaters,7 variable conductance heat pipes,
liquid loops, shutters, or heat pumps. For a more detailed description of these
devices, the reader is referred to more specialised literature (e.g., [14] or [15]).
7 Heaters are more frequently deployed inside propulsion subsystems (e.g., fuel lines, valves, etc.).
The satellite typically implements a set of system modes tailored to its specific needs
and circumstances during the following main operational phases:
• The pre-launch phase with the satellite located in the test centre and connected
to the Electronic Ground Support Equipment (EGSE) for testing purposes.
• The launch and early orbit phase (LEOP) comprising launch, ascent, S/C separa-
tion from the launch vehicle, activation of the satellite transponder, deployment
of solar arrays, and the satellite’s placement into the operational orbit with the
correct attitude.
• The commissioning phase during which the in-flight validation of all subsystems
is performed.
• The nominal operations and service provision phase.
• The end-of-life phase during which the satellite is placed into a disposal orbit
and the satellite is taken out of service.
Following the phases mentioned above, the following satellite system modes
would be adequate:8 during the launch, the satellite is put into a standby mode,
in which the OBC is booted and records a basic set of house-keeping telemetry. The
power generation must be disabled (solar array stowed and battery not connected)
and any AOCS related actuators deactivated. Once ejected from the dispenser, the
satellite enters into the separation mode, in which it stabilises its attitude, reduces
rotational rates, starts transmission to ground, and deploys its solar arrays oriented
to the Sun to initiate the power generation. Depending on the mission profile and its
operational orbit, a set of orbit correction manoeuvres are usually required during
LEOP to reach the operational target orbit. Such manoeuvres are also required
during the nominal service phase and referred to as station keeping manoeuvres.
The execution of manoeuvres usually requires a reorientation of the satellite and also
implies additional requirements on the attitude control system to guarantee stability.
Therefore, a dedicated system mode for the execution of manoeuvres is usually
foreseen which is referred to as orbit correction mode or OCM. The nominal mode
is used for standard operations and is optimised to support the payload operations
that provide the service to the end user (e.g., pointing of the high gain antenna to
nadir direction).
A very important mode is the safe mode (SM) which is used by the satellite in
case a major failure is detected. The SM is designed to transition the satellite into a
state in which any further damage can be avoided. This comprises a safe orientation,
a reduction of power consumption, and the transmission of critical housekeeping
telemetry to the ground segment. Potential reasons for such a transition into SM
could be subsystem failures, an erroneous TC from ground, or, in a worst case
scenario, even a complete black-out of the entire ground segment implying a loss
of ground contact. The transition to SM can either be commanded from ground
or initiated autonomously by the OBC through its Failure Detection Isolation and
8 The system mode naming convention presented here is quite generic and will most likely differ
from one satellite to another. The satellite specific user manual should therefore be consulted.
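The mode logic described above behaves like a small finite state machine with a restricted set of legal transitions. A toy sketch (the transition table is a hypothetical example following the generic mode names used in this section; the satellite specific user manual defines the authoritative set):

```python
from enum import Enum, auto

class Mode(Enum):
    STANDBY = auto()     # launch: OBC booted, power generation disabled
    SEPARATION = auto()  # attitude stabilisation, solar array deployment
    NOMINAL = auto()     # routine payload operations
    OCM = auto()         # orbit correction mode for manoeuvres
    SAFE = auto()        # survival state after a major failure

# Hypothetical allowed-transition table; SAFE is reachable from all in-orbit modes
ALLOWED = {
    Mode.STANDBY: {Mode.SEPARATION},
    Mode.SEPARATION: {Mode.NOMINAL, Mode.SAFE},
    Mode.NOMINAL: {Mode.OCM, Mode.SAFE},
    Mode.OCM: {Mode.NOMINAL, Mode.SAFE},
    Mode.SAFE: {Mode.NOMINAL},
}

def transition(current, target):
    """Reject mode changes not foreseen by the (hypothetical) mode design."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal mode transition {current.name} -> {target.name}")
    return target
```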
Fig. 3.10 Overview of satellite life cycle and relevant milestones. REQ = Requirements, SRR =
System Requirements Review, PDR = Preliminary Design Review, CDR = Critical Design Review,
QR = Qualification Review, AR = Acceptance Review, FRR = Flight Readiness Review, LRR =
Launch Readiness Review, EMC = Electromagnetic Compatibility, ESD = Electrostatic Discharge,
LEOP = Launch and Early Orbit Phase, COMM = Commissioning Phase, DEC = Decommissioning Phase
During the requirements consolidation phase, the high-level mission requirements are
broken down into lower level requirements which are technically tailored to each segment and
their respective subsystems. The lower level requirements can then be used as a
specification for the satellite platform and its payload which is an important input
for the design phase. The term consolidation implies several steps of iteration in
order to converge to an acceptable version which can be formalised and “frozen” at
the system requirements review (SRR) milestone at which the formal requirements
baseline is established. This means that any subsequent change of a requirement
needs to be subject to an agreed configuration control process9 in order to properly
assess the technical, schedule, and cost impact.
In the following phase the satellite architecture is defined and documented and
thoroughly scrutinised at the preliminary design review (PDR) and the critical
design review (CDR) prior to the production of real hardware. During the assembly
phase the satellite will usually be located at the satellite manufacturer’s premises.
Some of the subsystems might be subcontracted to other companies which will then
deliver them to the prime contractor for integration.
The subsequent phase is the testing phase which is performed at subsystem and
system level. Typical system level test campaigns are the thermal-vacuum, vibration,
acoustic, and EMC tests which require the relocation of the satellite to a specialised
test centre facility that is equipped with the necessary infrastructure like vacuum
chambers, shakers, or anechoic chambers. Due to the high cost to build, maintain,
and operate such complex test facilities, they are usually not owned by the satellite
manufacturer but rented for specific test campaigns.
A very important test is the system compatibility test during which the satellite’s
OBC is connected to the Electronic Ground Support Equipment (EGSE) which
itself is connected to the Satellite Control Facility (SCF) of the ground segment. This
setup allows the ground segment engineers and operators to exchange representative
TM and TC with the OBC and to demonstrate the full compatibility of the space
and ground segments together with the validation of the flight control procedures.
Environmental testing is performed at qualification level which demonstrates that
the satellite performs satisfactorily in the intended environment even if a qualifi-
cation margin is added on top of the nominal flight conditions. This is followed
by testing at acceptance level during which the nominally expected environmental
conditions (with no margins) are applied aiming to demonstrate the conformance
of the product to its specification and to discover potential manufacturing defects
and workmanship errors (cf. [17]).10 At the end of all the test campaigns a
qualification review (QR) is held to analyse and assess the results and if no major
9 A configuration control process usually involves the issuing of configuration change requests.
10 Qualification testing is performed on a qualification model (QM) that is produced from the same
drawings and materials as the actual flight module (FM), whereas acceptance testing is done with
the actual FM.
non-conformance is discovered the satellite can pass the acceptance review (AR)
which is a prerequisite for its transportation to the launch site.
At the launch site the flight keys are loaded, the tanks are fuelled, and final
measurements of wet mass, sensor alignments, and inertia matrix are performed.
After the launch site functional tests have confirmed that the satellite performance
has not been affected by transportation, the flight readiness review (FRR) can be
considered successful and its integration into the launcher payload bay can be
started. An important aspect of this process is the verification of all launch vehicle
payload interface requirements, ensuring that the satellite is correctly “mated” with
the launcher both from a mechanical and electrical point of view. The final review
on ground is the launch readiness review (LRR) which gives the final “go for flight”.
Once the satellite is ejected from the launcher’s upper stage, the LEOP phase
starts during which the satellite needs to establish first contact with ground, acquire
a stable attitude, and deploy its solar panels. The LEOP usually comprises a set
of manoeuvres to reach the operational orbit. Once the satellite has acquired its
nominal attitude, a detailed check-up of all subsystems and the payload is performed.
This phase is referred to as satellite commissioning and proves that all satellite
subsystems have survived the harsh launch environment and are fully functional
to support the operational service phase.
Once the satellite has reached its end of life (usually several years), the
decommissioning phase starts. Depending on the orbit height, a de-orbiting or
manoeuvring into a disposal orbit is performed.
Fig. 3.11 Physical Layer Operations Procedure (PLOP-1) as outlined in Fig. 4-1 and Table 4-1 of
CCSDS-201.0-B-3 [18]. Reprinted with permission of the Consultative Committee for Space Data
Systems © CCSDS
References
1. Sforza, P. (2012). Theory of aerospace propulsion (Chap. 12, pp. 483–516). Amsterdam:
Elsevier Science.
2. Fortescue, P., Stark, J., & Swinerd, G. (2003). Spacecraft systems engineering. Hoboken:
Wiley.
3. Hayakawa, H. (2014). Bepi-Colombo mission to Mercury. 40th COSPAR Scientific Assembly.
The detailed design of a ground control segment (GCS) will most likely differ
throughout the various space projects, but one can usually find a common set of
subsystems or elements responsible for fulfilling a clear set of functional requirements.
A quite generic and simplified architecture is depicted in Fig. 4.1 showing elements
that address a basic set of functions described in Sect. 4.1. These functions are
realised in specific elements indicated by little yellow boxes with three-letter
acronyms (see also the figure caption) which are then described in more detail in the
following Chaps. 5–10. Rather generic element names have been used here and the
actual terminology will differ in the specific GCS realisation of a space project. It
could also be the case that several elements or functionalities are combined into one
single application software, server, or rack and it is therefore more adequate to think
of each described element here as a component that fulfils a certain set of tasks.
The actual realisation could either be a specific software application deployed on a
dedicated server or workstation or a virtual machine and will in the end depend on
the actual segment design and its physical realisation.
The telemetry tracking and commanding (TT&C) function provides the direct link
between the satellite and the ground segment. In most space projects the elements
implementing this function will be located at a geographically remote location from
the remaining GCS infrastructure. It has to support the acquisition of telemetry (TM)
from the satellite and to forward it to the GCS. In the opposite direction, it receives
telecommands (TC) from the GCS and transmits them to the satellite. In addition it
has to support the generation of radiometric data which comprise the measurement
Fig. 4.1 Ground control segment (GCS) functional architecture. The boxed acronyms refer to the
element realisation as described in Chaps. 5–10, i.e., FDF = Flight Dynamics Facility, KMF = Key
Management Facility, MCF = Mission Control Facility, MPF = Mission Planning Facility, OPF =
Operations Preparation Facility, SCF = Satellite Control Facility, TSP = Time Service Provider
of the distance between the satellite and the antenna phase centre (ranging), the
angular position of the satellite location projected onto the celestial sphere (angles),
the relative velocity between the satellite and the TT&C station (Doppler), and
finally some measurements of meteorological data. The latter typically comprise
temperature, pressure, and humidity (at the location of the ground station), which
are needed for the modelling of the signal delay time in the Earth’s atmosphere.
Whereas the acquisition of TM and transmission of TCs are linked to the Satellite
Control Facility (SCF), the radiometric data are needed by the Flight Dynamics
Facility (FDF) as an essential input for the determination of the satellite's orbit
and position.
The number of TT&C stations needed for a space project depends mainly on the
mission goals. If more than one ground station is involved, the term ground
station network is used. The amount of time a satellite is visible for a specific
TT&C station is highly dependent on the orbit altitude, its shape,1 and its inclination
with respect to the Earth equator. Whereas satellite visibilities in Low Earth Orbit
(LEO) last only up to a few minutes per pass, contacts with a satellite in Medium Earth
Orbit (MEO) or in Geosynchronous Orbit (GEO) can last up to several hours.
1 The shape of the orbit is expressed by the so-called orbit eccentricity which can range from close
to zero (near-circular) to values approaching one (highly elliptical).
The ground-track of the satellite will strongly depend on the orbit inclination.2
This orbit element also determines the highest latitude a satellite can be seen from
ground which puts a constraint on the usability of a certain ground station for a
satellite contact.
The need for ground station coverage, i.e., the required amount of time a satellite
is visible from ground, varies throughout the operational phase the satellite is
in. During LEOP the execution of orbit manoeuvres could require double station
visibility for contingency planning reasons, whereas during routine operations only
one contact per orbit might be sufficient. The required contact duration for routine
contacts itself is often driven by the specific payload needs to downlink (dump) its
telemetry and the available data rate provided by the space to ground link design.
From a flight dynamics point of view, ground station coverage (or the lack
thereof) has an impact on the achievable orbit determination accuracy. Both the
geographical distribution of the ground stations around the globe, which provides
visibility of several different arcs of an orbit, and the duration of a satellite contact,
which determines the length of the observed orbit arc, have an influence (see Chap. 6).
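A rough feeling for these visibility durations can be obtained from simple spherical geometry. The sketch below estimates the maximum pass duration for a circular orbit, under idealised assumptions (spherical Earth, station directly under the ground track, Earth rotation neglected; names and constants are my own):

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6378.137e3      # Earth's equatorial radius, m

def max_pass_duration(altitude_m, min_elevation_deg=0.0):
    """Upper bound on contact duration [s] for a circular orbit at the given altitude."""
    a = RE + altitude_m
    e = math.radians(min_elevation_deg)
    # Earth-central half-angle of the visibility cone above the elevation mask
    lam = math.acos(RE * math.cos(e) / a) - e
    omega = math.sqrt(MU / a**3)  # orbital angular rate, rad/s
    return 2.0 * lam / omega
```

For a 500 km LEO this yields roughly ten minutes at zero elevation (real passes are shorter because they rarely cross the zenith), while a MEO altitude of about 23000 km gives several hours, consistent with the statement above.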
2 The inclination is defined as the angle between the satellite orbit plane and the equatorial plane
of the Earth.
The flight dynamics function also computes the required orbit correction manoeuvres and
translates these into satellite specific command parameters like thrust orientation,
thrust start and end time, and/or burn duration.
Many satellite operational activities depend on a number of orbital events like
the presence of the Sun or Moon in the field-of-view (FoV) of a sensor, the satellite
presence inside or outside the Earth’s shadow (eclipse periods), or geometrical
conditions like a specific angular collinearity between the Sun, Moon, and the
satellite itself. The computation of such events requires orbit knowledge of the
satellite but also of other celestial bodies (mainly the Sun and Moon for Earth orbiting
satellites) and are therefore usually computed by flight dynamics.
The main task of the mission control function is to establish the real time
communication with the satellite through the reception of telemetry (TM) and the
uplink of telecommands (TCs). The telemetry is generated by the onboard computer
of the satellite and typically comprises two groups of parameters which stem from
two different sources. The first group is referred to as housekeeping (HK) telemetry
and comprises all the TM parameters that report the status of the various subsystems
of the satellite and allow an assessment of the overall health status of the spacecraft. The
second one is the TM collected by the payload and the actual data relevant for
the end user of the space asset. It should be kept in mind that the data volume of
the payload specific TM can be significantly larger compared to the HK TM. The
downlink or dump of the payload TM might therefore be scheduled differently to
the HK one (e.g., dedicated time slots, contacts, or even ground stations).
Prior to any possible exchange of TM or TC, a communication link between
the GCS and the satellite needs to be established. This is usually initiated via the
execution of a set of commands that are referred to as contact directives. Such
directives comprise commands to relocate the TT&C antenna dish to the right
location,3 activate and correctly configure all the required TT&C subsystems in the
up- and downlink paths, and finally power-on the amplifiers to transmit (or raise)
the RF carrier.
The TM is organised in so-called TM packets and TM frames that follow an
established space communication protocol. To avoid the need for every satellite
project to develop its own protocol, the need for protocol standardisation has been
identified already in early times. One of the most widely used space communication
protocols is the Space Link Extension or SLE protocol which has been standardised
by the Consultative Committee for Space Data Systems (CCSDS) [1] since the
1980s and is explained in more detail in Sect. 7.2.
3 The correct satellite location and track on the sky for a given TT&C station needs to be pre-
computed and provided to the station in advance.
Mission control is responsible for the transmission of all TCs to the satellite. The
purpose and complexity of TCs cover a very broad range, from a simple test
TC verifying the satellite link (usually referred to as a ping TC) up to very complex
ones that comprise hundreds of different command parameters (e.g., a navigation message
of a GNSS satellite). Most of the tasks that need to be carried out require the
transmission of a set of TCs that need to be executed by the satellite in the right
sequence and timing. This is usually achieved through the use of predefined TC
sequences which are loaded into the TC stack in the required order before being
released (uplinked) to the spacecraft. This allows the satellite operations engineer
(SOE) to perform the necessary cross-checks prior to transmission.
After reception on ground, all new TM needs to be analysed which is one of the
main responsibilities of the SOE. As there are usually hundreds of different TM
parameter values to be scrutinised after each contact, the SCF provides a comput-
erised cross check functionality which compares each received TM parameter to
its expected (i.e., nominal) value and range. The definition of the expected range
requires a detailed understanding of the satellite architecture and its subsystems and
is therefore an important input to be provided by the satellite manufacturer as part
of the “out of limit” database. This database needs to be locally stored in the SCF
so it can be easily accessed. In case an out-of-limit parameter is detected, it needs to
be immediately flagged to the operator in order to perform a more detailed analysis.
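The limit-check logic can be sketched as a simple lookup against the out-of-limit database. All parameter names, limits, and the flat dictionary format below are hypothetical illustrations, not from any real SCF:

```python
# Hypothetical out-of-limit database entries: parameter -> (low limit, high limit)
OOL_DB = {
    "BATT_TEMP": (0.0, 40.0),     # battery temperature, degC
    "BUS_VOLTAGE": (26.0, 30.0),  # main power bus, V
}

def check_telemetry(tm_values):
    """Flag every received TM parameter that falls outside its expected range."""
    flags = []
    for name, value in tm_values.items():
        limits = OOL_DB.get(name)
        if limits is None:
            continue  # no limit defined for this parameter
        low, high = limits
        if not low <= value <= high:
            flags.append((name, value, limits))
    return flags
```

In a real SCF the database also distinguishes soft and hard limits and mode-dependent ranges; the sketch only shows the basic comparison.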
Even if not mandatory for operations, the mission control function should
provide means for an in-depth analysis of the satellite TM received on ground.
This capability should comprise tools for the investigation of trends in time-
series as well as cross correlations between different types of TMs. The ability
to generate graphical representations will further support this. Also the ability to
compute derived TM parameters, defined as parameters that are computed from a
combination of existing TM values, can be helpful to gain a better understanding of
anomalies or events observed on the satellite.
Another key functionality is the ability to support the maintenance of the satellite
onboard software (OBSW) in orbit which is an integral part of the satellite and
therefore developed, tested, and loaded into the satellite’s onboard computer (OBC)
prior to launch. In case a software problem is detected post launch, the OBSW will
require a remote update and the satellite manufacturer has to develop a patch or even
a full software image and provide it to the ground segment for uplink. The ground
segment needs to be able to convert this patch or image into a TC sequence that
follows the same packet and frame structure as any routine TC sequence used during
nominal satellite operations. A verification mechanism at the end of the transmission
should confirm that the full patch or image has been received without errors and
could be stored in the satellite onboard computer.
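The patch-uplink mechanism can be sketched as chunking the image into an ordered TC sequence plus an end-to-end checksum for the final verification step. Chunk size, dictionary layout, and the CRC-32 choice are illustrative assumptions:

```python
import zlib

def image_to_tc_sequence(image: bytes, chunk_size: int = 200):
    """Split an OBSW patch/image into ordered TC payloads plus an overall CRC."""
    chunks = [image[i:i + chunk_size] for i in range(0, len(image), chunk_size)]
    tcs = [{"seq": n, "total": len(chunks), "data": c} for n, c in enumerate(chunks)]
    return tcs, zlib.crc32(image)  # CRC checked onboard after the last chunk

def verify_onboard(tcs, expected_crc):
    """Onboard-side reassembly and verification before committing to the OBC."""
    image = b"".join(tc["data"] for tc in sorted(tcs, key=lambda t: t["seq"]))
    if zlib.crc32(image) != expected_crc:
        raise ValueError("OBSW image corrupted during uplink")
    return image
```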
More and more space projects today rely on satellite constellations which puts
a high demand on the daily operational workload. Especially for satellites that
have reached their operational routine phase, the TM/TC exchange will follow a
very similar pattern that does not require a lot of human interaction. Automated
satellite monitoring and control has therefore become a key functionality to reduce
the workload and operational cost in a satellite project. An essential input for the
automation component is the reception of a schedule that contains all the upcoming
contact times and expected tasks to be executed within each of the contacts.
Fig. 4.2 The mission planning process, overview of inputs and outputs
A planning request (PR) can come from either the mission planning operator, another element inside
the GCS (e.g., updates of the onboard orbit propagator, or manoeuvre planning
inputs), or even a project external source or community. In scientific satellite
projects (e.g., space telescopes) such external requests are typically requests for
observation time that specify the type, location, and duration of an observation
and can also imply operational tasks for the ground segment (e.g., change of the
satellite attitude to achieve the correct pointing or switch to a special payload mode).
To ensure that all the relevant information is provided and to ease a computerised
processing of this input a predefined format or template for a PR should be used.
As already mentioned, any planning activity has to consider all applicable
resource constraints and contact rules which is another important input to the
planning process. Some might be more stringent than others which needs to be
clearly identified. Typical examples for planning constraints are:
• maximum and minimum contact duration,
• maximum and minimum time spans between consecutive contacts,
• available downlink data rate (determining the required time to downlink a given
data volume),
• mission-relevant orbit locations (e.g., for Earth observation purposes),
• any payload and platform constraints which have an impact on the satellite
operation.
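Checks of this kind lend themselves to automation. A toy sketch of a constraint checker for a candidate contact plan (the constraint values and the tuple format are hypothetical examples of the bullets above):

```python
# Hypothetical planning constraints, in seconds
CONSTRAINTS = {"min_contact": 300, "max_contact": 3600, "min_gap": 1800}

def check_schedule(contacts):
    """contacts: list of (start_s, end_s) tuples sorted by start time.
    Returns the list of constraint violations found in the candidate plan."""
    violations = []
    for i, (start, end) in enumerate(contacts):
        duration = end - start
        if duration < CONSTRAINTS["min_contact"]:
            violations.append(f"contact {i}: duration {duration}s below minimum")
        if duration > CONSTRAINTS["max_contact"]:
            violations.append(f"contact {i}: duration {duration}s above maximum")
        if i > 0 and start - contacts[i - 1][1] < CONSTRAINTS["min_gap"]:
            violations.append(f"contact {i}: gap to previous contact below minimum")
    return violations
```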
4.1.7 Simulation
4 The set of simulators used during the early project definition phases (e.g., system concept
simulator, functional engineering simulator, or functional validation testbench) and the ones
developed by the space segment (e.g., spacecraft AIV simulator or EGSE, onboard software
validation facility) are not described here and the reader is referred to ECSS-E-TM-10-21A [2].
5 The connection from the GCS to the satellite onboard computer would then be achieved via the
EGSE.
4.1.8 Encryption
In some projects the satellite might be equipped with a payload or platform module
that requires encrypted commanding and provides security sensitive data to the
end user. In this case the ground segment needs to host an encryption function
able to perform the encryption operations through the implementation of the specific
algorithms and a set of keys that need to be known to both the ground and the
space segment. At satellite level this is ensured through a dedicated key loading
operation which is usually performed a few days before the satellite is being lifted
and mounted into the launch vehicle payload bay. To ensure that all encryption
operations can be performed efficiently with minimum latency being introduced, a
close interaction with the mission control function via a secure interface needs to
be realised. Due to the very project specific design and realisation and its classified
nature, no detailed chapter is provided here and the reader is recommended to consult the
project specific design documentation.
All ground segment functions need to be synchronised to the same time system
and one single time source. This can either be generated inside the ground segment
or provided by an external source referred to as the time service provider (TSP).
For navigation (GNSS) type projects such an external source is not required as
an accurate time source is usually available from the atomic clock deployed in
the ground segment which is also the basis for the definition of the time system
(e.g., GPS or GST time). Other satellite projects require an external time system
which needs to be distributed or synchronised to all the servers that host the various
functions. A convenient way to distribute timing information is to use the network
time protocol (NTP) that can take advantage of the existing network infrastructure
that is already deployed. NTP is a well established time synchronisation protocol
used on the internet and is therefore supported by the majority of operating systems
in use today.
The exchange of information inside the GCS but also with external entities can
only be realised through the proper definition and implementation of interfaces. An
accurate and complete interface specification requires as a minimum the definition
of the following points:
• the exact scope of information that needs to be exchanged,
• the format that needs to be used for the exchanged information (contents and
filename),
• the exchange mechanism and protocol to be used,
• the specification of all relevant performance parameters (e.g., file size, frequency
of exchange).
The definition of a file content can comprise both required and optional information.
Optional input must be clearly identified as such and a recommendation given
on how to use it in case it is provided. The file format can be binary or ASCII, where
the latter has the obvious advantage of being human readable. In modern ground
segment design the Extensible Markup Language (XML) has gained significant
importance and is extensively used in the design of interfaces. XML brings two
main advantages: a clearly defined syntax (file format) and an extensive set of rules
for specifying the expected file content and constraints. Examples of the
latter are the expected data format (e.g. string or numerical value), the allowed range
of a value, or limitations on the number of data items to be provided in a file. As
these rules are defined as part of the XML schema, they can be easily put under
configuration control which is essential for the definition of an interface baseline.
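The rule checking enabled by an XML schema can be illustrated with a small hand-rolled validator (full XSD validation needs a dedicated schema library; the element names, types, and ranges below are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical content rules, as an XML schema would express them:
# element name -> (expected type, allowed numeric range or None)
RULES = {"duration": (float, (0.0, 86400.0)), "station": (str, None)}

def validate(xml_text):
    """Check required elements, data formats, and value ranges of one file."""
    root = ET.fromstring(xml_text)
    errors = []
    for name, (typ, rng) in RULES.items():
        node = root.find(name)
        if node is None:
            errors.append(f"missing element <{name}>")
            continue
        try:
            value = typ(node.text)
        except (TypeError, ValueError):
            errors.append(f"<{name}> is not a valid {typ.__name__}")
            continue
        if rng is not None and not rng[0] <= value <= rng[1]:
            errors.append(f"<{name}> value {value} outside range {rng}")
    return errors
```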
The exchange mechanism and protocol should clarify how the file transfer must be
done. Whereas in some cases a simple File Transfer Protocol (FTP) might be adequate,
it might not fulfil the security needs in other cases.6
The interface performance specifies the data volume that needs to be transferred
over the network which is determined by the average file size of each transfer and the
transfer frequency. An accurate assessment of the required interface performance
during operations is an important input for the correct sizing of the network
bandwidth. An underestimated bandwidth can cause a slow down in data transfer
that could impact the overall ground segment performance.
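A back-of-the-envelope sizing of this kind can be sketched as follows; the file size, exchange frequency, and margin factor are illustrative assumptions:

```python
def required_bandwidth_bps(avg_file_size_bytes: float,
                           transfers_per_day: float,
                           margin: float = 2.0) -> float:
    """Average network bandwidth needed for one interface, with a sizing margin.

    The margin absorbs burstiness: files rarely arrive evenly spread over the day.
    """
    bits_per_day = avg_file_size_bytes * 8 * transfers_per_day
    return margin * bits_per_day / 86400.0

# e.g. a 5 MB product delivered every 10 minutes (144 transfers per day)
demand = required_bandwidth_bps(5e6, 144)  # roughly 133 kbps with 2x margin
```

Summing such estimates over all interfaces sharing a link gives a first input for the network bandwidth dimensioning.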
Every ground segment architecture will require the definition and implementation of a number of interfaces, both internally and to external entities, as is
schematically shown in Fig. 4.3. Files and signals that need to be exchanged with
the same entity can be grouped and specified in one interface control document
indicated by ICD-A, ICD-B, or ICD-C. The term control emphasises the need
for configuration control in addition to the interface specification. To guarantee a
flawless flow of information, the same ICD versions need to be implemented on both
sides of an interface.
Fig. 4.3 Example of interface specification with three external entities A, B, and C. Each interface
control document (ICD) is put under configuration control via a specific version number. The
interface baseline (“Baseline-1”) is defined via a set of ICDs and their respective version numbers
Even if this seems obvious, it might not always be a simple
task if the two ends are under different contractual responsibilities and implemented
by different entities. The definition of an interface baseline (e.g., “Baseline 1” in
Fig. 4.3) specifies a set of documents and their expected version numbers and is a
useful means to make them applicable to a subcontractor. This can be formally done
via the contractual item status list (CISL), which is issued by the contracting entity;
the contracted entity can then confirm the correct reception and implementation by
issuing the contractual status accounting report (CSAR).
References
1. The Consultative Committee for Space Data Systems. (2007). Overview of space communica-
tion protocols. CCSDS 130.0-G-2, Green Book.
2. European Cooperation for Space Standardization. (2010). Space engineering, system modelling
and simulation. ECSS-E-TM-10-21A.
Chapter 5
The TT&C Network
The TT&C station’s main functions can be derived from its acronym, which refers
to telemetry (reception), tracking, and control (see Fig. 5.1). The word telemetry
reception should be seen in a wider context here, comprising the satellite’s own
housekeeping data that provide the necessary parameters to monitor its health and
operational state, but also the data generated by the payload. The control part refers
to the ability of the TT&C station to transmit telecommands (TCs) to the satellite
allowing the GCS to actually operate the satellite.
The overall TT&C station architecture can be decomposed into three main
building blocks, (1) the mechanical part providing the ability to support the heavy
antenna dish and point it to a moving target in the sky, (2) the RF subsystem
comprising all the subsystems of the uplink and downlink signal paths, and (3) the
supporting infrastructure comprising the station computers, network equipment, and
power supplies needed to monitor and operate the station remotely and to provide
a stable link for the transfer of all data and tracking measurements to the control
centre.
Fig. 5.1 The three main functions of a TT&C antenna expressed by its acronym: tracking,
telemetry reception, and command transmission
• Ranging measurements are derived via a correlation process in which the incoming signal is mixed with a locally generated reference signal for
which the phase is continuously adjusted (delayed) up to the point of constructive
interference. This measured time delay can then be converted into a (two-way)
travel distance that provides the range measurement. It is important to keep in
mind that the slant range is always measured with respect to the phase centres of
the satellite and the ground station antennas, and additional corrections need to
be applied before using them for orbit determination. Such corrections comprise
the satellite phase centre to centre-of-mass correction and the subtraction of the
additional time delay in the ground station RF equipment (station bias).
• Angular track measurements comprise the azimuth and elevation profiles of the
satellite orbit projected onto the sky. They are usually measured as a function
of time during a contact while the antenna is automatically tracking
the satellite RF beam. The corresponding operational antenna mode is therefore
called autotrack mode. The second operational mode is the program track mode
in which the antenna strictly follows a predicted orbit track which is made
available prior to the start of the contact. Every pass usually starts in program
track in order to point the antenna to the location where the satellite is expected
to appear in the sky. Once the antenna has locked onto the satellite signal,
it is able to change into autotrack mode which then allows the generation of
azimuth and elevation profiles.
• Doppler measurements provide the relative radial velocity between the satellite
and the ground station, derived from the measured frequency shift Δf via

vrel = c · Δf/f0 (5.1)
where f0 is the source frequency (prior to its Doppler frequency shift) and c the
speed of light in vacuum. That relative velocity can also be interpreted as the
time derivative of the slant range, dρ/dt, known as range rate. The Doppler shift
is measured by the TT&C station receiver by phase-locking onto the incoming
signal and comparing the received (downlink) frequency from the satellite to the
original uplink frequency. This only works if the satellite transponder works in
coherent mode which means it multiplies the uplink frequency by a fixed turn-
around factor to derive its downlink carrier frequency.1
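Equation 5.1 and the coherent turn-around can be sketched numerically. The 240/221 ratio used below is a commonly quoted S-band turn-around value, taken here purely as an illustrative assumption:

```python
C = 299_792_458.0  # speed of light in vacuum [m/s]

def range_rate_from_doppler(delta_f_hz: float, f0_hz: float) -> float:
    """Relative radial velocity from a measured one-way Doppler shift (Eq. 5.1)."""
    return C * delta_f_hz / f0_hz

def coherent_downlink_hz(uplink_hz: float, turnaround=(240, 221)) -> float:
    """Downlink carrier of a transponder operating in coherent mode.

    The 240/221 turn-around ratio is a commonly used S-band value,
    assumed here for illustration only.
    """
    num, den = turnaround
    return uplink_hz * num / den

v = range_rate_from_doppler(13_500.0, 2.025e9)  # a shift of 13.5 kHz -> ~2 km/s
```

The sign of the measured shift directly tells whether the slant range is increasing or decreasing during the pass.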
5.2 The Mechanical Structure
The mechanical structure of the antenna assembly can be split into the moving part
comprising the large antenna reflector dish and everything mounted to it and the
static part consisting of the antenna building that provides the support structure and
hosts the motors and bearings that enable the station to move the antenna and meet
its pointing requirements. An example of a typical architecture is depicted in Fig. 5.2
showing the generic design of a Galileo TT&C station. The antenna movement
needs to be possible in two degrees of freedom referred to as azimuth and elevation.
Azimuth is defined as the motion along the (local) horizon and covers a theoretical
span of 360 deg, whereas elevation refers to the movement in the perpendicular
direction of 180 deg starting from any point on the horizon crossing the zenith
and ending in the exact opposite side on the horizon again. These are of course
ideal ranges in a mathematical sense which will not be feasible in reality due to
mechanical limitations of the dish and the support infrastructure. To allow a precise
positioning at a given azimuth and elevation angle, both the current position and
its rate of change need to be measured via special encoder devices. This information then
has to be transmitted to the antenna control unit (ACU) which is the subsystem
responsible for the overall management and commanding of the antenna motion.
The antenna motion itself is usually achieved via a set of electrical drive engines
that engage in the toothing of the corresponding bearings as presented in Fig. 5.3
showing again the Galileo TT&C ground station design as an example. As the
1 In non-coherent mode the satellite transponder uses its own generated carrier frequency which it
derives from its on-board frequency source. This operating mode however does not allow Doppler
measurements.
Fig. 5.2 Static and dynamic parts of the Galileo TT&C station. The dynamic part moves in two
degrees of freedom, in azimuth and elevation. The antenna building houses all the RF equipment
which is typically mounted into server racks. Also shown is the cable loop for the cable guidance
between the static and the moving antenna sections. Reproduced from [1] with permission of CPI
Vertex Antennentechnik GmbH
Fig. 5.3 Examples of azimuth and elevation drive engines (coloured in red) from the Galileo
TT&C design. Reproduced from [1] with permission of CPI Vertex Antennentechnik GmbH
motion in elevation has to carry the entire weight of the reflector structure, so called
ballast cantilevers are used as a counter-balance to reduce the mechanical load on
the engines during motion.
Fig. 5.4 Example of a larger size antenna reflector using an antenna supporting structure and
reflector panels. A Cassegrain type dish implements a subreflector that is mounted in the antenna
focus and diverts the RF beam into the antenna feed. Reproduced from [1] with permission of CPI
Vertex Antennentechnik GmbH
For large antenna diameters (typically above 10 meters) the antenna reflector can
be built on top of a supporting structure using individual reflector panels as building
blocks as shown in Fig. 5.4. The main advantage is that a large reflector surface can
be decomposed into small parts and packed into overseas containers for shipment to a
remote location where the dish can then be easily re-assembled. Such a design needs
to allow the exact adjustment of each individual panel after it has been mounted
onto the support structure in order to achieve an accurate parabolic shape of the
full antenna surface. A Cassegrain type reflector incorporates a smaller subreflector
which is mounted in the focal point of the parabola and reflects the RF beam into the
antenna vertex point. From that point onwards a wave guide leads the RF beam into
the antenna building where the equipment for the further RF processing is located.
The antenna building structure has to carry the load of the full reflector dish
and protect all the sensitive RF equipment from environmental conditions like rain,
humidity, storm, and extreme temperatures. The site selection process needs to
ensure that the soil for the antenna building foundations ensures sufficient bearing
capacity to provide stability during the entire antenna life time. The surrounding
soil of the antenna building might even have to provide a higher bearing capacity to
support the assembly cranes used during the antenna construction time. In tropical
areas close to the equator, rainy seasons can bring intense loads of water in very short
time frames, which requires adequate water protection and control measures (e.g.,
drainage channels) to be put in place in order to avoid damage through flooding. The antenna
building also needs to accommodate all the power and signal cables required by the
equipment hosted inside. Care must be taken for the cable guidance from the static
part to the moving part by using special cable loops as depicted in the centre of the
right panel of Fig. 5.2.
5.3 The Radio Frequency Subsystem
The antenna dish shape and its size are very important design factors as they
influence the spatial distribution and power of the radiated energy. The graphical
presentation of the radiation properties as a function of space coordinates is referred
to as the antenna’s radiation pattern and its actual characterisation can only be
accurately achieved via a dedicated measurement campaign which is relevant to
understand both the transmitting and receiving properties of the antenna. Depending
on whether the spatial variation of radiated power, electric or magnetic field, or
antenna gain is presented, the terms power pattern, amplitude field pattern, or gain
pattern are being used respectively.
An ideal antenna that could radiate uniformly in all directions in space is said to
have an isotropic radiation pattern.2 In case the radiation pattern is isotropic in only
one plane it is referred to as an omnidirectional radiation pattern which is the case
for the well known dipole antenna depicted in panel (a) of Fig. 5.5 showing the two
perpendicular principal plane patterns that contain the electrical and magnetic field
vectors E and H . If no such symmetry is present, the antenna pattern is termed to
be directional which is shown in panel (b) of Fig. 5.5 for the case of a parabolic
reflector type antenna.
A directional radiation pattern contains various parts which are referred to as
lobes. The main lobe represents the direction of maximum radiation level, whereas
side lobes point in directions of lower radiation. If lobes also exist in the direction
opposite to the main lobe, they are called back lobes. Two important directional pattern characteristics are the
half-power beam width (HPBW) also referred to as 3-dB beam width and the beam
width between first nulls (BWFN). The HPBW defines the angular width within
which the radiation intensity decreases by one-half (or 3 dB) of the maximum value,
whereas the BWFN is defined by the angular width between the first minima (nulls)
on either side of the main lobe. The following fundamental properties of the beam width dimension should
be kept in mind when choosing an antenna design:
2 Such a uniform radiation pattern is an ideal concept as it cannot be produced by any real antenna
device, but this concept is useful for the comparison to realistic directional radiation patterns.
Fig. 5.5 Antenna radiation pattern (far field): (a) ideal dipole antenna omnidirectional E- and
H-plane radiation pattern polar plot; (b) directional pattern of parabolic reflector showing main and
side lobes, the half-power beamwidth (HPBW), and the beamwidth between first nulls (BWFN);
(c) definition of antenna field regions with commonly used boundaries
• An increase in beam width will imply a decrease of the side lobes and vice versa,
so there is always a trade off between the intended beam width size and the
resulting main to side lobe ratio.
• A smaller beam width will provide an improved antenna resolution capability
which is defined as its ability to distinguish between two adjacent radiating
sources and can be estimated as half of the BWFN.
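A quick feel for these numbers can be obtained from the widely used rule of thumb HPBW ≈ 70·λ/D degrees for a parabolic dish. This approximation (and the factor 70) is not from this chapter but is a standard engineering estimate, assumed here for illustration:

```python
C = 299_792_458.0  # speed of light [m/s]

def hpbw_deg(diameter_m: float, freq_hz: float, k: float = 70.0) -> float:
    """Half-power beam width of a parabolic dish via the common rule of thumb
    HPBW ~ k * lambda / D degrees (k ~ 70 is an assumed typical value)."""
    lam = C / freq_hz
    return k * lam / diameter_m

theta = hpbw_deg(13.0, 2.0e9)  # 13 m dish at S-band: beam under one degree wide
```

The sub-degree beam width explains why such antennas need program track to find the satellite before autotrack can take over.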
Another important aspect in antenna theory is the characterisation of the different
regions surrounding a transmitting antenna consisting of the near-field and the far-
field regions schematically depicted in panel (c) of Fig. 5.5. The near-field region
can be further subdivided into the reactive near-field being immediately adjacent
to the antenna device and the radiating near-field. The first one is characterised
by the existence of radiation field components generated by the antenna surface
through induction which are referred to as reactive field components and add to
the radiation field components. Due to the interaction between these two different
types of fields, either the magnetic or the electric field can dominate at one
point, while the opposite can be the case very close by. This explains why radiation
power predictions in the reactive region can be quite unreliable. In the adjacent
radiating near-field region the reactive field components are much weaker giving
dominance to the radiative energy. The outer boundary of the near-field region lies
where the reactive components become fully negligible compared to the radiation
field intensity which occurs at a distance of either a few wavelengths or a few times
the major dimension of the antenna, whichever is larger.
The far-field region is also referred to as the Fraunhofer region; it begins at
the outer boundary of the near-field region and extends indefinitely into space. This
region starts at an approximate distance greater than 2D²/λ (with D the largest antenna dimension) and is characterised
by the fact that the angular field distribution is independent from the distance to the
antenna. This is the reason why measurements of antenna gain pattern are performed
here. To give a simple example, for a 13 meter size antenna dish transmitting in S-
Band (ca. 2 GHz), the far-field region starts at ca. 2 kilometers. For the same antenna
size transmitting in X-Band (ca. 11 GHz) the equivalent distance grows to more than
12 kilometers.
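The numeric examples above can be reproduced directly from the 2D²/λ boundary:

```python
C = 299_792_458.0  # speed of light [m/s]

def far_field_start_m(diameter_m: float, freq_hz: float) -> float:
    """Approximate start of the Fraunhofer (far-field) region: 2 * D^2 / lambda."""
    lam = C / freq_hz
    return 2.0 * diameter_m**2 / lam

d_s = far_field_start_m(13.0, 2e9)   # 13 m dish, S-band: roughly 2.3 km
d_x = far_field_start_m(13.0, 11e9)  # same dish, X-band: beyond 12 km
```

The quadratic dependence on the dish diameter is why gain-pattern measurement campaigns for large antennas need considerable stand-off distances.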
The description of the antenna field regions above already shows the importance
of radiation power as an antenna performance parameter. Considering that the
energy is distributed equally among the electric and magnetic field components (E
and H with respective units [V/m] and [A/m]), one can derive the surface power
density S (unit [W/m²]) by building their vectorial cross product

S = E × H (5.2)
called Poynting vector. The total power P (unit [W]) flowing through a closed
surface A is given by the surface integral of the Poynting vector
P = ∫_A S · dA (5.3)

For harmonically time-varying fields, the time-averaged power density is

Sav = ½ Re(Ė × Ḣ*) [W/m²] (5.4)
where Re refers to the real part of the complex number and the dots above the
symbols refer to the time derivative and indicate the harmonically time varying
nature of these vector quantities; these are also referred to as phase vectors
(phasors), with the asterisk designating the complex conjugate quantity.
Fig. 5.6 Illustration of antenna directivity D, refer to text for more details
The average radiation power Prad through a closed surface around the radiation
source can be obtained via
Prad = ∫_A Sav · dA = ½ ∫_A Re(Ė × Ḣ*) · dA [W] (5.5)
The radiation intensity provides the power per unit solid angle6 and is given
by U(Θ, Φ) = Sav r² [W/sr]. Based on the radiation intensity, the
antenna directivity D is defined as the ratio between maximum and average radiation
intensity, D = Umax/Uav (see Fig. 5.6). Usually the quantity Uav is replaced by the
radiation intensity of an ideal isotropic source Ui having the same total radiated
power, given by Uav = Ui = Prad/4π. In reality antennas will always generate
radiation losses, which can be quantified by the dimensionless antenna efficiency k
via the simple relation
G = k Umax(Θ, Φ)/Ui = k D (5.6)
6 The measure of solid angle is the steradian, described as the solid angle with its vertex at the
centre of a sphere of radius r that subtends an area of r² on the sphere’s surface.
For parabolic reflector antennas as used in TT&C ground
stations, the antenna gain at the centre of the main beam can be derived from
G = η 4πA/λ² (5.7)
where A is the physical antenna area, λ the transmission or receiving wavelength,
and η the so called aperture efficiency factor, which accounts for potentially non-uniform illumination and typically lies in the range of 0.5–0.7 for microwave antennas.
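Equation 5.7 can be evaluated numerically; the 13 m dish diameter, S-band frequency, and η = 0.6 below are illustrative values, not taken from a specific station:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def antenna_gain_db(diameter_m: float, freq_hz: float, eta: float = 0.6) -> float:
    """Boresight gain from Eq. 5.7, G = eta * 4 pi A / lambda^2, expressed in dBi.

    eta = 0.6 is assumed as a typical aperture efficiency for microwave antennas.
    """
    area = math.pi * (diameter_m / 2.0) ** 2  # physical aperture area
    lam = C / freq_hz
    g_lin = eta * 4.0 * math.pi * area / lam**2
    return 10.0 * math.log10(g_lin)

g = antenna_gain_db(13.0, 2e9)  # around 46-47 dBi at S-band
```

Doubling the dish diameter adds about 6 dB of gain, which is the main lever when a link budget falls short on the ground side.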
As it is a characteristic of any antenna to radiate polarised electromagnetic
waves,9 another relevant antenna performance parameter is the maximum ratio of
transmit power achieved in two orthogonal planes referred to as axial ratio. For the
transmission of circularly polarised waves the desired axial ratio should be one (or
zero, if expressed in decibel10 ). A small axial ratio reduces the polarisation
losses if the transmitting and receiving waves do not have their planes of maximum
gain aligned. If only a single plane of polarisation is needed, then high axial ratios
are desired in order to discriminate against signals on the other polarisation.
9 The tip of the electromagnetic wave vector remains in the plane of propagation for linearly
polarised waves, whereas it rotates around the direction of propagation for circular or elliptically
polarised waves.
10 Ratio [dB] = 10 log10 Ratio [linear units].
Fig. 5.7 Simplified incoming (receiving) signal path, starting from the left to the right; LNA =
Low Noise Amplifier, D/C = Down-Converter, IF = Intermediate Frequency Processor, BBM =
Baseband Modem, Ta = antenna noise temperature, Gr = receiving antenna gain, Lr = antenna feed to
LNA waveguide loss
The intermediate frequency (IF) processor usually contains a demultiplexer to divide the output from the D/C into separate
(frequency) channels. The last stage in this reception chain is the baseband modem
(BBM) which demodulates the actual signal content (satellite telemetry or payload
data) from the incoming signal.
The receiving performance of the TT&C antenna will determine the signal
quality which can be defined as the received signal strength relative to the electrical
noise of the signal background and is referred to as the signal-to-noise-ratio (SNR).
The electrical noise stems from the random thermal motion of atoms and their
electrons, which can be considered as small currents that themselves emit electromagnetic
radiation. It can be shown that the noise power N can be expressed as
N = kT B (5.8)
where k is the Boltzmann constant (k = 1.38 × 10−23 J/K), T is the so called noise
temperature in Kelvin, and B is the frequency bandwidth in Hz. The power spectral
noise density N0 is the noise power per unit of bandwidth, expressed in W/Hz, and
therefore given by N0 = N/B. This explains the reason to include band-pass filters
at the early stages of the receiving path: reducing the bandwidth also reduces the
incoming noise, allowing an immediate improvement of the SNR.
11 The down conversion from HF to IF is a very common process in electronic devices, as signal
processing and filtering are easier to implement at lower frequencies.
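Equation 5.8 and the effect of band-pass filtering can be sketched numerically; the noise temperature and bandwidths below are illustrative:

```python
import math

K_BOLTZMANN = 1.38e-23  # Boltzmann constant [J/K]

def noise_power_dbw(t_kelvin: float, bandwidth_hz: float) -> float:
    """Thermal noise power N = k T B (Eq. 5.8), expressed in dBW."""
    return 10.0 * math.log10(K_BOLTZMANN * t_kelvin * bandwidth_hz)

wide = noise_power_dbw(150.0, 10e6)   # before band-pass filtering, 10 MHz
narrow = noise_power_dbw(150.0, 1e6)  # after narrowing to 1 MHz
```

Narrowing the bandwidth by a factor of ten lowers the noise floor by 10 dB, which is exactly the SNR improvement the early band-pass filters provide.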
A characteristic parameter of every TT&C station is the combined receiving
system noise temperature Tsys, which takes into account the noise picked up by its
antenna feed and reflector area Ta, the loss from the wave guide lr = 10^(Lr/10), and
the receiver equivalent noise temperature Tre, which considers the noise generated
by the LNA, D/C, and IF processing units. In a common formulation, referenced to
the LNA input, the system noise temperature can be derived from [2]

Tsys = Ta/lr + (1 − 1/lr) T0 + Tre (5.9)

where T0 ≈ 290 K is the ambient reference temperature. With the knowledge of Tsys, the gain-to-noise-temperature ratio G/T can now
be evaluated, which is a very important figure of merit to characterise the TT&C
station’s receiving performance. G/T has the unit of dB/K and is defined as

G/T = Gr − Lr − 10 log10 Tsys (5.10)
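A numeric sketch of the receiving figure of merit follows, using one common formulation of the system noise temperature referenced to the LNA input — an assumption here, since the exact form depends on the chosen reference point (see [2]) — with illustrative station values:

```python
import math

T0 = 290.0  # ambient reference temperature [K]

def system_noise_temp_k(t_ant: float, l_r_db: float, t_rec: float) -> float:
    """System noise temperature referenced to the LNA input (one common
    formulation, assumed here): Tsys = Ta/lr + (1 - 1/lr)*T0 + Tre,
    with the linear waveguide loss lr = 10**(Lr/10)."""
    lr = 10.0 ** (l_r_db / 10.0)
    return t_ant / lr + (1.0 - 1.0 / lr) * T0 + t_rec

def g_over_t_dbk(g_rx_dbi: float, l_r_db: float, t_sys_k: float) -> float:
    """Figure of merit G/T in dB/K: receive gain minus feed loss minus 10 log Tsys."""
    return g_rx_dbi - l_r_db - 10.0 * math.log10(t_sys_k)

tsys = system_noise_temp_k(t_ant=60.0, l_r_db=0.5, t_rec=80.0)  # ~165 K
gt = g_over_t_dbk(46.5, 0.5, tsys)                              # ~24 dB/K
```

The sketch makes the design trade visible: shaving a few tenths of a dB off the waveguide loss improves G/T twice, through both the gain term and the noise temperature.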
The outgoing signal path is schematically shown in Fig. 5.8 which begins at the
right side where the telecommand (TC) generated by the SCF is modulated onto
the baseband signal by the BBM. After passing the IF processor, the conversion to
high-frequency is done by the up-converter unit (U/C). Further filtering after this
stage might be needed in order to reduce unwanted outputs. The next stage is the
Fig. 5.8 Simplified outgoing (transmitting) signal path, starting from the right to the left. HPA
= High Power Amplifier, U/C = Up-Converter, IF = Intermediate Frequency Processor, BBM =
Baseband Modem
high power amplifier (HPA) whose main task is the signal power amplification to the
target value P0 . As the HPA is a key component in every TT&C station, its design
is explained in more detailed in the next section. After amplification the signal is
directed through a wave guide or coaxial cable connection to the antenna feed. The
loss that occurs on this path is referred to as the transmission line loss Lt and is an
important parameter to be considered in the evaluation of the overall link budget.
From the feed the signal is radiated into space using the antenna reflecting area
and its geometry that determines the beam shaping (and focusing) ability expressed
as the antenna gain value Gt. The three parameters P0, Lt, and Gt determine the
transmit RF performance referred to as effective isotropic radiated power or EIRP,
which is expressed in dBW. The EIRP can be derived from the following power
balance equation

EIRP = P0 − Lt + Gt (5.11)
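The power balance of Eq. 5.11 can be evaluated directly; note that the amplifier output P0 must first be converted to dBW (the HPA power, line loss, and gain values below are illustrative):

```python
import math

def eirp_dbw(p0_watts: float, l_t_db: float, g_t_dbi: float) -> float:
    """EIRP power balance of Eq. 5.11, with P0 converted from watts to dBW."""
    p0_dbw = 10.0 * math.log10(p0_watts)
    return p0_dbw - l_t_db + g_t_dbi

e = eirp_dbw(200.0, 2.0, 46.5)  # 200 W HPA, 2 dB line loss, 46.5 dBi antenna gain
```

The same 1 dB can be bought either as roughly 26 % more HPA power or as a shorter, lower-loss waveguide run, which is why the transmission line loss Lt deserves attention in the station layout.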
The design of a high power amplifier (HPA) needs to consider the frequency and
bandwidth range it has to operate with, the required gain of power of its output
(including the linearity over the entire frequency range), the efficiency (power
consumption), and finally its reliability. There are three generic types of HPAs in
use today, the Klystron power amplifier (KPA), the travelling wave tube amplifier
(TWTA), and the solid state power amplifier (SSPA).
Both KPA and TWTA implement an electron gun which generates an electron
beam inside a vacuum tube by heating up a metal cathode, which emits electrons via
the thermionic emission effect. As the cathode and anode are put at different
potentials, the electron beam is accelerated through the vacuum tube into the
direction of the collector (see upper panel of Fig. 5.9). Further beam shaping can
be achieved by focussing magnets. Inside the same tube a helical coil is placed to
which the RF signal is coupled either via coaxial or waveguide couplers. The helix
is able to reduce the phase velocity of the RF signal and generate a so called slow
wave. If correctly dimensioned, the electron velocity (determined by the length of
the tube and difference of potential) and the wave phase velocity (determined by
the diameter of the helix coil and its length) can be synchronised. In this case the
electrons form clusters that can be understood as areas of higher electron population
density which replicate the incoming RF waveform. These clusters then induce a
new waveform into the helix having the same time dependency (frequency) as the
input RF signal but a much higher energy. This amplified signal is then led to the
RF output. The achieved amplification can be quite significant and reach 40 to
60 dB, which corresponds to a factor of 10^4 to 10^6 in power.
Fig. 5.9 Principal design of a travelling wave tube (upper panel) and a Klystron power amplifier
(lower panel)
The Klystron power amplifier (KPA) design is schematically shown in the lower
panel of Fig. 5.9 and implements a set of resonant cavities (typically five, but only
two are shown for simplicity here) into which the RF wave is coupled to generate a
standing wave. These cavities have a grid attached to each side through which the
electron beam passes. Depending on the oscillating electrical field, the electrons get
either accelerated or decelerated and are therefore velocity modulated. This results
in a density modulation further down the tube which is also referred to as electron
bunching with the optimum bunching achieved in the output (or catcher) cavity. The
density modulated electron beam itself generates an output signal with the required
amplification. The gain can be up to 15 dB per cavity, therefore achieving in sum up
to 75 dB of amplification with five cavities.
The SSPA does not involve a vacuum tube but uses semiconductor based
field-effect transistors (FET) for the power amplification. The commonly used
semiconductor material is Gallium Arsenide and the devices are therefore referred to
as GaAsFET. As the maximum output of a single transistor is limited, transistors are
combined to form modules. Even higher powers are achieved by combining several
modules.
From an operational point of view it can be said that KPAs are narrow band
devices with relatively high efficiency and stability, whereas TWTAs and SSPAs can
be used for wide band applications. An important HPA characteristic is linearity,
which refers to the transfer characteristic between input and output power following
a straight line over its operating frequency range. Non-linearity can lead to the
well known intermodulation interference, which can cause disturbance among the
various channels if the FDMA access protocol is used. In this respect the SSPA
usually reaches better linearity performance compared to TWTA and KPA.
Most TT&C antenna systems today support the autotrack mode that allows them
to autonomously follow a satellite once a stable RF link has been established. This
functionality can be realised via different methods and a widely used one is the
monopulse technique which will be described in more detail here.16 The starting
point of monopulse is the tracking coupler which is deployed in the Feed System
and is able to build the difference of two incoming beams and extract this as the so
called Δ-signal (see Fig. 5.7). The two beams usually are the two polarisations of
an incoming RF signal. This Δ-signal is guided to a tracking low noise amplifier
(TRK/LNA) for amplification, followed by its down-conversion to IF by a tracking
down-converter (TRK-D/C). Using the Δ- and the Σ-signal (i.e., the sum of the two
beams available in the receiving path after the LNA), the tracking receiver derives
the required corrections in azimuth and elevation in order to re-point the antenna to
the source.
16 An alternative technique is the conical scan, in which the antenna beam performs a slight
rotation around the signal emitting source. This leads to an amplitude oscillation of the
received signal which is analysed and used to steer the antenna and centre it onto the source.
Given the EIRP at the sending end and the gain-to-noise temperature G/T at the receiving
end, the link budget can be computed as the ratio of (received) carrier power C to
spectral noise level N0

C/N0 = EIRP − Lspace − Latm + G/T − 10 log10 k (5.12)
where Lspace = (4πdf/c)² is the free space loss that considers the signal
attenuation due to the propagation loss over the travelled distance d. The frequency
dependency of Lspace is due to the fact that it is defined with respect to an isotropic
antenna gain at the receiving end. Latm is the signal attenuation caused by the
atmosphere and k is Boltzmann’s constant (10 log k = −228.6 dBW/K/Hz). For the
Earth the main constituents causing atmospheric loss are oxygen, nitrogen, water
vapour, and rain precipitation. It is important to note that signal loss due to rain can
be significant for frequencies above 12 GHz as the corresponding wavelengths have
similar dimensions to the rain drops which makes radiation scattering the dominant
effect.
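Equation 5.12 can be sketched numerically; the EIRP, slant range, frequency, atmospheric loss, and G/T values below are illustrative assumptions, not mission data:

```python
import math

C = 299_792_458.0       # speed of light [m/s]
BOLTZMANN_DB = -228.6   # 10 log10(k) [dBW/K/Hz]

def free_space_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free space loss Lspace = (4 pi d f / c)^2, expressed in dB."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

def c_over_n0_dbhz(eirp_dbw: float, distance_m: float, freq_hz: float,
                   l_atm_db: float, g_over_t_dbk: float) -> float:
    """Link budget of Eq. 5.12 in logarithmic form."""
    return (eirp_dbw - free_space_loss_db(distance_m, freq_hz)
            - l_atm_db + g_over_t_dbk - BOLTZMANN_DB)

# e.g. an MEO downlink: 23222 km slant range, S-band, modest atmospheric loss
cn0 = c_over_n0_dbhz(50.0, 23222e3, 2.2e9, 0.5, 5.0)
```

Because all terms are in dB, each design change (more EIRP, larger receive dish, lower feed loss) adds linearly, which makes the trade-offs easy to tabulate.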
The final quality of a received signal depends on both the carrier-to-noise
ratio at RF level and the signal-to-noise ratio at baseband level, where the actual
demodulation occurs. At baseband level the type of signal modulation or channel
coding applied to the data prior to transmission plays a fundamental role. Coding
schemes are able to improve the transmission capability of a link, making it possible
to achieve a higher data throughput with fewer transmission errors for a given signal strength.
An important performance measure of any transmission channel therefore is the
received signal energy per bit Eb which has to exceed a minimum threshold in order
to guarantee an error free signal decoding at baseband level. In other words, the
probability Pe of an error in decoding any bit (referred to as the bit error rate or
BER) decreases with an increase of the received bit signal-to-noise ratio Eb /N0 (bit-
SNR). This is demonstrated in Fig. 5.10 which compares the performance of a set of
channel coding schemes as defined by the applicable standard CCSDS-130.1-G-3
[4] by showing their (simulated) BER as a function of received bit-SNR. The same
relationship is also shown for an uncoded channel for comparison and clearly proves
the improvement in BER achieved by all coding schemes. Figure 5.10 can be useful
to dimension a transmission channel, as it allows one to find, for a given channel coding
scheme, the necessary minimum Eb/N0 at signal reception in order not to exceed
the maximum allowed BER.17 If the coding scheme and BER have been fixed, the
required minimum Eb /N0 value determines the corresponding C/N0 which is then
an input to the link budget equation in order to solve the remaining system design
parameters.
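The dimensioning step described above — from a required bit-SNR and data rate to the required C/N0 and the resulting link margin — can be sketched as follows; the Eb/N0 threshold, bit rate, and available C/N0 are illustrative:

```python
import math

def required_c_over_n0_dbhz(ebn0_req_db: float, bit_rate_bps: float) -> float:
    """Convert a required bit-SNR into the required C/N0:
    C/N0 [dBHz] = Eb/N0 [dB] + 10 log10(Rb)."""
    return ebn0_req_db + 10.0 * math.log10(bit_rate_bps)

def link_margin_db(available_c_over_n0_dbhz: float, ebn0_req_db: float,
                   bit_rate_bps: float) -> float:
    """Margin left after the coding scheme and data rate have been fixed."""
    return available_c_over_n0_dbhz - required_c_over_n0_dbhz(ebn0_req_db,
                                                              bit_rate_bps)

# Illustrative: a coding scheme needing Eb/N0 = 4.5 dB at the target BER,
# a 64 kbps telemetry stream, and 60 dBHz of available C/N0.
margin = link_margin_db(60.0, 4.5, 64000.0)
```

The 10 log10(Rb) term shows the basic trade directly: doubling the data rate costs 3 dB of margin unless a stronger code (lower required Eb/N0) is chosen.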
17 Channel coding and decoding algorithms are usually part of the BBM functionality which also
involves the application of various error correction techniques able to correct or compensate lost
information due to low signal quality.
Fig. 5.10 Performance comparison of CCSDS recommended channel codes with an uncoded
channel (blue line) and the lowest possible performance of a 1/2 rate code as given by the code-
rate-dependent Shannon-limit [3] (line labelled CAPACITY Rate 1/2). LDPC = Low Density Parity
Check; Reprinted from Figs. 3–5 of CCSDS-130.1-G-3 [4] with permission of the Consultative
Committee for Space Data Systems
It is important to note that the link budget equation needs to be solved in two
directions, once for the transmission from ground to space for which the required
EIRP needs to be provided by the TT&C station, taking into account the G/T of the
satellite transponder, and second for the transmissions from space to ground where
the required EIRP needs to be achieved by the satellite transponder (and the satellite
HPA) taking into account the ground station G/T . In case of a shortcoming of EIRP
at satellite level, some compensations on ground can be achieved by increasing the
receiver gain through a larger antenna size or shortening the wave guide distance
between antenna and LNA which would both improve the antenna G/T .
To avoid interferences between the signal transmitted by the TT&C ground station
(uplink) and the one received from the satellite (downlink), different frequencies are
usually used for each direction. The uplink frequency as transmitted by the ground
station (shown as fup,tr in Fig. 5.11) will be Doppler shifted due to the relative radial
velocity ρ̇ (range rate) between the satellite and the TT&C station which is usually
defined as positive in case the slant range ρ increases and negative otherwise. The
uplink frequency as seen by the satellite transponder is therefore Doppler shifted
and can be derived from
fup,rc = fup,tr (1 − ρ̇/c) / (1 + ρ̇/c)    (5.13)
As the the satellite transponder is designed (and optimised) to lock at the nominal
uplink centre frequency f0 , the station uplink frequency needs to be corrected by
the expected Doppler shift ΔfDoppler = fup,rc − fup,tr which can be computed
on ground based on the predicted satellite orbit. Due to the satellite orbital motion
vorb and the Earth’s movement vEarth as indicated in Fig. 5.11, the Doppler shift is
a time-varying quantity.
Fig. 5.11 Uplink and downlink frequency adjustments. vorb = satellite orbit velocity, fup,tr =
transmitted uplink frequency, fup,rc = received uplink frequency, fdown,tr = transmitted downlink
frequency, fdown,rc = received downlink frequency, δfDopp = frequency shift introduced by
Doppler, δfTCXO = frequency shift due to satellite oscillator drift
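Equation (5.13) and the ground Doppler pre-compensation it implies can be sketched as follows (function names are illustrative; a real station would evaluate the range rate from the predicted orbit at each time step):

```python
C = 299_792_458.0  # speed of light [m/s]

def received_uplink_freq(f_up_tr_hz: float, range_rate_mps: float) -> float:
    """Uplink frequency seen by the transponder, Eq. (5.13):
    f_up,rc = f_up,tr * (1 - rho_dot/c) / (1 + rho_dot/c).
    A positive range rate (increasing slant range) lowers the received frequency."""
    b = range_rate_mps / C
    return f_up_tr_hz * (1.0 - b) / (1.0 + b)

def precompensated_uplink_freq(f0_hz: float, range_rate_mps: float) -> float:
    """Station transmit frequency chosen so that the transponder receives its
    nominal centre frequency f0 (inverse of Eq. 5.13)."""
    b = range_rate_mps / C
    return f0_hz * (1.0 + b) / (1.0 - b)
```

For example, for a satellite receding at 5 km/s, a 2 GHz uplink has to be transmitted roughly 67 kHz above f0 for the transponder to receive f0.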
The TCXO (temperature compensated crystal oscillator) drift should be estimated
at every contact and stored in a specific ground station database so it can be used
for subsequent contacts.
There are usually two different modes of satellite transponder operation. The
first one is referred to as non-coherent mode in which fdown,tr is a predetermined
frequency. As this frequency is however affected by internal oscillator drifts, the
ground station demodulator needs to perform a search before being able to lock on
the signal. In coherent mode the satellite transponder sets its downlink frequency to
a value derived from the received uplink frequency multiplied by a known constant
factor (typically expressed as a ratio of two integers), which allows ΔfTCXO to be ignored.
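The coherent-mode relation can be illustrated with the turnaround ratio 240/221 commonly used for S-band transponders (shown here only as an example; the actual ratio is transponder-specific):

```python
from fractions import Fraction

TURNAROUND = Fraction(240, 221)  # example S-band turnaround ratio

def coherent_downlink_freq(f_up_rc_hz: float) -> float:
    """In coherent mode the downlink carrier is derived from the received
    uplink carrier multiplied by a fixed ratio, so the onboard oscillator
    drift drops out of the downlink frequency."""
    return f_up_rc_hz * float(TURNAROUND)
```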
5.4 Remote Sites

The choice of an adequate remote site for the installation of a new TT&C ground
station is a complex process and needs to consider a number of criteria (or even
requirements) that need to be fulfilled in order to guarantee its reliable operation
and maintenance. The list below provide some important points that need to be
considered as part of the site selection but should by no means seen as an exhaustive
list as there are usually also non-technical criteria (e.g., geopolitical relevance of a
territory, contractual aspects, security, etc.) that are not addressed here.
• The geographical location of a TT&C station can have a major impact on the
possible visibilities and resulting contact durations between a satellite and the
station. Especially the location’s latitude (e.g., polar versus equatorial) is a
relevant selection criterion as it can significantly influence the station’s usability
for a satellite project.
• Site accessibility (e.g., via airports, railways, roads etc.) plays an important
role during both the construction phase and the maintenance phase of a ground
station. Especially during the construction phase, large items (potentially packed
in overseas containers) will have to be transported to site and depend on the
availability of proper roads in terms of size and load capacity in order to reach the
construction site. People will have to travel to site during both the construction
and the maintenance phases in order to perform installation or regular repair
activities.
• In order to guarantee a low RF noise environment, TT&C antennas are often built
at locations that are either far away from noise sources (e.g., mobile telephone
masts, radio or television transmitter masts, or simply densely populated
areas) or at locations where the natural terrain, like distant mountains, can serve
as RF shielding. This however should not conflict with the requirement for a
free horizon profile, which demands the avoidance of high elevation obstacles,
or with the need for good site accessibility as described in the bullet above.
An appropriate balance needs to be found here.
• In order to host a TT&C antenna, a remote site needs to provide the necessary
infrastructure for power, telephone, and a Wide Area Network (WAN) that allows
a stable connection to the remaining ground segment potentially thousands of
kilometres away. For a completely new site with no such infrastructure in place
yet, the deployment time and cost need to be properly estimated and considered
in the selection process. For the power provision there are typically specific
requirements related to stability which are described in more detail below.
• The antenna foundations require a sufficient soil bearing capacity in order to
avoid any unwanted soil movement during the deployment and operational
phase. Soil stability is not only relevant for the antenna foundation but also for
the surrounding ground which needs to support heavy crane lifting during the
construction work. To guarantee this, dedicated soil tests should be performed as
part of the site inspection and selection phases and, if needed, appropriate measures
need to be taken to reinforce the soil (e.g., the deployment of pillars deep into the
ground that are able to reach a region of more stable soil such as rock).
• Severe weather conditions like snow, heavy rain with floods, strong winds, or
even earthquakes need to be identified from historic weather and climate statistics
and, where applicable, considered with appropriate margins to account for future
climate warming effects. The antenna design needs to consider appropriate
sensors and warning techniques and deploy mitigation measures (e.g., drainage
channels) to counteract such extreme conditions.
The site preparation phase needs to ensure the provision of all relevant site
infrastructure required for the installation and proper operation of the TT&C ground
station. The majority of the related activities will typically have to precede the actual
station deployment but some activities can potentially be performed in parallel.
Depending on the industrial setup of a project, the site preparation and actual station
build activities could very well be covered by separate contractual agreements and
different subcontractors. In this case a careful coordination of the activities needs to
be ensured from both a timing and a technical interface perspective.
Fig. 5.12 Schematic view of power sources and distribution. UPS = Uninterruptible Power Supply
A very important site preparation activity is the deployment of all relevant equip-
ment to supply and distribute the required power to operate the station. The power
distribution unit (see Fig. 5.12) is the central power management unit in the station
whose main task is to distribute the power via the short-break and the no-break
connections to the various subsystems. As implied by the naming convention, the
no-break supplies are designed to provide uninterrupted power even in case
of a service interruption of the external power grid. In order to implement such
a service the PCDU needs to manage the following types of power sources and be
able to switch between them:
• A long-term power source should be available for the day-to-day station opera-
tion and is usually provided via an external public power grid. As with any public
power supply there is no guarantee of uninterrupted service provision. Even if it is
used as the principal power source, backup sources need to be put in place that
can bridge a power outage for a limited amount of time.
• A medium-term power source that can be realised via a small and local power
plant and is directly deployed on-site. Its deployment, operation, and regular
maintenance fall under the control and responsibility of the site hosting entity,
which makes it a power source independent of external factors. The most
common realisation of such a local plant is a Diesel generator (DG), which has low
complexity, high reliability, and is therefore rather easy to maintain. The
deployment of a DG needs to also consider all the infrastructure required to store
and refill fuel and at the same time respect all the applicable safety regulations.
The operational cost of a DG is mainly driven by the cost of its fuel, which can
be quite high for remote locations; this makes this power source viable only
for short-term use in order to bridge an outage of the external power grid. The
dimensioning of the fuel storage determines the maximum possible length of
power provision without the need for refuelling and should cover for a minimum
of several weeks of operations.
• A short-term but very stable power source is an essential piece of equipment for
any critical infrastructure, as it can provide immediate power in case of an
interruption of the principal source, without the start-up or warm-up time needed
by the Diesel generator described in the previous bullet. Such a short-term source
is referred to as an uninterruptible power supply or UPS and implements a
battery-based energy storage technique. The advantage of immediate availability
makes the UPS the preferred choice to provide the so called no-break power
supplies in a TT&C station to which all critical components are connected that
cannot afford any power interruption. The main disadvantage of a UPS is its
very limited operational time span due to the limited power storage capacity of
today’s batteries18 and the quite demanding power needs of the various TT&C
subsystems which is also the reason why not all components will usually be
connected to the no-break supplies.
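The sizing considerations for the medium- and short-term sources can be captured in a back-of-envelope sketch (all numbers in the usage note below are hypothetical; real sizing must use measured loads and battery data):

```python
def ups_autonomy_h(battery_kwh: float, no_break_load_kw: float,
                   usable_fraction: float = 0.8) -> float:
    """Rough UPS bridging time [h] for the no-break load, assuming only a
    fraction of the battery capacity is usable (depth of discharge)."""
    return battery_kwh * usable_fraction / no_break_load_kw

def dg_autonomy_days(tank_litres: float, consumption_lph: float) -> float:
    """Diesel generator autonomy [days] without refuelling, from tank size
    and average fuel consumption [litres per hour]."""
    return tank_litres / consumption_lph / 24.0
```

With these (hypothetical) figures, a 100 kWh battery bank bridges a 20 kW no-break load for about 4 h, consistent with the few-hours limit of footnote 18, while a 24 000 l tank feeding a generator burning 50 l/h lasts 20 days, meeting the several-weeks requirement above.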
TT&C ground stations are composed of large metallic structures, the most
dominant one being the large parabolic antenna mounted on top of the antenna
building. Such dishes can have diameters of up to 30 metres if dimensioned for deep
space communication. During operations they move several metres above the
ground and can therefore attract lightning. The deployment of adequate lightning
protection is therefore of paramount importance to protect people and equipment
and needs to be considered already during the site preparation phase. The sizing
of the lightning system comprises the evaluation (and subsequent deployment) of
the required number of lightning masts, their required (minimum) heights, as well
as their optimum position. In addition, the proper implementation of the grounding
system needs to be considered early in the process, as this will usually
require soil movement.
As the TT&C station is a crucial asset for the commanding and control of the space
segment, its protection against unauthorised access is an important aspect that needs
to be addressed.
18 A typical UPS deployed in a TT&C station can usually bridge only up to a few hours of
operations.
5.5 Interfaces
The most important interfaces between the TT&C and the other GCS elements are
shown in Fig. 5.13 and described in the bullets below:
• FDF (see Chap. 6): in order for a TT&C station to find a satellite and establish
a contact, it requires the up-to-date orbit predictions from the Flight Dynamics
Facility. These can be provided either as an orbit prediction file (e.g., a set of
position vectors as a function of time), a Two-Line-Element (TLE) set, or a set of
spherical coordinates (e.g., azimuth and elevation as a function of time). These files
are also referred to as pointing files as they enable the ground station to point in the
correct direction in the sky, either as an initial aid before the autotrack function of
the antenna can take over, or as a continuous profile for the antenna control unit
to follow during the execution of a pass (program track). An important aspect
here is the time span of validity of the pointing files, which needs to be checked
prior to each contact in order to avoid the loss of a contact due to an inaccurate
input.
Fig. 5.13 TT&C interfaces to other GCS elements: SCF = Satellite Control Facility, M&C =
Monitoring and Control Facility, FDF = Flight Dynamics Facility
As part of nominal contact operations the TT&C has to acquire radiometric
data (i.e., range, Doppler, and angular measurements) from the satellite which
need to be sent to FDF as input for orbit determination. In addition,
meteorological measurements should be taken prior to or during each contact/pass and
provided to FDF as an input for the atmospheric signal delay modelling in the
orbit determination process.
• SCF (see Chap. 7): to be able to perform automated satellite operations, the
Satellite Control Facility should be able to initiate a satellite contact via a set
of commands referred to as contact directives. These commands should establish
the applicable data transfer protocol (e.g., CCSDS SLE), perform the correct
configuration of the RF transmit path, and finally raise the carrier to start the
actual transmission of TCs to the satellite. Once the satellite transmits its telemetry
(TM) to the TT&C station and the BBMs have demodulated the data from the
RF signal, the TM needs to be forwarded to the SCF for further analysis and
archiving.
• MCF (see Chap. 10): the interface to the Monitoring and Control Facility enables
the ground segment operator in the control centre to maintain an up-to-date
knowledge of the operational and health status of the entire TT&C station. It
is also important to have access to the most relevant subsystem parameters in order
to gain a better understanding of potential anomalies. For the most critical
components of the TT&C station (e.g., BBMs, LNA, U/C, D/C, HPAs), the
station design should foresee an adequate level of redundancy and implement
automated failure detection and fail-over mechanisms. In order to detect upcoming
hardware failures at an early stage, built-in tests (BITE) should be executed
remotely at regular intervals, which can be initiated from the MCF.
For failures, anomalies, and warnings the interface design should foresee the
transmission of corresponding messages to the M&C facility.
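The pointing-file validity check mentioned in the FDF bullet above can be sketched as follows (the function name and the 5 min margin are illustrative assumptions, not a defined GCS interface):

```python
from datetime import datetime, timedelta

def pointing_file_covers_pass(valid_from: datetime, valid_to: datetime,
                              aos: datetime, los: datetime,
                              margin: timedelta = timedelta(minutes=5)) -> bool:
    """Pre-contact check: the pointing file's validity interval must cover
    the whole pass, from acquisition of signal (AOS) to loss of signal (LOS),
    with some margin; otherwise the contact risks being lost due to an
    inaccurate pointing input."""
    return valid_from <= aos - margin and los + margin <= valid_to
```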
References
1. Vertex GmbH. (2013). S-Band 13m TT&C Galileo. In Installation manual mechanical subsys-
tem. GALF-MAN-VA-GCBA/00002, Release 1.0.
2. Elbert, B. (2014). The satellite communication ground segment and earth station handbook (2nd
ed.). Norwood: Artech House.
3. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical
Journal, 27(3), 379–423.
4. The Consultative Committee for Space Data Systems. (2020). TM Synchronization and Channel
Coding — Summary of Concept and Rationale. CCSDS 130.1-G-3. Cambridge: Green Book.
Chapter 6
The Flight Dynamics Facility
The Flight Dynamics Facility (FDF) can be considered the element that
incorporates the most complex mathematical algorithms and data analysis
techniques. It therefore requires considerable effort to ensure a thorough validation
of its computational output, especially for parameters that are commanded to the
satellite. Flaws in complex algorithms might not always be immediately evident
to the user but can still lead to serious problems during operations. Especially for
computational software that is developed from scratch and lacks any previous flight
experience, the cross-validation with external software of similar (trusted) facilities
is highly recommended. The FDF has a very large range of important computational
tasks of which the most relevant ones are briefly summarised in the following
bullets:
• The regular orbit determination (OD) is one of the central responsibilities of the
FDF and requires as input radiometric tracking data measured by the TT&C
stations. In other words, the FDF can be considered as the element that is
responsible to maintain the up-to-date operational orbit knowledge and to derive
all relevant products from it.
• One such orbit product is the pointing file required by the antenna control unit
of the TT&C station (see Chap. 5), allowing it to direct the antenna dish and its
feed to the correct location on the sky where the satellite is expected at the start
of an upcoming pass. An inaccurate or incorrect orbit determination would also
imply inaccurate pointing files that could lead to the loss of the pass. The lack
of newly gained radiometric measurements would then further deteriorate the
satellite orbit knowledge.
• Not only the TT&C station requires up-to-date orbit knowledge but also the
satellite itself in order to perform future onboard orbit predictions and attitude
control. This is done through the regular transmission of up-to-date orbit
elements in the form of the onboard orbit propagator (OOP) TC parameters which
are prepared by the FDF (see also Sect. 6.6).
• Using the most recent orbit knowledge as initial conditions, the FDF performs
orbit predictions from a given start epoch into the future using a realistic force
model that is adequate for the specific orbit region. Whereas satellites orbiting at
altitudes lower than about 1000 km require the proper modelling of atmospheric
drag as one of the dominant perturbing forces, orbits at higher altitudes (MEO
and GEO) will be mainly influenced by luni-solar forces and solar radiation
pressure and atmospheric drag becomes negligible. Orbit predictions are an
important means to assess the future development of the orbital elements and
detect potential violations of station keeping requirements. Orbit predictions are
also required to compute orbital events like eclipses, node crossings, TT&C
station visibilities, or sensor field-of-view crossings to name only the most typical
examples.
• The determination of the satellite’s attitude profile is a similar task to orbit
determination but based on the inputs from the satellite’s attitude sensors (e.g.,
star tracker units, gyros, Sun sensors, Earth IR sensors, etc.). Attitude prediction in
contrast does not require any external sensor inputs but uses a predefined attitude
model to compute a profile that can be expressed either as a set of three angular
values (e.g., roll, pitch, and yaw) or as a 4-dimensional quaternion.
• Most satellites will be subject to station keeping requirements which mandate
them to remain within a given box around a reference location on their orbit.
This requires the FDF to perform long-term orbit predictions in order to detect a
potential boundary violation. The time of such a violation is an important input to
the manoeuvre planning module that is supposed to compute an orbit correction
(station keeping) manoeuvre to take the satellite back inside its control box.
• The space debris population has unfortunately increased significantly during the
past decade and poses today a significant risk to operational satellites deployed
in almost all orbit regions. This makes a close and continuous monitoring of
the space environment an essential task for every space project. Having the best
orbit knowledge of its own satellite, the FDF can use an orbit catalogue of
known space debris as an input to perform a collision risk analysis. In case a
non-negligible collision risk has been identified, adequate avoidance manoeuvres
need to be computed and commanded to the satellite.
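Several of the tasks above (pointing files, station visibilities) rest on the satellite's elevation above a station's local horizon. A minimal sketch, assuming a spherical Earth so that the local zenith coincides with the geocentric radial direction:

```python
import numpy as np

def elevation_deg(r_sat_ecef: np.ndarray, r_sta_ecef: np.ndarray) -> float:
    """Elevation [deg] of the satellite above the station's local horizon:
    90 deg minus the angle between the station zenith (approximated by the
    geocentric radial direction) and the station-to-satellite vector."""
    rho = r_sat_ecef - r_sta_ecef
    zen = r_sta_ecef / np.linalg.norm(r_sta_ecef)
    cos_z = float(np.dot(rho, zen)) / float(np.linalg.norm(rho))
    return 90.0 - float(np.degrees(np.arccos(np.clip(cos_z, -1.0, 1.0))))
```

A pass could then be defined as the interval during which the elevation exceeds the station's minimum elevation mask.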
6.1 Architecture
The functional architecture of the FDF can be derived from the main tasks described
in the preceding section and is schematically depicted in Fig. 6.1.
Both the orbit and attitude determination components make use of similar core
algorithms that implement some kind of batch least-squares or sequential Kalman filter,
which is described in more detail in Sect. 6.3. The relevant input data for these modules are
the radiometric tracking data from the TT&C station network and attitude sensor
measurements obtained from satellite telemetry.
Fig. 6.1 High level architecture of the Flight Dynamics Facility (FDF)
The orbit propagation function implements mathematical routines for the numer-
ical integration of the equations of motion and models the relevant external forces
acting on the satellite. The most relevant forces are the Earth’s gravity field
and its perturbations due to the planet’s oblateness, atmospheric drag, third body
perturbations, and solar radiation pressure. Precise orbit determination needs to
consider even more subtle forces that are caused by the Earth radiation pressure,
tides, relativistic effects, and extremely weak forces that can stem from material
properties and their imbalanced surface heating.1
The orbit events function is responsible for the computation of all events relevant
for the operations of the satellites. Apart from generic events like node crossings or
eclipses, there are also satellite platform specific ones, like sensor blindings or FoV
crossings which are defined and described in the relevant satellite user manuals. The
timing of such events is an important input for the operational planning of satellite
activities like the change of attitude modes (e.g., from Sun to Earth nadir pointing)
or the execution of orbit correction manoeuvres which could require the presence of
the Sun in a specific attitude sensor field-of-view.
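As an illustration of such an event computation, the sketch below implements a first-order cylindrical-shadow eclipse test (penumbra, Earth oblateness, and atmospheric refraction ignored; a common simplification rather than the algorithm of any particular FDF):

```python
import numpy as np

R_EARTH = 6_378_137.0  # equatorial Earth radius [m]

def in_earth_shadow(r_sat: np.ndarray, sun_dir: np.ndarray) -> bool:
    """Cylindrical-shadow model: the satellite is eclipsed if it lies on the
    anti-Sun side of the Earth and its distance from the shadow axis is
    smaller than the Earth radius."""
    s = sun_dir / np.linalg.norm(sun_dir)   # unit vector towards the Sun
    along = float(np.dot(r_sat, s))         # projection onto the Sun direction
    if along >= 0.0:                        # sunlit side of the Earth
        return False
    perp = float(np.linalg.norm(r_sat - along * s))
    return perp < R_EARTH
```

Sampling this test along a predicted orbit yields the eclipse entry and exit times needed for operational planning.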
The orbit control module implements algorithms for the computation of orbit
corrections both in terms of Δv and the required thrust orientation. This module
requires as input the up-to-date orbit position from the orbit determination module.
After the execution of every orbit correction manoeuvre, the satellite mass will
decrease due to the fuel used by thruster activity. A dedicated module is therefore
required to compute the consumed and remaining fuel and keep a record of the
satellite mass development throughout the satellite lifetime. The accurate knowledge
of the mass history is of fundamental importance. It is required for the computation
of command parameters for future orbit correction manoeuvres, but also needed for
the update of AOCS relevant parameters like the inertia matrix or centre of mass
position. Two algorithms to compute the satellite mass are therefore described in
Sect. 6.5.
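As a generic illustration of such fuel book-keeping (not one of the two algorithms of Sect. 6.5), the propellant consumed by an impulsive manoeuvre of size Δv can be estimated from the Tsiolkovsky rocket equation, given the satellite mass before the burn and the thruster's specific impulse:

```python
import math

G0 = 9.80665  # standard gravity [m/s^2]

def fuel_consumed_kg(m0_kg: float, dv_mps: float, isp_s: float) -> float:
    """Propellant mass used by an impulsive manoeuvre (Tsiolkovsky):
        m_fuel = m0 * (1 - exp(-dv / (Isp * g0)))."""
    return m0_kg * (1.0 - math.exp(-dv_mps / (isp_s * G0)))
```

Subtracting the result from the pre-burn mass keeps the mass history up to date for subsequent manoeuvre and AOCS parameter computations.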
Another important task of the FDF is the computation of all flight dynamics
related command parameters that need to be uplinked to the satellite. A typical
example are the orbit parameters of the onboard orbit propagator (OOP) which are
used by the satellite as initial conditions for its own internal orbit propagator.2 Other
examples are the command parameters for telecommands related to orbit correction
manoeuvres or parameters for the inhibition of sensors to avoid blinding, in case this
is not performed autonomously by the satellite.
The orbit products generation box in Fig. 6.1 shows the components that generate
the full set of FDF output files (products) in the required format and frequency with
a few typical examples mentioned next to it. The Task Parameter File (TPF) is an
example of a clearly defined format used to exchange command parameters with the
Satellite Control Facility (SCF) (cf. [1]).
The FDDB in Fig. 6.1 refers to the flight dynamics database which contains all
the necessary parameters of the satellite and the orbit environment that are used
by the various algorithms implemented in the FDF with a few examples listed in
Table 6.1.
6.2 Orbit Propagation

Orbit propagation is performed in order to advance orbit information from one initial
epoch t0 to a target epoch t1 . The target epoch will usually lie somewhere in the
future but could in theory also be in the past. The following fundamental properties
are worth mentioning in relation to orbit propagation:
• The accuracy of the propagated orbit is defined by the quality of the initial orbital
elements at t0 , the quality of the applied force model, and assumptions in the
propagation algorithm (e.g., step size, applied interpolation techniques, etc.);
• The accuracy of the propagated orbit degrades with the length of the propagation
interval. This is mainly caused by the build-up of truncation and round-off errors
due to a fixed computer word length.
Orbit propagation techniques need to solve the equations of motion of the satellite
in orbit. From Newton’s second law the equation of motion can simply be
written as
2 The OOP parameters computed on ground are usually uplinked via a dedicated TC around
once per week. The exact frequency however depends on the complexity of the OOP algorithm
implemented in the satellite onboard software.
Table 6.1 Summary of important parameters in the Flight Dynamics Database (FDDB)
FDDB parameter Description
Mass properties Satellite dry and wet mass (the wet mass value is usually only available
after the tank filling has been completed at the launch site).
Inertia tensor This includes relevant information for interpolation due to a change of
mass properties.
Centre of mass Location (body fixed frame) at initial launch configuration and
(CoM) predicted evolution (e.g., interpolation curves as function of filling
ratio).
Attitude Position and orientation of all attitude sensors (e.g., body frame
mounting matrix); Field-of-View (FoV) dimensions and orientation.
Thruster Mounting position and orientation, thrust force and mass flow (function
of tank pressure), efficiency factor, etc.
Propellant Parameters required for fuel estimation (e.g., hydrazine density, tank
volume at reference pressure, filling ratio, etc.)
Planetary orientation For Earth Orientation Parameters (EOP) refer to relevant IERS
publications [2].
Time system Parameters (e.g., leap seconds) required for the transformation between
time systems (UT1, UTC, GST).
Gravity field Parameters for the definition of the central body gravitational field.
Physical constants Relevant physical and planetary constants required by the various FD
algorithms.
F = d(m ṙ)/dt = m r̈    (6.1)
where r is the position vector of the satellite expressed in an Earth centred inertial
coordinate system referred to as ECI (see Appendix A),3 r̈ is the corresponding
double time derivative of the position vector (acceleration), m the satellite mass,
and F is the force vector acting on the satellite. If one combines the satellite mass
and the mass of the central body M into the term μ = G(m + M), the equation of
(relative) motion can be expressed in the so called Cowell’s formulation
r̈ = −μ r/r³ + fp    (6.2)
where the first term on the right hand side represents the central force and fp the
perturbing forces acting on the satellite. The latter one can be further decomposed
into several components
3 In a rotating coordinate system a more complex formulation is needed that also considers the
Coriolis and centrifugal accelerations.
Table 6.2 Summary of perturbing forces in Eq. (6.2). The mathematical expressions for the
various force contributions were adapted from [3]
Force Description Mathematical expression
fNS Non-spherical gravitational influence fNS = T^XYZ_xyz T^xyz_rφλ ∇U
including tidal contributions
f3B Third-body effects f3B = Σ_{j=1..np} μj (Δj/Δj³ − rj/rj³)
fg General relativity contribution fg = μ/(c²r³) {[2(β + γ) μ/r − γ (ṙ·ṙ)] r + 2(1 + γ)(r·ṙ) ṙ}
4 There are also special analytical solutions available for the restricted three-body problem which
assumes circular orbits for the primary and secondary body and the mass of the third body (satellite)
to be negligible compared to that of the major bodies (refer to e.g., [4, 5]).
6.3 Orbit Determination 99
ṙ = v
v̇ = −μ r/r³ + fp    (6.4)
Integrating Eq. (6.4) yields the state vector X(t) = {r, v} at epoch t, provided
the initial conditions X(t0) = {r0, v0} are given as input. Various numerical integration
techniques are available which can be categorised as follows:
• Single-step Runge-Kutta methods have the fundamental property that each
integration step can be performed completely independently of the others. Such
methods are therefore easy to use and the step size can be adapted at each step.
• Multi-step methods reduce the total number of function evaluations required
but need the storage of values from previous steps. These methods are suitable
in case of complicated functions; examples are the Adams-Bashforth or
Adams-Moulton methods.
• Extrapolation techniques like the Bulirsch-Stoer method perform one single
integration step but improve the accuracy of the Runge-Kutta method by dividing
this step into a number of (micro-) steps for which extrapolation methods are used
in order to evaluate the integration function.
Single step methods will usually be used in combination with adaptive step size
techniques that perform an error estimation at each step, compare it to a predefined
tolerance, and adapt the size of the subsequent step (cf. the Runge-Kutta-Fehlberg
method). Numerical integration techniques are extensively discussed in the open
method). Numerical integration techniques are extensively discussed in the open
literature and the reader is referred to e.g., [6], [7], [8], [9], [10] to gain further
insight.
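As a minimal sketch of a single-step method applied to Eq. (6.4), the classical fixed-step Runge-Kutta 4 scheme with only the central force term (all perturbations fp and any adaptive step-size control omitted):

```python
import numpy as np

MU = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def accel(r: np.ndarray) -> np.ndarray:
    """Central-force term of Cowell's formulation, Eq. (6.2), with fp = 0."""
    return -MU * r / np.linalg.norm(r) ** 3

def rk4_step(r: np.ndarray, v: np.ndarray, h: float):
    """One Runge-Kutta 4 step of the first-order system of Eq. (6.4)."""
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * h * k1v, accel(r + 0.5 * h * k1r)
    k3r, k3v = v + 0.5 * h * k2v, accel(r + 0.5 * h * k2r)
    k4r, k4v = v + h * k3v, accel(r + h * k3r)
    return (r + h / 6.0 * (k1r + 2.0 * k2r + 2.0 * k3r + k4r),
            v + h / 6.0 * (k1v + 2.0 * k2v + 2.0 * k3v + k4v))
```

For a circular orbit the radius should stay constant, which provides a simple self-check of the integrator's accuracy.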
6.3 Orbit Determination

The aim of this section is to present the fundamental formulation of the statistical5
orbit determination (OD) problem and to provide an overview of the various
techniques used in modern satellite operations. For a more detailed description of
this complex subject matter the reader is encouraged to consult a variety of excellent
text books dedicated to this specific topic (cf., [3], [11], [12], [13]).6
5 Statistical orbit determination stands in contrast to the deterministic approach, which does not
consider observation errors and limits the problem to the minimum required number of
observations. The statistical approach processes more measurements than strictly required,
which allows the effect of noise and observation errors to be reduced.
6 The notation and formulation presented here follows the one given by Tapley et al. (cf. [3]),
which should allow the reader to more easily consult this highly specialised text.
Fig. 6.2 Basic concept of Statistical Orbit Determination: measured observations are represented
by the grey diamonds, modelled observations by black circles
The true trajectory of a satellite flying in an orbit around a central body is only
known to a certain degree of accuracy at any given time. It is therefore common
to refer to the current best-knowledge trajectory as the reference trajectory, which is
also the starting point of any OD process. The objective of OD is to improve the
accuracy of the reference trajectory and to minimise its deviation from the true one
by processing new available observations (see Fig. 6.2). Before outlining the basic
OD formulation, it is worth defining a few relevant terms and their notation in more
detail.
• The n-dimensional vector X(t) ≡ Xt = {r(t), v(t)} contains the position and
velocity vectors of the satellite at epoch t. To refer to a state vector at a different
epoch ti, the short notation X(ti) ≡ Xi is used.
• The observation modelling function G(Xi) defines the mathematical expression
to compute the observation at epoch ti, with (i = 1, . . . , l), using the available
state vector information Xi. The asterisk (e.g., Xi∗) indicates that the state vector
stems from the reference trajectory at epoch ti.
• The observation vector Y_i represents the measured observations at epoch t_i
which can be used as input to the OD process to update the orbit knowledge.
The index i = 1, . . . , l represents the number of different epochs at which
observations are available, and the dimension of Y_i (i.e., 1, . . . , p) is determined
by the number of different types of observations available. As an example, if
azimuth, elevation, range, and range rate measurements were provided by
the TT&C station, then p = 4. In general, p < n, where n is the number of
parameters or unknowns to be determined, and p × l >> n.
• The residual vector ε_i also has dimension p and is given at epochs t_i (again
with i = 1, . . . , l); it contains the differences between the measured observations
and the modelled ones. The measured values are affected by measurement
errors of the tracking station, whereas the computed ones are affected by inaccuracies
of the observation modelling function and by the deviation of the reference
trajectory from the true one.
With this notation in mind, the governing relations of the OD problem can be
formulated as

Ẋ_i = F(X_i, t_i)
Y_i = G(X_i, t_i) + ε_i    (6.5)
where the first equation has already been introduced in Eq. (6.4). It is important
to note that both the dynamics Ẋ and the measurements Y involve significant
nonlinear expressions introduced by the functions F and G. This makes the
orbit determination of satellite trajectories a nonlinear estimation problem that
requires the application of nonlinear estimation theory. With the assumption that
the reference trajectory X_i^* is close enough to the true trajectory X_i throughout the
time interval of interest, Eq. (6.5) can be expanded into a Taylor series about the
reference trajectory. This leads to the following two expressions:

Ẋ(t) = F(X, t) = F(X^*, t) + [∂F/∂X]^* [X(t) − X^*(t)] + O_F[X(t) − X^*(t)]    (6.6)

Y_i = G(X_i, t_i) + ε_i = G(X_i^*, t_i) + [∂G/∂X]_i^* [X(t_i) − X^*(t_i)] + O_G[X(t_i) − X^*(t_i)] + ε_i    (6.7)

where the symbols O_F(. . .) and O_G(. . .) represent the higher order terms of the
Taylor series, which are neglected in the further treatment as they are assumed to be
negligible compared to the first order terms.
The state deviation vector x_i (dimension n × 1) and the observation deviation
vector y_i (dimension p × 1) can now be introduced as

x_i ≡ x(t_i) = X(t_i) − X^*(t_i)
y_i ≡ y(t_i) = Y(t_i) − Y^*(t_i) = Y(t_i) − G(X_i^*, t_i)    (6.8)
and using the deviation vectors from Eq. (6.8), the expressions in Eq. (6.6) and
Eq. (6.7) can now be re-written as a set of linear differential equations

ẋ_i = A_i x_i
y_i = H̃_i x_i + ε_i    (6.10)
where Φ(t, t_k) = ∂X(t)/∂X(t_k) is the n × n state transition matrix, which can be computed
in the same numerical integration process as the state vector by solving the following
set of differential equations, known as variational equations:

Φ̇(t, t_k) = A(t) Φ(t, t_k),    Φ(t_k, t_k) = I
With the definition of H = H̃ Φ and dropping the subscripts, the above equation
can be simplified to

y = H x + ε    (6.15)
The objective of the orbit determination process is to find the state deviation
vector x̂ that minimises the observation deviation vector y, or more precisely, the
sum of the squares of (y − H x). The linearisation of the problem in the form of
Eq. (6.15) allows the application of linear estimation theory algorithms like the
weighted least squares method, which provides

x̂ = (Hᵀ W H)⁻¹ Hᵀ W y    (6.16)

where W is the weighting matrix of the observations.
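As an illustration, the weighted least squares solution x̂ = (Hᵀ W H)⁻¹ Hᵀ W y can be sketched in a few lines of Python; the observation matrix, weights, and residuals below are purely illustrative numbers, not from any real tracking scenario, and W is assumed diagonal:

```python
import numpy as np

def wls_correction(H, y, W):
    """Weighted least squares state correction x_hat = (H^T W H)^-1 H^T W y."""
    normal_matrix = H.T @ W @ H
    return np.linalg.solve(normal_matrix, H.T @ W @ y)

# Illustrative 2-parameter, 4-observation problem (made-up numbers):
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
W = np.diag([1.0, 1.0, 0.5, 0.5])   # down-weight the noisier observations
y = np.array([0.10, 0.25, 0.41, 0.62])

x_hat = wls_correction(H, y, W)
```

Solving the normal equations via `np.linalg.solve` avoids the explicit (and numerically less stable) matrix inversion suggested by the closed-form expression.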
6.4 Orbit Control

Orbit control encompasses the regular monitoring of the current orbit evolution,
the detection of the need for an orbit correction, and the planning and execution of
orbit manoeuvres. The main phases of the orbit monitoring and control process are
schematically shown in Fig. 6.3. The starting point for every orbit control activity
should always be the acquisition of the up-to-date satellite orbit from an orbit
determination that is based on the most recent available radiometric data. With the
up-to-date orbit knowledge in place, the orbit propagation and monitoring process
can be performed by comparing the current Keplerian elements with the target
ones. If the deviation is considered too large and the satellite is reaching the
borders of its station keeping control box, the manoeuvre planning activity can start
to determine the required magnitude of orbit change, which can be expressed as
differential Keplerian elements Δa, Δe, Δi, ΔΩ, and Δu. For circular orbits (near-
zero eccentricity), the Gauss equations can be used to derive the size, direction, and
location of the orbit change manoeuvre Δv
Fig. 6.3 Orbit monitoring and correction flow chart including constraint checks (refer to text)
Δa = (2a/v) Δv_T

Δe_x = 2 (Δv_T/v) cos u + (Δv_R/v) sin u

Δe_y = 2 (Δv_T/v) sin u − (Δv_R/v) cos u    (6.17)

Δi = (Δv_N/v) cos u

ΔΩ = (sin u / sin i) (Δv_N/v)

where Δv_T, Δv_R, and Δv_N are the tangential, radial, and normal components of the
velocity change. The eccentricity vector is defined through its components

e_x = e cos ω
e_y = e sin ω    (6.18)
and points from the orbit apoapsis to its periapsis, with its magnitude providing the
value of the orbit eccentricity. The argument of latitude (also referred to as phase)
is given by u = ω + ν and can be readily obtained from the vector separation angle
between the satellite position r and the ascending node vector n.7
The Gauss equations as defined in Eq. (6.17) provide orbit corrections as so-
called impulsive manoeuvres, which can be understood as instantaneous velocity
changes Δv of a given magnitude, direction (e.g., in-plane or out-of-plane), and
time or orbit location, as depicted in the upper portion of Fig. 6.4. The important
characteristic of such impulsive manoeuvres is that they are fully determined by
the extent of the required orbit change. They are however fully independent of satellite
specific parameters like mass or thrust force and are therefore referred to as platform
independent manoeuvres. To give an (extreme) example: to change the semi-major
axis for a given orbit geometry by a certain extent, the exact same Δv value applies
for a nano-satellite with a mass of a few kg and for the International Space Station with
a mass of several hundred metric tonnes.
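This platform independence can be checked numerically: inverting the first Gauss equation for a circular orbit gives Δv_T = (v/2a) Δa, which contains no mass or thrust term. A minimal sketch with illustrative orbit values:

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def dv_tangential_for_da(a, da):
    """Tangential delta-v for a semi-major axis change da on a circular
    orbit, inverted from the Gauss relation da = (2a/v) * dv_T."""
    v = math.sqrt(MU / a)          # circular orbit velocity
    return v / (2.0 * a) * da

a = 7000e3                                # semi-major axis [m], illustrative LEO
dv = dv_tangential_for_da(a, da=1000.0)   # raise the semi-major axis by 1 km
# No mass or thrust term appears: the same dv applies to a nano-satellite
# and to a space station flying the same orbit geometry.
```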
Once a first impulsive manoeuvre strategy has been determined, it needs to
be checked against any known satellite specific constraints for manoeuvre execution,
which is indicated by the "Constraint Check" box of Fig. 6.4. Examples
of constraints could be the visibility of Sun or Moon in AOCS sensors, eclipse
periods affecting the power budget, solar incidence angles imposed by thermal load
limits, or required visibility to one or more TT&C stations. In case of a violation,
the manoeuvre strategy needs to be revised, which is indicated by the loop over
"Impulsive Manoeuvres".
7 The argument of latitude is the preferred quantity for the definition of the satellite position along
the orbit as it is also suitable for near-circular orbits for which the perigee is not well defined.

After the impulsive manoeuvre strategy has converged, the platform specific
extended manoeuvre parameters can be derived, which is indicated by the "Manoeuvre
Computation" box (see Fig. 6.4). The extended manoeuvre parameters are the ones
required for the actual commanding and usually comprise values for the start epoch,
thruster burn duration (or thruster on and off times), and the required satellite
attitude during burn.8 For larger sized manoeuvres the burn duration might last up to
several minutes and an additional constraint check is highly recommended to ensure
that the constraints are fulfilled for the full burn duration. As an example, the start
epoch of a manoeuvre could be well outside of an eclipse, but the satellite might
enter into an eclipse period during the burn which could violate a flight rule and
make the manoeuvre placement invalid. Only if all constraint checks are fulfilled
for the extended manoeuvres can the manoeuvre command parameters be computed
and uplinked to the satellite.
The burn duration Δt_thrust of the extended manoeuvre can be derived from the
Δv value using Newton's second law

Δt_thrust = (m_sat Δv) / (n_t F̄ cos α η_eff)    (6.19)
where F̄ is the average thrust force of one single satellite thruster, nt is the number
of thrusters used for the manoeuvre burn, α is the thruster tilt angle, and ηeff is
an efficiency factor to compensate for (known or expected) manoeuvre execution
errors. The number of thrusters used for an orbit correction manoeuvre depends on
the satellite AOCS software implementation and usually fewer thruster are used for
small sized manoeuvres.
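Eq. (6.19) transcribes directly into code; all platform values below are purely illustrative and do not correspond to any specific satellite:

```python
import math

def burn_duration(m_sat, dv, n_t, f_avg, alpha_deg, eta_eff):
    """Burn duration per Eq. (6.19):
    dt = m_sat * dv / (n_t * F_avg * cos(alpha) * eta_eff)."""
    return m_sat * dv / (n_t * f_avg * math.cos(math.radians(alpha_deg)) * eta_eff)

# Purely illustrative platform values:
dt = burn_duration(m_sat=500.0,    # satellite mass [kg]
                   dv=0.5,         # requested delta-v [m/s]
                   n_t=2,          # thrusters used for the burn
                   f_avg=1.0,      # average thrust per thruster [N]
                   alpha_deg=10.0, # thruster tilt angle [deg]
                   eta_eff=0.95)   # manoeuvre efficiency factor
```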
The thrust force F and the mass flow ṁ depend on the tank pressure, which
is a function of the filling ratio. The exact relationships for F(p) and ṁ_f(p)9 are
hardware specific, and the exact dependencies can only be measured on ground in
a dedicated thruster performance measurement campaign. This data is therefore
an important delivery from the space to the ground segment10 and needs to
be implemented in the FDF in order to ensure an accurate computation of the
manoeuvre command parameters.
The satellite mass m_sat decreases during each manoeuvre burn due to the
consumption of propellant, which has to be taken into consideration when applying
Eq. (6.19). The decrease of propellant mass m_p can be estimated using the well-
known rocket (or Tsiolkovsky) equation.
8 The detailed list of the required manoeuvre command parameters needs to be consulted from the
satellite user manual.
Fig. 6.5 Numerical application of the rocket equation Eq. (6.20) for a series of equally sized burns
of Δv = 20 m/s; m_i and m_f refer to the initial and final satellite mass and m_p is the consumed
propellant mass at each burn
m_p = m_i − m_f = m_i (1 − e^(−Δv/(I_sp g)))    (6.20)

where m_i and m_f are the satellite mass at the beginning and the end of the
burn, I_sp is the specific impulse of the thruster system (propellant specific), and
g is the standard gravitational acceleration. A numerical application of the rocket equation is
demonstrated in Fig. 6.5 where the decrease in satellite mass for a series of (equally
sized) manoeuvres with a Δv value of 100 m/s is shown. It can be seen that even
for a considerably sized Δv, a linear interpolation of the mass decrease curve is an
acceptable approximation in order to estimate the satellite mass at the manoeuvre
mid-point. m_sat in Eq. (6.19) can therefore be approximated with the mid-point
satellite mass
m_sat = m_i − m_p/2 = (m_i/2) (1 + e^(−Δv/(I_sp g)))    (6.21)
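Eqs. (6.20) and (6.21) can be applied numerically as follows; the specific impulse used here is an assumed value for a typical mono-propellant system, and the initial mass is illustrative:

```python
import math

G0 = 9.80665  # standard gravitational acceleration [m/s^2]

def propellant_used(m_i, dv, isp):
    """Rocket equation, Eq. (6.20): m_p = m_i * (1 - exp(-dv / (Isp * g)))."""
    return m_i * (1.0 - math.exp(-dv / (isp * G0)))

def midpoint_mass(m_i, dv, isp):
    """Mid-burn satellite mass approximation, Eq. (6.21)."""
    return 0.5 * m_i * (1.0 + math.exp(-dv / (isp * G0)))

m_i = 500.0   # initial mass [kg], illustrative
dv = 20.0     # burn size [m/s]
isp = 220.0   # specific impulse [s], assumed mono-propellant value
m_p = propellant_used(m_i, dv, isp)
m_mid = midpoint_mass(m_i, dv, isp)
```

For this burn size the consumed mass is only a small fraction of the satellite mass, which is why the linear mid-point approximation of Eq. (6.21) is adequate.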
The required satellite attitude during a manoeuvre burn phase depends on the
specific orbit element that needs to be changed. From Eq. (6.17) it can be readily
seen that a change of the orbit semi-major axis, eccentricity, or satellite phase
requires an in-plane manoeuvre with the thrust directed parallel to the orbital frame’s
tangential direction eT . A change of inclination or ascending node in contrast
requires an out-of-plane manoeuvre with a thrust vector directed parallel to the
orbital frame normal direction eN .
The execution time of a burn is derived from the required location of the burn
on the orbit which can be expressed using the argument of latitude u. The last
two equations of Eq. (6.17) indicate that a pure inclination change requires the
manoeuvre to be located in the ascending node (i.e., u = 0 deg), whereas a pure
change of Ω requires a manoeuvre location at the pole (i.e., u = 90 deg). It is also
possible to combine the change of both elements using the following relation
Δv_N = v √((Δi)² + (ΔΩ sin i)²)

u = arctan(ΔΩ sin i / Δi)    (6.22)
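A sketch of this combined out-of-plane sizing; the factor v is carried along so that the expression stays consistent with the Δi and ΔΩ relations of Eq. (6.17), and the orbit values are illustrative:

```python
import math

def combined_out_of_plane(di, dOmega, inc, v):
    """Combined inclination/node change per Eq. (6.22):
    dv_N = v * sqrt(di^2 + (dOmega * sin i)^2),
    burn location u = arctan(dOmega * sin i / di)."""
    node_term = dOmega * math.sin(inc)
    dv_n = v * math.hypot(di, node_term)
    u = math.atan2(node_term, di)    # argument of latitude of the burn
    return dv_n, u

# Pure inclination change of 0.1 deg: the burn falls at the ascending node (u = 0)
dv_n, u = combined_out_of_plane(di=math.radians(0.1), dOmega=0.0,
                                inc=math.radians(98.0), v=7500.0)
```

Using `atan2` instead of a plain arctangent keeps the burn location in the correct quadrant when Δi is negative.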
• Orbit acquisition manoeuvres are performed during a LEOP phase and executed
shortly after separation of the satellite from the launcher’s upper stage. The tar-
geted orbit changes, both in-plane and out-of-plane, are usually quite significant.
The main reason for this is that launchers usually place a satellite into an
injection orbit that is intentionally different from the operational orbit in order
to avoid any placement of harmful objects (e.g., the launcher's upper stage or
dispensers) into the same region, which could later pose a collision risk to the
satellite.
• Station-keeping manoeuvres are usually small size manoeuvres meant to keep the
satellite within a certain orbit control box. Such manoeuvres play an important
role for satellites placed in geostationary orbit or constellations with clearly
defined geometrical patterns (e.g., Walker constellations [14]).
• Disposal manoeuvres are planned at the end-of-life of a satellite with the aim
of placing the satellite into a graveyard orbit which (so far) is not considered
relevant for standard satellite services. For satellites in low altitude orbits, the
aim is to further reduce the semi-major axis in order to accelerate their reentry
and make them burn up in the upper layers of the atmosphere.
6.5 Propellant Gauging
This method requires the knowledge of the thruster on-times t_on,i of each thruster
cycle (activation), which should be available from satellite housekeeping telemetry.
The consumed propellant mass of one active thruster cycle can be computed from

m_p,i = ṁ_i t_on,i η_ṁ,i    (6.23)
where ṁ_i is the mass flow rate (kg/s) and η_ṁ,i is the dimensionless mass flow
efficiency factor which considers a change of the mass flow value if thrusters are
operated in pulsed mode. The consumed propellant at any time t during the mission
is given by summing up the consumed propellant mass of all thruster cycles n that
have occurred so far:

m_p(t) = Σ_{i=1}^{n} m_p,i    (6.24)
The remaining propellant mass can now be derived from the initial propellant
mass at Beginning-of-Life (BOL), m_p,BOL, which is measured when the satellite is
being fuelled on ground:

m_p,rem(t) = m_p,BOL − m_p(t)    (6.25)

Eq. (6.25) makes the book keeping aspect of this technique quite obvious, but
it also shows its major weakness of accumulating the prediction error with every
summation step. This implies the lowest fuel estimation accuracy at the satellite's
end of life, when the highest accuracy would actually be needed.
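The book keeping method reduces to a simple summation over telemetry-derived thruster cycles; the cycle data and BOL load below are purely illustrative:

```python
def consumed_per_cycle(mdot, t_on, eta):
    """Propellant consumed in one thruster cycle: mdot * t_on * eta."""
    return mdot * t_on * eta

def remaining_propellant(m_bol, cycles):
    """Book keeping: remaining mass is the BOL load minus the summed
    consumption of all thruster cycles flown so far (cf. Eq. (6.24))."""
    consumed = sum(consumed_per_cycle(mdot, t_on, eta)
                   for mdot, t_on, eta in cycles)
    return m_bol - consumed

# Illustrative telemetry: (mass flow [kg/s], on-time [s], efficiency factor)
cycles = [(4.0e-4, 120.0, 1.00),    # steady-state burn
          (4.0e-4, 45.0, 0.92)]     # pulsed-mode burn, reduced efficiency
m_rem = remaining_propellant(m_bol=30.0, cycles=cycles)
```

Because every cycle adds its own mass flow and efficiency uncertainty, the error of `m_rem` grows monotonically with the number of cycles, which is exactly the weakness noted above.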
The detailed mathematical formulation of the PVT method depends on the satellite
platform specific tank configuration, but the main underlying principle is based on
the tank pressure and temperature measured by dedicated transducers and provided
as part of the housekeeping telemetry. The additional knowledge of the tank volume
from ground acceptance tests and the use of the ideal gas law allows to derive the
propellant mass. A simplified formulation for a typical mono-propellant blow-down
propulsion system is presented here for which the notation from NASA-TP-2014-
218083 [15] has been adapted. A schematic drawing that helps to understand the
notation and method is provided in Fig. 6.6.
The supply bottle on the right side contains a pressurant gas, for which Helium
is frequently used as it has the important property of being non-condensable with the
liquid propellant. The supply bottle is connected to the tank that contains the liquid
propellant. When the latch valve is opened, the gaseous He can flow into the ullage
and initiates the flow of the liquid propellant to the catalyst bed of the thruster where
it ignites. The objective of the propellant gauging algorithm is to compute the mass
of the liquid propellant

m_L = ρ_L V_L = ρ_L (V_t − V_u)    (6.26)
where V_t is the propellant tank volume that needs to be accurately measured prior to
launch, and V_L and V_u are the volumes of the liquid propellant and the ullage sections
respectively. V_u increases with the mass flow of gaseous He from the pressurant
bottle into the propellant tank Δm_He according to

V_u = Δm_He / ρ_He    (6.27)

where the density of the gaseous He in the ullage is a function of the He partial
pressure and the tank temperature,

ρ_He = f(P_He, T_t)    (6.28)
P_He can be derived from the measured tank pressure P_t and the known value of
the propellant's vapour saturation pressure P_sat according to

P_He = P_t − P_sat    (6.29)
The amount of gaseous He mass flown into the ullage can be expressed as

Δm_He = m_He,i − m_He,t = V_b (ρ_He,i − ρ_He,t)    (6.30)

where m_He,i is the Helium mass at the initial state and m_He,t the one at time t when
the PVT method is being applied. The volume of the supply bottle V_b is a known
quantity, and the gas densities at the initial time and at time t are a function of the
measured pressure and temperature values at these times, i.e.,

ρ_He,i = f(T_i, P_i)
ρ_He,t = f(T_b, P_b)    (6.31)
The accuracy of the PVT method depends on various factors like the uncertainty
in the initial loading condition, pressurant gas solubility in the propellant,11 thermal
conditions that influence the stretching of the tank volume, and finally the measurement
accuracy of the pressure and temperature transducers.
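The PVT chain can be sketched end-to-end, using the ideal gas law for the He density and the partial pressure relation P_He = P_t − P_sat; all tank values below are illustrative and do not correspond to a real design:

```python
R_HE = 2077.1  # specific gas constant of helium [J/(kg K)]

def liquid_propellant_mass(rho_l, v_tank, p_tank, t_tank, p_sat, m_he_ullage):
    """Simplified PVT gauging for a blow-down system: the He partial
    pressure gives the ullage gas density via the ideal gas law, hence
    the ullage volume (Eq. (6.27)) and finally the liquid mass
    (Eq. (6.26))."""
    p_he = p_tank - p_sat                # He partial pressure [Pa]
    rho_he = p_he / (R_HE * t_tank)      # ideal gas law
    v_ullage = m_he_ullage / rho_he      # Eq. (6.27)
    return rho_l * (v_tank - v_ullage)   # Eq. (6.26)

# All values illustrative, not from a real tank design:
m_l = liquid_propellant_mass(rho_l=1010.0,      # propellant density [kg/m^3]
                             v_tank=0.10,       # tank volume [m^3]
                             p_tank=1.6e6,      # measured tank pressure [Pa]
                             t_tank=293.0,      # tank temperature [K]
                             p_sat=1.9e3,       # vapour saturation pressure [Pa]
                             m_he_ullage=0.05)  # He mass in the ullage [kg]
```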
11 To avoid such an unwanted interaction, some tanks implement a diaphragm that separates the
pressurant gas from the liquid propellant.
6.6 Onboard-Orbit-Propagator
The onboard software requires the knowledge of the satellite’s state vector in
order to derive frame transition matrices and to predict various orbit events like
upcoming eclipses (see also Chap. 3). Satellites therefore implement their own orbit
propagation module which is referred to as onboard orbit propagator or OOP.
Such algorithms either implement orbit propagation techniques based on numerical
integration with representative force models as described in Sect. 6.2 or use more
simplistic approaches that limit themselves to interpolation techniques. The orbit
propagation performed by the satellite will by design be of much lower accuracy
compared to the one performed on ground, simply to reduce the computational load
on the onboard processor and its memory. This also implies that the accuracy of
the satellite generated orbit solution will decrease more rapidly and sooner or later
reach a point when it is not reliable anymore. At this point, or preferably even
earlier, the satellite needs to receive updated orbit elements from ground so it can restart
the propagation with new and accurate initial conditions. The satellite manufacturer
has to provide a dedicated telecommand, along with an adequate description of the OOP
design, type of update (e.g., Kepler elements, Cartesian state vector, reference frame,
etc.), and required update frequency, as part of the satellite user manual.
Despite the higher accuracy of the orbit propagation performed on ground, it
is still recommended to implement an OOP emulator in addition. This makes it
possible to simulate the orbit information available to the satellite onboard software,
which might be relevant for the planning of specific activities that depend on event
prediction times determined by the satellite.
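Such an emulator can be sketched as follows; the linear mean anomaly propagation and the fixed element validity period are assumptions made for illustration only, since a real emulator must follow the manufacturer's OOP description:

```python
import math

class OopEmulator:
    """Ground-side emulator of a simplistic OOP (hypothetical design:
    constant mean motion and a fixed element validity period)."""

    def __init__(self, a, mean_anomaly_0, epoch_0, validity_s,
                 mu=3.986004418e14):
        self.n = math.sqrt(mu / a**3)   # mean motion [rad/s]
        self.m0 = mean_anomaly_0
        self.t0 = epoch_0
        self.validity_s = validity_s

    def mean_anomaly(self, t):
        """Propagate the mean anomaly linearly from the reference epoch."""
        return (self.m0 + self.n * (t - self.t0)) % (2.0 * math.pi)

    def update_due(self, t):
        """True once the onboard elements are older than their validity,
        i.e., a fresh OOP update telecommand should be planned."""
        return (t - self.t0) > self.validity_s

oop = OopEmulator(a=7000e3, mean_anomaly_0=0.0, epoch_0=0.0,
                  validity_s=86400.0)
m = oop.mean_anomaly(3000.0)
stale = oop.update_due(2 * 86400.0)
```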
6.7 Collision Monitoring
Due to the growing complexity of the orbital environment, space assets face an
increasing risk of colliding with space debris, other satellites (retired and active ones),
or the upper stage of their own launch vehicle after separation. The ability to perform
collision risk monitoring has therefore become a vital task for a modern ground
segment in order to guarantee the safety of a satellite project.
As the collision monitoring module requires numerical orbit propagation techniques,
it is usually integrated in the FDF, where orbital mechanics libraries can be
easily accessed. A conceptual view of a collision monitoring module is depicted
in Fig. 6.7, where all the input data are shown on the left side and comprise the
following inputs:
• A space object catalogue providing a database of all tracked space objects, usually
in the form of Two-Line-Elements (TLE). The contents of this database need to
be provided by external sources like NORAD (via the Center for Space
Standards & Innovation [21]) or the EU Space Surveillance and Tracking (SST)
framework [22], which maintain an up-to-date database of the highly dynamic space
environment. As TLEs by design have only a limited validity of a few days, a
regular update mechanism for this catalogue is necessary in order to guarantee a
complete and up-to-date scanning of all space objects.
• An accurate estimate of the orbit data of the satellite for which collision monitoring
is performed. This should be based on the most recent orbit determination, which
not only provides the state estimate but also its covariance matrix. The latter
can be seen as an uncertainty ellipsoid that is centered on the satellite position
estimate.
Fig. 6.7 Schematic overview of the collision monitoring functionality in the FDF. CCSDS CDM =
Conjunction Data Message as defined by CCSDS [20]
114 6 The Flight Dynamics Facility
• The CCSDS Conjunction Data Message or CDM [20] is a specific format that has
been defined to allow an easier exchange of spacecraft conjunction information
between conjunction assessment centres and satellite owners or operators.
This standardised message contains all relevant information for both objects
involved in a conjunction at the time of closest approach (TCA). This comprises
state vectors, covariance matrices, relative position and velocity vectors, and other
relevant information that is useful for further analysis by the recipient. It is highly
recommended to implement such an external interface and sign up with one of the
centres providing this type of message (e.g., the Joint Space Operations Center,
ESA/ESOC Space Debris Office, or EUSST).
The SGP-4/SDP-412 propagator component implements the necessary algorithms
that allow the orbit propagation of the TLE-based risk or chaser object to
the required epoch. The Conjunction Event Analysis component in Fig. 6.7 contains
the algorithms to determine a possible conjunction of the satellite and the risk object
and also determines the likelihood of a collision, which is achieved via two principal
methods. The first one is the successive application of the following set of filters (cf.
[23] and [24]):
1. Unreliable TLEs: a filter that rejects TLEs with epochs that are too old to still
be reliable (e.g., older than 30 days w.r.t. the current investigation epoch) or that have
decayed already.
2. Altitude filter: rejects TLEs for which the risk object's orbit does not intersect
the spherical altitude shell between the perigee and apogee of the target orbit
(also considering the orbit decay and short-periodic semi-major axis variations).
3. Orbit geometry check: computes the intersection line of the risk and target orbits
and rejects those TLEs whose closest approach distance at the ascending and
descending nodes is larger than the maximum extension of the user defined
collision reference ellipsoid that is centred on the target object.
4. Phase filter: computes the passage times of risk and target object at the ascending
nodes of the intersection line and rejects the TLE if, at the times of possible
conjunction, the relative distance is larger than the reference collision ellipsoid of
the target spacecraft.
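The altitude filter (step 2), for instance, reduces to a simple interval overlap test; the safety margin value below is illustrative:

```python
def altitude_filter(perigee_risk, apogee_risk, perigee_tgt, apogee_tgt,
                    margin=50e3):
    """Keep a risk object only if its altitude band overlaps the target's
    band, padded by a safety margin [m] covering decay and short-periodic
    semi-major axis variations (margin value illustrative)."""
    return (perigee_risk - margin) <= apogee_tgt and \
           (apogee_risk + margin) >= perigee_tgt

# A risk object orbiting far above the target band is rejected:
keep = altitude_filter(perigee_risk=1200e3, apogee_risk=1400e3,
                       perigee_tgt=690e3, apogee_tgt=710e3)
```

Because this check is so cheap, it can be run against the full catalogue before any propagation takes place.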
If a TLE has passed all these filters, an iterative Newton scheme is applied to
find the zero transition of the range-rate ρ̇ between the two objects at which the
minimum distance is achieved (time of closest approach):

ρ̇ = (Δr / |Δr|) · Δv = 0 → t_TCA    (6.32)
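The Newton iteration for the time of closest approach can be sketched as follows; the sketch assumes a near-constant relative velocity around the encounter (so the derivative of Δr · Δv is approximately |Δv|²) and uses straight-line relative motion as a test case:

```python
import numpy as np

def refine_tca(rel_pos, rel_vel, t0, tol=1e-9, max_iter=25):
    """Newton iteration on g(t) = dr(t) . dv(t), whose zero coincides
    with the zero of the range rate rho_dot = g / |dr| in Eq. (6.32)."""
    t = t0
    for _ in range(max_iter):
        dr, dv = rel_pos(t), rel_vel(t)
        g = float(np.dot(dr, dv))
        g_prime = float(np.dot(dv, dv))   # dg/dt for near-constant dv
        t_next = t - g / g_prime
        if abs(t_next - t) < tol:
            return t_next
        t = t_next
    return t

# Straight-line relative motion as a test case: closest approach at t = 50 s
r0 = np.array([-500.0, 100.0, 0.0])
v_rel = np.array([10.0, 0.0, 0.0])
t_tca = refine_tca(lambda t: r0 + v_rel * t, lambda t: v_rel, t0=10.0)
```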
The second method requires less CPU time as it replaces the last two filters with
the implementation of a smart sieve algorithm (cf., [25] and [26]) which analyses
12 The SDP-4 theory is used for objects with an orbital period exceeding 225 minutes.
the time history of ranges ρ(t) between the target orbit ρt and the risk orbit ρr
at equidistant time steps Δt across the prediction interval. At each time step the
components of the range vector are sequentially checked against a series of safety
distances, that are refined each time (refer to e.g., Table 8.2 of [27]). For the orbits
passing the smart sieve filter, the root-finding method defined in Eq. (6.32) is applied
again to get the time of closest approach t_TCA. To assess the collision probability, the
covariance matrices of both conjunction objects are propagated to the conjunction
time t_TCA. Reducing the (7 × 7) state covariance matrices to (3 × 3) position
covariance matrices and assuming that these are not correlated, they can simply be
added to obtain a combined covariance matrix [28].
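The covariance combination, together with a Mahalanobis-type significance of the miss vector, can be sketched as follows; the significance metric is an illustrative addition (the actual collision probability requires an integral over the combined error ellipsoid, cf. [27], [28]), and all covariance values are made up:

```python
import numpy as np

def combined_position_covariance(cov_target, cov_risk):
    """Add the (3 x 3) position covariances of the two conjunction
    objects, assuming they are uncorrelated."""
    return cov_target + cov_risk

def miss_significance(rel_pos, cov_combined):
    """Mahalanobis distance of the miss vector w.r.t. the combined
    uncertainty; larger values indicate a less likely collision."""
    return float(np.sqrt(rel_pos @ np.linalg.solve(cov_combined, rel_pos)))

cov_t = np.diag([100.0, 400.0, 25.0])    # target position covariance [m^2]
cov_r = np.diag([900.0, 1600.0, 49.0])   # risk object covariance [m^2]
cov_c = combined_position_covariance(cov_t, cov_r)
sig = miss_significance(np.array([50.0, 120.0, 5.0]), cov_c)
```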
6.8 Interfaces
13 For a more detailed mathematical explanation the reader is referred to [27] and [28].
Fig. 6.8 Schematic overview of the FDF interfaces: pointing files, FDF CIs, S/C TM, TC command
parameters (TPF), ranging/Doppler and angle data, meteo data, S/C design data, element commands,
planning requests, event files, and H/W monitoring, exchanged with the SCF, OPF, MPF, SATMAN,
and M&C
e.g., thruster on-times required for the computation of the mass consumption via
the Book Keeping method (see Sect. 6.5).
• MPF (see Chap. 8): the FDF needs to provide an event file to the Mission
Planning Facility that contains station visibilities and the timings of any other
orbit specific events. Furthermore, the FDF needs to be able to inform the
MPF about flight dynamics relevant planning requests (e.g., OOP update, or
manoeuvres).
• OPF (see Chap. 9): as the FDF maintains a significant database of flight
dynamics specific parameters, an interface to the OPF helps to ensure the proper
configuration control of this data. The data to be exchanged are in the form of
Configuration Items (CIs) that need to be clearly defined as part of that specific
interface.
• MCF (see Chap. 10): the interface to the Monitoring and Control Facility serves
to monitor the state of the FDF and allows the transmission of macros for remote
operations (e.g., start of automatic procedures during routine operations phases).
References
4. Szebehely, V. (1967). Theory of orbits: The restricted problem of three bodies. New York and
London: Academic.
5. Vallado, D. A. (2004). Fundamentals of astrodynamics and applications, space technology
library (STL) (2nd ed.). Boston: Microcosm Press and Kluwer Academic Publishers.
6. Shampine, L. F., & Gordon, M. (1975). Computer solution of ordinary differential equations:
The initial value problem. San Francisco, CA: Freeman.
7. Bulirsch, R., & Stoer, J. (1966). Numerical treatment of ordinary differential equations by
extrapolation methods. Numerische Mathematik, 8, 1–13.
8. Hairer, E., Norsett, S. P., & Wanner, G. (1987). Solving ordinary differential equations. Berlin,
Heidelberg, New York: Springer.
9. Gupta, G. K., Sacks-Davis, R., & Tischer, P. E. (1985). A review of recent developments in
solving ODEs. Computing Surveys, 17, 5.
10. Kinoshita, H., & Nakai, H. (1990). Numerical integration methods in dynamical astronomy.
Celestial Mechanics, 45, 231–244.
11. Montenbruck, O., & Gill, E. (2011). Satellite orbits: Models, methods and applications. New
York: Springer.
12. Wiesel, W. E. (2010). Modern orbit determination (2nd ed.). Scotts Valley: CreateSpace.
13. Milani, A., & Gronchi, G. (2010). Theory of orbit determination. Cambridge: Cambridge
University Press.
14. Walker, J. G. (1984). Satellite constellations. Journal of the British Interplanetary Society, 37,
559–571.
15. Neil, T., Dresar, V., Gregory, A., et al. (2014). Pressure-volume-temperature (PVT) gauging of
an isothermal cryogenic propellant tank pressurized with gaseous helium. NASA/TP—2014-
218083.
16. Lal, A., & Raghunnandan, B. N. (2005). Uncertainty analysis of propellant gauging systems
for spacecraft. Journal of Spacecraft and Rockets, 42, 943–946.
17. Hufenbach, B., Brandt, R., André, G., et al. (1997). Comparative assessment of gauging
systems and description of a liquid level gauging concept for a spin stabilised spacecraft. In
European Space Agency (ESTEC), Procedures of the Second European Spacecraft Propulsion
Conference, 27–29 May, ESA SP-398.
18. Yendler, B. (2006). Review of propellant gauging methods. In 44th AIAA Aerospace Sciences
Meeting and Exhibit, 9–12 Jan 2006.
19. Dandaleix, L., et al. (2004). Flight validation of the thermal propellant gauging method used
at EADS astrium. In Proceedings of the 4th International Spacecraft Propulsion Conference,
ESA SP-555, 2–9 June 2004.
20. The Consultative Committee for Space Data Systems. (2018). Conjunction data message.
CCSDS 508.0-B-1, Blue Book (including Technical Corrigendum June 2018).
21. Center for Space Standards & Innovation. (2022). http://www.celestrak.com/NORAD/
elements/. Accessed 09 March 2022.
22. EU Space Surveillance & Tracking Framework. (2022). https://www.eusst.eu/. Accessed 09
March 2022.
23. Hoots, F., Crawford, L., & Roehrich, R. (1984). An analytical method to determine future close
approaches between satellites. Celestial Mechanics, 33, 8.
24. Klinkrad, H. (1997). One year of conjunction events of ERS-1 and ERS-2 with objects of the
USSPACECOM catalog. In Proceedings of the Second European Conference on Space Debris
(pp. 601–611). ESA SP-393.
25. Alarcón, J. (2002). Development of a Collision Risk Assessment Tool. Technical Report, ESA
contract 14801/00/D/HK, GMV.
26. Escovar Antón, D., Pérez, C. A., et al. (2009). closeap: GMV’s solution for collision risk
assessment. In Proceedings of the Fifth European Conference on Space Debris, ESA SP-672.
27. Klinkrad, H. (2006). Space debris, models and risk analysis. Berlin, Heidelberg, New York:
Springer. ISBN: 3-540-25448-X.
28. Alfriend, K., Akella, M., et al. (1999). Probability of collision error analysis. Space Debris,
1(1), 21–35.
Chapter 7
The Satellite Control Facility
The Satellite Control Facility (SCF) is the element in the ground segment responsible
for receiving and managing all the telemetry (TM) received from the satellite
and for transferring any telecommand (TC) to it. The SCF needs to initiate a contact
through direct interaction with the TT&C station (baseband modem) using the
project applicable space link protocol, which is described in more detail in Sect. 7.2.
As a TM dump will typically provide a vast amount of new TM values, the SCF
needs to provide the means to perform a quick health check of all received TM and
flag any out-of-limit parameter immediately to the operator. Furthermore, it needs
to provide means to archive all received TM to allow a detailed trend analysis and
error investigation.
The SCF is also responsible for maintaining the satellite's onboard software and
therefore needs to implement a module that is able to upload software patches or
even entire new images to the satellite (see Sect. 7.3).
Once a satellite has moved from its LEOP phase into routine operations, routine
contacts with a limited and repetitive set of TM and TC exchange will become the
dominant regular ground-to-space interaction. To reduce the workload on operators,
an automated satellite operations function is required, which is described in Sect. 7.4.
Depending on the project specific needs for data encryption (e.g., encryption
at TM or TC frame level), the necessary real time interaction with the security
module in the ground segment needs to be performed and a possible architecture
is described.
7.1 Architecture
Based on the main tasks described in the introduction, a simplified SCF architecture
is presented in Fig. 7.1. As can be seen, the core part of the element is the TM/TC
module which provides all the functionalities for the reception of telemetry and the
Fig. 7.1 Simplified architecture of the SCF. OBSM = Onboard Software Management, SDHS =
Site Data Handling Set, OOL = Out-of-Limit, TM/TC = Telemetry and Telecommand Component
Fig. 7.2 SCF onboard queue model. PUS = Packet Utilisation Standard
The OBSM subsystem manages the upload of software patches or even full
images to the satellite onboard computer and is described in more detail in Sect. 7.3.
The automation component provides the ability to execute operational procedures
without the need for operator interaction and therefore requires full access to
all functionalities of the TM/TC component. Automated satellite operations are a very
important feature for satellite routine operations and even more relevant for projects
that deploy satellite constellations (see Sect. 7.4).
The SCF needs to host the following set of databases which are shown in the
upper right corner of Fig. 7.2:
• The satellite reference database (SRDB) and the operational database (ODB)
contain the full set of TM and TC parameters defined in the onboard software,
the definition of derived TM parameters, and the definition of TC sequences (see
Sect. 9.2).
• The out-of-limit (OOL) database comprises for every TM parameter the expected
range of values (e.g., nominal, elevated, and upper limits) which serves as a basis
to determine whether an OOL event occurs that can be flagged to the operator
accordingly (e.g., display of parameter in green, yellow, and red colour).
• The file archive (FARC) stores all received telemetry and makes it accessible for
further analysis. Furthermore, the FARC stores the onboard software images and
patches and makes them available to the OBSM component for further processing
(see Sect. 7.3).
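As a concrete illustration of the limit check described above, the following minimal Python sketch classifies a TM parameter sample against a hypothetical OOL database entry. The class and function names are invented for this example and do not correspond to any real SCF product.

```python
# Hypothetical sketch of an out-of-limit (OOL) check: each TM parameter has
# a nominal ("green") range and an elevated ("yellow") range; anything
# outside the elevated range is flagged "red" and raises an OOL event.
from dataclasses import dataclass

@dataclass
class OolEntry:
    nominal_low: float   # inclusive bounds of the nominal ("green") range
    nominal_high: float
    limit_low: float     # inclusive bounds of the elevated ("yellow") range
    limit_high: float

def check_ool(entry: OolEntry, value: float) -> str:
    """Classify a TM parameter sample against its OOL database entry."""
    if entry.nominal_low <= value <= entry.nominal_high:
        return "green"            # within nominal range
    if entry.limit_low <= value <= entry.limit_high:
        return "yellow"           # elevated but still within limits
    return "red"                  # out of limit: flag an OOL event

# Example: a tank pressure parameter with a nominal range of 20-23 bar
pressure = OolEntry(20.0, 23.0, 18.0, 25.0)
assert check_ool(pressure, 21.5) == "green"
assert check_ool(pressure, 24.0) == "yellow"
assert check_ool(pressure, 26.0) == "red"
```

In a real system the colour classification would drive the operator display described above.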
Many satellite projects make use of time-tagged TCs which are not intended to
be executed immediately after reception but at a specified time in the future. This
functionality is highly relevant for satellites in lower orbits with only short ground
station visibilities. The onboard queue (OBQ) model is used to keep track of time-
tagged TCs, and a high-level architecture is shown in Fig. 7.2. The ground model
keeps a record of all commands that were successfully released from the TM/TC
component (TC stack), whereas the spacecraft model reflects what has been received and
is stored on the satellite. Using the PUS Service 11 command schedule reporting
function of the onboard software (refer to ECSS-E-70-41A [1]), the contents of
the onboard queue can be dumped and used to update the spacecraft model. The
OBQ model needs to provide a means to view and print the time-tagged commands
contained in both models and to perform a comparison that shows potential differences,
as indicated by the Δ symbol in Fig. 7.2.
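The Δ comparison between the two models can be illustrated with a small Python sketch. The data layout is hypothetical; a real implementation would operate on the dumped PUS Service 11 schedule report rather than on plain dictionaries.

```python
# Illustrative sketch (not an actual control system implementation) of the
# OBQ delta: compare the ground model's record of released time-tagged TCs
# with the spacecraft model rebuilt from a PUS Service 11 schedule dump.
def obq_delta(ground_model, spacecraft_model):
    """Return TCs missing on board and TCs on board but unknown to ground.

    Both models are mappings of (execution_time, tc_id) -> raw command.
    """
    ground, onboard = set(ground_model), set(spacecraft_model)
    missing_onboard = sorted(ground - onboard)  # released but not confirmed on board
    unexpected = sorted(onboard - ground)       # on board but not in the ground record
    return missing_onboard, unexpected

ground = {(1000, "TC_A"): b"...", (1010, "TC_B"): b"...", (1020, "TC_C"): b"..."}
onboard = {(1000, "TC_A"): b"...", (1020, "TC_C"): b"...", (1030, "TC_D"): b"..."}
missing, unexpected = obq_delta(ground, onboard)
assert missing == [(1010, "TC_B")]
assert unexpected == [(1030, "TC_D")]
```

A non-empty result on either side would be presented to the operator as the Δ between the two models.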
1 The word tele-metry is derived from Greek roots: tele stands for remote and metron for measure.
Fig. 7.3 CCSDS space communication protocol standards and mapping to the Open Systems
Interconnection (OSI) basic reference Model. P-1 = Proximity-1, AOS = Advanced Orbiting
System, SPP = Space Packet Protocol, CFDP = CCSDS File Delivery Protocol
which are summarised in Fig. 7.3, together with their main features. Furthermore,
the relationship to the OSI reference model2 is indicated.
One of the central features of the CCSDS space link standard is the
introduction of the space packet protocol (SPP), which defines the space packet3 as
a means for the user to transfer data from a source user application to a destination
application via a logical data path (LDP) [4]. The structure of a space packet is
shown in Fig. 7.4 and is applicable to both TC (referred to as a TC source packet) and
TM (referred to as a TM source packet). The following basic packet characteristics
are worth noting:
• A space packet comprises a fixed size primary header and a variable data field.
• The transmission of packets can be at variable time intervals.
• The data field can contain an optional secondary header that allows the transmis-
sion of a time code and ancillary data in the same packet.
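Based on the primary header layout defined in CCSDS 133.0-B [4] (3-bit version number, 1-bit type, 1-bit secondary header flag, 11-bit APID, 2-bit sequence flags, 14-bit sequence count, and a 16-bit data field length encoded as length minus one), a space packet can be assembled and inspected as in the following Python sketch. The function names are illustrative, not part of any real library.

```python
import struct

def build_space_packet(apid: int, seq_count: int, data: bytes,
                       packet_type: int = 0, sec_hdr: int = 0) -> bytes:
    """Assemble a CCSDS space packet (packet version number 0).

    The 6-byte primary header encodes: version (3 bits), type (1 bit,
    0 = TM, 1 = TC), secondary header flag (1 bit), APID (11 bits),
    sequence flags (2 bits, 0b11 = unsegmented), sequence count (14 bits),
    and the data field length minus one (16 bits).
    """
    word1 = (0 << 13) | (packet_type << 12) | (sec_hdr << 11) | (apid & 0x7FF)
    word2 = (0b11 << 14) | (seq_count & 0x3FFF)
    word3 = len(data) - 1          # data field length is encoded as N - 1
    return struct.pack(">HHH", word1, word2, word3) + data

def parse_apid(packet: bytes) -> int:
    """Extract the APID from the first primary header word."""
    (word1,) = struct.unpack(">H", packet[:2])
    return word1 & 0x7FF

pkt = build_space_packet(apid=0x123, seq_count=42, data=b"\x01\x02\x03")
assert len(pkt) == 9                           # 6-byte header + 3-byte data field
assert parse_apid(pkt) == 0x123
assert struct.unpack(">H", pkt[4:6])[0] == 2   # 3 data bytes -> length field 2
```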
2 The Open System Interconnection (OSI) basic reference model is a conceptual model for the characterisation and standardisation of the communication functions of computing and telecommunication systems [3].
Fig. 7.4 Space Packet structure according to CCSDS-133-0-B-2 (refer to Fig. 4–1, 4–2, and 4–3 in
[4]). Reprinted with permission of the Consultative Committee for Space Data Systems © CCSDS
Fig. 7.5 The Command Link Transmission Unit (CLTU) definition. APID = Application Process
ID, VCID = Virtual Channel ID, TFVN = Transfer Frame Version Number, SCID = Spacecraft ID,
BCH = Bose-Chaudhuri-Hocquenghem, LDPC = Low Density Parity Check
4 The frequency of bit transitions can influence the acquisition time required by the bit synchroniser
which is part of the TT&C BBM and needed to demodulate the data stream from the subcarrier.
5 One code block has a size of 8 bytes. A maximum of 37 code blocks can be encapsulated into
one CLTU.
Fig. 7.6 The Channel Access Data Unit (CADU) definition. CLCW = Communication Link
Control Word
Fig. 7.7 CCSDS Space Data Link Protocol channel multiplexing concept. MCID = Master
Channel ID, TFVN = Transfer Frame Version Number, SCID = Spacecraft ID [7]
Fig. 7.8 CCSDS Space Link Extension (SLE) Services (refer to CCSDS-910-0-Y-2 [10] and
Tables 7.1 and 7.2 for definition of acronyms)
Whereas format and structure of the TM and TC packets and frames were introduced
in the previous section, this section explains the space link extension (SLE) services
for the transport of frames between the ground and space segments. An overview
of the currently defined services is shown in Fig. 7.8, and a detailed description is
provided in the SLE Executive Summary [10] and the references cited therein. The
SLE services can be categorised according to their transfer direction into forward
(from ground to satellite) and return (from satellite to ground) services, with a short
summary provided in Tables 7.1 and 7.2, respectively.
7.3 Onboard Software Management

The main task of the onboard software management system (OBSM) is to update
the satellite software from ground. This encompasses both the uplink of patches,
defined as partial replacements of software code, and the uplink of entire images that
replace the existing software. The need for a software modification in the first place can
Table 7.1 Summary of SLE forward direction services as defined in CCSDS-910-0-Y-2 [10]

• Forward space packet (FSP): allows a single user to provide packets for uplink
without the need to coordinate with different packet providers.
• Forward TC virtual channel access (FTCVCA): enables a user to provide complete
VCs for uplink.
• Forward TC frame (FTCF): enables a user to supply TC frames to be transferred.
• Forward communications link transmission unit (FCLTU): enables a user to
provide CLTUs for uplink.
Table 7.2 Summary of SLE return direction services as defined in CCSDS-910-0-Y-2 [10]

• Return all frames (RAF): delivers all TM frames that have been received by the
TT&C station (and decoded by the BBM) to the end user, which is usually the SCF
in the GCS.
• Return channel frames (RCF): provides Master Channel (MC) or specific Virtual
Channels (VCs), as specified by each RCF service user.
• Return frame secondary header (RFSH): provides MC or specific VC Frame
Secondary Headers (FSHs), as specified by each RFSH service user.
• Return operational control field (ROCF): provides MC or VC Operational Control
Fields (OCFs), as specified by each ROCF service user.
• Return space packet (RSP): enables single users to receive packets with selected
APIDs from one spacecraft VC.
stem from the discovery of a critical anomaly after launch or the need for new
functionality. The main components of the OBSM module are shown in Fig. 7.9.
The file archive or FARC has already been introduced as an internal file
repository that hosts the OBSW code as provided by the satellite manufacturer. The
image model is an additional (optional) product which contains details on specific
memory address areas and their attributes (e.g., whether an area can be patched
or not). As OBSW images should be kept under strict configuration control, the
Operations Preparation Facility or OPF described in Chap. 9 should be used as the
formal entry point that receives OBSW updates and forwards them to the FARC.
The three main OBSM components are shown in the central area of Fig. 7.9.
The image viewing tool is used to load an image from the FARC and display its
contents for detailed inspection prior to its uplink to the satellite. An example for an
image representation in matrix form is shown in Fig. 7.10, where each line refers to
a dedicated memory address and each field in a column represents the contents of
one information bit in that memory address. If a memory model is available, it can
be used as an overlay to better display the memory structure.
The image comparison component provides a means to compare two images
for differences, e.g., a previously uplinked one and a newly received one.6 If such
differences are small, they can be used to generate a difference patch for transfer to
the satellite, which avoids the need to replace the entire image.
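The generation of a difference patch from two images can be sketched as follows. This is a simplified Python illustration; the word size, the segment format, and the function name are assumptions made for this example, not the actual OBSM design.

```python
def diff_patch(base: bytes, new: bytes, word_size: int = 4):
    """Illustrative sketch: compare two equally sized OBSW images word by
    word and return (offset, replacement_bytes) segments covering the
    differences. Contiguous differing words are merged into one segment."""
    assert len(base) == len(new)
    segments, start = [], None
    for off in range(0, len(base), word_size):
        differs = base[off:off + word_size] != new[off:off + word_size]
        if differs and start is None:
            start = off                       # open a new patch segment
        elif not differs and start is not None:
            segments.append((start, new[start:off]))  # close the segment
            start = None
    if start is not None:                     # segment runs to the image end
        segments.append((start, new[start:]))
    return segments

base = bytes(16)                              # 16 zero bytes
new = bytes([0, 0, 0, 0, 9, 9, 9, 9, 9, 9, 9, 9, 0, 0, 0, 0])
patch = diff_patch(base, new)
assert patch == [(4, bytes([9] * 8))]         # one 8-byte segment at offset 4
```

If the resulting segments are small compared to the image, uplinking them as a patch is clearly cheaper than replacing the full image.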
The image monitoring component has a similar function as the comparison tool
but builds one of the images directly from live TM rather than loading it from the
archive.
After image viewing, comparison, or monitoring, the command generator con-
verts the OBSW image into a sequence of TCs suitable for uplink to
the satellite. After successful reception of the full set of TCs, the satellite onboard
computer rebuilds the original image format. That image or patch is, however, not
immediately used but stored in a dedicated area of the OBC memory which is sized
to store two image versions at the same time (indicated by Image-0 and Image-1 in
Fig. 7.9).
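The conversion of an image into a sequence of uplink TCs is essentially a chunking operation, as the following Python sketch illustrates. The chunk size and the (address, data) payload format are assumptions made for illustration only.

```python
def image_to_tcs(image: bytes, start_address: int, chunk: int = 200):
    """Hypothetical command generator: split an OBSW image into a sequence
    of memory-load TC payloads (address, data), each carrying at most
    `chunk` bytes, suitable for uplink and reassembly by the OBC."""
    tcs = []
    for off in range(0, len(image), chunk):
        tcs.append((start_address + off, image[off:off + chunk]))
    return tcs

tcs = image_to_tcs(bytes(500), start_address=0x40000000, chunk=200)
assert len(tcs) == 3
assert tcs[1][0] == 0x400000C8                 # second chunk at offset 200
assert len(tcs[2][1]) == 100                   # last chunk carries the remainder
# the onboard computer can rebuild the image by concatenating the chunks
assert b"".join(data for _, data in tcs) == bytes(500)
```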
6 In SCOS2K the terms uplink image and base image are used to distinguish them.
7 HPCs (high priority commands) are a special set of TCs which can be executed without the need for the
OBSW itself, as they are routed directly in hardware to the dedicated unit that triggers emergency switching.
7.4 Automated Satellite Control
Fig. 7.11 Overview of the SCF Automation Module components. STP = Short Term Plan, Proc =
Procedure
The automation component represents one of the most complex parts of the SCF,
and its detailed architecture can differ significantly from one project to another.
Only an overview of the most relevant components is shown in Fig. 7.11, which
is inspired by the design of the automation component implemented in the Galileo
ground segment mission control system [11] and should therefore only be seen as
an example. Independent of the detailed architecture, every automation component
has to interface with the TM/TC module of the SCF in order to request TM dumps
and uplink TCs to the satellites during a contact.
On the input side, the automation component receives the timeline of upcoming
contacts and the required tasks and activities to be performed during each of the
scheduled contacts. As this information is part of the mission plan, this input
needs to be provided by the Mission Planning Facility (MPF), which follows the
hierarchical planning philosophy introduced in Chap. 8, with the Short-Term Plan
(STP) covering the most suitable time span for automated operations. The schedule
generator is in charge of processing the STP and generating the system internal schedule
files which are stored in the local file archive (FARC) for internal processing.
Another important input for automation are the automation procedures, which
contain the detailed timeline (i.e., sequence of activities) of every contact with
the exact reference to the applicable TM and TC mnemonics. These have to
follow, similar to a manual contact, the applicable and formally validated Flight
Operations Procedures (FOP). Due to the complexity of FOPs, a higher-level
procedure programming language like PLUTO8 might be used for the generation
of the automation procedures. As these are highly critical for operations, it is
recommended to develop them in the Operations Preparation Editor of the OPF (see
description in Chap. 9) where they can be kept under strict configuration control.
They can then be distributed as configuration items to the SCF for compilation into
machine language.9
The schedule executor combines the schedule files and the compiled automation
procedures to drive the contact manager which uses a set of macros referred to
as contact directives to steer the TM/TC component of the SCF (see also TM/TC
component in Fig. 7.1).
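The interplay of schedule executor and contact manager can be sketched as follows. This is a deliberately simplified Python illustration; the directive names are invented, and a real system would invoke the TM/TC component's interfaces instead of returning strings.

```python
# Illustrative sketch of the schedule executor driving the contact manager:
# contact directives are modelled as plain strings for readability.
def run_contact(schedule_entry, procedure_steps):
    """Combine one contact schedule entry with its compiled automation
    procedure and emit the resulting sequence of contact directives."""
    directives = [f"START_CONTACT {schedule_entry['satellite']} "
                  f"{schedule_entry['station']}"]
    for step in procedure_steps:   # e.g. ("SEND_TC", "TC_X") or ("DUMP_TM", "HK")
        directives.append(f"{step[0]} {step[1]}")
    directives.append("STOP_CONTACT")
    return directives

entry = {"satellite": "SAT-1", "station": "TTC-1", "aos": "10:00", "los": "10:08"}
steps = [("SEND_TC", "ENABLE_RANGING"), ("DUMP_TM", "OBQ"),
         ("SEND_TC", "TIME_TAGGED_BLOCK")]
out = run_contact(entry, steps)
assert out[0] == "START_CONTACT SAT-1 TTC-1"
assert out[-1] == "STOP_CONTACT"
assert len(out) == 5
```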
8 PLUTO stands for Procedure Language for Users in Test and Operations and is an operational
procedure language standardised in ECSS-E-ST-70-32C [12].
9 A compilation step is required if the automation design is based on a higher-level procedure language like PLUTO.
Fig. 7.12 The Datastream concept in SCF. SSR = Solid State Recorder, OBDH = Onboard Data
Handling System, BBM = Baseband Modem, OBC = onboard computer.
Both real-time and buffered TM packets need to be downlinked from the
satellite to the ground segment via the same physical RF channel and are therefore
multiplexed into one data stream. In order to separate the different types of TM flows
at the receiving end again, the virtual channel concept is provided by the space data
link protocol (SDLP) described earlier (see also Fig. 7.7). This allows the SCF data
stream functionality to segregate the multiplexed TM into the different streams again
and to store them in dedicated areas of the TM packet archive. The TM monitoring can
then access and display the various streams independently, which is schematically
shown in Fig. 7.12.
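The segregation of a multiplexed frame stream by VCID can be sketched in a few lines of Python. This is illustrative only; a real implementation would extract the VCID from the transfer frame header rather than receive it as a tuple, and the stream assignments given in the comments are examples, not fixed conventions.

```python
from collections import defaultdict

def demultiplex(frames):
    """Sketch of the datastream segregation: route each transfer frame to a
    stream archive keyed by its virtual channel ID (VCID). Frames are given
    here as (vcid, payload) tuples for simplicity."""
    streams = defaultdict(list)
    for vcid, payload in frames:
        streams[vcid].append(payload)  # e.g. one VC for real-time TM,
                                       # another for playback from the SSR
    return dict(streams)

frames = [(0, b"rt1"), (1, b"pb1"), (0, b"rt2"), (7, b"idle")]
streams = demultiplex(frames)
assert streams[0] == [b"rt1", b"rt2"]
assert streams[1] == [b"pb1"]
assert 7 in streams                    # remaining VCs land in their own streams
```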
A specific data stream can also be reserved for TM received by project external
infrastructure which is only booked for a certain project phase. A typical example
would be the use of additional TT&C stations from external station networks during
a LEOP phase to satisfy more stringent requirements for ground station coverage.
7.6 Telemetry Displays

Telemetry displays are the most frequently used applications of the TM/TC component
as they provide the means for an operator to inspect and analyse the received telemetry.
Fig. 7.13 Examples of an alphanumeric display (AND) and a graphical display (GD) in SCF.
• An alphanumeric display (AND) lists the current values of selected TM parameters
in textual form (see upper panel of Fig. 7.13), typically grouped per satellite
subsystem, e.g., the attitude and orbit control system (e.g., attitude and thruster
activity), power subsystem (e.g., battery capacity), and the satellite transponder
(e.g., transmitter status).
• A different type of display is the graphical display which shows the time history
of a selected TM parameter as a two-dimensional graph (see lower panel of
Fig. 7.13). This type of representation is especially useful if the time history and
trend of a TM parameter need to be monitored.
• A third type of display is referred to as a mimic display and uses geometric shapes
(usually squares, triangles, and interconnection lines) that represent relevant
satellite components. A colour scheme can be used to show the operational
state of a component; an example could be a change from red to green if a
transmitter is activated or a celestial body enters the FOV of a sensor. The
operational state can be derived from a single TM value or even a set of TM values,
and the corresponding background logic needs to be preconfigured in order to make
a mimic meaningful. The look and feel of mimic displays is comparable to the
MMI of the Monitoring and Control Facility, which is described in Chap. 10 and
shown in Fig. 10.3.
10 Note that satellites in low Earth orbit carrying a GNSS receiver payload could also perform
UTC[i] = ERT[i] − δ − τ (7.1)
Fig. 7.14 The concept of time correlation performed by the SCF. OBT = On-board Time, ERT =
Earth Received Time, UTC = Universal Time Coordinated.
which allows a time couple {OBT[i], UTC[i]} to be correlated at each epoch ti. The
absolute time system UTC is used as an example here, being a widely used system
in many ground segments. It could, however, also be replaced by a GNSS based
time system like GPS time or Galileo System Time (GST), which are readily available
today. As the assumed δ and τ in Eq. (7.1) might differ from the real values by a
different offset at each time ti, the time correlation function in the SCF applies a
linear regression to a specified set of collected time packets that consists of
time couples {OBT[1,..,n], UTC[1,..,n]}, as indicated by the linear regression line in
red in Fig. 7.14. The result yields the correlation coefficients D and k, which can be
used to convert any TM parameter time stamp from OBT to UTC using the
simple equation

UTC = UTC[n] + D + k · (OBT − OBT[n])

where UTC[n] and OBT[n] can be any arbitrarily chosen initial time couple. As the
linear regression is performed on a limited set of time couples only, the resulting
time correlation parameters will have limited accuracy and validity. The time
correlation accuracy is defined as the distance of a measurement point to the linear
regression line and needs to be monitored to not exceed a defined limit, otherwise
D and k need to be recomputed based on a new set of time couples.
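The described linear regression can be reproduced with a short Python sketch using plain least squares. The function names and the synthetic clock drift values are illustrative only.

```python
def correlate(obt, utc):
    """Least-squares fit of UTC = UTC[0] + D + k * (OBT - OBT[0]) over a
    set of collected time couples, following the linear regression approach
    described in the text (plain Python, no external libraries)."""
    n = len(obt)
    x = [o - obt[0] for o in obt]             # OBT offsets from the first couple
    y = [u - utc[0] for u in utc]             # UTC offsets from the first couple
    mx, my = sum(x) / n, sum(y) / n
    k = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))   # slope: onboard clock drift factor
    d = my - k * mx                           # offset D at the initial couple
    return d, k

def obt_to_utc(obt_sample, obt0, utc0, d, k):
    """Convert one OBT time stamp to UTC with the fitted coefficients."""
    return utc0 + d + k * (obt_sample - obt0)

# Synthetic couples: onboard clock runs 0.01% fast with a 2.0 s offset
obt = [0.0, 100.0, 200.0, 300.0]
utc = [2.0 + 1.0001 * t for t in obt]
d, k = correlate(obt, utc)
assert abs(k - 1.0001) < 1e-9
assert abs(obt_to_utc(250.0, obt[0], utc[0], d, k) - (2.0 + 1.0001 * 250.0)) < 1e-6
```

In an operational implementation the residuals of each couple with respect to the fitted line would be monitored against the defined accuracy limit, triggering a recomputation of D and k when exceeded.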
Fig. 7.15 Overview of the most important SCF interfaces to other GCS elements.
7.8 Interfaces
The most relevant interfaces between the SCF and the other ground segment
elements are shown in Fig. 7.15 and described in the bullets below:
• FDF (see Chap. 6): the SCF needs to receive from the FDF the parameters for
flight dynamics related TCs. An example is a TC to execute an orbit control
manoeuvre, which usually requires the thrust duration, propulsion system spe-
cific parameters, and manoeuvre target attitude angles. Furthermore, the FDF
needs to provide the command parameters to update the satellite OOP (onboard
orbit propagator) and send the propagation delay file that specifies the elapsed
propagation time δ for each satellite to ground station link geometry (refer to the
section describing the time correlation).
• MPF (see Chap. 8): the automation component of the SCF requires as input the
Short Term Plan (STP) which needs to be provided by the Mission Planning
Facility. The STP must contain a detailed contact schedule and command
sequences for the upcoming days that can be used for the automated contacts.
• OPF (see Chap. 9): the Operations Preparation Facility (OPF) provides the SCF
with two important inputs: the satellite reference database (SRDB), which contains the
full set of satellite specific command and telemetry parameters, and the automation
procedures described earlier.
• MCF (see Chap. 10): the interface to the Monitoring and Control Facility ensures
the central monitoring of the SCF element hardware and processes and provides
a basic remote commanding functionality to change element operational modes.
References
1. European Cooperation for Space Standardization. (2003). Ground systems and operations -
Telemetry and telecommand packet utilization. ECSS-E-70-41A.
2. The Consultative Committee for Space Data Systems. (2022). https://public.ccsds.org/about/.
Accessed 11 March 2022.
3. International Organization for Standardization. (1994). Information Technology — Open
Systems Interconnection — Basic Reference Model — Conventions for the Definition of OSI
Services. ISO/IEC 10731:1994.
4. The Consultative Committee for Space Data Systems. (2020). Space Packet Protocol. CCSDS
133.0-B-2.
5. The Consultative Committee for Space Data Systems. (2015). TC Space Data Link Protocol.
CCSDS 232.0-B-3.
6. The Consultative Committee for Space Data Systems. (2015). TM Space Data Link Protocol.
CCSDS 132.0-B-2.
7. The Consultative Committee for Space Data Systems. (2014). Overview of Space Communica-
tion Protocols. Informational Report (Green Book), CCSDS 130.0-G-3.
8. The Consultative Committee for Space Data Systems. (2017). TM Synchronization and
Channel Coding. CCSDS 131.0-B-3.
9. The Consultative Committee for Space Data Systems. (2017). TC Synchronization and Channel
Coding. CCSDS 231.0-B-3.
10. The Consultative Committee for Space Data Systems. (2016). Space Link Extension Services -
Executive Summary. CCSDS 910.0-Y-2.
11. Vitrociset. (2010). Galileo SCCF Phase C/D/E1 and IOV, SCCF Automation and Planning
Component, Software Operations Manual (SW5). GAL-MA-VTS-AUTO-R/0001.
12. European Cooperation for Space Standardization. (2008). Test and operations procedure
language. ECSS-E-ST-70-32C.
13. Eickhoff, J. (2012). Onboard computers, onboard software and satellite operations. Heidel-
berg: Springer.
Chapter 8
The Mission Planning Facility
The mission planning discipline has already been briefly introduced in Sect. 4.1.7
and aims to efficiently manage the available ground segment resources, consider all
known constraints, and resolve all programmatic conflicts in order to maximise the
availability of the service provided to the end user. The Mission Planning Facility
(MPF) represents the ground segment component that implements this functionality
and depends on planning inputs shown in Fig. 4.2. A more detailed description of
the functional requirements is given below.
• The reception, validation, and correct interpretation of planning requests (PRs)
is the starting point of any planning process. Such PRs can originate from an
operator sitting in front of a console or stem from an external source, which could
be either an element in the same ground segment or an external entity. External
PRs are of special relevance in case a distributed planning concept is realised,
which is outlined in more detail in a later part of this chapter. Once a PR has
been received, it should always be immediately validated to ensure correctness in
terms of format, meaningful contents, and completeness. This avoids problems
at a later stage in the process when the PR's contents are read by the planning
algorithm.
• The reception and processing of the flight dynamics event file containing the
ground station visibilities is an essential task that allows the correct generation
of the contact schedule. Furthermore, the event file provides epochs which are
needed for the correct placement of tasks which require specific orbit event
related conditions to be fulfilled. Examples could be the crossing of the ascending
nodes, the orbit perigee/apogee, satellite to Sun or Moon co-linearities, or Earth
eclipse times.
• The planning algorithm (or scheduler) must be able to consider all defined
constraints and, in case of conflicts, be able to solve them autonomously. Such an
automated conflict resolution can only be performed through the application of a
limited number of conflict solving rules which have been given to the algorithm.
If these rules do not allow a solution to be found, the software must inform the
operator and provide all the necessary information as output to facilitate a manual
resolution.
• Once the plan has been generated, it needs to be distributed to all end users in the
expected format. Mission planning products can comprise a contact schedule, a
mission timeline, or an executable plan.
• The processing of feedback from the mission control system (SCF) about the
execution status of planned tasks and activities allows the planning timeline to
be updated. This update is an important input that should be considered
in the subsequent planning cycle. As an example, a planned task that could not
be executed would need to be replanned into a future slot, which would impact the
future planning schedule.
Fig. 8.1 Simplified architecture of the Mission Planning Facility (MPF) indicating the planning
flow direction
station visibility, it must be kept in mind that additional contact rules and constraints
can shorten the actually planned contact duration. The output of the scheduling
algorithm is a valid contact plan which can be provided to the task manager in
order to place the relevant tasks and activities in their correct sequence. In the final
step, the products generator creates all the planning products in the required format
and file structure and distributes them to the end user.
8.2 Planning Concepts

The aim of a planning concept is to describe the overall planning sequence and
philosophy. This should already be defined and documented in the early project
phases as it can impact the architecture and interfaces of the MPF. Only the
most commonly applied concepts are described here, using the standardised terminology
from CCSDS-529.0-G-1 [1]. The described planning concepts should not be
considered as exclusive and can in principle be combined.
Hierarchical planning is a concept that aims to structure the planning task into
several cycles with each covering a different time span (planning window) with
a different level of detail. Whereas the higher-level cycles deal with longer time
spans, the lower-level ones go into more detail and granularity (see Fig. 8.2). The
most common planning windows are the long-term (months up to years), medium-
term (weeks up to months), and short-term (days up to weeks) cycle with the
corresponding long-term-plan (LTP), mid-term-plan (MTP), and short-term-plan
(STP) attached to them. A typical example for a task considered as part of the LTP
is a launch event which is usually known months ahead of time. A task considered
in the MTP could be the execution of an orbit correction manoeuvre which should
be known weeks ahead of time. The update of the satellite onboard orbit propagator
might be a weekly task and can therefore be considered as part of the STP.
Fig. 8.3 Concept of distributed planning showing the planning functionality distributed among
two different entities, one specialised for the payload and the other one for the overall satellite and
mission planning. SCF refers to the Satellite Control Facility as described in Chap. 7
8.3 The Mission Planning Process

The mission planning process (MPP) defines the steps required to generate a conflict
free mission plan that contains the exact times for all tasks to be executed by the
satellite platform, its payload, or the ground segment. Figure 8.4 shows the basic
steps and sequence of a typical MPP starting with the processing of all planning
inputs like the orbit events and the full list of planning requests (PRs). In the next
step the contact schedule (CS) is generated which takes into account the available
ground station visibilities from the event file and the project specific contact rules
(or constraints) indicated by the contact rules box. It is important to distinguish
between the duration of a visibility which is given by the time span during which
a satellite has an elevation of at least 5 deg (or higher) above the ground station’s
Fig. 8.4 Overview of the Mission Planning Process (MPP). Inputs and outputs to the process are
depicted as boxes, whereas the processes themselves are shown as circles
horizon,1 and the contact duration itself, which can be much shorter, as it is driven
by the set of tasks that need to be executed. The start and end times of a contact are
termed acquisition of signal (AOS) and loss of signal (LOS) and refer to the actual
ground station RF contact to the spacecraft transponder for the exchange of TM/TC
and ranging (see Fig. 8.5). Typical contact rules comprise a minimum and maximum
contact duration, constraints on gaps between two consecutive contacts (to the same
satellite in a constellation), or requirements on the minimum number of contacts per
orbit revolution (contact frequency).
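A much simplified contact scheduling step, turning visibilities into contacts under a minimum duration, maximum duration, and minimum gap rule, could look as follows. This is a greedy Python sketch with invented parameter names, not a real scheduler, which would have to handle multiple satellites, stations, and conflict resolution.

```python
def schedule_contacts(visibilities, min_dur, max_dur, min_gap):
    """Hypothetical greedy contact scheduler for one satellite: trim each
    ground station visibility (aos, los) to the maximum contact duration,
    drop windows shorter than the minimum duration, and enforce a minimum
    gap between consecutive contacts. Times are in minutes."""
    contacts, last_end = [], None
    for aos, los in sorted(visibilities):
        if last_end is not None and aos < last_end + min_gap:
            aos = last_end + min_gap          # push AOS to respect the gap rule
        if los - aos < min_dur:
            continue                          # remaining window too short: skip
        end = min(los, aos + max_dur)         # cap at the maximum duration
        contacts.append((aos, end))
        last_end = end
    return contacts

vis = [(0, 12), (14, 40), (41, 44)]
plan = schedule_contacts(vis, min_dur=5, max_dur=15, min_gap=10)
assert plan == [(0, 12), (22, 37)]            # third window fails the rules
```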
The next step in the planning process is the placement of tasks and their related
activities within the available contact slots, which is shown in the lower portion of
Fig. 8.5. The task placement can be done either based on absolute times specified in
the planning request, like the earliest or preferred start time t_earliest or t_pref shown
for "Task 4", or via a relative time (or offset) specification Δt defined with respect
to a specific orbital event like the crossing of the perigee (see "Task 1") or the start
of a contact (see "Task 2"). The relative task placement can be automated by the
planning algorithm using the definition of a relative-time task trigger that is able to
detect the occurrence of a specific event.
Instead of a specific orbit event, the successful execution of a task itself can also
be a condition for the start of a subsequent one. This can be done via the definition of
so-called task-to-task triggers, as shown for "Task 3". The placement of tasks might
1 Satellite ground station visibilities can span from a few minutes for orbits at low altitude up to
several hours for high altitude orbits (e.g., MEO) and ground stations close to the equator.
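The three placement mechanisms (absolute time, relative-time trigger, task-to-task trigger) can be captured in a compact Python sketch. The task dictionary layout is an assumption made for this example.

```python
def place_tasks(tasks, events):
    """Illustrative task placement: each task starts either at an absolute
    time, at an offset to a named orbit/contact event (relative-time
    trigger), or at an offset to another task's end (task-to-task trigger).
    Tasks are processed in order, so a referenced task must come first."""
    placed = {}                               # task name -> (start, end)
    for t in tasks:
        if t["trigger"] == "absolute":
            start = t["time"]
        elif t["trigger"] == "event":         # relative-time task trigger
            start = events[t["ref"]] + t["offset"]
        else:                                 # "task": task-to-task trigger
            start = placed[t["ref"]][1] + t["offset"]
        placed[t["name"]] = (start, start + t["duration"])
    return placed

events = {"perigee": 100, "aos": 200}
tasks = [
    {"name": "T1", "trigger": "event", "ref": "perigee", "offset": 5, "duration": 10},
    {"name": "T2", "trigger": "event", "ref": "aos", "offset": 2, "duration": 4},
    {"name": "T3", "trigger": "task", "ref": "T2", "offset": 1, "duration": 3},
    {"name": "T4", "trigger": "absolute", "time": 250, "duration": 5},
]
placed = place_tasks(tasks, events)
assert placed["T1"] == (105, 115)
assert placed["T3"] == (207, 210)             # starts 1 after T2 ends at 206
assert placed["T4"] == (250, 255)
```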
Fig. 8.5 Contact scheduling and Task Placement outcomes (refer to text for detailed explanation)
2 The battery power is usually much lower after the satellite exits an eclipse period which might
by simply shifting the start times, a change in the overall scheduling approach
might be needed. For such cases, the conflict solving rules should also provide clear
guidelines on how to relax scheduling constraints and allow the generation of a
consistent contact schedule in a second iteration.3
Once the consistency check has passed, the mission plan can be finalised and
released. One important aspect of constellation type missions is the need to perform
mission planning for all satellites simultaneously, which also implies that the
mission plan must contain a list of contacts and tasks for all satellites. Some
constraints or resource limitations (e.g., the downtime of a TT&C station due to
a maintenance activity) might impact the overall schedule and
should therefore be categorised as system-wide resources.
The complexity of the contact scheduling process is strongly driven by the actual
mission profile. As an extreme example, for a project that operates only a single
satellite and a single ground station and implements an orbit with a repeating ground
track,4 the contact schedule will be very simple and highly repetitive. In contrast,
for a constellation type mission with several satellites (e.g., navigation
constellations can have 30 or more spacecraft in service), a large TT&C ground
station network deployed around the globe, and a complex set of contact rules (e.g.,
a minimum of one contact per orbit revolution, the need to schedule back-up contacts,
requirements on specific contact durations, etc.), the generation of the contact schedule
requires a highly complex algorithm.
The difference between the ground station visibility, given by the satellite being
located at least 5 deg above the ground station horizon, and the actual
contact time, defined by AOS and LOS, has already been briefly introduced in the
previous section. The following contact scheduling parameters are highly relevant
for the computation of contact times and must therefore be clearly defined.
• The minimum contact duration between AOS and LOS is driven by the amount of
time a TT&C station requires to establish an RF contact. This parameter should
be defined based on the TT&C station performance5 to avoid that the contact
scheduling process creates contacts which cannot be realised from an operational
point of view.
3 This iterative approach is indicated by the dashed arrow pointing from the consistency check
paths.
Fig. 8.6 Example of contact schedule for 3 satellites and 2 TT&C ground stations (time on the horizontal axis, with the contact duration, separation, periodicity, and orbit period annotated)
• The contact separation time is defined as the elapsed time between the end
of one contact and the start of the consecutive one to the same satellite. The
maximum separation time could be driven by limitations of the onboard data
storage capacity and the need to downlink data before it gets overwritten.
• The contact periodicity is defined as the minimum number of contacts that need
to be scheduled for each satellite within one full orbit period. This parameter can
depend on both satellite platform and payload specific requirements.
All of the aforementioned parameters are graphically depicted in Fig. 8.6, which
gives an example of a simple contact schedule for three satellites (referred to as Sat-
1, Sat-2, and Sat-3) and two TT&C ground stations. Even for this simple set-up and
the short window of only one orbit period, resource limitations emerge quite quickly.
The contact schedule for the first two satellites only leaves one possible contact
(per orbit) for the third one. This would violate the contact periodicity
requirement if a minimum of two contacts per orbit were required. A possible
conflict solving strategy in this case could be to either shorten the contact durations
or relax the periodicity requirement for the third satellite.
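The checks a contact scheduler has to perform against these parameters can be sketched in a few lines. The following Python sketch validates a candidate contact list against the minimum duration, maximum separation, and periodicity rules; all names are hypothetical, and the assumption that the list covers exactly one orbit period is an illustrative simplification:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    satellite: str
    station: str
    aos: float  # Acquisition of Signal, seconds from a common epoch
    los: float  # Loss of Signal, seconds from the same epoch

def check_schedule(contacts, min_duration, max_separation, min_per_orbit):
    """Validate a candidate contact list against the three scheduling
    parameters; the list is assumed to cover exactly one orbit period."""
    violations = []
    per_sat = {}
    for c in contacts:
        per_sat.setdefault(c.satellite, []).append(c)
    for sat, sat_contacts in per_sat.items():
        sat_contacts.sort(key=lambda c: c.aos)
        # minimum contact duration between AOS and LOS
        for c in sat_contacts:
            if c.los - c.aos < min_duration:
                violations.append(f"{sat}: contact via {c.station} below minimum duration")
        # maximum separation between consecutive contacts to the same satellite
        for prev, nxt in zip(sat_contacts, sat_contacts[1:]):
            if nxt.aos - prev.los > max_separation:
                violations.append(f"{sat}: separation before {nxt.station} contact too large")
        # contact periodicity (minimum number of contacts per orbit period)
        if len(sat_contacts) < min_per_orbit:
            violations.append(f"{sat}: contact periodicity requirement violated")
    return violations
```

A conflict solving strategy, as discussed for Fig. 8.6, would then iterate over the returned violations and either shorten, move, or drop contacts until the list comes back empty.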
8.5 Interfaces
An overview of the various interfaces of the MPF to other GCS elements is depicted
in Fig. 8.7 and briefly described below:
Fig. 8.7 Overview of the most important MPF interfaces to other GCS elements
• The planning request (PR) is the main driver of the planning process and
needs to contain all the relevant information to allow the corresponding tasks
and activities to be planned according to the initiator’s needs or preferences. A
meaningful PR needs to be clear on its scope and timing needs so the planning
function can allocate the correct set of tasks and activities (goal based planning
concept). As discussed in the context of conflict resolution, it is quite useful
to provide margins on the requested start (or end) times in order to give the
contact scheduler a certain amount of flexibility. The example of earliest,
preferred, and latest start time has been mentioned earlier as a recommended rule
(refer to Fig. 8.6). Another important characteristic of a PR is the definition of a
PR life cycle that needs to be applied during the planning process. In line with
the terminology introduced in CCSDS-529.0-G-1 [1], a PR can go through the
following stages:
– Requested: this is the initial state right after the PR has been submitted to the
MPP.
– Accepted: the PR has passed the first consistency check and can be considered
in the MPP.
– Invalid: the PR contains inconsistent information or does not follow the
formatting rules and is therefore rejected. It needs to be corrected and resent
for a subsequent validation.
– Rejected: the PR has been considered in the MPP but could not be planned due
to constraints or for any other reason. The possibility to provide information
on the reason for the rejection should be considered in the definition of the
interface.
– Planned: the PR could be successfully processed and has been added into
the mission plan. Further processing will now depend on the reception of the
execution feedback signal.
– Executed: the tasks and activities related to the PR have been executed and the
execution status information is available.
– Completed: this is an additional evaluation of the execution status and might
be relevant for PRs that depend on certain conditions that must be met
during the execution. An example could be a scientific observation that
depends on conditions which can only be evaluated once the observation has
been performed. This PR state is not meaningful for every project and might
therefore not always be implemented.
– Failed: the scheduled tasks and activities related to the PR were not correctly
executed. The reason could be an abort due to a satellite or ground segment
anomaly.
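The PR life cycle above is essentially a small state machine. A minimal Python sketch, using the state names from the CCSDS 529.0-G-1 based list; the exact transition set is an illustrative assumption, not taken from the standard:

```python
# Allowed PR life cycle transitions (state names as in the text; the
# transition set itself is an assumption for illustration).
PR_TRANSITIONS = {
    "Requested": {"Accepted", "Invalid"},
    "Accepted": {"Planned", "Rejected"},
    "Invalid": {"Requested"},            # corrected and resubmitted
    "Rejected": set(),
    "Planned": {"Executed", "Failed"},
    "Executed": {"Completed", "Failed"},
    "Completed": set(),
    "Failed": set(),
}

def advance(state, new_state):
    """Move a PR to a new life cycle state, enforcing allowed transitions."""
    if new_state not in PR_TRANSITIONS[state]:
        raise ValueError(f"illegal PR transition: {state} -> {new_state}")
    return new_state
```

Enforcing the transitions in one place makes it easy for the MPF to reject, for example, an execution feedback signal arriving for a PR that was never planned.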
The source of a PR can be a manual operator interaction by a mission planning
engineer at the MPF console, or the PR can stem from an external source like
the FDF as shown in Fig. 8.7. An example of an FDF originated PR is the request
to perform an orbit correction manoeuvre or to uplink new orbit vectors for
the OOP.
• FDF (see Chap. 6): the event data file contains all satellite orbit related
information and is based on the most up-to-date satellite state vector information.
The type of events might differ among various satellite projects depending on the
specific planning needs. Typical event file entries are ground station visibilities,
satellite eclipse entry and exit times, orbit geometry related events like node or
apogee/perigee crossings, co-linearities with celestial bodies, or events related to
bodies entering the Field of View (FOV) of satellite mounted sensors.
• SCF (see Chap. 7): the transfer of the Mission Plan (MP) to the Satellite Control
Facility is the main output of the planning process and should contain a sequence
of contacts with scheduled tasks and activities that accommodates all received
PRs. The MP needs to be conflict free and respect all known planning constraints
and contact rules and at the same time ensure an optimised use of all project
resources. The format and detailed content of the MP will be very project
specific. In the simplest case, it could be a human readable mission timeline
or Sequence of Events (SOE) directly used by operators. In a more complex
case, the MP could be realised as a file formatted for computer-based reading,
containing an extended list of contact times and detailed references to satellite
procedures which can be directly ingested and processed in automated satellite
operations. The execution feedback is a signal sent from the SCF to the MPF;
it contains important information on the execution status of each task or activity
and is based on the most recent available satellite telemetry. It allows the life
cycle state of the PR to progress from planned to executed and finally to completed.
Depending on the project needs, the execution status can either be sent to the
MPF on a regular interval (e.g., after the completion of a task) or on request.
• OPF (see Chap. 9): planning resources comprise planning databases, constraints,
and contact rules. A planning database should as a minimum comprise the
detailed definition of all tasks and activities specific to a satellite project. As any
change in the planning resources can have a strong impact on the contact schedule
and mission plan, this information should be managed as configuration items and
kept under configuration control via exchange with the Operations Preparation
Facility. With a clearly defined source of all planning related assumptions, the
outcome of a planning process can be easily understood and reproduced.
• MCF (see Chap. 10): the interface to the Monitoring and Control Facility ensures
the central monitoring of the MPF element hardware and processes and provides
a basic remote commanding functionality like the change of element operational
modes.
Reference
1. The Consultative Committee for Space Data Systems. (2018). Mission planning and scheduling.
CCSDS 529.0-G-1 (Green Book).
Chapter 9
The Operations Preparation Facility
The importance of a thorough validation process for all operational products and
their proper management as configuration data has already been emphasised in the
ground segment architectural overview in Chap. 4. The functionality of a segment
wide configuration control mechanism for this data is realised in the Operations
Preparation Facility (OPF) whose main functionalities can be summarised as
follows:
• To import and archive all ground segment internal and external configuration
items (CIs).
• To support the validation of CIs in format, contents, and consistency.
• To provide tools for the generation and modification of CIs (e.g., satellite TM/TC
database, Flight Control Procedures, etc.).
• To provide means for a proper version and configuration control of all ground
segment CIs.
• To distribute CIs to the correct end user for validation and use.
• To serve as a gateway between the operations and validation chains (OPE and
VAL) in case a chain separation concept has been implemented (see Fig. 13.2).
A simplified architectural design of the OPF is depicted in Fig. 9.1. The core of this
element is the CI archive which is used to store all segment CIs. Considering the
vast amount of configuration files present on the various GCS elements, the overall
number of CIs can be quite significant. In addition, the archive must also host the
satellite specific CIs which have to be multiplied by the number of spacecraft, in
case a satellite constellation is being operated.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 151
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8_9
OPF internal CI editor is used. The only difference is that the modifications would
be done on the OPF element directly.1
The operations procedure editor is a more advanced CI editor which is specialised
to support the generation, modification, and validation of flight and ground
operations procedures. These are usually referred to as FOPs and GOPs respectively
and play an important role in satellite operations (see also description in the
operations related Chap. 15). The consistency checker is a tool to verify that the
TM/TC parameters which are referenced in the operations procedures are consistent
with the applicable and most recent satellite databases (refer to SRDB definition in
next section).
The number of CIs, their contents, and their categorisation are very project specific,
but some common types appear in most satellite projects and are described in more
detail below.
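The consistency checker mentioned above boils down to a set comparison between the mnemonics a procedure references and the parameters the database defines. A minimal Python sketch; the mnemonic pattern (e.g. TM_AOCS_01) follows the style of Fig. 9.3 and is an illustrative assumption:

```python
import re

def check_procedure_consistency(procedure_text, database_parameters):
    """Report TM/TC mnemonics referenced in an operations procedure that
    are missing from the applicable satellite database. The mnemonic
    naming convention matched here is an assumption for illustration."""
    referenced = set(re.findall(r"\b(?:TM|TC)_[A-Z]+_\d+\b", procedure_text))
    return sorted(referenced - set(database_parameters))
```

Running such a check after every database delivery flags procedures that reference renamed or deleted parameters before they reach the operational chain.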
1 The CI modification policy (i.e., where the editing of a CI needs to be performed) should be
decided for each CI class and driven by operational needs and aspects. It could also be the case
that only one location for CI editing is sufficient which could simplify the GCS architecture in this
respect.
Fig. 9.3 Definition of derived parameters (synthetic TM) and Command Sequences with rules to
populate TC parameters from external input files (the SRDB groups TM parameters by subsystem,
e.g., TM_AOCS_01, TM_POW_01, TM_COM_01, TM_THER_01; the TPF supplies values for TC
parameters such as TC_AOCS_01). TPF = Task Parameter File, SRDB = Satellite
Reference Database, ODB = Operational Database
• The satellite reference database (SRDB) contains the full set of TM and TC
parameters of a satellite. As the SRDB needs to be derived from the satellite on-
board or avionic software, it is usually provided by the satellite manufacturer.
The SRDB should therefore be considered as a contractual deliverable2 from the
space to the ground segment and needs to be tracked in terms of contents, delivery
schedule, and delivery frequency. The SRDB might however not be the final
product used for operations. It will usually be extended with additional derived
(or synthetic) telemetry parameters which are generated by the database analyst,
who combines the state or value of existing TM parameters to form a new one
according to a defined logic or rule (see Fig. 9.3). This extended database is called
the operational database (ODB) and can, in addition to the derived parameters,
also contain definitions of telecommand sequences and rules needed to populate
TCs with values from an external source. A good example would be the FDF
providing AOCS related TC values (e.g., manoeuvre command parameters) via
an input file to the SCF to populate the relevant TC sequences. The SCOS2000
standard (refer to [1]) provides a specific definition for such an interface file
which is referred to as a Task Parameter File (TPF). It is important to note that
the OPF has to be designed to manage both the SRDB and the ODB for each
satellite domain.
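The relationship between the SRDB and the ODB can be sketched as a parameter set extended with rule-based derived values. A minimal Python sketch; representing a derivation rule as a plain callable is an illustrative assumption (real systems use a database-defined expression syntax), and the mnemonics follow the style of Fig. 9.3:

```python
class OperationalDatabase:
    """Minimal sketch of an ODB: the SRDB parameter set extended with
    derived (synthetic) TM parameters computed from raw telemetry."""

    def __init__(self, srdb_parameters):
        self.srdb = set(srdb_parameters)   # raw TM mnemonics from the SRDB
        self.derived = {}                  # name -> rule(raw_tm_dict)

    def add_derived(self, name, rule):
        """Register a synthetic parameter, as a database analyst would."""
        self.derived[name] = rule

    def evaluate(self, raw_tm):
        """Return the raw TM readings extended with all derived values."""
        values = dict(raw_tm)
        for name, rule in self.derived.items():
            values[name] = rule(raw_tm)
        return values
```

A hypothetical example of use: a synthetic power-subsystem health flag combining a bus voltage and a current reading into a single parameter the operator can monitor directly.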
• The satellite flight dynamics database (FDDB) contains all satellite specific
data required by flight dynamics in order to generate satellite platform specific
products. The content can be categorised into static parameters which remain
constant throughout the satellite lifetime and dynamic data subject to change
with certain satellite activities. Examples for static data are the satellite dry mass
or the position of sensors and thrusters and their orientation (see also Table 6.1).
An obvious dynamic parameter is the propellant mass which decreases after each
thruster activity. Less obvious is the dynamic character of the satellite centre of
mass location which moves within the satellite reference frame, whenever the
propellant mass changes. The FDDB is a very important deliverable provided
by the satellite manufacturer. Depending on the availability of intermediate and
final versions, a staggered delivery approach should be agreed in order to gain
early visibility at ground segment level. A possible scenario would be to agree
on a delivery of design data (i.e., expected values) during the early satellite design
and build process, followed by updates of the sensor and thruster position
and orientation values obtained in the alignment campaign. A final delivery
could be made after the satellite fuelling campaign has been completed, providing
values of the wet mass and related updates of the mass properties (e.g., the
inertia matrix).
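The dynamic character of the centre of mass mentioned above can be made concrete with a small worked example: the combined centre of mass is the mass-weighted average of the dry body and the remaining propellant. A Python sketch; treating the propellant as a point mass at the tank centre, and all figures in the usage example, are illustrative simplifications:

```python
def centre_of_mass(dry_mass, dry_com, propellant_mass, tank_com):
    """Satellite centre of mass in the satellite reference frame, as the
    mass-weighted average of the dry body and the remaining propellant.
    The propellant is treated as a point mass at the tank centre, which
    is a deliberate simplification for illustration."""
    total_mass = dry_mass + propellant_mass
    return tuple(
        (dry_mass * d + propellant_mass * t) / total_mass
        for d, t in zip(dry_com, tank_com)
    )
```

For instance, a hypothetical 500 kg dry satellite with 100 kg of propellant in a tank offset 0.6 m along the x axis has its centre of mass shifted by 0.1 m; as the propellant mass decreases with each thruster activity, the centre of mass moves back towards the dry-body value.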
• Flight and ground operations procedures (FOPs and GOPs) form another important
set of CIs which were already briefly discussed in the context of the OPF CI
editor. These procedures are usually written and maintained by the flight control
team (FCT) and require several updates throughout the satellite lifetime due to
their complexity. The OPF is an important tool for their development, archiving,
and configuration control.
• The satellite on-board or avionics software (OBSW) is developed by the satellite
manufacturer and needs to be delivered to the ground segment as a fully
representative image ready for integration into the satellite simulator (SIM).
The SIM serves as a representative environment for the validation of the
operational procedures prior to their use in real operations and is explained
in more detail in Chap. 11. The satellite manufacturer might need to develop
software updates (patches) during the satellite lifetime and has to provide these
to the ground segment. The OPF should serve as the formal reception point in
order to guarantee the proper validation (e.g., completeness and correctness)
and configuration control of the received products. A new OBSW image should
always be distributed as a CI from the OPF to the simulator.
• The planning data CI comprises contact rules, conflict solving rules, and
definitions of tasks and activities which are needed for mission planning (see
also Chap. 8). This CI is a good example for a GCS internal CI, as it is generated
and maintained without any segment external interaction. All the other examples
given above can be categorised as external CIs.
9.4 Interfaces
An overview of the various interfaces of the OPF to other GCS elements is depicted
in Fig. 9.5. As can be readily seen, the OPF needs to be able to exchange CIs with
all elements in the GCS. Even though the TT&C station is not explicitly shown here,
it is in principle not excluded. The transfer mechanism needs to be designed to operate
in both directions in order to support the CI reservation, update, and replacement
protocol described earlier in the text.
The interface to the Monitoring and Control Facility (see Chap. 10) is used to
monitor the functionality of the OPF element and to send commands. Examples of
commands are the change of element operational modes or even the initiation of a
CI exchange, if considered useful.
3 Projects with high security standards put a lot of emphasis on the implementation of a highly
secure network design that ensures no harmful data ingestion, transfer, export, or forbidden user
access. Audits and penetration tests designed to prove the robustness of the architecture might
form part of an accreditation process which can even be considered as a prerequisite to grant the
authorisation to operate.
Reference
1. European Space Agency. (2007). SCOS-2000 Task Parameter File Interface Control Document,
EGOS-MCS-S2K-ICD-0003.
Chapter 10
The Monitoring and Control Facility
The functional readiness of the ground segment depends on the proper functioning
of all its components, which can comprise a significant number of servers,
workstations, network devices (e.g., switches, routers, or firewalls), and software
processes. The Monitoring and Control Facility (MCF) is responsible for providing
such a functional overview and for informing the operator in case an anomaly occurs.
In more detail, the following minimum set of functional requirements need to be
fulfilled:
• The collection of monitoring data at software and hardware level for all major
elements and components in order to provide a thorough, reliable, and up-to-date
overview of the operational status of the entire ground control segment. Special
focus should be given to any device that is essential to keep the satellite alive.
• The ability to detect an anomaly and report it with minimum latency
to the operator to ensure that informed actions can be taken in time to avoid
or minimise any damage. The type, criticality, and method of reporting (e.g.,
visual or audible alerts) should be optimised to ensure an efficient and ergonomic
machine-operator interaction.
• To provide the user the ability to command and operate other ground segment
elements. Emphasis should be given to infrastructure which is located remotely
and might not be easily accessible (e.g., an unmanned remote TT&C station).
• The ability to generate log files that store all the monitored data in an efficient
format which minimises the file size but does not compromise on data quality.
The file format should allow the use of a replay tool which eases any anomaly
investigation at a later stage.
• To provide the means for remote software maintenance for elements that are not
easily accessible.
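The anomaly detection requirement above reduces, at its core, to a limit check per polling cycle. A minimal Python sketch; the two-level (soft/hard) limit structure, severity names, and parameter names are assumptions for illustration:

```python
def check_limits(samples, limits):
    """One monitoring cycle: compare each sampled parameter against its
    (soft, hard) limit pair and return the alarms to be raised. The
    two-level limit structure and severity names are assumptions."""
    alarms = []
    for name, value in samples.items():
        soft, hard = limits[name]
        if value > hard:
            alarms.append((name, value, "critical"))   # hard limit violated
        elif value > soft:
            alarms.append((name, value, "warning"))    # soft limit violated
    return alarms
```

In a real MCF the returned alarms would drive the visual and audible alerts of the MMI and be written to the log files described above.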
Fig. 10.1 High-level overview of the Monitoring and Control Facility (MCF) architecture (host
agents on the GCS elements deliver monitoring data to the M&C manager, which feeds the data
archive, the online MMI, and the replay tool; macros and commands flow in the opposite direction)
1 Most operating systems allow the execution of software daemons which can be automatically
power, and required disk space for archiving. A polling rate that has been set too low
might however increase the risk of losing the detection capability for highly dynamic
parameters.
Once all the required post-processing is done, the M&C manager must forward
the monitoring data to the Man Machine Interface (MMI) component which is in
charge of data presentation to the operator. Depending on the ground segment
complexity, there might be a considerable amount of monitoring data and events
to be displayed. A good and efficient MMI architecture is therefore a crucial part of
the overall MCF design and some relevant aspects for consideration are therefore
discussed in Sect. 10.3.
The ability to remotely operate other GCS elements is another important task of
the MCF. For satellite projects that require the use of unmanned TT&C ground
stations deployed at remote locations around the globe, the MCF can be the
main access point for their daily operation. In this context the
macro & command editor is an important tool to develop, modify, and potentially
even validate command sequences which are sent to the GCS elements through the
M&C manager.
All the monitoring data received by the MCF must be stored in the data archive.
This archive needs to be properly dimensioned in terms of disk space in order to
fulfil the project specific needs. The disk size analysis needs to consider the actual
polling rates and compression factors that are foreseen for the various types of
monitored parameters. A tool for the replay of the monitoring data is not mandatory
but is a convenient means to investigate anomalies observed in the ground control
segment.
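The disk size analysis described above amounts to simple arithmetic over the polling rate, sample size, retention period, and compression factor. A Python sketch; all figures in the usage example are illustrative assumptions, not project values:

```python
def archive_size_gib(n_parameters, bytes_per_sample, polling_rate_hz,
                     retention_days, compression_factor):
    """Rough disk-space estimate (in GiB) for the monitoring data archive,
    driven by the polling rate and compression factor discussed above."""
    samples = n_parameters * polling_rate_hz * retention_days * 86400
    return samples * bytes_per_sample / compression_factor / 2**30
```

As a hypothetical example, 1000 parameters polled at 1 Hz with 16 bytes per sample, one year of retention, and a 4:1 compression factor require roughly 120 GiB.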
The remote management2 of devices being connected via a network requires the
implementation of a communication protocol. As the specification (and validation)
of such a protocol requires a considerable amount of effort, it is more efficient to
use an existing one that is standardised and has already been extensively used.
A very prominent remote network management protocol that meets these
requirements is the Simple Network Management Protocol (SNMP). It was first
published in 1988 [1], soon after the TCP/IP internet protocol had become
established and widely used, which generated a strong demand to remotely manage
network devices like cable modems, routers, or switches. The SNMP protocol has
since undergone three major revisions, mainly to improve both flexibility and
security. The overall interface design is based on the following three building
blocks (see also Fig. 10.2):
2 Remote management in this context refers to the ability to collect monitoring data of a remote
Fig. 10.2 The Simple Network Management Protocol (SNMP) architecture and messages. MIB =
Management Information Base, NMS = Network Management System, OID = Object Identifier
the manager of significant events, and the InformRequest supplements the trap by
allowing the manager to provide an acknowledgement after having received a trap.
The Man Machine Interface (MMI) of the MCF must provide an efficient and
ergonomic design that allows an operator to gain a quick and accurate overview of
the overall GCS status at both software and hardware level. In case of problems, the
MMI should provide easy access to a more detailed set of information and be able
to display additional monitoring data at element and component level. The most
important MMI tasks are to report a non-nominal behaviour as soon as possible
(preferably via visible and audible means) and to subsequently support the impact
analysis and recovery process through the provision of accurate information on the
error source.
An example of an MMI design is shown in Fig. 10.3 which, despite its simplicity,
contains the most relevant elements one can also find in complex systems. The area on
the upper left provides a number of buttons that allow the operator to switch between
different views. The currently active one (shown in a different colour) provides the
Fig. 10.3 Example of a possible MCF MMI for a highly simplified GCS
Fig. 10.4 The TT&C mimic as an example for a mimic at element level
GCS segment level overview, which is the most important view for an operator
to grasp the overall segment status at a glance. As this view also comprises the
status of all the major elements including their respective interfaces, it will be useful
whenever the ground segment status needs to be confirmed as input to the start of a
critical sequence (e.g., the ground segment “GO for Launch” confirmation).
The remaining buttons on the left grant access to views at element level. This is
demonstrated for the case of a TT&C station in Fig. 10.4, where the largest area of the
MMI is occupied by the graphical presentation of the status of the various TT&C
subsystems and their respective interfaces. This type of presentation is referred to
as a mimic and is widely used in all kinds of monitoring and control systems. The
subsystems are represented as rectangles or more representative symbols and the
interfaces between them by connecting lines. The operational state is indicated by a
colour which changes according to a predefined colour scheme. An intuitive colour
scheme could be defined as follows:3
3 Even if a colour scheme is a pure matter of definition, care must be taken to avoid non intuitive or
highly complex ones. As an example, the choice of the colour green to mark a critical event will
very likely be misinterpreted. A scheme with too many colours, e.g., pink, violet, brown, or black,
might not be obvious and requires a clear definition to be correctly interpreted.
The example above clearly demonstrates the ability of a simple but well designed
MMI to provide a quick overview of a number of anomalies in a complex
infrastructure. The log also plays an important role, as a short and simple text based
message can already provide a good indication of the time, location, nature, and
criticality of an anomaly, which helps to narrow down the root cause investigation.
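One possible status-to-colour mapping for such mimic displays can be sketched as follows. The scheme itself is a matter of project definition; this one merely follows the intuitive convention hinted at in the footnote (green for nominal, red for critical) and the state names are assumptions:

```python
# Hypothetical status-to-colour mapping for mimic rendering; the states
# and colours are illustrative, not taken from any specific project.
STATUS_COLOURS = {
    "nominal": "green",
    "warning": "yellow",
    "critical": "red",
    "unknown": "grey",   # e.g., no monitoring data received
}

def mimic_colour(status):
    """Colour used to render a subsystem symbol or interface line."""
    return STATUS_COLOURS.get(status, "grey")
```

Falling back to grey for unrecognised states follows the same intuition: a colour that cannot be misread as either "all fine" or "act now" is the safest default.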
4 The intention of this section is to make the reader aware of the additional monitoring and control
Fig. 10.5 VM and physical hosts monitoring by the Virtual Management Centre (VMC)
10.6 Interfaces
References
1. Case, J., Fedor, M., Schoffstall, M., & Davin, J. (1988). Network working group request for
comments 1067: A simple network management protocol. https://www.rfc-editor.org/rfc/pdfrfc/
rfc1067.txt.pdf Accessed Mar 14, 2022.
2. VMware vCenter Website. (2022). https://www.vmware.com/de/products/vcenter-server.html.
Accessed Mar 14, 2022.
Chapter 11
The Satellite Simulator
In a space project, every mistake can have a serious impact on the satellite and its
ability to provide its service to the end user. In a worst case scenario, an incorrect
telecommand could even lead to permanent damage of flight hardware or the
loss of a satellite. It is therefore worth scrutinising the main sources of mistakes in
flight operations, which will typically fall into one of the following categories:
1. The specification of incorrect or inaccurate steps in flight operations procedures
(FOPs).
2. The incorrect execution of a (correctly specified) step in a FOP due to a human
(operator) error.
3. An incorrect behaviour of a GCS element or an interface due to a software bug
or a hardware malfunction.
A representative satellite simulator can reduce the risk of any of these scenarios
occurring by satisfying the following functional requirements:
• To support the qualification (i.e., verification and validation) test campaigns of
the GCS and its elements during their development, initial deployment, and
upgrade phases.1 The Satellite Control Facility (SCF) has a direct link to the
simulator and will therefore be the main beneficiary. It can send TCs to the
simulator and receive TM that is representative of the actual satellite onboard
software, sensors, and actuators and their behaviour in space.
• To provide a test and “dry-run” environment for the validation of an onboard
software patch prior to its uplink to the real satellite.
• To provide a training environment to the flight control team (FCT) that allows
routine and emergency situations to be simulated in a realistic manner. Even if
the risk factor of human error can never be entirely excluded, the likelihood of
1 In this context the simulator is also referred to as the Assembly Integration and Verification (AIV)
Simulator.
2 Note that the terms onboard software and avionics software have similar meaning.
3 For Earth bound satellites the Sun, Moon, and Earth will most likely be sufficient.
Fig. 11.1 High-level simulator architecture: SMP = System Model Portability (refer to ECSS-E-
TM-40-07 [1]), SLE = Space Link Extension (refer to CCSDS-910-0-Y-2 [2])
The execution of the real onboard software inside the simulator is the most efficient
means to provide a realistic and fully representative behaviour of the satellite. This
can also ensure that a software bug inside the OBSW can be discovered during
a FOP validation campaign or a simulation exercise. Satellite onboard computers
usually contain space qualified micro-processors that implement a so called Reduced
Instruction Set Computer (RISC) architecture.4 The RISC architecture clearly
differs from the one used in standard ground-based computer systems which
are based on Complex Instruction Set Computers or CISC micro-processors. The
simulator therefore needs to implement a μ-processor emulator to properly handle
the on-board computer RISC instruction set, its memory addressing scheme, and all
the I/O operations (see Fig. 11.2).
The RISC emulator is a software interpreter and forms part of the simulator
software suite, which itself runs on the host platform and its processor. The execution
of such an emulator can be quite demanding for the host, as a real time emulation
of the target processor requires a 25–50 times higher host processor performance
Fig. 11.2 The μ-processor emulator of the simulator: the host is the computer platform on which
the emulator is running (i.e., the simulator’s processor) and the target is the emulated computer
hardware (i.e., the satellite onboard computer)
4 Many flying ESA missions use for example the ERC32 and LEON2 processors which are based
(refer to [4]). This is even a conservative estimate, as it only applies to single core
type target processors. For newer high performance quad-core processors (e.g., the
GR740 radiation hardened multi core architecture processor developed under ESA’s
Next Generation Microprocessor (NGMP) program [5]), this emulation load is even
more demanding and needs to be considered for the specification of the simulator
hardware performance.
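The sizing rule above can be captured in one line. A Python sketch using the 25–50 factor quoted in the text (defaulting to the conservative lower bound); the linear scaling with the number of target cores is an additional simplifying assumption, and the clock figures in the example are hypothetical:

```python
def required_host_clock_mhz(target_clock_mhz, overhead_factor=25, target_cores=1):
    """Rule-of-thumb host processor clock needed for real-time emulation
    of the target onboard processor. The 25-50x range is quoted in the
    text; linear scaling with core count is an assumption."""
    return target_clock_mhz * overhead_factor * target_cores
```

For instance, emulating a hypothetical 100 MHz single-core target in real time would already call for a host clock on the order of 2.5 GHz, which illustrates why multi-core targets need special attention in the simulator hardware specification.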
The development effort for a new satellite simulator can be reduced if existing
simulation environments and models can be reused (ported) across platforms
and projects. This is facilitated by the application of model based software
development techniques and the use of model standardisation like the Simulation
Model Portability (SMP) standard proposed by ESA [6]. SMP can be used to
develop a virtual system model (VSM) of a satellite,5 which must be understood
as a meta-model that describes the semantics, rules, and constraints of a model.
It can therefore be specified as a platform independent model (PIM) that does
not build on any specific programming language, operating system, file format, or
database (see Fig. 11.3). If the PIM is based on the rules and guidelines
of the SMP standard6 as suggested in ECSS-E-TM-40-07 [1], its translation into
a platform specific model (PSM) is more easily achieved. According to the SMP
specification as defined in the SMP Handbook [7], the PIM can be defined using the
following three types of configuration files:
• The catalogue file containing the model specification,
• the assembly file describing the integration and configuration of a model instance,
and
• the schedule file defining the rules for the scheduling of a model instance.
The Simulation Model Definition Language (SMDL) defines the format of these
files as XML. The SMP specification also provides standardised interfaces for the
integration with a simulation environment and its services. This is schematically
depicted in Fig. 11.4, where a number of satellite models, referred to as Model 1
to Model N, can connect to the simulation environments of two different projects
A and B, both of which implement the SMP simulation services Logger, Scheduler,
Time Keeper, Event Manager, Link Registry, and Resolver. The SMP standard also
foresees interfaces for inter-model communication, which further facilitates the
reuse of models among different projects and platforms.
5 In ESA standards this satellite VSM is also referred to as the Spacecraft Simulator Reference
Architecture or REFA.
6 The most recent version today is referred to as SMP version 2 or simply SMP-2.
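As an illustration only, the cross-checks between the three SMP file types can be sketched as follows. The XML element names used here are invented placeholders for the purpose of the example, not the actual SMDL schema, which is defined in the SMP Handbook:

```python
import xml.etree.ElementTree as ET

# Hypothetical, highly simplified stand-ins for the three SMDL file types;
# the real schemas defined in the SMP Handbook are far richer than this.
CATALOGUE = """<Catalogue><Model name="PowerSubsystem"/></Catalogue>"""
ASSEMBLY = """<Assembly><Instance model="PowerSubsystem" name="eps1"/></Assembly>"""
SCHEDULE = """<Schedule><Task instance="eps1" period_ms="100"/></Schedule>"""

def load_simulation_config():
    """Parse the three configuration files and cross-check that every
    scheduled task refers to an assembled instance of a catalogued model."""
    models = {m.get("name") for m in ET.fromstring(CATALOGUE).iter("Model")}
    instances = {i.get("name"): i.get("model")
                 for i in ET.fromstring(ASSEMBLY).iter("Instance")}
    tasks = [t.attrib for t in ET.fromstring(SCHEDULE).iter("Task")]
    for name, model in instances.items():
        assert model in models, f"instance {name} uses unknown model {model}"
    for task in tasks:
        assert task["instance"] in instances, "schedule refers to unknown instance"
    return models, instances, tasks
```

The value of keeping the model specification, its instantiation, and its scheduling in separate files is exactly this kind of separation of concerns: the catalogue can be reused across projects while assembly and schedule remain project-specific.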
174 11 The Satellite Simulator
Fig. 11.3 Simulation Model Portability (SMP) development concept; SysML = Systems Modelling
Language, UML = Unified Modelling Language, SMDL = Simulation Model Definition Language,
PIM = Platform Independent Model, PSM = Platform Specific Model
Fig. 11.4 Project-specific simulation environments (Project A and Project B), each implementing
the SMP simulation services Logger, Scheduler, Time Keeper, Event Manager, Link Registry, and
Resolver
11.4 Interfaces
The most relevant interfaces between the ground segment simulator and the other
GCS elements are shown in Fig. 11.5 and described in the bullets below:
• FDF (see Chap. 6): the interface to the Flight Dynamics Facility (FDF) is useful
for the provision of either initial satellite orbit state vectors or even complete
orbit files. In the latter case the simulator internal orbit propagation would be
replaced by interpolation of the provided FDF orbit files. As historic orbit files
generated by the FDF are based on orbit determination providing a very accurate
orbit knowledge, their use can be highly beneficial in case a certain satellite
constellation needs to be accurately reproduced in the context of an anomaly
investigation. Furthermore, the exchange of space or orbit events (e.g., ascending
node, perigee, or apogee crossings, or sensor events) might provide useful information
for the cross-correlation of events seen in the simulation.
• SCF (see Chap. 7): the most important interface is the one to the Satellite Control
Facility for the exchange of TM and TC. If classified satellite equipment with
encrypted data needs to be simulated, a separate data flow channel between the
simulator and the SCF has to be implemented. This flow has to pass through
dedicated security units in each of the two elements, which host
the keys and algorithms to handle encrypted TM and TC (indicated by the red
boxes and arrow in Fig. 11.5). Depending on the project needs and security rules,
such encryption and decryption units might have to be housed in dedicated areas
which fulfil the required access restrictions of a secure area.
• MPF (see Chap. 8): the implementation of a dedicated interface between the
Mission Planning Facility (MPF) and the simulator can be useful to exchange
Fig. 11.5 Simulator interfaces with other GCS elements, including the encryption and decryption
units. Note that interfaces to FDF and MPF might not be realised in every ground segment
References
Auxiliary services refer to all those tools and applications that are required for the
proper functioning of a ground segment, but do not actually belong to any of the
previously described functions or elements. These functions usually support the
(centralised) management of user accounts, the data backup policy, and an efficient
and traceable transfer of data, but also security-related aspects like antivirus
protection or the management of software releases and their version control.
This chapter provides a description of these auxiliary services which should be
considered in every initial ground segment design.
User account management is an important function in every multi-user system, and
even more so in a security-sensitive facility like a satellite ground segment, where
controlled and accountable user access must be part of the applicable security
policy. User account management should provide the means to efficiently create,
maintain, restrict, and, if necessary, remove any user access. In a ground segment
and operations context, the role-based access control (RBAC) scheme shown in
Fig. 12.1 is very suitable and should therefore be the preferred choice. The RBAC
principle is based on the definition of a set of roles which are authorised to perform a
set of transactions on objects, where this transaction can be seen as a transformation
procedure and/or data access [1]. Once all the roles and their allowed transactions
are well defined (and usually remain constant), the user administrator’s task is to
simply grant a new user the adequate membership to one or more specific roles,
which will then automatically define the amount and type of resources the user
can access. This scheme provides considerable flexibility, as it allows one user to
be a member of several roles, or even one role to be composed of other roles (see
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 177
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8_12
178 12 Auxiliary Services
Fig. 12.1 Basic principle of role-based access control with notation as suggested by Ferraiolo
and Kuhn [1]
e.g., Role 3 in Fig. 12.1 being a member of Role 2). In a ground segment context the
definition of the following basic roles could be meaningful:
• The operator role, able to launch operations-specific application software
processes on a GCS element. A more specific definition could distinguish
between operators doing flight dynamics, mission planning, mission control (this
would include the release of TCs to the satellite), or ground segment supervision
(monitoring and control).
• The maintainer role, having the necessary access rights and permissions to
configure, launch, and monitor typical maintenance tasks like the execution of
regular antivirus scans, the update of the antivirus definition files, or the launch
of backup processes.
• The administrator role, with root-level access at operating system level, having
the ability to install and remove software or change critical configuration files,
for example network settings. The user access and profile management would
also fall under this role.
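The RBAC principle described above, including role composition, can be sketched in a few lines. The role and transaction names below are illustrative only and do not come from any specific ground segment:

```python
# Minimal RBAC sketch following Ferraiolo and Kuhn: users are assigned roles,
# roles are authorised for transactions, and a role may be composed of other
# roles (cf. Role 3 being a member of Role 2 in Fig. 12.1).
class Role:
    def __init__(self, name, transactions=(), parents=()):
        self.name = name
        self.transactions = set(transactions)
        self.parents = list(parents)

    def allowed(self):
        """All transactions of this role plus those inherited from parents."""
        perms = set(self.transactions)
        for parent in self.parents:
            perms |= parent.allowed()
        return perms

operator = Role("operator", {"release_tc", "view_tm"})
maintainer = Role("maintainer", {"run_av_scan", "start_backup"})
# The administrator role is composed of the maintainer role plus extra rights.
administrator = Role("administrator", {"install_sw"}, parents=[maintainer])

user_roles = {"alice": [operator], "bob": [administrator]}

def can(user, transaction):
    """The access decision: does any of the user's roles allow it?"""
    return any(transaction in role.allowed() for role in user_roles[user])
```

Once roles and transactions are fixed, adding a user is a one-line change to the membership table, which is exactly the administrative simplification the text describes.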
In order to avoid the need to manage the user accounts on each machine in
a ground segment separately, it is recommended to implement a centralised user
management system. For this, well-established protocols are available, the most
prominent one being the Lightweight Directory Access Protocol (LDAP), which allows
the exchange of user profile information via an internet protocol (IP) based network.
12.2 File Transfer 179
This requires the setup of an LDAP server which supports the authentication
requests from all the LDAP clients that need to access it. From a security point of
view, it is recommended to establish the client to server connection via an encrypted
connection, e.g., using the transport layer security (TLS) authentication protocol.
A further advantage of a centralised user management system is the possibility
to store highly sensitive information like user passwords in one place only,
where increased security measures can be applied. An important measure is the
implementation of cryptographic hash functions like secure hash algorithms which
avoid the need to store passwords in cleartext.1 In this case the user password is
immediately converted into its hashed form and compared to the hashed value
stored in the password file. Thanks to the nature of the hashing algorithm, it is
not possible to reverse engineer the password from the hashed form should the
password file get stolen. However, strong hashing algorithms need to be applied
(e.g., SHA-256, SHA-512, Blowfish, etc.) in order to reduce the risk of password
cracking software being successful. Furthermore, a centralised user management system eases
the definition and enforcement of a segment wide password policy which defines
rules for a minimum password length, complexity, and validity.
1 Hashing is a mathematical method to produce a fixed-length encoded string for any given input string.
2 An example of an action procedure could be the initiation of a data validation process (e.g.,
XML parsing, integrity check, antivirus check, etc.) and, depending on the outcome, a subsequent
moving of the file into a different directory.
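The salted-hash verification scheme described above can be sketched as follows. This uses PBKDF2 with SHA-256 from the Python standard library as one possible choice of a strong scheme; the iteration count is an illustrative assumption:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash; only the salt and digest are stored, never the
    cleartext password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Hash the candidate with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)
```

The per-user random salt ensures that identical passwords do not produce identical entries in the password file, and the constant-time comparison avoids leaking information through timing differences.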
time and lead to a major malfunction. In a worst-case scenario, a complete outage
of the entire ground segment, simply due to the failure of the file transfer system,
could be possible. Especially multi-node file transfer architectures may require the
proper setup of a large number of configuration files, which need to be properly
tested during a dedicated segment qualification campaign.
[…]
Fig. 12.2 Deployment tool and configuration integrity control and monitoring function, interfacing
element machines 01 to N
12.5 Data Protection 181
factory (where it has been developed and qualified) into the CMS release repository.
The deployment tool has to interface with that repository in order to copy and install
a release candidate on a target machine. The configuration and integrity monitoring
tool can be understood as a scanning device which performs a regular screening of
the overall system configuration and flags any changes. This not only provides an
efficient means to capture the overall segment (as-built) configuration but also helps
to identify any unauthorised modifications.
In case of a critical system breakdown or an infection with a malicious virus
(which cannot be isolated and removed), the CMS can serve as an important tool
to perform a recovery of parts or even the entire ground segment software within a
reasonable time frame.
The objective of a data protection or backup system is to allow a fast and complete
system recovery in case of data loss. As data loss can in principle occur at any time
and potentially cause a major system degradation, a data protection functionality
should be part of every ground segment design. The following types of data need to
be distinguished and taken into account when configuring the backup perimeter of
a data protection system:
• Configuration data which comprise all items that are subject to a regular change
resulting from the daily use of the system and are distributed over the various
software applications deployed in the entire ground segment. Furthermore, all
databases (e.g., the full set of configuration items maintained by Operations
Preparation Facility as described in Chap. 9) or configuration files of the various
operating systems fall into this category. The complete list of data items and their
location must be clearly identified to ensure that they can be properly considered
in the backup plan of the data protection system.
• System images can either represent an entire physical hard disk (e.g., ISO image
files) or Virtual Machines (VMs) which have a specific format referred to as
OVA/OVF files. It is recommended to generate updated system images at all
relevant deployment milestones, like the upgrade of an application software
version or an operating system.
• Operational data can be generated inside the ground segment (e.g., mission
planning or flight dynamics products) or originate from an external source.
Examples of external data sources are the satellite telemetry, the SRDB, or
ground segment configuration items received from external entities.
The detailed design and implementation of the data protection system is very
project specific but will in most cases rely on a backup server that hosts the
application software and the data archive which needs to be adequately dimensioned
for the project specific needs (e.g., expected TM data volume, number of software
images, estimated size of configuration files, etc.). Backup agents are small software
applications deployed on the target machines and help to locally coordinate the data
collection, compression, and transfer to the main application software on the server.
The data protection system should allow automatic backups to be scheduled, so they
can be performed at times when fewer operational activities are ongoing. This will
reduce the risk of performance issues when backup activities put additional load on
the network traffic, memory, and CPU resources. Manually initiated backups should
still be possible so they can be started right after an upgrade or deployment activity.
The definition of backup schedules and the scope of backups is often referred
to as the segment backup policy and needs to be carefully drafted to ensure it
correctly captures any relevant configuration change that is applied to the system.
At the same time, the policy should minimise the data footprint in order to avoid
the generation of unnecessarily large data volumes which need to be stored and
maintained. A possible strategy to achieve this is to schedule small differential
backups on a more frequent basis,3 targeting only a subset of files, and full disk
images whenever a new version of an application software or operating system is
deployed (see Fig. 12.3). With this strategy in place, a full system recovery should
always be possible, starting from the latest available disk image and deploying
differential backups taken at a later time on top.
3 Differential backups should be scheduled as a minimum each time a configuration change has
been applied.
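The backup policy sketched above (full images at deployment milestones, differentials after configuration changes, recovery from the latest image plus all later differentials) can be illustrated as follows; the event names are illustrative placeholders:

```python
# Sketch of the backup policy described in the text: a full image after every
# deployment milestone, and a differential backup after each configuration change.
def backup_action(event):
    full_image_events = {"os_upgrade", "app_sw_upgrade", "clean_install"}
    if event in full_image_events:
        return "full_disk_image"
    if event == "config_change":
        return "differential_backup"
    return "none"

def recovery_sequence(history):
    """Return the latest full image plus all differential backups taken after
    it, i.e. the minimal sequence needed for a full system recovery."""
    last_image = max(i for i, (kind, _) in enumerate(history) if kind == "image")
    return [history[last_image]] + [
        h for h in history[last_image + 1:] if h[0] == "diff"
    ]
```

The recovery helper makes the trade-off explicit: the more differentials accumulate since the last image, the longer a recovery takes, which is one reason to regenerate images at every major deployment milestone.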
12.6 Centralised Domain Names 183
Fig. 12.3 System configuration as a function of time: a possible backup policy could consider the
generation of images after major upgrades and differential backups to capture smaller configuration
changes
Finally, it is worth stressing that any backup of data should also provide a means
to check the integrity of archived data, to ensure that no errors have been introduced
during the backup process (e.g., read, transfer, or write errors). An example of a very
simple integrity check is the generation and comparison of checksums before and
after a data item has been moved to the archive.
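This checksum comparison can be sketched as follows, assuming for illustration that archiving is a simple copy of the file into an archive directory:

```python
import hashlib
import shutil
from pathlib import Path

def archive_with_checksum(src, archive_dir):
    """Copy a file into the archive and verify its checksum afterwards,
    guarding against read, transfer, or write errors during the backup."""
    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    before = sha256(src)
    dest = Path(archive_dir) / Path(src).name
    shutil.copy2(src, dest)
    if sha256(dest) != before:
        raise IOError(f"checksum mismatch while archiving {src}")
    return dest
```

Storing the checksum alongside the archived item also allows the integrity of old backups to be re-verified at any later time, not only at archiving.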
Fig. 12.4 Schematic outline of a centralised and redundant DNS server configuration: a master
server synchronises its configuration to slave servers, which answer the queries of the DNS clients
An example of a redundant and centralised DNS server setup is shown in Fig. 12.4,
which consists of a DNS master and a slave server. The slave is synchronised with
the master on a regular basis, and is ready to take over in case the master fails. Due
to the geographical distance of remote sites (e.g., a TT&C station on a different
continent), the limited bandwidth and reliability of the network connection favour
the deployment of a local DNS slave server, which is also shown in Fig. 12.4. The
local DNS server is used for the daily operation of the remote DNS clients, and the
wide area network connection is only needed to synchronise the DNS configuration
between the remote slave and the master which is located in the main control centre.
As the data volume for a DNS configuration is very small, the required network
bandwidth is very low.
Fig. 12.5 Time synchronisation inside the GCS based on the reception of a GNSS time source
(e.g., GST, GPS, or UTC) via a GNSS antenna, distributed via a time distribution unit
GLONASS, etc.) where a highly complex and expensive atomic clock on ground is a
mandatory asset to operate the navigation payload. Such a clock will usually not be
available for other projects which therefore have to rely on an external source. With
the availability of several operational navigation services today, the deployment of
a GNSS antenna and a corresponding receiver is a very simple means to acquire
a highly accurate and stable external time source as shown in Fig. 12.5. The same
navigation signal also provides all the relevant information to derive UTC which is
a very common time system used in space projects.
The time signal can be distributed inside the GCS via a time signal distribution
unit that implements standardised time synchronisation protocols like the Inter-
Range Instrumentation Group (IRIG) time codes or the Network Time Protocol
(NTP). The time distribution could also be extended to remotely located TT&C stations,
using the wide area network connection. Alternatively, a local GNSS antenna could
be deployed at the remote site to avoid the dependency on the network.
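As an illustration of the protocol level, a minimal SNTP sketch is shown below: building a client request packet and decoding the transmit timestamp of a (here hand-crafted) server reply. A real deployment would rely on an NTP daemon rather than code like this:

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01)
NTP_EPOCH_OFFSET = 2208988800

def build_ntp_request():
    """48-byte SNTP client request: LI = 0, version 3, mode 3 (client)."""
    return b"\x1b" + 47 * b"\0"

def parse_transmit_time(packet):
    """Extract the server transmit timestamp (32-bit word 10 of the packet,
    seconds part only) and convert it to Unix time."""
    words = struct.unpack("!12I", packet)
    return words[10] - NTP_EPOCH_OFFSET
```

Sending the request to UDP port 123 of an NTP server and parsing the reply in this way yields the coarse server time; the full protocol additionally uses the originate and receive timestamps to correct for network delay.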
Reference
1. Ferraiolo, D. F., & Kuhn, D. R. (1992). Role-based access controls. In Proceedings of the 15th
national computer security conference (NCSC), Baltimore, Oct 13–16, 1992, pp. 554–563.
Chapter 13
The Physical Architecture
Whereas the previous chapters described the functional aspects of the ground
segment, this one focuses on the physical implementation of the infrastructure.
This comprises the actual hardware (workstations, servers, and racks) that needs
to be deployed, but also the setup and layout of the server and control rooms hosting
it. Furthermore, virtualisation is introduced, a very important technology
that allows the hardware footprint to be significantly reduced through a more
optimised use of the available computing resources. Virtualisation also provides an
efficient means to add and manage system redundancies and improve the overall
segment robustness, which is an important feature to ensure service continuity to
the end users of a system. Finally, some considerations are introduced to help the
planning and execution of large-scale system upgrades or even replacements, which
are referred to as system migrations.
The client-server architecture is the preferred computing model deployed in ground
segments, as it provides many advantages in terms of data and resource sharing,
scalability, maintenance, and secure access control. The basic concept is illustrated
in Fig. 13.1, where the operator accesses the application software via a client
workstation that is placed at a different location (e.g., a control room) and connected
to the server via the local area network (LAN). As the application software and
its data are physically deployed and run on the server machines, these have much
higher needs for processing power, RAM, and data storage volume and are therefore
usually mounted in racks and deployed in server rooms. A network attached storage
(NAS) can be used to keep all operational data and can also implement some
Fig. 13.1 Client-server concept: a client workstation connected to a rack-mounted server (host)
kind of data storage virtualisation technology to protect from data loss in case of
disk failure.1
Compared to the server, the client workstations can have a much lighter design
and therefore need less space and generate less heat and noise. These are therefore
more suitable to be accommodated in office or control rooms, where humans work
and communicate. Client workstations can either be dedicated to a specific server or
element, or be independent, which means that any client can be used to operate any
element. If the client workstations do not have any software or data installed locally,
this is commonly referred to as a thin client architecture.
Servers can be grouped and mounted into racks which are set up in rows in
dedicated server rooms as shown in Fig. 13.4. With multiple servers mounted in
one rack operating on a 24/7 basis, a considerable amount of heat is generated that
needs to be properly dissipated in order to avoid hardware damage. With several
racks usually hosted in the same room the appropriate sizing of the air conditioning
capacity is an important aspect for the server room design and will be discussed in
more detail in Sect. 13.4.
1 A frequently used storage virtualisation concept is the Redundant Array of Inexpensive Disks or
RAID architecture which provides a variety of configurations (i.e., RAID 0 to 6) with each of them
having a unique combination of disk space, data access speed, and failure redundancy [1].
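The capacity trade-off between the RAID levels mentioned in the footnote can be illustrated with a small helper; identical disk sizes and the standard single/double parity layouts for RAID 5/6 are assumed:

```python
def usable_capacity(level, n_disks, disk_tb):
    """Usable capacity (in TB) for some common RAID levels, assuming
    n_disks identical disks of disk_tb each."""
    if level == 0:                    # striping only, no redundancy
        return n_disks * disk_tb
    if level == 1:                    # mirroring: one disk's worth usable
        return disk_tb
    if level == 5 and n_disks >= 3:   # one disk's worth of parity
        return (n_disks - 1) * disk_tb
    if level == 6 and n_disks >= 4:   # two disks' worth of parity
        return (n_disks - 2) * disk_tb
    raise ValueError("unsupported RAID level / disk count")
```

For example, four 2 TB disks yield 8 TB in RAID 0 but only 6 TB in RAID 5, the difference being the parity overhead that buys tolerance of a single disk failure.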
13.3 Chain Separation Concept 189
The flight control team (FCT) performs all the satellite operations from a dedicated
control room in which the client workstations (consoles) are located. Quite commonly,
a ground control segment architecture provides several control rooms of
different sizes, layouts, and hardware equipment. The largest one is usually referred to
as the main control room (MCR) and is used during important mission phases like
the LEOP, major orbit relocation activities, or contingency operations with intense
ground-to-satellite interaction, during which the full FCT capacity needs to be
involved.
Not all satellite operational activities require real-time ground-to-space interaction;
some focus on offline work like the generation of flight dynamics products or the
planning of timelines. Such activities do not need to be done in the same
rooms where the FCT is located, but can be better transferred into specific areas
dedicated to planning, offline analysis, and flight dynamics.
Once a satellite project goes into its routine phase, the necessary ground interaction
will usually decrease, and so will the number of operators that need to be present.
During this phase, satellite operations can be transferred into a smaller room which
is referred to as the routine or special operations room (SOR).
In order to provide operators with an isolated environment for the validation of flight
control procedures and to perform all training activities without impacting ongoing
flight operations, a network or chain separation concept should be implemented.
This concept foresees a duplication of all ground segment components on two
separate network environments, referred to as operational (OPE) and validation
(VAL) chains, which must be strictly isolated from each other as shown in Fig. 13.2.
The OPE chain is used for real satellite interaction and must therefore be
connected to the TT&C station network. The VAL chain is only used for validation,
simulation, or training campaigns and should therefore be isolated from the station
network in order to avoid any unplanned satellite communication. To ensure a
representative environment for the validation of flight procedures, any TM/TC
exchange with the satellite is replaced by the satellite simulator which hosts and
executes the OBSW as explained in Chap. 11.
To ensure a representative behaviour of the ground segment, the configuration
of all elements and components (e.g., versions of installed application software,
database contents, network configuration, etc.) between the OPE and VAL chains
must be fully aligned. In the frame of a System Compatibility Test Campaign
(SCTC) aiming to demonstrate the ground segment ability to communicate to the
space segment (prior to launch), the VAL chain is connected to the satellite via the
electronic ground support equipment (EGSE) located in the Assembly Integration
and Test (AIT) site (see lower part of Fig. 13.2).
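The required configuration alignment between the OPE and VAL chains lends itself to an automated comparison. A sketch with illustrative element names and version strings (not taken from any real project):

```python
# Sketch of a configuration alignment check between the OPE and VAL chains:
# any element whose recorded baseline (e.g., installed software version)
# differs between the two chains is flagged for follow-up.
def config_deltas(ope, val):
    deltas = []
    for element in sorted(set(ope) | set(val)):
        if ope.get(element) != val.get(element):
            deltas.append((element, ope.get(element), val.get(element)))
    return deltas

# Illustrative baselines: the FDF version differs between the chains
ope_chain = {"SCF": "v3.1", "FDF": "v2.0", "MPF": "v1.4"}
val_chain = {"SCF": "v3.1", "FDF": "v2.1", "MPF": "v1.4"}
```

In practice such a check would draw its inputs from the CMS configuration and integrity monitoring tool described in Chap. 12, so that any drift between the chains is detected before a validation campaign starts.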
Fig. 13.2 Chain separation concept: the operational and validation chains, each hosting the GCS
elements (SCF, MPF, FDF, OPF, MCF, KMF, and SIM), are deployed on separated networks, with
the EGSE at the AIT site connecting the VAL chain to the satellite. The time source provider (TSP)
is conceptualised here as a single entity connected to both networks, but alternative implementations
(e.g., with separated components) are also possible
Server rooms need to host a large amount of high-performance computer hardware
mounted in server racks, which are usually arranged side-by-side and in rows in
order to optimise the use of available space. The close proximity of servers inside
a rack and the racks themselves imply a considerable generation of heat that needs
to be measured and controlled in order to avoid a hardware overheating that could
reduce its lifetime or even lead to damage. The monitoring and control of the rack
temperature can be done with the following means:
• To manage the heat dissipated by the servers, a ventilation system is mounted
inside the rack that generates an air flow of cold air entering the rack and hot air
leaving it (see ventilator symbol at the top of the rack in Fig. 13.3). The cold air
can either enter from an opening at the bottom (e.g., from below the false floor)
or from the front through perforated doors. The hot air leaves the rack from the
rear side.
13.5 Rack Layout, False Floor, and Cabling 191
Fig. 13.3 Illustration of a rack with three servers mounted on a false floor. The power distribution
unit (PDU), network switch, ventilators, and temperature sensor inside the rack are indicated. Also
shown are the cabling ducts for the network and power source below the false floor and the server
room air conditioning unit below the ceiling
Each rack has to be connected to the power supply of the building which usually
provides a no-break and a short-break power source. The no-break source is
protected via an uninterruptible power supply (UPS) unit which can guarantee a
Fig. 13.4 Examples of rack arrangement inside a server room. The upper example shows a
suboptimal configuration that implies a pile-up of heating from one row to the next, whereas the
lower arrangement supports the availability of cool air from the front for all rows (hot/cool aisle
separation)
continuous supply in case of a power cut or outage. As the UPS is a battery-based
unit, it can only replace the main source for a relatively short time period (usually
less than one hour). To protect against longer outages (e.g., several hours up to days),
a diesel generator with an adequate tank size is more suitable. Such a generator
should either be an integral part of the ground segment building infrastructure or
at least be located in close vicinity, to allow a connection to the ground
control segment.
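The bridging role of the UPS can be illustrated with a rough runtime estimate; the capacity, load, and efficiency figures below are illustrative assumptions, not sizing recommendations:

```python
def ups_runtime_minutes(battery_wh, load_w, efficiency=0.9):
    """Rough bridging time of a battery-based UPS for a given server-room
    load; the generator must take over well within this window."""
    return battery_wh * efficiency / load_w * 60.0

# Illustrative example: a 10 kWh battery bank feeding a 5 kW server room load
minutes = ups_runtime_minutes(10_000, 5_000)
print(f"UPS bridges roughly {minutes:.0f} minutes")
```

Such an estimate makes clear why the UPS alone cannot cover multi-hour outages and why the diesel generator start-up time must fit comfortably inside the UPS window.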
Server rooms that host several rack positions require a large number of cables to
be routed throughout the room in order to connect each position to the desired end
point. This is most efficiently achieved through the deployment of a false floor2
as shown in Fig. 13.3. When planning the cable routing, it is important to separate
network and power cables to avoid interference. Usually a large number of network
cables (also referred to as patch cables) have to be deployed next to each other and
over long distances, which can cause crosstalk that reduces the signal quality. It is
therefore recommended to choose cable types that are designed to minimise such
directions inside the cable which will cancel the overall fields generated by these
wires. Patch cables can combine several twisted pairs in one cable ensuring that
no twists ever align. Additional foil shielding around each twisted pair will improve
the performance over long distances where cables usually pass through areas of high
electrical noise. The quality of shielding of a cable can be derived from its specified
categorisation (e.g., Cat 5, Cat 6, Cat 6a, etc.), which provides an indication of the
maximum supported transmission speed and bandwidth. The choice of the right
cable category is therefore an important consideration for the deployment of large
scale network and computing infrastructure.
The distribution of hardware inside one rack requires careful planning in order
to avoid unnecessary difficulties for its maintenance or upgrade at a later stage. For
maintenance activities, it is very important that the racks are arranged in a way that
allows a person to access them from the front and rear side.
Throughout the lifetime of a project, a larger scale upgrade of the entire ground
segment might be required. Possible reasons for this could be the obsolescence of
server and workstation hardware requiring a replacement, or the implementation
of design changes due to the addition of new functionality. Larger scale upgrades
always imply an outage of a large portion or even of the entire ground segment. The
purpose of a migration strategy is to define and describe the detailed sequence of
such an intervention, in order to minimise both the infrastructure downtime and
the impact on the service. This strategy must of course be tailored to the scope of
the upgrade and the specific design of the ground segment, and requires detailed
knowledge of the hardware and software architecture.
Two high level migration concepts are outlined below which can serve as a
starting point for a more detailed and tailored strategy. Whichever concept is
considered more suitable, it is always important to have a clear and detailed
description of both the roll-out and the roll-back sequence in place. Especially
the latter is an important reference in case major problems occur during the
migration. Having a clear plan on how to deal with potential contingencies, including
a detailed description of how to overcome them, will provide more confidence to
managers and approval boards, who have to authorise such a migration in the first place.
The prime-backup migration strategy requires the existence of two separate ground
control segments which are designed and configured to operate in a fully redundant
mode (see Fig. 13.5). This means that each of the two segments must be capable
of operating the space segment in isolation, without the need to communicate with the
other centre. This implies that both segments are connected to the remote TT&C
station network and any other external entity required to operate the satellites. For
such a setup, a possible migration strategy is described in the following four stages
which are also graphically shown in Fig. 13.5:
1. In the nominal configuration the prime and backup sites operate in the so
called prime-backup configuration, which means the prime is in charge of
satellite operations and performs a regular inter-site synchronisation of all
operationally relevant data to the backup centre. Once the readiness for migration
has been confirmed (usually by a checkpoint meeting), the planned handover3
of operations from the prime to the backup site can be initiated. This is done
following a detailed handover procedure and requires detailed advance
planning, as it might require a transfer of operational staff to the backup site.
2. Once it is confirmed that the space segment can be successfully operated from
the backup site (without the need for the prime), both sites can be configured
into standalone mode, which stops the inter-site synchronisation. Now the prime
segment can be isolated from the external network connection, which implies
an intentional loss of contact with all remote and external sites from this centre.
After successful isolation of the prime, all required upgrade activities can
be performed without any operational restrictions, which is indicated by the
transition from GCS v1.0 to v2.0 in Fig. 13.5.
3. Once the prime upgrade has been finalised, the internal interface connections
can be re-established in order to resynchronise any new data to the prime site.
As a next step, the reconnection of the external interfaces can be performed.
This might, however, not be trivial if the network equipment has been
upgraded at the prime site and is now connected to old network equipment at the
remote sites. In case backwards compatibility is not supported, a reconfiguration
3 There is also the concept of an unplanned handover which needs to be executed in an emergency
case and follows a different (contingency) procedure that allows a much faster transition.
13.6 Migration Strategies 195
4 The transmission of a test command with no impact (e.g., a ping TC) could still be considered an
interaction; even this should be avoided, as it usually implies a degradation or even an interruption
of the service it provides.
Fig. 13.6 Outline of the Bypass migration strategy steps. NET = segment network infrastructure
1. In the first step the ground segment is grouped into elements providing offline
and online functionality. The offline elements are not directly required during the
execution of a satellite contact, as they mainly contribute to the preparation of
operational products. Typical elements belonging to this category are the Flight
Dynamics Facility (see Chap. 6), the Mission Planning Facility (see Chap. 8),
the Operations Preparation Facility (see Chap. 9), or the Satellite Simulator (see
Chap. 11). The timing of their upgrade can therefore be readily coordinated
with their required product delivery times. In an extreme case, they could even
be upgraded during a running contact without any major impact, although this
should not be considered good engineering practice. In the example given
here the offline element upgrade is done at the very start of the overall migration
schedule, but it could in principle also be done at the very end. It is however
recommended to separate the upgrade of the offline elements from the online
ones, in order to reduce the risk of multiple failures occurring at the same time
and to simplify troubleshooting in case of problems.
2. In the second stage, the implementation of the bypass infrastructure is performed,
which comprises the deployment of the necessary hardware, the installation of
the required application software, the connection to the segment network, and
the proper configuration of operational data (e.g., installation of the satellite
specific TM/TC database, up-to-date PLUTO procedures, etc.). At the end of this
phase, the readiness of the bypass infrastructure to take over the critical online
functionality (e.g., the TM/TC exchange with the satellite) needs to be proven.
This could be done via the execution of a simple test contact during which the
References
1. Patterson, D. A., Gibson, G., & Katz, R. (1988). A case for redundant arrays of inexpensive
disks (RAID). In Proceedings of the 1988 ACM SIGMOD international conference on
management of data, pp. 109–116. ISBN: 0897912683, https://doi.org/10.1145/50202.50214.
2. Cohn, L. H. (2017). Cardiac surgery in the adult (5th ed.). McGraw-Hill Education-Europe.
ISBN 13 9780071844871.
Chapter 14
Virtualisation
The basic architecture of a classical (i.e., non-virtualised) computer system is shown
in Fig. 14.1. At the very bottom sits the hardware layer, which can be a rack-mounted
server or a smaller device like a workstation or a simple desktop computer. The
next higher layer is the Basic Input Output System (BIOS),1 which is the first piece
of software (or firmware) loaded into the computer memory immediately after the
system is powered on and booting. The BIOS performs hardware tests (power-on
self-tests) and initiates a boot loader from a mass storage device, which then loads
the operating system (OS). Once the OS is up and running, one or several user
applications can be launched and operated in parallel (indicated by “App-1,-2,-3” at
the top of Fig. 14.1).
A basic example of a virtualised architecture is shown in Fig. 14.2. The hardware
layer at the very bottom is again followed by the BIOS or UEFI firmware which is
loaded after the system is powered on. The next higher layer in a virtualised system
1 The BIOS has been superseded by the Unified Extensible Firmware Interface (UEFI) in most
new machines.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 199
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8_14
is however not the OS, but a hardware abstraction layer referred to as a type 1 or
bare-metal hypervisor.2 This hypervisor is able to create and run one or several
virtual machines (VMs), which can be understood as software-defined computer
systems based on a user-specified hardware profile, e.g., a defined number of CPU
cores, memory size, number of network adaptors, disk controllers, or ports. Each
VM provides the platform for the installation of an operating system (e.g., Linux,
Windows, etc.), which in turn serves as a platform to install the end user's applications.
Hypervisors provide virtualisation functions in three IT domains: computing
(CPU and memory), networking, and storage, which are briefly outlined below.
An infrastructure that implements all three components is referred to as a hyper-converged infrastructure.
2 A type 2 hypervisor, in contrast, is launched from an already installed operating system.
Virtualisation allows a large number of VMs to be configured and launched on the same
physical server, with each having a different hardware profile and even running a
different operating system. If VMs are operated in parallel, they have to share the
hardware resources of their common host server, and it is the task of the hypervisor
to properly manage the CPU and memory resources among the VMs. There is
however an important limitation: the maximum number of (virtual) processor
cores (vCPUs) of a VM must always be lower than the physical number of CPU
cores of the host machine.
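The vCPU limit described above can be sketched as a simple admission check; the profile fields and VM names below are hypothetical illustrations, not any specific hypervisor's API:

```python
from dataclasses import dataclass

@dataclass
class VMProfile:
    """User-specified hardware profile of a virtual machine (illustrative)."""
    name: str
    vcpus: int
    memory_gb: int

def admit(vm: VMProfile, host_cores: int, host_memory_gb: int) -> bool:
    """Sketch of the limitation stated in the text: a single VM must not
    request more vCPUs than the host has physical cores."""
    return vm.vcpus < host_cores and vm.memory_gb <= host_memory_gb

# A 16-core host can run an 8-vCPU VM, but not a 16-vCPU one.
print(admit(VMProfile("fds-val", 8, 32), host_cores=16, host_memory_gb=128))   # True
print(admit(VMProfile("sim-big", 16, 32), host_cores=16, host_memory_gb=128))  # False
```

Note that real hypervisors additionally overcommit CPU time across several VMs; the check above only concerns a single VM's profile.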
An operating system (OS) can only make proper use of a hardware component if
the hardware-specific driver software for that operating system is available. Such
drivers are usually provided and maintained by the hardware vendor. If an obsolete
hardware item needs to be replaced by a newer model, the new driver software
coming with it might not be supported by the old OS anymore.3 In such a case, the
obsolescence recovery will not be possible unless the OS is upgraded to a newer
version. After such an upgrade, however, some application software might not work
properly anymore, as it could depend on a set of libraries that only run on the old OS version.
After the application software with all its dependencies (e.g., other COTS software
products) has been ported, it needs to be requalified to ensure correct output and
performance. This is even more relevant for software running on a critical system
that is subject to a specific certification or accreditation process prior to its formal
use.
The use of a virtualised architecture can help to avoid such an involved software
porting process, as the hypervisor acts as an intermediate layer between the OS and
the hardware and can provide the necessary backwards compatibility to an older OS.
To give a simple example, a hypervisor running on brand-new server hardware can
be configured to run a VM with an old and obsolete OS (e.g., SUSE Linux Enterprise
9). This would not be possible without the legacy drivers that are provided
by the virtualisation layer. This setup makes it possible to operate a legacy application
on exactly the (legacy) OS it has been qualified for, but on new hardware.
This concept is referred to as the decoupling of software from hardware and avoids
the need to port application software to a newer OS version in order to benefit from
new and more performant hardware.
3 Try to install a brand-new printer on a very old computer running Windows XP!
14.1 Hyper-Converged Infrastructure 203
14.1.5 VM Management
4 To give an example, the widely used bare-metal hypervisor ESXi® of VMware, Inc. provides
vCenter® as its management centre solution [1].
5 System robustness should always be a prime objective, aiming to avoid any single point of failure
by design.
Fig. 14.4 Redundancy concepts at server level (Server-A and -B) and centre level (GCS Prime
and GCS Backup)
the fact that they are physically separated. This allows a seamless transition with no
down time in case of failure of the physical server where the primary VM is running.
14.2 Containers
The container technology uses three key features of the Linux kernel, namespaces,
cgroups, and chroot, which respectively allow processes to be isolated, resources to be fully
managed, and an appropriate level of security to be ensured [3]. Namespaces allow a
Linux system to abstract a global system resource and make it appear to the process
inside a namespace as its own isolated instance of the global resource. In other
words, namespaces can be understood as a means to provide processes with a
virtual view of their operating environment. The term cgroups6 is an abbreviation
for “control groups” and allows resources like CPU, memory, disk space,
and network bandwidth to be allocated to each container. The chroot feature can isolate namespaces
from the rest of the system and protect against attacks or interferences from other
containers on the same host.
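To make the cgroups resource allocation concrete, the cgroups-v2 CPU controller caps a container via the `cpu.max` interface file, which holds a quota and a period in microseconds. The helper below is an illustrative sketch of how such a value is derived (a container runtime normally writes this file on the operator's behalf):

```python
def cpu_max_line(cpu_limit: float, period_us: int = 100_000) -> str:
    """Return the cgroups-v2 'cpu.max' value ("<quota> <period>", in
    microseconds) that caps a container at `cpu_limit` CPU cores.
    Illustrative only; real runtimes handle this internally."""
    quota_us = int(cpu_limit * period_us)
    return f"{quota_us} {period_us}"

# Cap a container at 1.5 CPU cores:
print(cpu_max_line(1.5))  # 150000 100000
```

With a quota of 150 000 µs per 100 000 µs period, the container's processes may consume at most 1.5 CPU-seconds of time per wall-clock second, summed over all cores.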
An important difference between a Linux kernel running several containers and one
running several processes (in the classical multi-tasking concept) is the degree of isolation:
code running in one container cannot accidentally access resources or interfere
with processes running in another container through the kernel. Another important
concept of container systems is that they are immutable (unchangeable).
This means that if a container-based application's code is modified, no (physical
or virtual) machine needs to be taken down, updated, and rebooted. Instead,
a new container image with the new code is generated and “pushed” out to the
cluster. This is an important aspect that supports modern software development
processes that build on continuous integration and delivery (CI/CD) and continuous
deployment as main cornerstones.
6 cgroups was originally developed by Google engineers Paul Menage and Rohit Seth under the
name “process containers” and mainlined into the Linux kernel 2.6.24 in 2008 [5].
One widely used software suite to build, manage, and distribute container-based
software is Docker® [6], which is also the basis for open industry standards like
the container runtime specification (runtime-spec [7]) and the container image
specification (image-spec [8]) defined by the Open Container Initiative [9].
Containers are today a very important tool for the development and operations
of cloud-native applications7 that mainly build on a microservices architectural
approach. This approach defines the functions of an application as delivered services which
can be built and deployed independently from other functions or services. This also
means that individual services can operate (and fail) without having a negative
impact on others. Such architectures can require the deployment of hundreds or
even thousands of containers, which makes their management challenging. For this
purpose, container orchestration tools were invented, which support the following
functions [10]:
• container provision and deployment,
• container configuration and scheduling,
• availability management and resource allocation (e.g., fitting containers onto
nodes),
• container scaling (i.e., automated rollouts and rollbacks) based on balancing
workloads,
• network load balancing and traffic routing,
• container health monitoring and failover management (self-healing),
• application configuration based on the container in which they will run,
• ensuring secure container interactions.
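The self-healing and scaling functions listed above can be pictured as a loop that continuously reconciles a desired state with the observed state of the cluster. The following toy sketch (service names are made up) shows a single reconciliation step, not any real orchestrator's API:

```python
def reconcile(desired: dict, observed: dict) -> list:
    """One reconciliation step in the style of a container orchestrator:
    compare the desired replica count per service with what is actually
    running and emit start/stop actions (self-healing, scaling)."""
    actions = []
    for service, want in desired.items():
        have = observed.get(service, 0)
        if have < want:
            actions += [("start", service)] * (want - have)   # heal / scale up
        elif have > want:
            actions += [("stop", service)] * (have - want)    # scale down
    return actions

# One tm-processor replica has failed; the loop restarts it.
print(reconcile({"tm-processor": 3, "tc-gateway": 2},
                {"tm-processor": 2, "tc-gateway": 2}))
# [('start', 'tm-processor')]
```

Real orchestrators run such control loops continuously against the cluster's live state, which is what turns a declarative deployment description into automated rollouts, rollbacks, and failover.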
There are several container orchestration tools that can be used for container
lifecycle management; some popular options are Docker® in swarm mode [11],
Apache Mesos® [12], or Kubernetes® (also referred to as K8s®) [13].
Kubernetes is an open-source container orchestration tool that was originally
developed and designed by engineers at Google and donated to the Cloud Native
Computing Foundation in 2016 [14]. The Kubernetes software consists of the
following main components (see also Fig. 14.6) [15]:
• Nodes (Minions) are machines (either physical or VMs) on which the Kubernetes
software is installed. A node is a worker machine (bare-metal server, on-premises
7 This term refers to software applications that are designed and tested in a way to allow simple
Fig. 14.6 The main components of a Kubernetes® cluster (refer to text for more detailed
description)
References
The most common picture we associate with spacecraft operations is that of engineers
in a control room sitting in front of a console, as shown in Fig. 15.1, where they
analyse new telemetry received from orbit, prepare telecommand sequences for
future uplink, or talk to astronauts on the voice loop in case of a manned mission.
While these activities are certainly the most prominent tasks in satellite operations,
there are also many others which are less noticeable but still play an important role.
The aim of this chapter is to provide an overview of all the main operational
tasks and responsibilities of ground operations engineers. These can be grouped
into three main categories according to the project phase they are relevant for. The
first set of activities are described in Sect. 15.1 and need to be performed during
the early project phase when the ground and space segment requirements need to
be defined. The second category is relevant during the integration, verification, and
validation of the ground and space segments (see Sect. 15.2), and the third one after
launch, once in-orbit flight operation takes place (see Sect. 15.3).
As the overall operations concept is usually quite project-specific and will also
depend on the type and background of the operator,1 the operational processes and
corresponding terminology might also vary significantly. It is therefore worth
consulting existing standards like ECSS-E-ST-70C [1], titled Ground Systems &
Operations, which can be easily tailored to the applicable project size and needs.
The terminology and description presented here also follow this standard.
1 The term operator in this context refers to the entity that is contractually responsible for providing
all operations tasks as a service to its customer. A more precise term would therefore be operations
service provider, which could be a public (space) agency or a commercial company.
212 15 Operations
Fig. 15.1 View of the flight directors console in the Houston Mission Control Center (Johnson
Space Center) during the Gemini V flight on August 21, 1965. Seated at the console are Eugene F.
Kranz (foreground) and Dr. Christopher C. Kraft Jr. (background). Standing in front of the console
are Dr. Charles Berry (left), an unidentified man in the centre, and astronaut Elliot M. See. Image
Credit: NASA [2]
15.1.1 Requirements
In the early phases of a space project the main focus is the definition and
consolidation of the mission requirements, which translate the high-level needs
and expectations of the end user or customer into verifiable requirement statements (see also
Chap. 2). These are documented in the project requirements that are the basis for
the derivation of lower-level requirements for the space and ground segments. An
important activity for the operator at this stage is to review these requirements and
assess the operational feasibility of the mission, as a project with an unrealistic
or even impossible operational concept will never be successful. Furthermore, this
review needs to encompass key design documents of the space and ground segment
like the satellite user manual, the detailed design definition files, performance budget
files, and risk registers. In addition, the operator needs to support the definition
of ground segment internal and external interfaces to ensure that all information
required to operate the satellite is properly defined and provided at the times needed.
The operator needs to document the specific needs and a detailed description of
how the mission can and will be operated in the Mission Operations Concept
Document (MOCD), which also serves as an additional source to derive design
requirements for the space and ground segments. Once the requirements baseline
has been established, the operator should participate in the requirements review
milestone (refer to SRR in Chap. 2) and focus on the following operational key
areas in the ground control segment:
• Functionality that allows the operator to evaluate the satellite performance and investigate
anomalies based on the analysis of all received telemetry.
• Requirements specific to algorithms and tools needed for mission planning and
contact scheduling. Special attention should be given to efficiency and perfor-
mance requirements (e.g., efficient use of ground station resources, consideration
of all relevant planning constraints, planning of backup contacts, generation time
of mission plans and schedules).
• Completeness in the specification of flight-dynamics-related needs, especially
the ones that go beyond the standard routine functions (e.g., advanced mission
analysis tools, manoeuvre planning optimisers, launch window analysis, etc.).
• Specification of the ground segment automation functionality for satellite mon-
itoring and commanding. The defined concept for automated contact handling
must be able to support the operations concept and consider the foreseen flight
control team staffing profiles (e.g., FCT main and shift team sizing). In case of
incompatibilities, additional FCT staffing might be needed to operate the mission
which might generate additional costs that exceed the foreseen budget in the
mission concept and return of investment plans.
• Ensure the correct specification of any functionality required to operate and
maintain all remote assets that cannot be easily accessed by operators and
maintainers. A typical example would be an unmanned remote TT&C station
that is located on a different continent.
• Medium Earth orbits (MEO) are usually used by GNSS-type satellites as
they provide favourable conditions for navigation receivers on ground (e.g., the
simultaneous visibility of a minimum of four satellites to achieve a position fix).
• Geostationary orbits (GEO), the circular, equatorial special case of geo-synchronous
orbits, provide a constant sub-satellite point on the Earth's surface, which is required
by broadcasting (TV or telecommunication) or weather satellites.
• Interplanetary trajectories are quite complex and often use gravity assists from
planets to reach a target object. The definition of such trajectories is very
project-specific and requires an extensive amount of effort for design and
optimisation.
Another aspect that needs to be analysed is the TT&C ground station network,
which needs to be properly dimensioned with respect to the number of antennas
and their geographic distribution. Both impact the possible contact frequency
and the contact durations, which in turn determine the data volume that can
be downlinked (e.g., per orbit revolution). This must be compatible with the data
volume that is accumulated by the satellite subsystems and the capacity of the onboard
storage that can be used to buffer telemetry.
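The per-orbit balance between generated and downlinkable data volume can be sketched as a first-order sizing check; all numbers below are illustrative, not taken from the book:

```python
def downlink_margin(orbit_period_s: float, contact_s_per_orbit: float,
                    downlink_bps: float, generation_bps: float) -> float:
    """Per-orbit data balance in bits: volume that can be downlinked during
    the contacts of one revolution minus the volume generated onboard.
    A negative result means the onboard storage slowly fills up."""
    generated = generation_bps * orbit_period_s
    downlinked = downlink_bps * contact_s_per_orbit
    return downlinked - generated

# Example LEO satellite: 95 min orbit, 10 min of contact per orbit,
# 2 Mbit/s downlink rate, 100 kbit/s average telemetry generation rate.
margin = downlink_margin(95 * 60, 10 * 60, 2e6, 100e3)
print(margin > 0)  # True: positive margin, the buffer does not fill up
```

A real sizing analysis would of course add protocol overhead, retransmissions, and worst-case contact outages on top of this simple balance.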
The geographic distribution of the antennas in a ground station network impacts
the achievable orbit determination accuracy which is derived from the radiometric
measurements (see Chap. 6). The required size of the ground station network will
usually differ for a LEOP phase and during routine operations and both cases need
to be considered separately.
The mission analysis should also comprise the estimation of the maximum eclipse
durations that need to be supported by the satellite. This determines the required
size of the solar panels and the battery storage capacity. Environmental impacts
like the expected accumulated radiation dose determine requirements for additional
satellite shielding or the use of radiation-hardened electronic components.
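For a circular orbit, a first estimate of the worst-case eclipse duration follows from the simple cylindrical Earth-shadow model, with the Sun in the orbit plane. The sketch below is only a first-order sizing aid; a real mission analysis uses a conical shadow and the actual Sun geometry over the year:

```python
import math

def max_eclipse_s(altitude_km: float) -> float:
    """Worst-case eclipse duration (seconds) for a circular orbit using a
    cylindrical Earth-shadow model with the Sun in the orbit plane."""
    mu = 398_600.4418      # km^3/s^2, Earth's gravitational parameter
    r_earth = 6_378.137    # km, Earth's equatorial radius
    r = r_earth + altitude_km
    period = 2.0 * math.pi * math.sqrt(r**3 / mu)   # orbital period
    half_angle = math.asin(r_earth / r)             # half of the eclipse arc
    return period * half_angle / math.pi

print(round(max_eclipse_s(500.0) / 60, 1))  # about 36 minutes for a 500 km LEO
```

The resulting eclipse fraction (here roughly 38 % of the orbit) drives the battery depth-of-discharge and hence the battery and solar panel sizing mentioned above.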
The launch window and orbit injection analysis is used to select the correct
launch site, epoch, and azimuth. The estimation of the overall Δv budget needed
for the entire mission is required to properly size the fuel tanks. The budget needs
to consider the needs to reach the operational target orbit during LEOP and to
maintain it throughout the entire projected service time according to the station
keeping requirements. Additional margins need to be considered that allow the
satellite to be de-orbited or placed into a disposal orbit at the end of its lifetime.
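The link between the Δv budget and the fuel tank sizing is the Tsiolkovsky rocket equation, m_p = m_dry·(e^{Δv/(Isp·g0)} − 1). The numbers in the sketch below are illustrative, not from the book:

```python
import math

def propellant_mass(dry_mass_kg: float, delta_v_ms: float, isp_s: float,
                    margin: float = 0.1) -> float:
    """Propellant mass needed for a total mission Δv budget, via the
    Tsiolkovsky rocket equation, with a flat margin applied to Δv
    (e.g., covering de-orbiting at end of life)."""
    g0 = 9.80665                        # m/s^2, standard gravity
    dv = delta_v_ms * (1.0 + margin)    # Δv budget including margin
    mass_ratio = math.exp(dv / (isp_s * g0))
    return dry_mass_kg * (mass_ratio - 1.0)

# 1000 kg dry mass, 300 m/s mission Δv budget, 220 s hydrazine thruster:
print(round(propellant_mass(1000.0, 300.0, 220.0), 1))  # ≈ 165 kg
```

The computed propellant mass, plus residuals and pressurant, then drives the tank volume selection.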
Once the space and ground segment have passed their design phase and reached
their implementation stage, the operator must shift focus to the development of
operational products which need to be ready (and validated) well in advance of
the planned launch date. A very important product is the complete set of flight
operations procedures (FOPs) that describe in detail the steps to be performed by
ground operators in order to complete a specific satellite task. These procedures also
provide a detailed description of key activities, their prerequisites or conditions,2 and
“GO/NOGO” criteria for critical commands.
Fig. 15.2 Example of the format and contents of a very simplified flight operations procedure
(FOP) for the update of the onboard orbit propagator
In human space flight, where astronauts are involved and can interact with the
ground crew, such FOPs could in theory also be in free-text format. In unmanned
missions they will usually have a clearly defined tabular format and are generated
using specialised software products that host the satellite TM/TC database for cross-referencing
and consistency checks.3 Well-written FOPs should as a minimum
contain the following inputs (see also the example shown in Fig. 15.2):
• The name of the procedure, a reference or short acronym, the version number,
and a change record.
• A detailed description of the objective that is supposed to be achieved.
• A list of all conditions and constraints that exist as a prerequisite for its execution.
• The required satellite configuration at the start of the procedure (e.g., which
system mode, attitude mode, payload mode, etc.).
• A tabular list with the detailed steps including the exact references to telemetry
parameters (mnemonic) and telecommands (see also recommendations in [3]).
2 Examples could be specific eclipse conditions, Sun or Moon presence in a sensor FOV, or the
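The minimum FOP contents listed above can be mirrored in a small data structure. All field names, mnemonics, and the sample procedure below are hypothetical illustrations, not the book's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class FopStep:
    """One row of the tabular procedure body (illustrative structure)."""
    number: int
    action: str
    telecommand: str = ""   # TC mnemonic, if the step sends a command
    check_tm: str = ""      # TM mnemonic to verify afterwards

@dataclass
class FlightOperationsProcedure:
    """Minimal FOP skeleton; a real FOP lives in a dedicated tool that is
    cross-linked to the satellite TM/TC database for consistency checks."""
    name: str
    reference: str
    version: str
    objective: str
    preconditions: list = field(default_factory=list)
    initial_configuration: str = ""
    steps: list = field(default_factory=list)

fop = FlightOperationsProcedure(
    name="Update of the onboard orbit propagator",
    reference="FOP-SYS-0042",   # hypothetical reference acronym
    version="1.2",
    objective="Load a new orbit state vector into the OBSW propagator.",
    preconditions=["Satellite in nominal system mode",
                   "Ground contact established"],
    initial_configuration="Platform nominal, payload idle",
    steps=[FopStep(1, "Enable propagator update mode", telecommand="TC_OOP_ENA"),
           FopStep(2, "Verify mode flag", check_tm="TM_OOP_MODE")],
)
print(fop.reference, len(fop.steps))  # FOP-SYS-0042 2
```

Keeping the procedure machine-readable like this is what enables the automated cross-referencing against the TM/TC database mentioned in the text.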
FOPs are usually grouped into different categories, which is also reflected in
their short reference or acronym. The grouping can either consider the level
of satellite activity (e.g., system procedures versus subsystem procedures) or the
mission phase they are used for (e.g., LEOP, routine, or contingency operations).
This categorisation also helps in organising the operator training and allows the
formation of specialised sub-teams that are dedicated to a specific type of activity.
As an example, a shift team that is meant to perform overnight and weekend
satellite monitoring need not necessarily be familiar with the more complex tasks
required during a LEOP or a special operations phase.
The generation of FOPs requires a very detailed knowledge of the satellite and
its avionics software. An initial set should therefore always come directly from the
satellite manufacturer who knows the spacecraft very well and can therefore best
describe the required satellite configuration and applicable constraints to perform
a satellite activity. Additional information and recommendations that might be
relevant for commanding can also be added. The operator has to review and validate
this initial set of procedures and potentially even extend them with ground-segment-specific
information that was not known to the satellite manufacturer.
The FOP validation must be performed on a representative ground control
segment (VAL chain) which is either connected to a satellite simulator (running the
correct version of OBSW as described in Chap. 11) or to the actual satellite, if still
located at its AIT site or a test centre. The detailed scope of the validation campaign
must be clearly defined and documented as part of an operational validation plan,
which will ensure that all FOPs (and their correct versions) have been properly
validated and are ready to be used in flight.
The development and validation of ground operations procedures or GOPs is
another important task the operator has to perform at this stage. These procedures
define in detail all the required steps at ground segment level in order to generate,
import, or export a specific product. Operational GOPs are usually derived from the
software operations manuals provided by the element manufacturers and need to
be tailored to the specific operational needs. Examples of such operational GOPs
could be the generation of an event file, a set of manoeuvre command parameters,
or a mission plan. A different type of GOPs are the maintenance procedures
which describe the detailed steps to keep the ground segment configuration up
to date. Examples here could be the update of an anti-virus definition file, the
alignment of a leap second offset value required for correct time conversion (see
Appendix B), or even hardware specific tasks like the replacement of a faulty
hard disk, RAID controller, or even an entire server. A specialised subset are
the preventive maintenance procedures, which define and detail specific checks or
activities to be performed on a regular basis in order to reduce the wear-out of certain
components in the segment and prolong the lifetime of hardware (e.g., checking the
cooling fans of server racks or the correct level of grease inside the gear boxes of a
TT&C antenna system).
The formation and training of all operations staff needs to be done during this
phase in order to be ready for the day of launch. The following team structure is
usually adopted (see Fig. 15.3):
15.2 Procedure Validation and Training 217
Fig. 15.3 Possible structure of a spacecraft operations teams according to ECSS-E-ST-70C [1, 3].
SOM = Spacecraft Operations Manager, OBDH = Onboard Data Handling, AOCS = Attitude and
Orbit Control System
• The mission or flight control team (FCT) is in charge of the overall control of
the space segment and is headed by the spacecraft operations manager (SOM)
and the flight director (FD). All major subsystems of the spacecraft should be
represented as dedicated console positions in the control room and must therefore
be manned by a subsystem specialist. Having dedicated operators to monitor
each of the subsystems will lower the risk of overlooking an out-of-limit telemetry
value and also ensures that the adequate level of expertise is present in case
an anomaly occurs that requires a deeper understanding of a specific subject
matter. Another important role in the FCT is the command operator or spacecraft
controller (SPACON), who is in charge of operating the Spacecraft Control Facility
(see Chap. 7) in order to load and release all the TC sequences to the satellite. It
is important that only one role in the entire FCT is allowed to perform this highly
critical task, as this avoids any uncontrolled or contradictory TC transmission
to the satellite.
• The flight dynamics (FD) team is responsible for performing all relevant activities
to maintain an up-to-date knowledge of the orbit and attitude and
to generate all orbit-related products (e.g., event files, antenna pointing, orbit
predictions, etc.). The FD team is also responsible for generating the relevant
telecommand parameters for manoeuvres or orbit updates (OOP) that need to
be provided to the FCT for transmission to the satellite.
• The ground data systems (GDS) team has to coordinate the booking and
scheduling of TT&C stations. The infrastructure provided for this is referred to
as the network operations centre or NOC and is of special importance during
LEOP phases when additional ground station time has to be rented from external
The in-flight phase can be subdivided into the Launch and Early Orbit Phase
(LEOP), the commissioning, the routine, and the disposal sub-phases. LEOP is by
far the most demanding undertaking, as it requires a number of highly complex and
risky activities to be performed, as depicted in the timeline in Fig. 15.4. It starts with
the launch vehicle lifting off from a spaceport, going straight into the ascent phase
during which a launch-vehicle-dependent number of stages are separated after their
burn-out. The last one is the upper stage, which has to deliver the satellite into its
injection orbit. Once arrived, the satellite is separated from the launch dispenser
and activates its RF transponder. The first critical task for the operations team is to
establish the very first contact from ground using the pointing information based
on the expected injection location. In case of a launch anomaly, the actual injection
point might however deviate from the expected one, which could imply an inaccurate
antenna pointing, causing the first contact to fail. In this worst case scenario, a search
campaign with several ground antennas must be performed. If this is not successful,
passive radar tracking might even be needed.
4 In case an urgent software fix is needed, a special process for the development and installation of
Fig. 15.4 Important activities during the Launch and Early Orbit Phase (LEOP)
Once the ground station link could be established, the first radiometric data can
be gathered and used for a first orbit
determination to obtain more accurate pointing vectors for subsequent contacts. From
the first received housekeeping telemetry dump an early initial check-up of all major
satellite subsystems can be performed, which allows the operators to determine whether
they have survived the launch and are operating nominally.
As the stability of the radio link also depends on the satellite orientation, it is
important to gain a stable attitude as soon as possible. During the launch phase and
right after injection the satellite depends on battery power with only limited capacity.
The immediate deployment of the solar panels and their proper orientation to the
Sun is therefore one of the first tasks to be performed. A favourable first orientation
is the so-called Sun pointing attitude, in which the body axis that is perpendicular
to the solar panel is pointed into the direction of the Sun. For stability reasons, the
entire satellite body is then rotated around that Sun pointing axis.
After the batteries have been sufficiently charged and the first check-ups of the
main subsystems finalised, the satellite can be transitioned into an Earth pointing
orientation.5 As the satellite is usually not directly injected into its target orbit,6 a
set of orbit change manoeuvres must be performed in order to reach its operational
orbit. Accurate orbit correction requires radiometric tracking of the satellite before
and after each orbit manoeuvre in order to estimate the burn execution error and
obtain an up-to-date orbit vector. This requires the regular scheduling of ranging
campaigns during the manoeuvre phase, which demands more frequent contacts
from the ground station TT&C network.

5 This is of course only applicable for satellites with a payload that needs to be oriented towards
the Earth's surface, which is not the case for interplanetary spacecraft or space telescopes.
6 This allows sufficient distance to be kept between the operational orbit and the launcher stage,
The LEOP phase ends once the satellite has reached its designated orbit and
operates in a stable configuration. The next step is the commissioning phase,
during which a more thorough in-flight validation of both platform and payload is
performed. Depending on the complexity of this activity, some projects might even
distinguish between platform and payload commissioning phases. One important
aspect to be tested is the proper functioning of component redundancies, as satellite
manufacturers usually build in hot or cold redundancies for critical subsystems.
Hot-redundant devices have a backup component that continuously runs in
parallel to allow a very fast hand-over in case of failure. Cold-redundant devices,
in contrast, need to be powered on and "booted" if the prime fails, which implies a
certain latency during the switching phase. The testing of cold-redundant devices
is riskier, as the state of the device is not known and could have deteriorated
after exposure to the harsh launch environment. Despite this risk, the
testing is important in order to gain trust in the full availability of all redundancies,
which might be required at a later stage in the mission. Payload commissioning
can also comprise calibration campaigns of complex instruments or the accurate
alignment of the main antenna beam, which might require additional expert support
and specialised ground equipment during that phase.7
Once the commissioning phase is finished, the routine phase starts, which is
usually the longest phase of the entire satellite lifetime. During this phase the
satellite and its payload have to provide their service to the end user. For commercial
projects this is also the time to reach the break-even point or return on investment
and achieve the projected profit. Depending on the mission profile, routine contacts
are usually planned once per orbit in order to dump the house-keeping telemetry,
uplink all required telecommands, and perform ranging measurements. The routine
phase can be supported by a much smaller operations team with emphasis on long
term monitoring and trend analysis of the most relevant subsystem telemetry.
Regular orbit determination also needs to be performed by the flight dynamics team in
order to monitor the satellite's deviation from the reference trajectory and perform
station-keeping manoeuvres to correct it. Special manoeuvres could also be required
in case the satellite risks colliding with another space object.
The end-of-life of a satellite can be determined either by the depletion of its fuel
tank or by a major malfunction of a key subsystem that cannot be replaced by any
onboard redundancy. Either of these events implies that the service to the end user
can no longer be maintained and the satellite needs to be taken out of service. This
marks the start of the disposal phase which should ensure that the satellite is either
removed from space8 or transferred into a disposal orbit where it does not disturb
or cause any danger for other satellites in service. For geostationary satellites, the
7 This sub-phase is also referred to as in-orbit testing or IOT.
8 Satellite de-orbiting is mainly an option for orbit altitudes low enough to be exposed to sufficient
atmospheric drag.
removal of the old satellite from the operational slot is also commercially relevant
in order to make room for its replacement and allow service continuity. For satellites
that must remain in orbit, the depletion of all remaining fuel in the tanks and of any latent
energy reservoirs is very important and today even mandatory [4]. This satellite
passivation avoids the risk of a later break-up or explosion that would contribute
to a further increase of the space debris population, which already poses a major
concern for safe satellite operations today. The chosen disposal orbit must also fulfil
adequate stability requirements that ensure it does not evolve into an orbit that could
cause a collision risk for operational satellites in the near and even far future.
References
1. European Cooperation for Space Standardization. (2008). Space engineering, ground systems
and operations. ECSS. ECSS-E-ST-70C.
2. National Aeronautics and Space Administration. (2019). In S. Loff (Ed.) Image source:
https://www.nasa.gov/image-feature/kranz-on-console-during-gemini-v-flight, last updated Jul
22, 2019.
3. Uhlig, T., Sellmaier, F., & Schmidhuber, M. (2015). Spacecraft operations. Springer. https://doi.
org/10.1007/978-3-7091-1803-0_4.
4. United Nations Office for Outer Space Affairs. (2010). Space debris mitigation guidelines of
the committee on the peaceful uses of outer space. https://www.unoosa.org/pdf/publications/
st_space_49E.pdf Accessed Mar 23, 2022.
Chapter 16
Cyber Security
Cyber crime has significantly increased in the past decade and is today considered
a major threat for private IT users, companies, and even more so for critical public
infrastructure like power plants, electric distribution networks, hospitals, airports,
traffic management systems, or military facilities. Cyber attacks are a very peculiar
kind of threat because they require only very basic IT equipment and a person's
skills, but can still cause a significant amount of damage even to a large and
complex infrastructure. Attacks can be launched at any time by an individual or a
group of people, independent of age and location. Even if an attack has already
been launched, it might take time before the victim realises the infiltration, and
meanwhile the target computer could be transformed into a "zombie" machine
that itself initiates an attack. In such cases, it might even be extremely difficult
to trace back the real aggressor and hold the right person responsible for the
caused damage. To better understand potential sources of cyber threats and the
corresponding risk, it is worth exploring the motivation of a cyber attack, which
can fall into one or even a combination of the following categories:
• Monetary driven attacks aim to gain access to bank accounts or credit cards
and perform illegal transactions to a rogue account. Alternatively, an attack can
encrypt data on the hard disk of a target host, with the initiator subsequently
blackmailing the victim into making a payment to regain access to the data (ransomware).
In both cases, the dominant motivation is financial profit.
• Espionage driven attacks in contrast try to remain fully invisible to the victim
and stay undiscovered for as long as possible. During this time, all kinds of
sensitive data are illegally transferred from the infected host to the attacker's
machine. Potential target data can be the intellectual property of a company
(industrial espionage) or military secrets. The infiltrated machine could also be used as
an eavesdropping device to transfer keystrokes or audio and video recordings via
a connected webcam.
• Sabotage driven attacks aim to disturb or even destroy the operational functionality
of an application, a target host, or even an entire facility. Especially industrial
systems such as refineries, trains, clean and waste water systems, electricity
plants, and factories, which are usually monitored and controlled by so-called
Supervisory Control and Data Acquisition (SCADA) systems,1 are likely
targets for this kind of attack. Well-known examples are the 2010 Stuxnet cyber
attack on Iran's nuclear fuel enrichment plant in Natanz or the 2015 attack on the
Ukrainian power grid [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8_16
Even if one might argue that the motivation of an attack is of secondary
relevance, it is still worth considering, as it might point to the person or
entity who launched it. Monetary driven cyber crime will usually be committed
by a single entity called a black hat hacker,2 whose primary targets are privately
owned IT systems, which are usually less protected and whose owners are more
susceptible to handing out sensitive credit card information (e.g., responding to
phishing emails) or paying the hackers' ransom in order to regain data access.
Industrial or military espionage and sabotage driven attacks are more likely to
be launched by governments and their respective intelligence agencies, who are
interested in acquiring a very specific type of information that is of high strategic
value to them. These entities are therefore ready (and able) to invest a considerable
amount of effort (and money) to carefully design, plan, and launch a highly complex
orchestrated attack that could exploit not only one but an entire range of zero-day
vulnerabilities in order to be successful.
The ground control segment of every space project represents a complex
IT infrastructure that comprises a substantial number of servers, workstations,
operating systems, application software packages, and interface protocols in a single
location. Especially for projects with a high public profile and strategic relevance
(both for civilian and military purposes), espionage and sabotage driven attacks
must be considered a likely threat scenario. An appropriate level of cyber
hardening therefore needs to be an integral part of the ground segment design and
must also be considered in the maintenance plan throughout the entire operational
lifetime.
The detailed methodology and structure of a cyber attack can be very complex,
especially if targeted at complex and well protected systems. It is therefore not
possible to provide a generic and at the same time accurate description of the
1 SCADA systems are based on sensors and actuators that interact in a hierarchical way with
field control units located either remotely and/or in inhospitable environments. They usually use
Programmable Logic Controllers (PLC) that communicate via so-called fieldbus application
protocols like Modbus, which in current versions have only few security features implemented [1].
2 This term is used in contrast to white hat hackers, representing computer specialists who
exercise hacking with the aim of discovering vulnerabilities of a system in order to increase its security.
16.1 Attack Vectors
Password hacking is one of the most efficient methods to gain access to a system's
data and its applications. A system usually stores all registered passwords in a
non-readable format generated with a hash function. A hash algorithm maps (input)
data of arbitrary size into a fixed-length hash code, which is supposed to be collision
resistant. This means that there is a very low probability of generating the same
hash code from two different input values. The hashing function only works in
one direction (i.e., clear text into hash code), which makes it impossible to
reconstruct the input data from the generated hash code. The only way to unveil
hashed passwords is therefore to guess a large number of input values and
compare their hashed outputs to the values saved on the system. This can either
be done using a trial and error technique, referred to as a brute force attack, or
with more elaborate techniques that use password dictionaries or rainbow tables.
An important countermeasure to password hacking is the increase of password
complexity and a policy that enforces regular password modifications.
Back door attacks are designed to exploit either existing or newly created entry
points in computer hardware, an operating system, or application software. The term
backdoor in this context describes any means to access a system while bypassing the
nominal authentication process. Existing backdoors in a system could have been left
by vendors for maintenance reasons; these usually pose a low risk as they are widely
known and therefore easy to secure (e.g., admin accounts with default passwords).
More problematic are secret backdoors, which are hard to detect as they could be
embedded in proprietary software or firmware whose source code is not available
to the public. Backdoors can also be actively created by malware like Trojans or
worms, which use them to infiltrate the system with additional malware that could
spy out passwords, steal sensitive data, or even encrypt data storage in order to ask
for ransom (ransomware).
The Distributed Denial of Service (DDoS) attack uses a large number of infected
hosts in order to flood a target system with a very high number of requests that
fill up either the available network bandwidth or the system’s CPU or memory
resources. The ultimate goal of such an attack is to make the target unresponsive
for any legitimate user or service. Apart from the performance impact, there is
also the risk that the target system itself gets infected with malware that turns it into an
attack "handler" contributing to the overall DDoS attack (sometimes even without
the knowledge of the system's owner).
16.1.4 Man-in-the-Middle
The root account is an account with very high privileges in Unix and Linux based
systems and is therefore a high-risk target for illegal access. Hackers can use rootkit
software packages that are designed to gain complete control over a target system
or the network. It is therefore considered good practice to disable any root login
accounts that are part of the default configuration of an OS when it is initially being
deployed. An additional precaution is to adopt the principle of least privilege, which
dictates that users should only be attributed the minimum set of rights necessary to
perform the required work on the system.
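As a sketch of how such a lockdown can be audited, the following Python fragment checks OpenSSH server configuration text for an explicit `PermitRootLogin no` directive. The configuration content is an invented example; a real audit would read `/etc/ssh/sshd_config` and also handle included files and defaults.

```python
def root_login_disabled(sshd_config: str) -> bool:
    """Return True only if the config explicitly sets 'PermitRootLogin no'."""
    for line in sshd_config.splitlines():
        stripped = line.strip()
        if stripped.startswith("PermitRootLogin"):
            return stripped.split()[1].lower() == "no"
    return False  # no explicit setting: treat as not locked down

# Illustrative configuration content:
example = """
Port 22
PermitRootLogin no
PasswordAuthentication yes
"""
print(root_login_disabled(example))  # → True
```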
16.1.6 Phishing
Phishing attacks are designed to extract sensitive information from a person
(e.g., passwords or other personally identifiable information) without the victim
being aware of handing that information to an unwanted source. This is
The watering hole and drive-by download are attack vectors that infect legitimate
websites with malicious scripts, which are designed to take advantage of specific
vulnerabilities of the web browser in use. Especially in the case of a drive-by
download, malicious code is installed on the target computer. The infected websites
are usually carefully chosen based on the attacker's analysis of the user group
frequently accessing them. This is also the reason for the name "watering
hole", which refers to predators in nature that wait for their
prey near a watering hole rather than tracking it over long distances [3, 4].
16.1.8 Formjacking
16.1.9 Malware
Cyber attacks exploit at least one but most of the time a number of so-called zero-day
vulnerabilities. These can be understood as design flaws that are present in a
hardware item, operating system version, application, or communication protocol
and that are not yet known to the vendor or programmer and can therefore be used
for an attack. The time span between the discovery of a vulnerability and its fix
by the vendor or programmer is called the vulnerability window. The Common
Vulnerabilities and Exposures (CVE) numbering scheme provides a unique and
common identifier for publicly known information-security vulnerabilities in
publicly released software packages [5]. If a new vulnerability is unveiled, a CVE
number is assigned by a so-called CVE Numbering Authority (CNA), which
follows predefined rules and requirements of the numbering scheme and provides
information on the affected products, the type of vulnerability, the root cause, and
the possible impact. A new record is then added to the CVE database, which is publicly
available (see the official CVE search list [6]). The CNA has to make a reasonable effort
to notify the maintainer of the code so corrective measures can be taken. A fix for a
CVE (or a number of CVEs) will usually be provided through the development and
release of a software patch, which is made available to the end users via the vendor's
website.
16.2 The Attack Surface

The various areas of a system that are potentially susceptible to a cyber attack
form the system's attack surface, which is schematically depicted as the large
sphere in Fig. 16.1. The arrows pointing from outside to the surface represent the
attack vectors which can breach and infiltrate the system and refer to the examples
described in Sect. 16.1.

Fig. 16.1 Attack vectors (arrows) and attack surface (large sphere) of an IT system. The aim
of cyber security is to reduce the attack surface by identifying and reducing the number of
vulnerabilities represented by the small spheres

The task of the cyber security officer is to continuously
measure a system's attack surface and try to reduce it as much as possible. A
possible measurement metric is based on the identification of all system resources or
weaknesses (shown as small spheres in Fig. 16.1), and on determining their respective
contribution to the overall attack surface. The size of a single contribution can be
understood as the product of the likelihood that a specific resource could be exploited
and the potential damage that could be caused to the system (refer to the detailed
explanation in [7]). In other words, a system with a large measured attack surface
implies (1) a higher likelihood of an attack exploiting the existing vulnerabilities
in the system (with less effort), and (2) the ability for an attacker to cause more
significant damage to its target.
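The likelihood-times-damage metric can be sketched as follows; the inventory entries and their scores are invented for illustration and are not taken from [7]:

```python
# Hypothetical vulnerability inventory: each entry carries an estimated
# exploitation likelihood (0..1) and a damage potential score (1..10).
vulnerabilities = {
    "open network port":     (0.6, 4),
    "weak password policy":  (0.5, 7),
    "obsolete COTS version": (0.3, 9),
    "unused user accounts":  (0.2, 5),
}

def attack_surface(inventory: dict[str, tuple[float, int]]) -> float:
    """Sum the likelihood x damage contributions of all weaknesses."""
    return sum(likelihood * damage for likelihood, damage in inventory.values())

print(round(attack_surface(vulnerabilities), 2))  # → 9.6

# Mitigating a weakness (e.g., enforcing a strict password policy)
# lowers its likelihood and thereby shrinks the overall surface.
vulnerabilities["weak password policy"] = (0.1, 7)
print(round(attack_surface(vulnerabilities), 2))  # → 6.8
```

The same bookkeeping also shows where mitigation effort pays off most: the weakness with the largest single contribution is the natural first candidate.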
Looking at the typical ground segment architecture as described in this book,
the following system resources are candidates for attacks and therefore need to be
considered for the attack surface metric:3
16.2.2 OS Vulnerabilities
The operating systems (OS) deployed on servers and clients should always be an
area of attention and must therefore be continuously monitored for vulnerabilities
and for the corresponding updates or patches published by the vendor to fix critical
cyber security issues. As such upgrades might become available quite frequently, a good
balance between their deployment and ongoing operational activities must be found.
This must also comprise less visible IT components deployed at remote sites (e.g.,
the baseband modem or the antenna control unit of a TT&C station), as these devices
might be subject to a looser access control policy and could therefore be more easily
accessible for malware infiltration attempts.

3 The list provided here is not meant to provide a complete summary and the reader is encouraged
Physical security measures aim to ensure the proper protection of physical hardware
items like servers or workstations. Servers mounted inside racks are protected
through the rack door lock mechanism, which is either operated with a physical key
or a digital code (see also description in Sect. 13.4). This provides an easy means to
control rack access and restrict it to authorised personnel. Furthermore, the access
times can be monitored and logged. Locking devices for external media ports (e.g.,
USB, Ethernet, or optical ports) provide an additional level of security and further
reduce the risk of malware infiltration through external media upload. This has an
even higher relevance for workstations deployed in areas that are accessible to a
much wider group of users, where additional measures like the logical lock-down
of external USB ports and media devices are essential.4
4 A logical lock-down could foresee that only users with elevated rights (e.g., administrator or root)
Wake-on-LAN is a technology introduced by Intel and IBM in 1997 [8] that allows
a dormant computer system connected to a (wired) network to be activated by
sending a specially coded message called a magic packet. This packet is sent to
the network broadcast address on the data link layer (layer 2 in the OSI model
[9]) and therefore reaches all devices on the same network. It contains an easily
recognisable marker and a repetitive sequence of the MAC address5
of the computer that is to be activated. For this to work, the BIOS of the target computer's
motherboard needs to have this capability enabled, which ensures that a part of
the network adaptor remains active even if the host machine is powered
off. Despite the good intention of giving a system administrator an easy means to
perform maintenance activities on remote machines (without having to physically
visit them), this technology could be abused by anyone having gained access to the
same LAN and become one building block in a larger-scope cyber attack strategy. It is
therefore highly recommended to deactivate this feature unless absolutely needed.
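The magic packet structure, six 0xFF marker bytes followed by sixteen repetitions of the target's 6-byte MAC address (102 bytes in total), can be sketched in Python; the MAC address below is invented, and UDP port 9 is only a commonly used convention:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A magic packet is six 0xFF bytes followed by the target's
    MAC address repeated sixteen times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 48 bits (6 bytes)")
    return b"\xff" * 6 + mac_bytes * 16

def send_wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet via UDP to the local network."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example with a made-up MAC address:
packet = build_magic_packet("00:1A:2B:3C:4D:5E")
print(len(packet))  # → 102
```

The simplicity of this construction is exactly why the feature is risky: anyone on the same LAN can forge such a packet, which is the reason for the deactivation recommendation above.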
Some operating systems come with installed compilers and/or interpreters for scripting
languages. Examples are the GNU C++ compiler in certain Linux distributions
or interpreters for shell, perl, or python scripts. While in a development environment
the presence of compilers and interpreters makes absolute sense, they should not be
needed on servers, workstations, or VMs that are used in a pure operational context.
As compilers and interpreters can be used to create and execute malicious code, their
presence on a system is a clear enabler for a cyber attack, and they should therefore be
removed by default. In case interpreters are needed to execute operational scripts,
their type, nature, and expected location must be clearly specified and documented.
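A simple inventory check for leftover development tools on an operational host can be sketched as follows; the tool list is illustrative and would be adapted to the project's lock-down baseline:

```python
import shutil

# Tools that should normally not be present on an operational host.
DEV_TOOLS = ["gcc", "g++", "cc", "perl", "ruby"]

def find_dev_tools(tools: list[str]) -> list[str]:
    """Return the subset of development tools found on the PATH."""
    return [tool for tool in tools if shutil.which(tool) is not None]

# A non-empty result flags candidates for removal or, if operationally
# required, for explicit documentation.
print(find_dev_tools(DEV_TOOLS))
```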
Every ground segment uses application software that has been developed for the
project to meet its specific requirements, which is referred to as bespoke software.
There will however always be some integration of COTS software from third party
vendors, where COTS stands for commercial off-the-shelf and refers to any software
product that can be procured on the open market and configured for the specific
5 The Media Access Control address or MAC address is a 48-bit identifier of an IT
device able to connect to a network. It is also referred to as the Ethernet hardware address or physical
address of a network device.
needs of the end user. As COTS software is widely used by many customers, it is
also an attractive target for the exploitation of vulnerabilities through cyber attacks.
A vulnerability in a COTS product could even be used as an entry point to infiltrate
the bespoke software, which itself would not be an easy target due to its unknown
architecture and code. All deployed COTS software packages therefore need to
be listed and tracked by version number, and regular updates with available
patches from the vendor performed. There might also be cases of very old COTS
versions that are not maintained anymore but are still in use due to backwards
compatibility with bespoke software. If the replacement of such problematic
packages is not immediately possible, alternative protective measures like a sandboxed
execution of the application should be envisaged.6
Every file on a computer system has a set of permissions attached to it, which defines
who can read, write, or execute it. In Unix/Linux systems, file permissions can be
specified for the user (u), the group (g), and others (o), where the last category is
also referred to as "everyone else" or the "world" group. Every operating system
comes with a default file permission configuration, which needs to be carefully
revised and potentially adapted to the project's needs. Special attention needs to be
given to permissions defined for the "world" group. World-readable files should not
6 A sandbox environment refers to a tightly controlled and constrained environment created for an
application to run in. The aim is to limit the level of damage an application could cause in
case it has been infiltrated with malicious code.
contain sensitive information like passwords, even if these are hashed and not human
readable. World-writable files are a security risk since they can be modified by any
user on the system. World-writable directories permit anyone to add or delete files,
which opens the door for the placement of malware code or binaries. A systematic
search for such files and directories should therefore be performed on a regular
basis and, if any are found, appropriate measures taken. A mitigation would be the addition
of the so-called sticky bit flag to the affected data item, a specific setting
in its properties that restricts the permission for modification to the file owner or
creator.7 The permission to execute is a necessary condition to run an application or
script on the system and should therefore also be carefully monitored, as it is also
required by malicious code to operate.
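The systematic search for world-writable items can be sketched in Python; an operational deployment would scan the relevant system partitions rather than the `/tmp` example used here:

```python
import os
import stat

def world_writable(root: str) -> list[str]:
    """Walk the tree below 'root' and collect all entries whose mode
    grants write permission to the 'world' (others) class."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry, skip
            # Symbolic links are skipped; their permissions are not used.
            if mode & stat.S_IWOTH and not stat.S_ISLNK(mode):
                findings.append(path)
    return findings

# Report candidates for the sticky bit or stricter permissions.
for path in world_writable("/tmp")[:10]:
    print(path)
```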
A password policy defines the rules that apply for the required complexity of a
new password chosen by a user and any restrictions on the re-use of previously
chosen ones. It also sets requirements on password renewal intervals, which can
be enforced through the setting of password expiration time spans. The rules on
password strength are usually defined by the project or subject to a company policy.
Stricter rules will apply for systems that host sensitive data with some kind of
security classification.
Limiting the amount of system resources that users, groups, or even applications can
consume is a method to avoid the slowdown or even unavailability of a machine in
case an application tries to use up all available system resources. Examples could
be an uncontrolled build-up of memory leaks, or the reservation of all available
file handles. This can make a server unresponsive to any new user login or task,
which is referred to as an accidental Denial of Service (DoS) event. There is also the
intentional creation of a DoS event, which is a widely established cyber attack vector
and can cause major service interruptions for end users. The limitation of machine
resources is therefore also an efficient means to counteract this type of DoS attack.

7 The name sticky bit is derived from the metaphorical view of making the item "stick" to its owner.
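On Unix-like systems, per-process limits can be inspected and tightened via Python's resource module; the following sketch lowers the open-file limit, where the value 256 is an arbitrary illustration:

```python
import resource

# Inspect the current soft and hard limits for open file descriptors
# of this process (the actual values are system dependent).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft}, hard={hard}")

# Lower the soft limit; it may later be raised again by the process
# itself, but never above the hard limit.
new_soft = 256 if hard == resource.RLIM_INFINITY else min(256, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

# Opening more files than 'new_soft' allows now fails with OSError, so
# a runaway task cannot exhaust the system-wide file handle pool.
```

System-wide, persistent limits per user or group are typically configured by the administrator outside the application, which is the mechanism the text refers to.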
Login banners refer to any information presented to a user when logging into a
server from remote. This is also referred to as the message of the day (MOTD), and
in the OS default configuration it might present sensitive system related information
to the outside world. A typical example is the type of OS and its version, which
unintentionally provides an attacker with important hints on potential zero-days existing
on the target host. It is therefore recommended to always adapt the MOTD banner
in order to avoid an unnecessary presentation of system related information and
replace it with statements that deter intruders from illegal activities (e.g., a warning
that any login and key strokes are being tracked).
The use of system integrity checkers is a powerful means to verify any unintentional
or illegal modification of a system. A widely used tool is the Advanced
Intrusion Detection Environment (AIDE), an open source package provided
under the GNU General Public License (GPL) terms [11]. When initially run on
a system, the application creates a database that contains information about the
configuration state of the entire file system. This can be seen as taking a fingerprint
or a snapshot of the system's current state. Whenever AIDE is rerun at a later stage,
it compares the new fingerprint with the initial reference database and identifies
any modification that has occurred since then. Potential unintended or forbidden
configuration changes can then be easily spotted and further scrutinised.
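The fingerprint-and-compare principle behind AIDE can be sketched in Python. This toy version hashes only file contents, whereas AIDE also records permissions, owners, sizes, and timestamps:

```python
import hashlib
import os

def fingerprint(root: str) -> dict[str, str]:
    """Build a 'fingerprint' database mapping each file path below
    'root' to the SHA-256 digest of its content."""
    db = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                db[path] = hashlib.sha256(fh.read()).hexdigest()
    return db

def compare(reference: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the reference run."""
    return {
        "added":    sorted(set(current) - set(reference)),
        "removed":  sorted(set(reference) - set(current)),
        "modified": sorted(p for p in set(reference) & set(current)
                           if reference[p] != current[p]),
    }
```

A reference database produced by `fingerprint` would be stored on protected media; rerunning the scan later and calling `compare` surfaces every change, exactly the audit pattern described above.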
16.3 Cyber Security Engineering

The aim of cyber security engineering is to ensure that during the design, development,
and maintenance of an IT system all measures are taken to maximise its
robustness against cyber attacks. Good practice cyber security engineering aims
to consider relevant measures in the early design so they become part of the
initial system architecture. This will not only improve the cyber robustness at
first deployment, but also allow it to be kept up-to-date and robust throughout its
entire operational lifespan. The upgrade of old infrastructure to improve the cyber
resilience can be very challenging and can potentially require a major redesign
activity. If this is the case, such a system upgrade requires careful planning in
order to avoid any service interruption or if not otherwise possible reduce it to
the minimum extent. Some generic principles for consideration are provided in the
bullets below.
• Cyber security aspects must be taken into account as part of any system design
decision that impacts the overall system architecture and must always aim to
reduce the system’s attack surface to the greatest possible extent. An example is
the introduction of air gaps at highly sensitive system boundaries, which refers to
the deliberate omission of a physical network connection between two devices for
security purposes. This forces any data transfer to be performed via offline media
which can be more easily tracked and controlled. Other examples are efficient and
fast means to introduce new software patches to fix vulnerabilities throughout the
entire system lifetime, the introduction of external and internal firewalls, the use
of intrusion detection systems, the centralised management and control of user
access, and efficient means to upgrade antivirus software and perform antivirus
scanning, to name only a few.
• The regular planning and execution of obsolescence upgrades both at hardware
and software level is an important means to keep the number of cyber vulnerabilities
in a system low. Obsolete software will soon reach the end of its vendor support
period, which implies that patches to fix vulnerabilities might no longer
be available. The use of virtualisation technology (see Chap. 14) is
highly recommended as it provides more flexibility in the modernisation process.
If an already obsolete operating system still has to be used for backwards
compatibility with heritage software, adequate protective measures must be
taken. An example would be the use of a virtual machine (VM) that is configured
to have only very limited connectivity to the remaining system. For the absolutely
necessary interfaces, additional protective measures should be deployed (e.g.,
firewalls, DMZ, etc.).
• The development and maintenance of security operations procedures (SECOPS)
is an important building block to ensure and improve secure operations. Such
procedures define and document a set of rules that reduce the risk of cyber
attacks or sensitive data leakage and can easily be handed to system operators
and maintainers as a guideline or even an applicable document. Examples of topics
defined in SECOPS procedures are the required time intervals to update antivirus
(AV) definition files, schedules for the execution of AV scans, the enforcement
of password complexity rules and change policies, the definition of user groups
and rights, procedures for data export/import and tracking, or port lock-down
procedures.
• Penetration tests (also referred to as pentests) are designed and executed to unveil
unknown vulnerabilities of an operational system and should always be followed
by a plan to fix any major discovery in a reasonable time frame. Good pentest
reports should provide a summary of all discovered vulnerabilities, their location,
how they could be exploited (risk assessment), and how they should be fixed. As
all of this information is highly sensitive, extreme care must be taken with the
distribution of such documentation, as it would provide a hacker a huge
amount of information to attack and damage the entire system.8
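Several of the SECOPS rules listed above lend themselves to automated compliance checks. As a minimal sketch (the file path, threshold, and function name are invented for illustration, not taken from any real SECOPS document), a check for stale antivirus definition files could look as follows:

```python
# Sketch of an automated SECOPS compliance check: flag antivirus (AV)
# definition files that are older than the interval the procedures require.
# The 7-day threshold is an invented example value.
import os
import time

MAX_AGE_DAYS = 7  # example interval from a hypothetical SECOPS rule

def av_definitions_stale(path, now=None):
    """Return True if the AV definition file exceeds the allowed age."""
    now = time.time() if now is None else now
    age_days = (now - os.path.getmtime(path)) / 86400.0
    return age_days > MAX_AGE_DAYS
```

The same pattern extends naturally to password-age checks or port lock-down verification.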
The cyber robustness of an IT system is never a stable situation and will by nature
degrade over time due to the advent of new threats aiming to exploit newly discovered
vulnerabilities. It is therefore important to monitor and assess the cyber state of
a system at regular intervals, which can be achieved by two different types of
activities, the cyber audit and the penetration test.
The cyber audit aims to investigate the implementation of cyber security related
requirements, both at design and operational level. At design level, such an audit
could for example verify whether an air gap is actually implemented in the
deployed system and corresponds to the system documentation. Another example
is the inspection of the lock-down state of a system to verify whether the actual
configuration (e.g., USB port access, network ports, firewall rules, etc.) is consistent
with the required configuration. At operational level, the proper execution of
applicable SECOPS procedures should be audited. This could comprise a check
when the last antivirus scan was performed or whether the applicable password
policy is actually being exercised and enforced.
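A lock-down consistency check of the kind described above can be sketched as a simple set comparison between the required and the actual configuration (the baseline and port numbers below are invented examples):

```python
# Sketch of a lock-down audit check: compare the ports actually open on a
# host against an approved baseline. Baseline and ports are invented
# examples, not taken from any real system.
REQUIRED_OPEN = {22, 443}  # baseline from the (hypothetical) system documentation

def audit_ports(actually_open):
    """Return deviations between the actual and required port configuration."""
    return {
        "unexpected": sorted(set(actually_open) - REQUIRED_OPEN),  # must be closed
        "missing": sorted(REQUIRED_OPEN - set(actually_open)),     # service down?
    }

print(audit_ports({22, 443, 8080}))  # {'unexpected': [8080], 'missing': []}
```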
The purpose of a penetration test, colloquially also referred to as pentest, is to
unveil vulnerabilities of an IT system via a simulated cyber attack. The important
difference to a real attack is that in the case of a pentest, the system owner is fully
aware of it and has also agreed to it, in order to gain more knowledge. Every pentest
needs to be carefully planned and executed and the detailed scope should not be
known to the system owner in order to ensure a high degree of representativeness to
a real case attack scenario. The outcome of a pentest must be well documented and
summarise all the findings together with a categorisation of their severity. The latter
will give the owner an indication of the urgency to either fix a finding or to mitigate
the potential damage that it could cause in the case of an attack. It is also useful to
suggest potential countermeasures, both at short and long term level, which could
reduce or even eliminate the risk of a vulnerability.
8 The results of a pentest are usually subject to a security classification like restricted, confidential,
or even secret, depending on the criticality of the system they have been collected from.
To effectively design and plan a pentest scenario, it is worth applying the
Cyber Kill Chain framework, which has been proposed by the Lockheed Martin
Corporation [12, 13] and is well described in the open literature on computer
security (see, e.g., [14]). The Kill Chain describes the various phases of the planning
and execution of a cyber attack. It starts from early reconnaissance of the target
system, followed by the selection of the appropriate malware ("weaponisation"), its
delivery and installation onto the target system, and finally the remote control and
execution of all kinds of malicious actions (e.g., lateral expansion to other systems,
data collection or corruption, collection of user credentials, internal reconnaissance,
destruction of the system, etc.). The Cyber Kill Chain framework also suggests
countermeasures that can be taken during each of these phases in order to defend
a computer network or system. These Kill Chain phases should be emulated by
the pentest designer in order to make the test more realistic and to better evaluate the
effectiveness of defensive countermeasures. As a minimum, a pentest should focus
on the following aspects:
• Intrusions from external and internal interfaces,
• inter-domain intrusions, e.g., between classified and unclassified parts of a
system and even parts that are separated by air gaps,
• robustness against insider threats (e.g., possibilities to introduce malware),
• robustness against threats that could exploit all existing software vulnerabilities,
and
• the functionality and efficiency of a system’s integrity verification algorithm, in
other words, to measure how quickly and reliably a potential contamination with
malicious code can be discovered and appropriate measures taken.
A thorough and continuous analysis of the current cyber threat landscape must be
a fundamental component in every efficient cyber security strategy. The threat-
driven approach as described by Muckin and Fitch in their White Paper (cf.,
[15]) resembles, from a methodology point of view, the classical fault analysis
practices applied in systems engineering like for example the failure mode effects
analysis (FMEA) and the failure mode effects and criticality analysis (FMECA).
The respective authors advocate that threats should be the primary driver of a
well designed and properly defended system, as opposed to approaches that simply
comply with one or several pre-defined cyber security regulations, directives, policies
or frameworks, which might not be fully adequate for a specific system architecture
or might already be outdated. In other words, this approach places threats at the forefront
of strategic, tactical and operational practice, which requires the incorporation of
a thorough threat analysis and threat intelligence into the system development and
operations. This enables the measures derived from such an approach to be tailored
more specific to the system and its environment, also taking into account the context
it is actually operated in. The authors propose a threat analysis method that defines
a set of steps that can be grouped into a discovery and an implementation phase.
[Fig. 16.2: diagram relating system assets (which contain components) to the threats applied against them and to the threat actors and events or conditions they utilise]
These are briefly outlined below and the reader is referred to the cited literature for
more details:
• The discovery phase focuses on the clear identification of the system’s assets,
which refers to the list of potential targets for a cyber attack (see also Fig. 16.2),
followed by the determination of their attack surface. The identified assets
are then decomposed into their technical components which could be devices,
interfaces, libraries, protocols, functions, APIs, etc. This of course requires the
detailed knowledge of the system and should be based on a thorough consultation
of the relevant design documentation. The next step is to identify the list of
potential attack vectors specific to the listed assets and their derived components.
It is also recommended to analyse the threat actors, their objectives, motivation,
and ways how the attack could be orchestrated as this might impact the list of
assets identified in this step (this data is referred to as threat intelligence).
• The implementation phase first produces a prioritised listing (also referred to as
“triage”) considering business or mission objectives that contribute to the risk
assessment of a threat. To give an example, a threat that might lead to the leakage
of sensitive information (e.g., project design information, source code subject
to corporate intellectual property, etc.) must be given a higher priority
than threats affecting less sensitive data (e.g., a corporate directory). The final
step in this activity is to implement the necessary security controls, which are the
countermeasures that need to be taken in order to remove or at least mitigate the
identified threats and attack vectors. This is either done as part of the development
or engineering work (in case the system is still under development), the roll-out
of software patches, or the adaptation of configuration parameters in the system
(e.g., lock down of interfaces or ports, etc.).
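The prioritised listing ("triage") produced in the implementation phase can be illustrated with a small sketch; the threat names, assets, and the likelihood/impact scoring scale below are invented examples, not taken from the cited methodology:

```python
# Illustrative sketch of a threat triage step: threats are ranked by a
# simple risk score (likelihood x impact). All entries are invented.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    asset: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (minor) .. 5 (mission loss / IP leakage)

    @property
    def risk(self):
        return self.likelihood * self.impact

threats = [
    Threat("source-code exfiltration", "repository", 2, 5),
    Threat("corporate-directory scraping", "directory", 4, 1),
    Threat("TT&C link spoofing", "ground station", 1, 5),
]

# Prioritised listing: highest risk first drives the order in which
# security controls are designed and rolled out.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:2d}  {t.name}")
```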
It should be noted that the threat analysis described above should be considered
as one example of many and the reader is encouraged to also consult other
approaches described in the following references: [16–18].
16.6 Cryptography
Fig. 16.3 Basic concept of establishing a secure communication channel using cryptography: Alice encrypts the plaintext p ("Hello Bob") with key k, E(p, k) = c, and sends the encrypted data c over an insecure data channel to Bob, who recovers the plaintext by decryption, D(c, k′) = p; an adversary (Eve/Mallory) can eavesdrop on or attack the insecure channel, while the keys themselves are exchanged over secure channels
As can be readily seen, both the encryption and decryption phase depend on
the definition and knowledge of a key k that needs to be applied at both ends.
The exchange and use of such keys is a non-trivial task as it ultimately defines
the security of the entire system.9 There are two main concepts in use for such
an exchange, the symmetric and asymmetric key algorithms. The symmetric-key
algorithms use the same cryptographic keys10 for both the encryption of the
plaintext and its decryption. Such keys represent a shared secret between the parties
and therefore need to be exchanged on secure channels themselves. The asymmetric
key concept is based on the generation of a pair of keys by each of the two parties
involved. One of the two keys, the so called public key, can be openly distributed
to another party who can then use it to encrypt a message. The decryption of
that message will however only be possible using the second privately held key.
This allows a secure transmission in one direction which can be used to exchange
a symmetric key according to the Diffie-Hellman public-key exchange protocol
[21]. Another key exchange protocol was introduced a bit later and is referred to
as the Rivest-Shamir-Adleman or RSA protocol [22]. It is also based on a public-
key cryptosystem with different keys for encryption and decryption, but the key
encryption process uses two large prime numbers. The key itself is public and
available for anyone to encrypt a message but decryption is only possible for those
who know the actual prime numbers.
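The principle of the Diffie-Hellman exchange described above can be sketched in a few lines; the prime and generator below are toy values, far too small for real use:

```python
# Toy Diffie-Hellman key exchange (illustrative only; real deployments
# use vetted groups of 2048 bits or more).
import secrets

p = 4294967291  # small public prime (2**32 - 5), toy size
g = 5           # public generator

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent

A = pow(g, a, p)  # Alice sends A over the insecure channel
B = pow(g, b, p)  # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)  # both sides derive the same shared secret
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

The shared value can then seed a symmetric key, which is exactly the use case described in the text.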
Another important application of public-key cryptography is the digital signing
of documents, which allows the sender's identity to be authenticated. This makes it
possible to demonstrate file integrity and to prove that the document has not been
tampered with during its transmission. In this case, the sender uses the private key for
encryption (thereby signing it) and the recipient the public key for decryption (thereby
verifying the signature).
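This sign-with-private-key / verify-with-public-key pattern can be illustrated with textbook-sized RSA numbers (p = 61, q = 53, e = 17 are purely didactic values, orders of magnitude too small for real security):

```python
# Toy RSA signature sketch with tiny textbook primes — purely illustrative
# of the math; real systems use vetted libraries and much larger keys.
p, q = 61, 53
n = p * q                  # modulus, part of the public key
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: d*e == 1 (mod phi), Python 3.8+

def sign(message):
    return pow(message, d, n)  # "encrypt" with the private key

def verify(message, signature):
    return pow(signature, e, n) == message  # "decrypt" with the public key

sig = sign(65)
assert verify(65, sig)       # authentic message passes
assert not verify(66, sig)   # tampered message fails
```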
The specification of an encryption methodology comprises the encryption algorithm,
the key size, and the key symmetry. Key sizes are given in number of bits,
keeping in mind that a key size of n bits implies 2^n possible key combinations that
will have to be tested in a brute-force type of attack. There are two basic types of
symmetric encryption algorithms referred to as block and stream ciphers (refer to
e.g., [19]). Block ciphers process the input plaintext in fixed-length blocks (e.g.,
64 bits) and transform these into corresponding ciphertext blocks, applying one
encryption algorithm and key to the entire block. Stream ciphers process the
input plaintext one bit at a time, with the encryption applied to a given bit being
different every time it is repeated. The term symmetric means that the same secret key is applied for both
encryption and decryption.
9 It is a fundamental concept of cryptography that all the secrecy must reside entirely in the key,
as the cryptographic algorithm itself is known to the public (and an adversary). This is referred to
as Kerckhoffs's assumption, as it was put forward by A. Kerckhoffs in the nineteenth century
[19, 20].
10 Or alternatively one can be computed from the other.
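The brute-force implication of the key size n can be made tangible with a little arithmetic; the assumed testing rate of 10^12 keys per second is an arbitrary illustration, not a benchmark of real hardware:

```python
# Sketch: why key size matters against brute-force attacks. An n-bit key
# gives 2**n possible keys; on average half must be tested. The rate is
# an invented, illustrative assumption.
def brute_force_years(key_bits, keys_per_second=1e12):
    """Average time (years) to find a key by exhaustive search."""
    attempts = 2 ** (key_bits - 1)        # expected: half the keyspace
    seconds = attempts / keys_per_second
    return seconds / (365.25 * 24 * 3600)

for bits in (56, 128, 256):               # DES key size vs. AES key sizes
    print(f"{bits}-bit key: ~{brute_force_years(bits):.3e} years")
```

The 56-bit DES keyspace falls within reach at such rates, while the 128-bit and 256-bit AES keyspaces do not, which motivates the transition described below.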
Fig. 16.4 Basic concept of block-ciphers (refer to text for detailed explanation). Left side: schema
of a Feistel network as used in the Data Encryption Standard (DES); L0 and R0 = 32 bit half-
blocks, F = Feistel function, Ki..n = sub-keys used in round-i. Right side: schema of a substitution-
permutation-network as used in the Advanced Encryption Standard (AES); S1...n = substitution
boxes (S-boxes), P = permutation boxes
Two different block cipher techniques are depicted in Fig. 16.4. The left branch
is used by an encryption method developed in the early seventies at IBM with
contributions of the US National Security Agency (NSA). It has been published
as an official Federal Information Processing Standard (FIPS) as Data Encryption
Standard (DES) [23] and has been widely used as a standard encryption methodology
in the past decades. The process starts by dividing a 64-bit plaintext block
into two equal half-blocks of 32 bits each, which are then alternately fed to
the encryption algorithm referred to as a Feistel function (F).11 The F function
performs several processing steps, starting with a bit expansion (i.e., from 32 bits
to 48 bits), a sub-key (K0..n ) mixing stage, substitution, and permutation. In the
last two steps the block is divided into so called substitution and permutation boxes,
called S-, and P-Boxes respectively. During substitution, the input bits of the S-
box are replaced by different output bits taken from a lookup table, and during
permutation these S-boxes are rearranged. The output of F is then recombined
with the corresponding half-block that has not been processed by F using an
exclusive-OR (XOR) operation. This process is repeated several times (referred to
as "rounds"), and in every round the half-block fed into the F function is alternated.
This criss-crossing process is known as a Feistel network [25].
11 Named after Horst Feistel, a German-born physicist and cryptographer who was one of its
inventors [24].
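The round structure of such a Feistel network can be sketched as follows; the round function and sub-keys are toy stand-ins and deliberately not the real DES components:

```python
# Minimal sketch of a Feistel-network round structure (illustrative only;
# the round function and key schedule are toy stand-ins, NOT real DES).
def toy_round_function(half, subkey):
    # Stand-in for the DES F function (expansion, S-boxes, permutation).
    return ((half * 31) ^ subkey) & 0xFFFFFFFF

def feistel_encrypt(block64, subkeys):
    left = (block64 >> 32) & 0xFFFFFFFF   # L0: upper 32-bit half-block
    right = block64 & 0xFFFFFFFF          # R0: lower 32-bit half-block
    for k in subkeys:                     # one iteration per "round"
        left, right = right, left ^ toy_round_function(right, k)
    return (left << 32) | right

def feistel_decrypt(block64, subkeys):
    left = (block64 >> 32) & 0xFFFFFFFF
    right = block64 & 0xFFFFFFFF
    for k in reversed(subkeys):           # same structure, reversed key order
        left, right = right ^ toy_round_function(left, k), left
    return (left << 32) | right

keys = [0x0F0F0F0F, 0x12345678, 0xDEADBEEF]
c = feistel_encrypt(0x0123456789ABCDEF, keys)
assert feistel_decrypt(c, keys) == 0x0123456789ABCDEF
```

The sketch shows the defining property of the construction: decryption reuses the identical round structure with the sub-keys applied in reverse order, so the round function never needs to be inverted.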
Due to its relatively short key size of only 56 bits, the DES encryption standard
is now considered insecure and was subsequently superseded by a more secure
one, referred to as the Advanced Encryption Standard (AES) [26], shown on the
right side of Fig. 16.4. AES uses longer keys (available lengths of 128, 192, and
256 bits) and a block size of 128 bits. It also performs several rounds with each
one performing a substitution operation on a set of S-boxes. This is then followed
by a permutation operation that takes the outputs of all the S-boxes of one round,
permutes the bits, and feeds them into the S-boxes of the next round. Each round
takes a different sub-key K1...n as input [25]. Due to its improved security, AES is
today (2022) widely used in many internet protocols (e.g., transport layer security
or TLS) and even for the handling of top secret information.
The aim of this chapter was to provide the reader with a basic introduction to the
complex and very dynamic subject of cyber security. The threats stemming from
cyber attacks are constantly growing, and so is the potential damage they can cause.
As the complexity of both ground and space segments continuously grows, and with
it their presence in our strategically vital infrastructure, they have moved more and
more into the focus of hackers. Therefore, cyber security has to be an integral part
of a modern ground segment systems engineering process.
For a better assessment of a system’s vulnerable areas, it is important to develop
a good understanding of the various types of attack vectors, their dynamic evolution,
and the potential damage they can cause. To support this, the concept of the attack
surface has been introduced. Cyber security aspects need to be considered during the
design, development, and operational phases of a new ground segment. For existing
ones, obsolete infrastructure or software needs to be updated or replaced as soon as
possible. SECOPS procedures are an important contribution to identifying the need for
cyber patches and to enforcing secure operational practices like system lock-downs,
access control, or password complexity and renewal rules.
The need for cyber security audits and penetration tests has been explained, as
both are of paramount importance to understand the current cyber state of a system.
Their regular execution makes it possible to unveil existing vulnerabilities and initiate their
fixes, or to take measures to mitigate the risk of them being exploited in a potential
cyber attack. A basic understanding of the Cyber Kill Chain helps to “design” more
efficient and representative penetration tests.
The concept of threat analysis has been introduced as a means to drive system
design in relation to the specific cyber threat environment the system is exposed to.
The described methodology is more effective compared to the simple application of
a standard set of cyber requirements and procedures.
Basic concepts of cryptography were described with the aim of providing the reader
a first insight into a mathematically complex subject matter. Unless the reader
wants to become a cryptography expert, it is probably sufficient to understand
the main features of cryptographic algorithms and protocols, which are strongly
driven by the key size and the effort to generate and securely exchange keys. It is
important to understand the differences between existing encryption standards and the level
of security they can offer. The need to apply up-to-date standards in early system
design is obvious, but existing infrastructure might still use deprecated encryption
protocols and ciphers, which need to be carefully analysed and upgraded, if
possible. Encryption must be an integral part of any information flow in a modern
ground segment. Special attention should be given to interfaces that connect to
remote sites (e.g., TT&C stations), which usually make use of wide area network
infrastructure that is not necessarily part of, or under the control of, the project-specific
infrastructure perimeter (e.g., rented telephone lines).
References
13. Lockheed Martin. (2015) Seven ways to apply the cyber kill chain with a threat intelligence
platform. https://www.lockheedmartin.com/content/dam/lockheed-martin/rms/documents/
cyber/Seven_Ways_to_Apply_the_Cyber_Kill_Chain_with_a_Threat_Intelligence_Platform.
pdf, Accessed Apr 4, 2022.
14. Yadav, T, & Rao, A. M. (2015). Technical aspects of cyber kill chain. In J. Abawajy, S.
Mukherjea, S. Thampi, & A. Ruiz-Martínez (Eds.), Security in computing and communications
(SSCC 2015). Communications in computer and information science (Vol. 536). Springer.
https://doi.org/10.1007/978-3-319-22915-7_40.
15. Muckin, M., & Fitch, S. C. (2019). A threat-driven approach to cyber security, method-
ologies, practices and tools to enable a functionality integrated cyber security organiza-
tion. Lockheed Martin Corporation, https://www.lockheedmartin.com/content/dam/lockheed-
martin/rms/documents/cyber/LM-White-Paper-Threat-Driven-Approach.pdf, Accessed Apr
4, 2022.
16. Microsoft Corporation. (2022). Microsoft security development lifecycle (SDL). https://www.
microsoft.com/en-us/securityengineering/sdl/threatmodeling, Accessed Apr 4, 2022.
17. Building Security In Maturity Model (BSIMM). (2022). http://www.bsimm.com/, Accessed
Apr 4, 2022.
18. Synopsys. (2022). Application security threat and risk assessment. https://www.synopsys.com/
software-integrity/software-security-services/software-architecture-design.html, Accessed
Apr 4, 2022.
19. Kahn, D. (1967). The codebreakers: The story of secret writing. Macmillan Publishing.
20. Schneier, B. (1996). Applied cryptography: Protocols, algorithms and source code in C (2nd
ed.). John Wiley and Sons. ISBN 0-471-11709-9.
21. Merkle, R. C. (1978). Secure communications over insecure channels. Communications of the
ACM, 21(4), 294–299. https://doi.org/10.1145/359460.359473.
22. Rivest, R., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures
and public-key cryptosystems. Communications of the ACM, 21(2), 120–126. https://doi.org/10.1145/359340.359342.
23. Federal Information Processing Standard FIPS. (1999). Data encryption standard (DES).
National Institute of Standards and Technology (NIST), Publication 46-3.
24. Feistel, H. (1973). Cryptography and computer privacy. Scientific American, 228(5), 15–23.
25. Stinson, D. R., & Paterson, M. B. (2019). Cryptography theory and practice (4th ed.). CRC
Press Taylor & Francis Group.
26. Federal Information Processing Standard FIPS. (2001). Advanced encryption standard (AES).
National Institute of Standards and Technology (NIST). Publication 197.
Appendix A
Coordinate Systems
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 245
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8
(terrestrial frame) and allows a convenient way to specify a set of fixed coordinates
(e.g., longitude, latitude, and height) for any point on the surface.
Another group of coordinate systems are connected to the satellite body and
its orbital movement. The orbital system (LVLH) is a frequently used example to
express a satellite’s thrust directions in radial, tangential, and normal components.
The satellite body frame and the instrument frame are used to specify geometric
aspects of the satellite’s mass properties (centre of mass location, inertia tensor), the
location and orientation of an instrument sensor (mounting matrices), and the sensor
field-of-view.
A celestial reference system is a system whose centre coincides and moves with
the centre of a planetary body but does not rotate with it. As it is a system free of
rotational motion, it can be considered an inertial or Newtonian reference system.
An example is the Earth Centred Inertial or ECI system which has its origin at the
Earth’s centre (see Fig. A.1) and the z-axis aligned with the Earth’s rotation axis
and pointing to the north pole. The x− and y-axes both lie in the equatorial plane,
where x points to the vernal equinox,2 and the y-axis completes an orthogonal right-
handed system.
Fig. A.1 Definition of the Earth Centred Fixed (left) and the Earth Centred Inertial (right)
coordinate systems. The obliquity angle ε ≈ 23.5° defines the inclination between the ecliptic
and equatorial planes whose intersection defines the location of the autumnal and vernal equinoxes
2 One of the two intersection points of the equatorial plane with the ecliptic plane.
If the Earth's motion around the Sun were determined only by the central
force of the Sun, its orbital plane would remain fixed in space. However, perturbations
stemming from the other planets in the Solar system and the non-spherical shape
of the Earth (equatorial bulge) add a torque on the rotation axis and cause a secular
variation known as planetary precession that resembles a gyroscopic motion with a
period of about 26,000 years [1]. As a result, the vernal equinox recedes slowly
on the ecliptic, whereas the angle between the equatorial and the ecliptic plane
(obliquity ε) remains essentially constant. In addition to the gyroscopic precession,
short-term perturbations with a period of about one month can be observed, which are called
nutation. In view of the time dependent orientation of the Earth's equator and
ecliptic, the standard ECI frame is usually based on the mean equator and equinox
of the year 2000 and is therefore named Earth Mean Equator and Equinox of J2000
or simply EME2000.
A terrestrial coordinate system has its origin placed in the centre of a rotating body.
In contrast to the inertial (ECI) system, the x− and y−axes rotate with the body's
angular velocity. The main advantage of a body fixed system is that any point
defined on the body’s surface can be conveniently expressed by constant coordinate
components (e.g., longitude, latitude, and height above a reference surface). For
the Earth the International Terrestrial Reference System or ITRS3 provides the
conceptual definition of a body-fixed reference system [5]. Its origin is located at the
Earth’s centre of mass (including oceans and atmosphere), and its z− axis is oriented
towards the International Reference Pole (IRP) as defined by the International Earth
Rotation Service (IERS). The time evolution of the ITRS is defined to not show any
net rotation with respect to the Earth’s crust.
For the transformation of a position vector of a satellite in orbit expressed
in ECI/EME2000 coordinates (rECI ) to its corresponding position on the Earth’s
surface (rI T RS ), a coordinate transformation between the ECI and ITRS system
has to be performed. This transformation is quite involved as it needs to take into
account several models, i.e.,
• the precession of the Earth's rotation axis,
• the nutation describing the short-term variation of the equator and vernal
equinox,
3 Taking formal terminology strictly, the term reference system is used for the theoretical definition
of a system, which comprises the detailed description of the overall concept and associated models
involved. The term reference frame refers to a specific realisation of it, which is usually based on
some filtering of measured coordinates from various ground stations.
• the Greenwich Mean Sidereal Time (GMST), describing the angle between the
mean vernal equinox and the Greenwich Meridian at the time of coordinate
transformation, and
• the Earth’s polar motion describing the motion of the rotation axis with respect
to the surface of the Earth.
Taking all these components into account, the transformation can be expressed
by a series of consecutive rotations in the following order4

rITRS = Π Θ N P rECI (A.1)

where N and P refer to the rotation matrices that describe the coordinate changes due
to nutation and precession, respectively. The matrix Θ considers the Earth rotation
and can be expressed as

Θ = Rz(GAST) (A.2)

where Rz is the rotation matrix around the z−axis, and GAST is the Greenwich
apparent sidereal time which is given by the equation of the equinoxes

GAST = GMST + ψ cos ε (A.3)

where ψ is the angle between true and mean equinox and GMST the Greenwich
mean sidereal time. The polar motion rotation matrix Π is given by

Π = Ry(−xp) Rx(−yp) (A.4)

where xp and yp refer to the angular coordinates that describe the motion of the
Celestial Intermediate Pole (CIP) in the International Terrestrial Reference System
(ITRS), which is published in the Bulletins A and B of the IERS (cf., [3] and see
Fig. A.2 for example data).
Fig. A.2 Polar motion components xp and yp of the Celestial Intermediate Pole expressed in the
ITRS as published by the International Earth Rotation Service (IERS) [3]. Data retrieved from
IERS online data centre [4]
4 For a more detailed description of this involved transformation the reader is referred to specialised
literature.
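The Earth-rotation step of this chain, Θ = Rz(GAST), can be illustrated with a minimal sketch; the GAST value and position vector below are arbitrary test inputs, and the full precession, nutation, and polar motion models are deliberately omitted:

```python
# Sketch: applying the Earth-rotation matrix Rz(GAST) to an ECI position
# vector. GAST is an arbitrary test angle here; in practice it comes from
# GMST plus the equation of the equinoxes.
import math

def Rz(theta):
    """Rotation matrix about the z-axis (row-major 3x3 nested list)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[ c,   s,   0.0],
            [-s,   c,   0.0],
            [0.0,  0.0, 1.0]]

def apply(matrix, vec):
    """Matrix-vector product for 3x3 lists."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

gast = math.pi / 2              # 90 deg of Earth rotation (test value only)
r_eci = [7000e3, 0.0, 0.0]      # satellite position in ECI [m]
r_rot = apply(Rz(gast), r_eci)  # position in the rotating (Earth-fixed) frame
```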
The orbital coordinate frame is a right handed coordinate system with its origin
located in the satellite’s centre of mass and moving with its orbital motion. It
is therefore convenient to define thrust orientation vectors for orbit correction
manoeuvres. The unit vectors of its coordinate axes can be derived at any time from
the satellite's inertial position and velocity vectors r(t) and v(t) defined in the ECI
system, using the following simple relations

eR = −r / |r|
eN = −(r × v) / |r × v|
eT = eN × eR (A.5)
where the subscripts R, N, and T refer to the radial, normal, and tangential direc-
tions, respectively. As shown in Fig. A.3, the radial axis points to the central body
(nadir direction), the normal axis is anti-parallel to the orbit angular momentum
vector, and the tangential axis completes the right-handed system (and is close to the
satellite’s inertial velocity vector). The axis orientation shown here is also referred
to as the Local Vertical Local Horizontal or LVLH-frame. An alternative definition
of the orbital system is also in use, which has the radial direction defined as pointing
away from the central body and the normal direction parallel to the orbital angular
momentum vector.
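Equation (A.5) can be evaluated directly from a state vector; the circular-orbit numbers below are invented test values, and the vector helpers are written out to keep the sketch self-contained:

```python
# Sketch of Eq. (A.5): LVLH unit vectors from an (invented) equatorial
# circular-orbit state vector.
import math

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def lvlh_axes(r, v):
    e_r = [-c for c in unit(r)]             # radial: towards the central body
    e_n = [-c for c in unit(cross(r, v))]   # normal: anti-parallel to h = r x v
    e_t = cross(e_n, e_r)                   # tangential completes the triad
    return e_r, e_n, e_t

r = [7000e3, 0.0, 0.0]   # position in ECI [m]
v = [0.0, 7.5e3, 0.0]    # velocity in ECI [m/s], along-track
e_r, e_n, e_t = lvlh_axes(r, v)
```

For this test case the tangential axis comes out along the velocity direction, as the text describes for near-circular orbits.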
The satellite body frame is an important coordinate system that is required for the
definition of the exact location of any device or sensor mounted on the satellite
structure. Its origin and exact orientation have to be defined and documented by
Fig. A.3 Definition of the satellite body frame (xB , yB , and zB ), the instrument frame (xI , yI , and
zI ), and the orbital or Local Vertical Local Horizontal (LVLH) frame (eR , eN , and eT referring to
the radial, normal, and tangential unit vectors)
the satellite manufacturer. The orientation will usually follow some basic alignment
of the satellite structural frame as shown by the axes labelled as xB , yB , and zB
in Fig. A.3. The body frame is important for the definition of geometry-dependent
mass properties, like the inertia tensor or the position of the centre of
gravity and its movement due to changes in propellant (refer to Chap. 3).
The instrument frame is used to define the field-of-view (FoV) of a payload sensor
mounted on the satellite structure. The ability to express the orientation of a sensor’s
FoV is important for the prediction of sensor related events, like the transition of a
celestial body (e.g., the time the Sun enters the FoV of a Sun Sensor). The instrument
frame origin is a reference point defined with respect to the satellite body frame and
defines the position of the sensor on the satellite (see vector rBI in Fig. A.3). The
orientation of the axes xI , yI , and zI is given by the so called instrument mounting
matrix which is a direction-cosine matrix that defines the transformation between
the two systems.
References
1. Montenbruck, O., & Gill, E. (2000). Satellite orbits (1st ed.). Springer Verlag.
2. Vallado, D. A. (2001). Fundamentals of astrodynamics and applications (2nd ed.). Space
Technology Library, Kluwer Academic Press.
3. International Earth Rotation Service. (2014). Explanatory supplement to IERS bulletin A and
bulletin B/C04. https://www.hpiers.obspm.fr/iers/bul/bulb_new/bulletinb.pdf, Accessed Apr 4,
2022.
4. International Earth Rotation and Reference Systems. (2022) Service website: https://www.iers.
org, Accessed Apr 4, 2022.
5. McCarthy, D. D. (1996). IERS conventions (1996). IERS Technical Note 21, Central Bureau
of IERS, Observatoire de Paris.
Appendix B
Time Systems
The definition of time systems is a very complex subject matter and can therefore
not be addressed here with the level of detail that it might deserve, and only some
basic explanations are provided.1 A basic understanding of the most common time
systems is important, as every computer system in a ground segment must be syn-
chronised to a single time source and system. Furthermore, any product exchanged
with an external entity (e.g., telemetry, orbit files, or command parameters) must
be converted to the time system that is defined in the applicable interface control
document. It could even be the case that a ground segment interfaces with various
external entities and each of them works in a different system and expects the
products to be converted accordingly.
Prior to the invention of atomic clocks, the measurement of time was mainly
based on the length of a day, being defined by the apparent motion of the Sun in
the sky. This gave the basis for the solar time which is loosely defined by successive
transits of the Sun over the Greenwich meridian (i.e., the 0° longitude point). Due to
the Earth's annual orbital motion, the Earth rotates slightly more than 360° during one solar
day. To avoid this inconvenience, the sidereal time was defined as the time between
successive transits of the stars over a particular meridian.
Unfortunately, things are even more complex. Due to the elliptical shape of its
orbit around the Sun, the Earth moves at variable speed. In addition, the obliquity
between the celestial equator and the ecliptic gives rise to an apparent sinusoidal
motion of the Sun around the equator, which adds further irregularities that are
inconvenient for the measurement of time. Therefore, mean solar time was introduced,
based on a fictitious mean Sun with a nearly uniform motion along the celestial
equator. The mean solar time at Greenwich is defined as Universal Time (UT),
which has three distinct realisations: UT0, UT1, and UT2. UT0 is derived (reduced)
from observations of stars from many ground stations. Adding corrections for
1 For a deeper treatment, the reader is referred to more specialised literature (e.g., [1]).
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 253
B. Nejad, Introduction to Satellite Ground Segment Systems Engineering, Space
Technology Library 41, https://doi.org/10.1007/978-3-031-15900-8
location-dependent polar motion provides UT1,2 and further corrections for seasonal
variations provide UT2.3
Fig. B.1 Difference between TAI, UT1, and UTC over the past decades, as depicted in Nelson et al.
(2003) [2]. The start epochs of the GNSS-based GPS and GST time systems are shown as “GPS T0”
and “GST T0”. Both systems have a constant offset to TAI of 19.0 seconds. The difference between
GPS and GST time (GGTO) is broadcast as part of the Galileo Open Service (OS) navigation
message [3]. Examples of the broadcast GGTO values are shown in the right panel (mean value ca.
0.59 m or ≈ 2 ns). Also shown are GGTO values measured by ground receivers, ranging between
0.65 m and 10 m (≈ 2–33 ns) [4]
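The seasonal correction from UT1 to UT2 is conventionally expressed as a small periodic term in the fraction t of the Besselian year. A sketch of the classical formula (coefficients in seconds; shown for illustration only, given that UT2 is obsolete in practice):

```python
import math

def ut2_minus_ut1(t: float) -> float:
    """Conventional seasonal correction UT2 - UT1 in seconds.

    t is the epoch expressed as a fraction of the Besselian year
    (0.0 at the start of the year, 1.0 at the end).
    """
    return (0.022 * math.sin(2 * math.pi * t)
            - 0.012 * math.cos(2 * math.pi * t)
            - 0.006 * math.sin(4 * math.pi * t)
            + 0.007 * math.cos(4 * math.pi * t))

# The correction never exceeds a few tens of milliseconds:
peak = max(abs(ut2_minus_ut1(t / 1000)) for t in range(1000))
print(f"peak |UT2 - UT1| over a year: {peak * 1000:.1f} ms")
```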
A completely different source for a highly accurate and stable measurement of
time came with the invention of atomic clocks, which use the well-known low-
energy (hyperfine) state transitions of specific atoms or molecules (e.g., cesium,
hydrogen, or rubidium). Microwave resonators able to excite these transitions can
be tuned accurately to achieve a maximum population of excited atoms. This
population can be measured and is used to keep the microwave resonator frequency
at a highly stable reference value. In 1972, the atomic time scale was established
at the French Bureau International des Poids et Mesures (BIPM) and adopted as a
standard time under the name International Atomic Time (TAI). TAI is physically
realised with an elaborate algorithm that processes readings from a large number
of atomic clocks located in national laboratories around the world (cf. [5]). A
different atomic time scale is realised by GNSS satellite constellations such as GPS
and Galileo, which provide the GPS Time (GPST) and the Galileo System Time
(GST), respectively. Both time systems are kept at a constant offset of exactly
19 seconds to TAI (see Fig. B.1).
2 UT1 is therefore the same around the world and does not depend on the observatory’s location.
3 UT2 is not really used anymore and can be considered obsolete.
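The 19 s offset follows directly from the GPS start epoch: GPS time began at 1980-01-06 00:00:00 UTC, when TAI − UTC was already 19 s, and GPS time contains no leap seconds thereafter. A minimal sketch of a UTC-to-GPS-time conversion; the leap-second count is deliberately an input, since an operational system must take it from an up-to-date table:

```python
from datetime import datetime, timedelta

# GPS time started at 1980-01-06 00:00:00 UTC, when TAI - UTC was 19 s.
# Since GPS time contains no leap seconds, it stays at a constant offset
# of 19 s to TAI (GPST = TAI - 19 s) forever after.
GPS_T0_UTC = datetime(1980, 1, 6)

def utc_to_gpst_week_seconds(utc: datetime, leap_since_t0: int):
    """Convert a UTC epoch to (GPS week, seconds of week).

    leap_since_t0: leap seconds introduced after GPS T0 (e.g., 18 for
    dates after 2017-01-01); the real value must come from a maintained
    leap-second table.
    """
    elapsed = (utc - GPS_T0_UTC) + timedelta(seconds=leap_since_t0)
    week, sow = divmod(elapsed.total_seconds(), 7 * 86400)
    return int(week), sow
```

For example, `utc_to_gpst_week_seconds(datetime(2022, 1, 1), 18)` yields GPS week 2190, day 6 of the week.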
The most commonly used time system is Coordinated Universal Time (UTC),
which differs from TAI by an integer number of seconds and forms the basis for
civilian time keeping. UTC is also used to define the local time around the world,
by adding regionally dependent offsets given by the time zone and the use of
daylight saving time (DST). UTC is derived from atomic time but is periodically
adjusted by the introduction of leap seconds at the end of June and/or the end of
December in order to keep the difference |ΔUT1| = |UT1 − UTC| ≤ 0.9 s (see
Fig. B.1). The adjustment of UTC via leap seconds is needed to compensate for
the gradual slow-down of the Earth’s rotation rate, which is furthermore affected
by random and periodic fluctuations that change the length of day relative to the
standard reference day of exactly 86,400 SI seconds. It should be noted that the
motivation to continue introducing new leap seconds has diminished in the past
years, due to the availability of satellite navigation time scales and the operational
complexity of incorporating leap seconds on a regular basis. It is even being debated
whether to discontinue the introduction of new leap seconds entirely, meaning that
|UTC − UT1| could exceed 0.9 seconds in the future. A final decision has so far
(2022) not been taken, but the topic continues to be discussed in various working
groups of international scientific organisations [2].
As UTC might be used for both time keeping and the communication with
external entities, the design of the ground control segment needs to consider the
ability to introduce new leap seconds whenever they are published, and to ensure
that the correct number is used for time conversions.
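One common way to meet this design requirement is a small, updatable leap-second table consulted by all UTC-to-TAI conversions. A sketch under that assumption (only the two most recent entries are shown; a real system would load the full table from an operational source such as IERS Bulletin C):

```python
from bisect import bisect_right
from datetime import datetime

# Leap-second table: (effective UTC date, cumulative TAI - UTC in seconds).
# An operational system would maintain the complete table and append a new
# row whenever the IERS announces a leap second.
LEAP_TABLE = [
    (datetime(2015, 7, 1), 36),
    (datetime(2017, 1, 1), 37),
]

def tai_minus_utc(utc: datetime) -> int:
    """Return TAI - UTC in seconds valid at the given UTC epoch."""
    dates = [d for d, _ in LEAP_TABLE]
    i = bisect_right(dates, utc)
    if i == 0:
        raise ValueError("epoch before first table entry")
    return LEAP_TABLE[i - 1][1]

# Introducing a future leap second only requires appending one table row,
# e.g. LEAP_TABLE.append((<effective date>, 38)) -- no code change needed.
print(tai_minus_utc(datetime(2022, 4, 4)))   # 37
```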
References