Abstract
Modern technology requires reliable, fast, and cheap networks as a backbone for data transmission. Among the many available solutions, switched Ethernet combined with the Time Sensitive Networking (TSN) standards excels because it provides high bandwidth and real-time characteristics while utilizing low-cost hardware. For the industry to acknowledge this technology, extensive performance studies need to be conducted, and this thesis provides one. Concretely, the thesis examines the performance of two amendments, IEEE 802.1Qbv and IEEE 802.1Qbu, that were recently added to the TSN set of standards. The academic community understands the potential of this technology, so several simulation frameworks already exist, but most of them are unstable and under-tested. This thesis builds on top of existing frameworks and utilizes a framework developed in OMNeT++. Performance is analyzed through several separate scenarios and is measured in terms of end-to-end transmission latency and link utilization. The attained results justify the industry interest in this technology and could lead to its greater representation in the future.
Table of Contents
1 Introduction
1.1 Motivation
1.2 Problem formulation
1.3 Expected outcome
1.4 Thesis outline
2 Background
2.1 Real-time systems
2.2 Ethernet
2.3 Switched Ethernet
2.4 Real-time Ethernet
2.5 Time Sensitive Networking
2.6 Network simulators
2.6.1 Comparison of network simulators
2.7 OMNeT++
2.7.1 The NED language
2.7.2 Messages and packets
2.7.3 Configuring simulation
2.7.4 Result analysis
2.8 INET Framework
2.9 Network Simulator for Time-Sensitive Networking - NeSTiNg
3 Related Work
3.1 Comparison of solutions for real-time Ethernet
3.2 Performance Evaluation of TSN amendments
3.3 Simulating Time Sensitive Networking
4 Method
4.1 System Development Research Method
6 Evaluation of TSN
6.1 Scenario 1 - IEEE 802.1Qbv
6.2 Scenario 2 - IEEE 802.1Qbu
6.3 Scenario 3 - IEEE 802.1Qbv and IEEE 802.1Qbu
6.4 Scenario 4
6.5 Scenario 5 - Industrial use-case
6.6 Discussion
7 Conclusion
7.1 Reflection on research questions
8 Future Work
References
List of Figures
1 Switched Ethernet topology
2 Original Ethernet frame and Ethernet frame with 802.1Q tag
3 Enhancement for Scheduled Traffic
4 Example for IEEE 802.1Qav and IEEE 802.1Qbv amendments
5 Priority-based frame preemption
6 a) simple module output gate, b) compound module output gate, c) simple module input gate, d) compound module input gate
7 TSN switch and its sub-modules
8 A multi-methodological research approach [1]
9 A research methodology
10 Modules responsible for shapers
11 Modules responsible for frame preemption
12 Scenario 1 - network topology
13 Scenario 1 - traffic latency
14 Scenario 1 - execution trace
15 Scenario 2 - network topology
16 Scenario 2 - traffic latency
17 Scenario 2 - execution trace
18 Scenario 3 - network topology
19 Scenario 3 - traffic latency
20 Scenario 4 - network topology
21 Scenario 5 - network topology
22 Projects inet and nesting checked under Projects
23 Example NED file
24 Simple network topology
25 Simulation window
26 Selecting the input files for analysis
27 Chart of end-to-end delay for the example
List of Tables
1 802.1Q Header
2 Priority levels
3 Mapping traffic classes to queues
4 Model configuration
5 Scenario 1 - end-to-end latency
6 Scenario 1 - utilization of each link
7 Scenario 2 - end-to-end latency
8 Scenario 2 - utilization of each link
9 Scenario 3 - end-to-end latency
10 Scenario 3 - utilization of each link
11 Scenario 4 - end-to-end latency with preemption
12 Scenario 4 - end-to-end latency without preemption
13 Gate Control List for the switch A
14 Gate Control List for the switch B
15 Scenario 5 - traffic characteristics
16 Scenario 5 - end-to-end latency
17 Scenario 5 - utilization of each link
18 Scenario 5 - end-to-end latency (with AVB only) [2]
1 Introduction
Ethernet as a technology is as widely used in industry as it is popular in regular households. It was first introduced in 1980, and after its first standardization in 1983 by a working group of the Institute of Electrical and Electronics Engineers (IEEE), it replaced almost all other wired local area network (LAN) technologies. Its initial speed was 10 Mbit/s. Since then, many improvements have been made and newer standards have been introduced. In 2017, Ethernet reached a speed of 400 Gbit/s. The collection of all the standards that define Ethernet is IEEE 802.3. These standards define the physical layer and the data link layer's Media Access Control (MAC) sub-layer of wired Ethernet. These two layers are also known as the first two layers of the Open Systems Interconnection (OSI) model [3]. IEEE 802.3 is a working group of the IEEE 802 project, and all standards defined within this project deal with Local Area Networks (LAN) and Metropolitan Area Networks (MAN). The working group IEEE 802.1 defines LAN/MAN bridging and management. A part of this group is the Time-Sensitive Networking (TSN) task group [4]. Audio Video Bridging (AVB) is a former task group of IEEE; it was renamed to the Time-Sensitive Networking Task Group in 2012 in order to extend its work.
The main reason for introducing TSN is that Ethernet-based implementations are gaining momentum and are being widely considered because of their high bandwidth, scalability, and compatibility [5], but they still lack predictability in data delivery. Real-time embedded systems put strong limitations on data communication mechanisms. The amount of data exchanged between components in distributed systems is constantly increasing, so it is becoming harder to satisfy all the temporal constraints. The main application domain is industrial automation, which requires real-time performance. In order to deliver, the network has to be able to forward messages with bounded end-to-end latency. Since applications within this domain are safety-critical, the latency has to be bounded deterministically. A set of standards specified by the TSN working group provides deterministic real-time communication over standard Ethernet.
This research focuses on analyzing the performance of TSN networks by using a simulation tool
developed in the simulation framework OMNeT++. Several scenarios cover interesting network
topologies and traffic configurations, and the final one is an industrial use-case designed by BMW
group [6], which is further described in Section 6.5.
1.1 Motivation
TSN is an extension of Ethernet and is a set of standards under the IEEE 802.1 Task Group1. TSN is compliant with switched Ethernet, an Ethernet network with a switch instead of a hub, so advanced traffic control is supported. IEEE 802.1Q is a part of this standard and it makes Virtual Local Area Networks (VLANs) possible by adding the 802.1Q tag to the Ethernet frame [7]. It also enables forwarding and queuing of messages in the network. This standard puts messages into different classes based on their priority and uses traffic shapers in order to predict and prevent overload of switch ports. In order to enable transmission of messages according to an a priori defined schedule and preemption of non-critical messages, two amendments, 802.1Qbv and 802.1Qbu, were introduced. The first amendment introduces scheduled traffic, while the second one introduces frame preemption. These two amendments are key enablers of real-time communication in TSN networks [8]. The work in paper [9] shows that the TSN standard can provide deterministic and low end-to-end latency compared to standard Ethernet. This is confirmed with a simulation model based on OMNeT++. The standards IEEE 802.1Qbv and IEEE 802.1AS were used to enable scheduled traffic and time synchronization, respectively. The work in paper [8] examines only the IEEE 802.1Qbv amendment. It discusses the functional parameters of 802.1Qbv devices and how those affect the control of the temporal behavior of traffic flows in the case of high-criticality traffic. However, no paper examines the performance of the amendments 802.1Qbv and 802.1Qbu when they are combined, and that is the goal of this thesis.
In order to test the performance of time-critical traffic in terms of end-to-end latency and link
utilization, a simulation tool based on OMNeT++ is used. OMNeT++ is an extensible, modular,
component-based C++ simulation library and framework, primarily for building network simulators2.
1 Time-Sensitive Networking (TSN) Task Group. [Online]. Available: https://1.ieee802.org/tsn/
Since there are several definitions of data transmission end-to-end latency depending on the context, the one considered in this thesis follows. Data transmission end-to-end latency implies the time required for a message to be transmitted from its source to its final destination. Another important performance metric is link utilization, which represents the percentage of the link capacity that the traffic consumes.
This thesis is being done in collaboration with Arcticus Systems AB3 .
2 Background
In modern Ethernet networks, shared Ethernet is replaced with switched Ethernet because it provides an efficient and convenient way to extend the bandwidth of the network. TSN, an extension of Ethernet, is used to achieve real-time properties of the network and to utilize its full potential. Among many simulation tools, OMNeT++ is chosen as the simulation environment to provide the performance results. In the following subsections, real-time systems, switched Ethernet, TSN, and network simulators are described in detail.
2.2 Ethernet
Ethernet is a group of networking technologies for communication over a Local Area Network
(LAN). It was developed by XEROX Palo Alto Research Centre (PARC) in 1976, commercially
introduced in 1980 and first standardized in 1983 as IEEE 802.3. It supports several network
topologies such as bus, tree, star, line, ring, etc. In the OSI reference model, it corresponds to
the physical layer and data link layer [10]. The physical layer is the lowest layer of the OSI model
and it consists of electronic circuit transmission technologies of a network. In Ethernet, it can be
coaxial cable, twisted pair, or even optical fiber. The speed of Ethernet is mostly dependent on the physical layer, and it can reach up to 400 Gbit/s. The data link layer consists of the MAC sub-layer and the Logical Link Control (LLC) sub-layer. The LLC sub-layer provides synchronization, flow control, and error checking for the data link layer. Every node on the network has its own unique 6-byte MAC address. Network arbitration is done at the MAC sub-layer. Carrier Sense Multiple Access/Collision Detection (CSMA/CD) is the arbitration mechanism on which Ethernet is based. Every node continuously listens to the network state (Carrier Sense), multiple nodes can begin transmission if they detect that the network is quiet (Multiple Access), and in the case of multiple concurrent transmissions, each of the nodes must detect the collision, stop its transmission, and try again after some random time interval (Collision Detection) [10]. If a collision of the same frame occurs 16 times, that frame is withdrawn and will not be transmitted [11]. The main disadvantage of this arbitration approach is frequent collisions under heavy traffic, which lead to unbounded end-to-end transmission delay. Introducing switches in the LAN can improve the arbitration mechanism, since switches operate at the data link layer.
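To make the collision handling more concrete, the following minimal C++ sketch illustrates the truncated binary exponential backoff used by classic half-duplex Ethernet after a collision. The slot time, constants, and function names are illustrative assumptions for this example, not values taken from the thesis.

```cpp
#include <algorithm>
#include <cstdint>
#include <random>

// Illustrative sketch of CSMA/CD retransmission backoff: after the n-th
// collision a station waits a random number of slot times drawn from
// [0, 2^min(n,10) - 1]; after 16 failed attempts the frame is withdrawn.
constexpr double kSlotTimeUs  = 51.2;  // slot time of 10 Mbit/s Ethernet (512 bit times)
constexpr int    kMaxAttempts = 16;    // frame is dropped after 16 collisions

// Returns the backoff delay in microseconds, or a negative value when the
// frame has to be withdrawn.
double backoffAfterCollision(int collisionCount, std::mt19937& rng) {
    if (collisionCount >= kMaxAttempts)
        return -1.0;                                       // give up on this frame
    int exponent = std::min(collisionCount, 10);           // truncation at 10
    std::uniform_int_distribution<std::uint32_t> dist(0, (1u << exponent) - 1);
    return dist(rng) * kSlotTimeUs;
}
```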
In order to enable prioritization and preemption of frames, 802.1Q added a 32-bit (4-byte) field, the 802.1Q header, between the source MAC address and the EtherType fields of the original Ethernet frame (Figure 2).
Figure 2: Original Ethernet frame and Ethernet frame with 802.1Q tag
Two bytes are used for the tag protocol identifier (TPID) and the other two bytes for the tag control information (TCI). The TPID is used to identify the frame as an IEEE 802.1Q-tagged frame and is set to a constant value (0x8100). The TCI consists of three sub-fields:
• Priority code point (PCP) - A 3-bit field which specifies the frame priority level.
• Drop eligible indicator (DEI) - A 1-bit field which marks frames that may be dropped in the presence of congestion.
• VLAN identifier (VID) - A 12-bit field which specifies the VLAN to which the frame belongs.
Since PCP is a 3-bit field, up to eight different traffic classes can be defined. It is not defined how
the traffic is treated after being assigned to a specific class, but there are some recommendations
proposed by IEEE and they are given in Table 2.
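As an illustration of the tag layout described above, the following C++ sketch packs and unpacks the 16-bit TCI field; the struct and function names are hypothetical and only mirror the field widths given in the text.

```cpp
#include <cstdint>

// 802.1Q Tag Control Information: PCP (3 bits) | DEI (1 bit) | VID (12 bits).
// The two-byte TPID that precedes it is the constant 0x8100.
constexpr std::uint16_t kTpid8021Q = 0x8100;

struct Tci {
    std::uint8_t  pcp;  // priority code point, 0-7
    bool          dei;  // drop eligible indicator
    std::uint16_t vid;  // VLAN identifier, 0-4095
};

std::uint16_t packTci(const Tci& t) {
    return static_cast<std::uint16_t>(((t.pcp & 0x7) << 13) |
                                      ((t.dei ? 1 : 0) << 12) |
                                      (t.vid & 0x0FFF));
}

Tci unpackTci(std::uint16_t raw) {
    return Tci{static_cast<std::uint8_t>(raw >> 13),
               ((raw >> 12) & 0x1) != 0,
               static_cast<std::uint16_t>(raw & 0x0FFF)};
}
```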
There can be up to eight queues, one corresponding to each traffic class. When fewer than eight queues are implemented, several traffic classes can share a queue. The recommended mapping of traffic classes, based on their priorities, to the available queues is represented in Table 3. For example, if only two queues are available, queue 0 contains traffic classes 0-3 and queue 1 contains traffic classes 4-7.
Table 3: Mapping traffic classes to queues
Priority   Queue assignment for 1, 2, 3, 4, 5, 6, 7, 8 available queues
3          0  0  0  1  1  2  3  3
4          0  1  1  2  2  3  4  4
5          0  1  1  2  2  3  4  5
6          0  1  2  3  3  4  5  6
7          0  1  2  3  4  5  6  7
A switch can have up to eight queues for different types of traffic. Each frame has a three-bit tag called the Priority Code Point (PCP). According to this tag, a frame is forwarded to the corresponding queue, where it is buffered. It can be transmitted over the port only if the gate corresponding to the queue is open. The GCL defines the schedule for the gates. It is usually a list of bit vectors, each representing a configuration of all gates, together with the time duration for which each configuration is held. Figure 3 shows an example in which only the gate of queue 0 is open for the duration ti1; in the next entry, only the gate of queue 1 is open. The gate configurations have to be specified for one full period. The gate control list is cyclic and has to be specified in advance, i.e., it has to be assembled manually. Additionally, the traffic of queue 1 and queue 5 is shaped using CBS, which is specified by amendment 802.1Qav [18].
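To make the structure of a GCL more tangible, the following minimal C++ sketch shows one possible representation of such a cyclic list of gate states and durations; the types, names, and example durations are illustrative assumptions and do not reproduce the NeSTiNg implementation.

```cpp
#include <bitset>
#include <cstdint>
#include <vector>

// One GCL entry: one bit per queue gate (1 = open, 0 = closed) plus the time
// for which this configuration is held before the next entry takes effect.
struct GclEntry {
    std::bitset<8> gateStates;   // up to eight queues/gates
    std::uint64_t  durationNs;   // duration of this entry in nanoseconds
};

// A gate control list is a cyclic sequence of entries covering one period.
using GateControlList = std::vector<GclEntry>;

// Example resembling Figure 3: first only the gate of queue 0 is open (for
// t_i1), then only the gate of queue 1; the durations here are made up.
GateControlList makeExampleGcl() {
    return {
        {std::bitset<8>("00000001"), 20'000},
        {std::bitset<8>("00000010"), 20'000},
    };
}
```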
Prioritizing by itself cannot guarantee that time-critical messages are going to be transmitted at the right time. For example, if a control message arrives while a lower-priority message is being transmitted, the control message is queued and transmitted after the current message. However, in industry, control messages are usually scheduled. GCLs can be created according to their schedule to ensure that only the gates of the scheduled traffic are open during a specific time period. This way, scheduled control messages are not delayed by lower-priority traffic, because the time interval is reserved for the scheduled traffic.
CBS smooths out the traffic of a stream by distributing frames evenly in time. Each queue can have a CBS assigned to it, and those queues that do also have a credit value assigned. Transmission of a frame is possible only if the credit is non-negative. If there are no messages in the queue and the credit is positive, it is reset to zero. The credit increases at a configurable rate called the idle slope while frames are waiting for transmission in the queue, or while there are no messages and the credit is negative. It decreases at a configurable rate called the send slope while a frame or frames (depending on how much credit has been built up previously) are being transmitted [2].
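The credit evolution described above can be summarised in a few lines of C++; this is an assumed simplification of the CBS rules from the text, not the shaper code used in the simulator, and all names are hypothetical.

```cpp
// Minimal sketch of the credit-based shaper rules: credit grows with the idle
// slope while frames wait (or while the credit is negative), shrinks with the
// send slope during transmission, and is reset to zero when the queue is empty
// with positive credit. A frame may start transmission only when credit >= 0.
struct CreditBasedShaperState {
    double credit = 0.0;   // in bits
    double idleSlope;      // bit/s gained while waiting
    double sendSlope;      // bit/s change while transmitting (a negative value)

    void whileWaiting(double seconds)      { credit += idleSlope * seconds; }
    void whileTransmitting(double seconds) { credit += sendSlope * seconds; }
    void onQueueEmpty()                    { if (credit > 0.0) credit = 0.0; }
    bool mayTransmit() const               { return credit >= 0.0; }
};
```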
Figure 4 shows an example of the IEEE 802.1Qav and IEEE 802.1Qbv amendments. There are four
different traffic classes: Scheduled Traffic, Class A, Class B, and Best Effort Traffic. Message m3
should be transmitted after messages m1 and m2 , but it is not, because the gate for the scheduled
traffic is open, and other gates are closed. Message m5 carries control data and it is known that
it could arrive within time interval ti2 , so during that time the gate of the ST queue is scheduled
to be open. From this example, it is obvious that gating decreases the end-to-end latency of the
control message.
network simulation tools such as Ns216 , Ns317 , J-Sim18 , OPNET19 , QualNet20 , etc. The general
goal of these tools is to create a simulator platform on which different simulation frameworks can
be implemented. Since OMNeT++ provides many network topology components, is open-source,
and is based on C++, it was an easy choice for this type of research.
2.7 OMNeT++
OMNeT++ is an open-source discrete event simulation environment, based on C++, available
since 1997. It has been well accepted by the academic community ever since, mostly because of its
general applicability. It is available on all commonly used platforms such as Windows, Mac OS, and
Linux. Its main advantage lies in the fact that it provides basic logic and functional components
that can be used to develop more advanced network simulation frameworks. OMNeT++ offers
high scalability: the software is modular and interoperable, simulation models can be visualized and debugged, it has its own Integrated Development Environment (IDE), and its data interfaces are as open and general as possible [23]. In this research, it is used to develop simulation models for TSN networks, and all performance analysis is based on it.
The model structure in OMNeT++ consists of simple modules and compound modules, which communicate with each other by exchanging messages. Simple modules are written in C++ and are the lowest-level structural components of OMNeT++. Compound modules are created by combining several simple modules. These module types exchange messages either directly or through gates, as shown in Figure 6. A simple module's gate is described by a single pointer towards “next” or “prev”, while a compound module's gate requires an additional pointer to the inner simple module the compound consists of. The user defines the model structure through the OMNeT++ Network Description language (NED). In NED one can declare simple modules, define compound modules, configure the network, etc. In a general scenario, a network is configured in NED, while initialization files are not part of the NED since they can change on every run. Initialization data is stored in INI files [23]. OMNeT++ offers its own IDE which contains a graphical editor. The graphical editor allows changing the network topology graphically or directly through the NED source view. Messages are the central concept in OMNeT++ and in the model they can represent events, packets, commands, or other kinds of entities [24]. They can be configured by using MSG files. OMNeT++ has a data analysis tool integrated into the Eclipse environment. The results of a simulation can be stored as scalar values, vector values, or histograms. Different statistical methods can be used to extract useful data to draw conclusions, and this process is automated by using Analysis Files (ANF).
16 The Network Simulator - NS-2. [Online]. Available: https://www.isi.edu/nsnam/ns/
17 NS-3. [Online]. Available: https://www.nsnam.org/
18 JSim Home Page. [Online]. Available: https://www.physiome.org/jsim/
19 Riverbed. [Online]. Available: https://www.riverbed.com/se/products/steelcentral/opnet.html
20 QualNet Network Simulator Software. [Online]. Available: https://www.scalable-networks.com/qualnet-network-simulator-software
21 What Is INET Framework? [Online]. Available: https://inet.omnetpp.org/Introduction
22 NeSTiNg. [Online]. Available: https://gitlab.com/ipvs/nesting
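To give a concrete flavour of what a simple module looks like, the following OMNeT++-style C++ sketch defines a module that periodically emits a packet and forwards whatever it receives; the module, gate, and parameter names are invented for illustration (they would have to be declared in an accompanying NED file) and do not correspond to any module used in this thesis.

```cpp
#include <omnetpp.h>

using namespace omnetpp;

// Hypothetical simple module: sends a packet on every timer expiry and
// forwards every received message through its "out" gate.
class PeriodicSender : public cSimpleModule {
  public:
    ~PeriodicSender() override { cancelAndDelete(timer); }

  protected:
    cMessage* timer = nullptr;

    void initialize() override {
        timer = new cMessage("sendTimer");
        scheduleAt(simTime() + par("period"), timer);   // "period" comes from the INI file
    }

    void handleMessage(cMessage* msg) override {
        if (msg == timer) {
            send(new cPacket("payload"), "out");        // emit a new packet
            scheduleAt(simTime() + par("period"), timer);
        } else {
            send(msg, "out");                           // forward received messages
        }
    }
};

Define_Module(PeriodicSender);
```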
The INET framework offers many useful features, but the ones of interest for this research are the following:
• Data link layer - allows a device to access the network, offers it a physical address, works
with device’s networking software and provides error detection.
• Network layer - provides logical addressing system, so the data can be routed through several
lower layer networks.
• Transport layer - offers end-to-end network communication between devices and can supply
the upper layers with connection-oriented or connectionless best-effort communication.
• Session layer - offers many functionalities, but the most general one is allowing applications on devices to establish, manipulate, and terminate a dialog through the network.
• Presentation layer - formats the data the application sends and receives through the network.
• Application layer - provides the interface for the end user that operates the device connected
to the network.
The first three layers are mostly hardware related, while the upper four layers are more abstract and software related. All of these layers except the session and presentation layers are implemented in the INET framework. Many protocols for the various layers are included in the framework. Some of them are: Address Resolution Protocol (ARP) and Ethernet on the data link layer, Internet Protocol (IP) and Internet Control Message Protocol (ICMP) on the network layer, Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) on the transport layer, and Simple Mail Transfer Protocol (SMTP) and Hypertext Transfer Protocol (HTTP) on the application layer, etc. The INET framework is very comprehensive, so it is frequently used as a base for new network frameworks, which is the case with the Network Simulator for Time-Sensitive Networking (NeSTiNg) too.
A TSN switch is the main component of NeSTiNg. It is mostly implemented based on existing modules from INET. Figure 7 gives an insight into the modules and sub-modules of the TSN switch. The number of ports is configurable. Each port consists of three modules: eth, processingDelay, and relayUnit. An incoming frame first goes through the eth module. Since it is an incoming frame, it is forwarded from the mac module through the express queue. The frame is delayed by the processingDelay component. The relayUnit module routes the frame to the corresponding output port. The frame then goes through the processingDelay component of the output port. As an outgoing frame, it first goes through the queuingFrames component, which evaluates the priority of the frame based on its PCP field and maps it to a corresponding queue according to the matrix presented in Table 3. The transmissionSelection component selects the next frame for transmission according to the priorities and the states of the queue gates. Gates are controlled by the gateController component, which stores a gate control list. The frame is then transmitted according to the strict priority or credit-based algorithm; for each queue, the tsAlgorithms component is configurable by the user. Finally, the frame is forwarded to the mac module through the preemptable or express queue.
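As a rough illustration of the transmission selection step described above (an assumed simplification, not the actual NeSTiNg code), strict-priority selection over gated queues could look as follows; all names are hypothetical.

```cpp
#include <array>
#include <deque>
#include <optional>

// Eight FIFO queues (index 7 = highest priority), each with a gate state
// driven by the GCL. The selector returns the index of the highest-priority
// queue that is non-empty and whose gate is currently open.
struct GatedQueues {
    std::array<std::deque<int>, 8> queues;     // frame identifiers, for illustration
    std::array<bool, 8>            gateOpen{}; // true = gate open

    std::optional<int> selectNextQueue() const {
        for (int q = 7; q >= 0; --q)
            if (gateOpen[q] && !queues[q].empty())
                return q;
        return std::nullopt;                   // nothing eligible for transmission
    }
};
```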
3 Related Work
Since there are a number of different research areas closely related to this thesis, the related work is divided into subsections. Firstly, the fact that TSN is not the only solution for real-time Ethernet has to be taken into consideration. Subsection 3.1 provides an overview of existing solutions, their strongest features, advantages, and disadvantages. Many papers focus only on TSN and its evaluation. Section 3.2 provides a summary of the extensive research that has been done so far regarding TSN. A few papers examine the key components of TSN individually, while others provide an evaluation of TSN as a communication system reaching its full potential with all standards included. Many different ideas for evaluation have been proposed, and the simulation of TSN is one of them. Section 3.3 presents simulators for TSN that have been developed so far using different simulation tools. This section is of utmost importance for the thesis.
set is schedulable with AVB. This thesis heavily relies on the work done by M. Ashjaei et al. by
using the same use-case and identical model configuration.
and delay are reduced significantly by using PPS. Beneficial characteristics of IEEE 802.1Qbu are
accurately pinpointed by this paper and it is shown that real-time communication via the Ethernet
network is significantly improved with this amendment.
Lin Zhao et al. [14] provide a comparative analysis of TSN and TTEthernet. One of several similarities of the two technologies is that both implement a Time-Division Multiple Access (TDMA) strategy, i.e., traffic collision is avoided by precisely dividing the transmission time of traffic. After introducing both technologies, the authors also explain their network architectures. In terms of bandwidth allocation, TSN outperforms TTEthernet. In TTEthernet, low-priority flows might starve in the case of large high-priority traffic transmission. On the other hand, the Stream Reservation Protocol (SRP) in TSN provides fairness in bandwidth allocation for low-priority traffic. The researchers then compare TSN and TTEthernet in terms of delay analysis. The TSN mechanisms CBS and frame preemption allow high bandwidth, so end-to-end delay is minimal. TTEthernet can combine Time-Triggered (TT) and Rate Constrained (RC) traffic in three ways: shuffling, in which TT traffic is simply delayed until RC traffic finishes its transmission; preemption, in which RC traffic is suspended, TT traffic is forwarded, and then RC traffic continues its transmission; and timely block, in which RC traffic is not transmitted before TT traffic if it might affect the transmission of TT traffic. TSN also outperforms TTEthernet in terms of its redundancy approach. TSN avoids frame loss by sending packets along several paths, while TTEthernet does not have a concrete mechanism to deal with packet loss. In general, TSN is more flexible and adaptive to modifications in the topology, while TTEthernet requires an entirely new schedule. This paper once again shows the enormous potential of TSN and justifies the research goals of this thesis.
To simulate different system setups, files that define the messages, specify the topology and virtual links, and configure the simulation have to be provided. The simulation results confirm that time-critical frames are transmitted through the network as planned and that their latency is bounded. It is noticed that the amendments for frame preemption and the Credit-based Shaper favor audio-video frames and best-effort traffic.
Most of the work in this thesis is based on the NeSTiNg simulator implemented by J. Falk et al. [26]. The technical details of this simulator are explained in subsection 2.9. The simulator is available to the research community, and our contribution to their work is an extensive evaluation of the TSN functionalities implemented in the network simulator. The results provided in the paper concern the network simulator itself; the average runtime of simulations and the memory consumption are measured. There is no information given about the performance of TSN, only about the performance of the simulator. The work of this thesis is oriented more towards that part.
4 Method
In this section, a scientific research method applied within this thesis is described briefly. Two
main parts are discussed: a motivation for employing this particular method and an overview of
how the method actually applies to the thesis work.
Theory building is usually used to construct the research questions and to give a justification for their significance to a research community. In some cases, research problems cannot be solved mathematically or tested empirically. Instead, the development of a system can be used to provide answers to research questions. System development usually leads to new theories and the improvement of existing ones. Once the system is built, researchers can use it for experimentation. Experiments are usually driven by theories and affected by the system development itself, and the results provided by an experiment should be evaluated. Observation includes case studies, field studies, and similar methodologies. It helps with the formulation of the research questions that are going to be tested and answered through experimentation. No research method alone is sufficient
within the field of computer science. Usually, there are multiple methods that are applicable and
valuable feedback can be achieved by combining them.
System development is an essential strategy of this research method and it is interconnected with
other strategies. The partial goal of this thesis is developing simulation models for TSN that include
802.1Qbu and 802.1Qbv amendments. This part can be considered system development. The
purpose of developing a simulation model of TSN is to examine the performance of the amendments,
individually and when they are combined, in order to provide a proof-by-demonstration that TSN
can deliver real-time data transmission and still provide excellent transmission for remaining traffic
classes. In order to confirm ideas and concepts developed about TSN as part of the theory building,
several case studies are created. This methodology is included in observation. Experimentation
found its place between theory building and observation. It includes computer and experimental
simulations and has a purpose to validate proposed theories. In this thesis, simulation allows us
to do a performance analysis of TSN and answer research questions stated in Section 1.2. In
order to test the performance of the network, several performance metrics can be tracked, such as
end-to-end transmission latency and utilization [32]. These are dependent variables in the research
because they are being observed. Their change represents a result of experimental manipulation
of the independent variables [33]. Independent variables are network topology, data transmission
rate, ports bandwidth, etc. These are the values that are being manipulated in an experiment and
are unaffected by the other variables [33].
The research methodology consists of several milestones that are presented in Figure 9. In order to understand the state of the art, a literature review is conducted as the first stage. In the second stage, the existing simulation tools are reviewed, with OMNeT++ in focus. Since the goal of this thesis is not “reinventing the wheel”, developing an entirely new simulation framework would be unnecessary. Multiple existing simulation frameworks for TSN networks can be found, but NeSTiNg is the most promising one. This framework is altered to fit the scope of the research. The third stage is the design stage, and it implies developing the simulation models that are used to demonstrate the functionality and efficiency of the two amendments of interest. The fourth stage of the research is the implementation, which implies a synthesis combining the requirements elicitation, analysis, and model checking phases. It provides a solution that can later be tested and improved. The final stage of the research is the evaluation, in which the performance analysis of the TSN network is conducted by comparing it with the AVB network. The last three stages are closely related, so in order to explain them a holistic approach is used. Section 6 covers these three stages.
Figure 11 presents the modules responsible for frame preemption. After the frame goes through the gate, it exits the queuing module and can use either the express or the preemptable path. Before it exits the switch, the traffic needs to be encapsulated again. The module etherEncap adds the missing Ethernet frame fields to the payload, and vlanEncap adds the 802.1Q header.
It needs to be mentioned that the original NeSTiNg had some issues that prevented simulation. One of them was an error in spending the credit in the CreditBasedShaper module. It was resolved by canceling the credit spending before rescheduling, which allowed the usage of arbitrary slope factors, something that was not possible before. Since dealing with these errors was not systematic, i.e., it was done by a trial-and-error method, not all the changes can be mentioned in this report. This updated version of NeSTiNg was uploaded to GitHub in order to allow researchers further investigation of TSN.
6 Evaluation of TSN
Performance analysis of TSN is conducted through five scenarios. The first scenario tests the functionality and performance of the enhancements for scheduled traffic (IEEE 802.1Qbv). The second scenario aims to demonstrate the functionality and test the performance of frame preemption (IEEE 802.1Qbu). The third scenario is the unification of the first two and demonstrates the functionality of the combination of the two mentioned amendments. The fourth scenario demonstrates the efficacy of frame preemption when combined with scheduled traffic. The final scenario combines all of the TSN amendments and is based on the industrial use-case model proposed in [2] and [6].
The following sections describe the model configuration and present the simulation results. All relevant configuration parameters of the following scenarios are given in Table 4. It needs to be mentioned that most of these configuration parameter values are defaults; the data rate value was adopted from the work of M. Ashjaei et al. [2]. The simulation time of all scenarios is 300 s, and from those simulation results the relevant data is extracted and presented in the following tables. However, the graphs show only the simulation time intervals of interest. All switches share the same buffer size, which is 100 Maximum Transmission Units (MTU). Another important reminder related to all scenarios is that the simulation framework does not provide data about the latency of lost packets. Whenever the switch buffer is full, packet loss occurs. That explains the saturation point of some graphs featured in the following section.
The message size discussed in the following sections refers to the message payload size. Since the Ethernet frame used carries the 802.1Q tag, every message gains 22 B for the destination MAC, source MAC, 802.1Q header, EtherType field, and CRC. The minimum payload size is 42 B, while the maximum is 1500 B. After adding 7 B for the Ethernet preamble and 1 B for the Start of Frame Delimiter (SFD), the transmitted message size ends up in the 72 B-1530 B range.
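As a quick check of the figures above:
\[
42\,\text{B} + 22\,\text{B} + 8\,\text{B} = 72\,\text{B},
\qquad
1500\,\text{B} + 22\,\text{B} + 8\,\text{B} = 1530\,\text{B}.
\]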
The model consists of three end systems and one switch. ST node generates scheduled traffic,
BE node constantly generates best-effort traffic and Sink node is the recipient of all the generated
traffic.
ST node transmits control traffic periodically every 500µs. Gating mechanism ensures protected
window in which scheduled traffic transmits, unprotected window in which best-effort data trans-
mits and guard band window in which all the gates are closed in order to guarantee that BE traffic
transmission will not interfere with ST traffic transmission [31]. End-to-end latency is presented
in Figure 13. It is shown that scheduled traffic has a constant latency of 16.72µs. It is not possible
to show how end-to-end latency stabilizes in the full simulation time graph, so the first graph
shows only the first 100ms of the simulation. End-to-end latency of best-effort traffic is linearly
increasing until the buffer of the switch gets full. Buffer capacity of the queue is set to 100 MTU
packets. When the queue is completely full, end-to-end latency of best effort traffic becomes sta-
ble. However, best-effort traffic experiences packet loss because of the buffer overflow. Best-effort
traffic cannot be forwarded to output port as fast as it is being generated. In Table 5 minimum,
maximum and average latency are presented for both types of traffic.
Utilization of each link is presented in Table 6. Scheduled traffic is generated every 500µs with a payload of 20B, so the low utilization of that link is understandable. The BE node constantly generates MTU packets, so the utilization is high.
Figure 14 graphically presents traffic flow on its way to the destination. Scheduled traffic directly
goes towards switch input port eth[0] after which it is forwarded to the appropriate output port
eth[2]. It is queued in the highest priority queue which checks the setup of the Transmission
Selection Algorithm (TSA) and the state of the gate. TSA is set to StrictPriority and gate
state is open, so the scheduled traffic is forwarded towards Sink. Best-effort traffic is sent towards
switch input port eth[1] where it is queued. If the gate is closed, traffic waits in the queue along
with all the other traffic sent to that queue. As soon as the gate opens, traffic is forwarded towards
Sink. Gating mechanism is configured to ensure the protected window length of 17µs, unprotected
window length of 360µs and guard band window of 123µs [31]. The guard band ensures that best-
effort traffic finishes its transmission before protected window starts and it is set to its maximum
value, i.e., the time needed to transmit the MTU.
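As a rough sanity check (assuming a 100 Mbit/s link rate, which is consistent with the 123µs guard band and the 95 Mbps idle slope used in Scenario 5), the guard band equals the transmission time of one maximum-size frame, and the three windows fill the 500µs cycle:
\[
t_{\text{guard}} \approx \frac{1530\,\text{B} \cdot 8\,\text{bit/B}}{100\,\text{Mbit/s}} = 122.4\,\mu\text{s} \approx 123\,\mu\text{s},
\qquad
17\,\mu\text{s} + 360\,\mu\text{s} + 123\,\mu\text{s} = 500\,\mu\text{s}.
\]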
There are two types of traffic transmitted through this network. Audio-video traffic generated by
AV node can preempt best-effort traffic generated by BE node. Sink node is the recipient of all
frames that are generated. Model configuration is the same as for Scenario 1 (Table 4). TSA for
Audio-Video traffic is set to CreditBasedShaper and IdleSlopeFactor of the shaper is set to
Utilization of the links is shown in Table 8. The payload of traffic is 1400B, while BE node
constantly generates MTU packets (1500B). This explains the high utilization of links from end
systems to the switch. Utilization from the switch to the Sink is 98.5128%.
Figure 17 presents execution trace for this scenario. Audio-video traffic uses the port eth[0] as
an input port. The switch forwards packets to an output port eth[2]. Its PCP is set to 7, so this
traffic is using queue[1] according to the mapping matrix (Table 3). The input port for best-
effort traffic is eth[1] and it uses the same output port as audio-video. The PCP value is set to
0, so best-effort traffic is queued in queue[0]. The gates associated with these queues are constantly open. If the transmission selector requests a packet from queue[1] and the credit is zero, the frame is instantly forwarded to the upper layers through the express queue, preempting the transmission of any other traffic. Best-effort frames are transmitted through the preemptable queue.
In this scenario, the maximum end-to-end latency of AV traffic is 19.904ms. Audio-video traffic is
delayed by credit-based shaper, so frame preemption does not reduce the latency of frames with
higher priority.
Figure 19 shows the trend of end-to-end latency for every traffic type throughout simulation time.
Best-effort and audio-video traffic latency increases linearly until their buffers get full, after which
end-to-end latency becomes approximately constant. The credit-based shaper allows best-effort traffic to be transmitted even though it has the lowest priority. Scheduled traffic transmission is fully deterministic and has a constant latency value of 16.72µs. Because of the gating mechanism, it is never preempted by audio-video traffic. Table 9 presents the concrete minimum, maximum, and average end-to-end latency values of all three traffic types.
Increasing the number of sources affects the utilization of the link towards the Sink. The gating mechanism, credit-based shaping, and frame preemption cause traffic queuing, so the utilization of the mentioned link is 71.3415%. The utilization of the remaining links is as expected, and this is presented in Table 10.
6.4 Scenario 4
The previous scenario demonstrates how gating, frame preemption and credit-based shaping work
when they are combined. Obtained results show that frame preemption does not decrease end-to-
end latency significantly. Scenario 4 combines frame preemption and gating. Topology is presented
in Figure 20. There are two types of control frames, time-triggered (TT) and event-triggered (ET).
Event-triggered frames have the highest priority and are able to preempt best-effort traffic, which
has the lowest priority. Frame preemption guarantees that these frames are going to be transmitted
as soon as they arrive providing low end-to-end latency. Gating mechanism assures that time-
triggered frames are scheduled and not preempted by any other traffic, not even higher priority
frames. The PCP of event-triggered traffic is set to 7, the PCP of time-triggered traffic (equivalent to scheduled traffic from the previous scenario) is set to 5, and best-effort traffic has a PCP of 0. The ET node generates 40B frames, the TT node generates 20B frames every 500ms, and the BE node generates MTU packets throughout the entire simulation. The receiver of these frames is the node Sink. The gating mechanism is regulated the same way as in the previous scenario.
Table 11 presents the minimum, maximum, and average end-to-end latency values for all three types
of traffic. In order to evaluate the performance of frame preemption, the same results are gathered
for the scenario with frame preemption disabled. These results are presented in Table 12.
By comparing these two tables, it can be concluded that frame preemption decreases the average end-to-end latency of event-triggered traffic by 4.366µs. For time-critical packets, this is a very
significant improvement in end-to-end transmission latency. Even though the latency is low, it is
not deterministic. It depends on whether there is a frame being transmitted through a preemptive
queue or not. If not, end-to-end latency is lower. Preemption also takes some time; the transmission
of the preemptable frame has to be stopped; the indicators of preemption have to be inserted.
The express frame can then be transmitted. When no frames are being transmitted through the preemptable queue, the end-to-end latency of event-triggered traffic is 0.01672ms. Otherwise, it is 0.02172ms. When frame preemption is not enabled, the end-to-end latency is the same for frames that are not queued. However, frames that are queued and transmitted as soon as the transmission of the previous frame is completed experience an end-to-end latency of 0.02672ms. The end-to-end latency of scheduled traffic is low and deterministic, as shown in the previous scenarios. Best-effort traffic experiences higher end-to-end latency when preemption is enabled, as expected, because it is being preempted.
Traffic generated by all source nodes is presented in Table 15. Control traffic is labeled “ST”
and should have minimal and deterministic end-to-end latency. Video cameras generate traffic of
identical size.
The configuration of the GCL for switch A is given in Table 13. Even though this switch has two output ports, only the GCL for the output port towards switch B is given. The protected window assures that the scheduled traffic transmission is not disturbed by other types of traffic. The duration of the protected window is the time required for the control messages from nodes Control 3 and Control 4 to reach switch B. The other output port transmits only one type of traffic, so there is no need to change the states of the gates, i.e., all gates are always open.
Table 13: Gate Control List for switch A (cycle: 10 ms)
Window                Gate configuration (bit vector)    Duration
Protected window      0001                               24 µs
Unprotected window    1110                               9853 µs
Guard band            0000                               123 µs
Table 14 contains the configuration of the switch B output port towards the Head Unit. The duration of the protected window is a few microseconds longer than the one mentioned above. Messages from nodes Control 1 and Control 2 are transmitted first, followed by messages from nodes Control 3 and Control 4, which are transmitted upon their arrival from switch A. The output port towards PUCAM does not require any gate state changes.
Table 14: Gate Control List for switch B (cycle: 10 ms)
Window                Gate configuration (bit vector)    Duration
Protected window      0001                               37 µs
Unprotected window    1110                               9840 µs
Guard band            0000                               123 µs
The simulation time of this scenario is 5 minutes. Table 16 presents the achieved maximum end-to-end latency of all messages when frame preemption is active. All control messages are delivered within 36.88µs, which is satisfactory for critical applications. Audio-video data can preempt Bulk data. The IdleSlopeFactor is set to 95 Mbps.
The utilization of every link is presented in Table 17, and the tests show that it is not altered by activating frame preemption. The Head Unit link has the highest utilization, as expected, because it receives packets from eight nodes.
In paper [2], this model was used for the response-time analysis of the Ethernet AVB protocol. The results that the researchers obtained are shown in Table 18. It is important to point out that the model configuration is different. In that case, control frames are considered as Class A and are forwarded through a credit-based shaper with an idle slope of 10 Mbps. Camera frames represent Class B, and the idle slope of the shaper is 85 Mbps. Best-effort traffic (Bulk) is transmitted as non-real-time frames.
Comparing these results with the results stated above, we can see that control frames have a much lower end-to-end latency when transmitted through the TSN switch. When frames are transmitted as Class A, they are delayed by the credit-based shaper. If they are transmitted as scheduled traffic, they can be transmitted immediately. Camera and audio-video frames also have lower end-to-end latency when transmitted through the TSN switch, because the idle slope is 95 Mbps. Control frames and bulk data do not take more than 5% of the overall network utilization, so there is no need to decrease the data rate of camera and audio-video frames by assigning a lower idle slope to the credit-based shaper.
6.6 Discussion
This subsection summarizes all observations related to the performance of the TSN amendments. The obtained simulation results are discussed, and the two main amendments are evaluated according to them. The enhancement for scheduled traffic and frame preemption are analyzed individually and in conjunction.
Scenario 1 analyzes the enhancement for scheduled traffic individually. Results obtained utilizing
this network topology show that this amendment can guarantee low and deterministic latency for
scheduled traffic, but only if the gating schedule is created correctly. The main disadvantage is
that the schedule has to be created offline, i.e., before the execution. It has to be precise in order
to achieve maximum utilization. However, network delays have to be thought of as well. Even if
precise transmission times are calculated, the safety margin should be added. This is a trade-off. In
order to maximize the utilization, safety margins have to be decreased and vice versa. The guard
band decreases the utilization of network links because it stops transmission of all types of traffic.
Setting the guard band to the time interval needed for the transmission of MTU packet guarantees
the deterministic end-to-end latency of scheduled traffic. However, this is sometimes redundant.
A better approach would be determining what the size of the packet that is being transmitted just
before the scheduled traffic is and then calculate the guard band length based on the size. This
could decrease the guard band length and increase utilization. The utilization of Switch to Sink
link with the guard band is 50.14%. It increases to 76.796% without the guard band. For this
topology, the scheduled traffic is unharmed, and the end-to-end latency is the same. However, it is
more important to provide a guarantee that time-critical and scheduled packets are transmitted in
a predictable manner. End-to-end latency of best-effort traffic is continuously increasing until the
buffer of the queue gets full. Latency reaches its maximum value then and becomes stable. Once
the buffer gets full, packet loss occurs.
There are time-critical packets that are not scheduled, but it is essential that they are transmitted as soon as they arrive. Since creating an offline schedule for gating is not an option in this case, frame preemption is used. Scenario 2 analyzes the amendment for frame preemption. Audio-video traffic should be transmitted evenly, and the credit-based shaper facilitates this. Frame preemption
allows the immediate transmission of audio-video traffic right after the credit reaches zero. This
reduces the end-to-end latency of audio-video traffic, and these frames are evenly distributed in
time. However, it does not make a significant improvement in this scenario. Preemption should be
used only for the frames that do not require determinism in their latency but do require immediate
transmission. It is not suitable to use preemption instead of gating, because the latency would be
completely unpredictable.
Scenario 3 is a combination of the first two scenarios. Even though scheduled traffic does not
have the highest priority, the gating assures that nothing disrupts its transmission, not even the
AV traffic that is able to preempt packets of the lower priority. Credit-based shaper favors the
best-effort traffic and prevents its starvation. Preemption favors the audio-video traffic and slightly
increases the latency.
Scenario 4 is similar to the previous one, except that it does not include credit-based shaping. Two types of critical frames are transmitted through the network: time-triggered and event-triggered. The gating mechanism assures low and deterministic end-to-end latency of time-triggered frames. Frame preemption assures low, but not deterministic, end-to-end latency of event-triggered frames. This scenario shows that frame preemption is not immediate; it takes some time. Because of that, the end-to-end latency of a frame that preempts another frame is increased. However, the latency is much lower with frame preemption than without it. The end-to-end latency of the preempted traffic (best-effort traffic) is increased, as expected.
The purpose of scenario 5 is to demonstrate the differences between the TSN and AVB switches. The AVB switch can assure low end-to-end latency of time-critical frames, but it cannot be considered deterministic. Non-time-critical frames could interfere with them, while that is not the case when they are transmitted through the TSN switch. If gating is implemented correctly, no other traffic class can disrupt scheduled traffic. Frame preemption assures that AV frames are distributed evenly in time and transmitted as soon as the credit reaches zero. However, for this scenario, it does not improve overall network performance.
7 Conclusion
The main goal of this thesis is to conduct a performance analysis of TSN by providing concrete and valid results obtained in a network simulation framework. TSN consists of numerous substandards, but only the enhancements for scheduled traffic (IEEE 802.1Qbv) and frame preemption (IEEE 802.1Qbu) are within the scope of this research. To address the topic, a thorough background has been provided, starting with the network architecture, followed by the TSN features and an introduction to OMNeT++. The background concludes with the concrete TSN simulation framework NeSTiNg, which was used to build the models of interest. The related work summarizes important research papers that investigate TSN networks and justifies the thesis goal. The evaluation has been performed by creating five scenarios, each of which provides simulation results. The first two scenarios demonstrate the functionality of the mentioned amendments, the third and fourth provide interesting results on combining the two, and the fifth scenario is a real-world use case that represents a vehicle network. The discussion that concludes the evaluation provides a detailed analysis of these scenarios, based on which the research questions can be answered.
Simulation is widely used by the research community all around the world. An advantage of using a simulation framework is that it can easily be reconfigured, all parameters are controlled, and researchers can focus on the analysis. On the other hand, performing a physical experiment could provide more accurate results, but an experiment is not easily scalable: the process of wiring and configuring the network mentioned in Section 6.5 would be too exhausting, and collecting the data from the nodes would not be an easy task. NeSTiNg can provide the latency, utilization, queuing time, delays of express and preemptable frames, and more, which helps in understanding TSN. The animation speed can also be configured so that the traffic flow can be easily tracked.
The initial goal of this thesis was to perform a performance analysis of TSN networks by developing a simulation framework based on OMNeT++, but since several such frameworks already existed, such as NeSTiNg and CoRE4INET, the focus of the thesis shifted towards a deeper performance analysis of the standard. With the increase of data being transmitted in the automotive industry, this technology is predicted to be the next step in in-car networking. With this in mind, the authors of this thesis believe that examining these two amendments has produced some interesting results; the potential and utility of TSN in general are unquestionable.
8 Future Work
While creating the gate control lists for each of the scenarios, one can conclude that it is a tiring process that requires much planning in advance. However, it is not impossible to implement an automatic offline gate scheduler that respects the traffic characteristics. It requires taking a lot of information about each frame into consideration, such as the period, release time, priority, destination, packet size, and the sizes of the previous and the next frame, as well as information about the network configuration, such as the data rate, cycle period, number of queues in use, processing delay of the switches, etc. Based on these inputs, GCLs that provide minimal protected windows and guard bands could be generated and then tested using simulation tools.
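As a starting point, such a scheduler could be sketched as follows (Python; the frame parameters, number of queues and link rate are illustrative assumptions, and a real scheduler would additionally have to insert guard bands, resolve conflicts between frames, handle multiple hops and fold everything into a common hyperperiod):

from dataclasses import dataclass

NUM_QUEUES = 8
LINK_RATE = 100_000_000           # assumed egress link speed in bit/s

@dataclass
class Frame:
    queue: int                    # egress queue (traffic class) the frame is mapped to
    release_us: float             # offset of the frame inside the cycle, in microseconds
    size_bytes: int

def tx_us(size_bytes: int) -> float:
    return size_bytes * 8 / LINK_RATE * 1e6

def build_gcl(frames, cycle_us):
    """Return (interval length in us, gate bit-vector) entries covering one cycle.
    Scheduled frames get exclusive windows; the remaining time is left to all queues."""
    all_open = "1" * NUM_QUEUES
    entries, cursor = [], 0.0
    for f in sorted(frames, key=lambda f: f.release_us):
        if f.release_us > cursor:                    # open window for the other traffic
            entries.append((f.release_us - cursor, all_open))
            cursor = f.release_us
        window = tx_us(f.size_bytes)
        exclusive = "".join("1" if q == f.queue else "0" for q in range(NUM_QUEUES))
        entries.append((window, exclusive))
        cursor += window
    if cursor < cycle_us:
        entries.append((cycle_us - cursor, all_open))
    return entries

frames = [Frame(queue=7, release_us=0.0, size_bytes=100),
          Frame(queue=7, release_us=500.0, size_bytes=100)]
for length, gates in build_gcl(frames, cycle_us=1000.0):
    print(f"{length:8.2f} us  gates={gates}")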
The simulation framework that was used required some modifications, such as fixing errors in the credit spending and in the cancellation of other modules' self-messages. These issues would not allow simulations to run longer than several seconds, and some errors remain unresolved. Future work could be devoted to developing a more stable simulation framework, and a comparison with another simulation framework could increase its credibility.
Future work could also provide a performance analysis of TSN in more complex networks in order to determine the limits of its efficient utilization. That kind of research could demonstrate TSN's scalability.
A Installation guidelines
This section provides guidelines and technical tips for installing OMNeT++, the INET framework and the NeSTiNg simulation framework. It is important to point out that the simulations have been tested with the latest version of the NeSTiNg simulation framework, OMNeT++ version 5.4.1 and INET version 4.1.0 under Ubuntu Linux 18.04.
1.1 OMNeT++
If you are installing OMNeT++ on a Linux distribution or macOS, prerequisite packages have to be installed first; no prerequisite packages are necessary on Windows. The next step is downloading OMNeT++ from the official web page. You have to manually copy the downloaded OMNeT++ archive to the directory where you want to install it. It is important that you choose a directory whose full path does not contain any spaces (do not put OMNeT++ under Program Files). The OMNeT++ Installation Guide describes how to install OMNeT++ on various platforms, and the instructions provided within that document are sufficient for a successful installation. A Java Runtime must be installed before the IDE can be used; it is strongly recommended to use OpenJDK, version 8.0 or later.
6. The project folder should now appear and be checked under Projects, together with the inet project folder.
B Getting started
In this section, the use of NeSTiNg is demonstrated on a simple example model. It guides the user through concrete steps and provides practical tips in order to encourage experimentation with the framework.
• After starting OMNeT++, we need to select the desired workspace. The welcome screen offers several options, such as Overview, What's New, Tutorials, Migrate and Workbench. We will not use them in this guideline, so simply close the welcome screen.
• A pop-up will notify you that your workspace is empty and ask whether you would like to install or import some projects. Clicking OK would lead to an error screen because the web source cannot be found, so skip this step by clicking Cancel.
• Building INET inside the IDE is usually slow, so it is faster to build it from the terminal: navigate to the INET folder and run the commands make makefiles and make, in that order.
• Instead of creating a new project, which is straightforward, we will simply add a new folder to the NeSTiNg project in order to minimize the importing from this framework.
• We can add a HelloWorld folder to the nesting->Simulations folder by right-clicking on it and selecting New->Folder. Next, we add a NED file to that folder: right-click on the created folder, choose New->Network Description File (NED), give it a name and select a NED file with one item for the easiest start.
• The next step is to add an INI file to the same folder in a similar fashion: right-click on the folder and choose New->Initialization File (INI), customized to be empty. This results in a NED file similar to the one in Figure 23.
• In the submodules window we can see the nodes included in the NeSTiNg package. Let us add some of them to create a simple network such as the one in Figure 24. In order to make it runnable, we need to define the routing table and the schedule for the traffic that the source needs to transmit.
• We add an xml folder to the HelloWorld folder and create two XML files in it, Routing.xml and Schedule.xml.
• Now we can start the simulation and analyze the results. The simulation window is presented in Figure 25.
• In order to analyze the data, we need to add an ANF file to the HelloWorld folder. In the INI file we specified the directory for storing the results; there we can find the sca, vec and vci files and drag them into the input files of the ANF file. This is presented in Figure 26.
• Now we can browse the data, filter them, plot them and much more. Figure 27 shows a chart of the end-to-end delay of the traffic in the example scenario. The recorded result files can also be processed outside the IDE, as sketched after this list.
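Besides the ANF-based analysis in the IDE, the recorded .sca and .vec files can be exported to CSV with OMNeT++'s scavetool (its built-in help lists the exact export options of the installed version) and then loaded in Python. The sketch below makes that idea concrete; the file name and the column names are assumptions that depend on the chosen export settings:

import pandas as pd

# Load a CSV that was previously exported from the recorded result files.
df = pd.read_csv("results/helloworld.csv")

# Keep only the rows that belong to an end-to-end delay statistic or vector.
delays = df[df["name"].str.contains("delay", case=False, na=False)]

# Summarise the delay values per module (e.g. per sink application).
print(delays.groupby("module")["value"].agg(["count", "mean", "max"]))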