

Mälardalen University
School of Innovation Design and Engineering
Västerås, Sweden

Thesis for the Degree of Master of Science in Computer Science -


Embedded Systems 15.0 credits

PERFORMANCE STUDY AND


ANALYSIS OF TIME SENSITIVE
NETWORKING

Haris Suljić, Mia Muminović


[email protected], [email protected]

Examiner: Saad Mubeen


Mälardalen University, Västerås, Sweden

Supervisors: Mohammad Ashjaei


Mälardalen University, Västerås, Sweden

Company supervisor: John Lundbäck,


Arcticus Systems AB, Stockholm

June 14, 2019



Abstract
Modern technology requires reliable, fast, and cheap networks as a backbone for data transmis-
sion. Among many available solutions, switched Ethernet combined with the Time Sensitive Networking
(TSN) standard excels because it provides high bandwidth and real-time characteristics by utilizing
low-cost hardware. For the industry to acknowledge this technology, extensive performance studies
need to be conducted, and this thesis provides one. Concretely, the thesis examines the performance
of two amendments, IEEE 802.1Qbv and IEEE 802.1Qbu, that were recently added to the TSN
standard. The academic community understands the potential of this technology, so several simu-
lation frameworks already exist, but most of them are unstable and undertested. This thesis builds
on top of existing frameworks and utilizes a framework developed in OMNeT++. Performance is
analyzed through several separate scenarios and is measured in terms of end-to-end transmission
latency and link utilization. The attained results justify the industry's interest in this technology and
could lead to its greater representation in the future.


Table of Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Expected outcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Background 3
2.1 Real-time systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.3 Switched Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.4 Real-time Ethernet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.5 Time Sensitive Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.6 Network simulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.6.1 Comparison of network simulators . . . . . . . . . . . . . . . . . . . . . 10
2.7 OMNeT++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.7.1 The NED language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.7.2 Messages and packets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.7.3 Configuring the simulation . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.7.4 Result analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.8 INET Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.9 Network Simulator for Time-Sensitive Networking - NeSTiNg . . . . . . . . . . . . 13

3 Related Work 15
3.1 Comparison of solutions for real-time Ethernet . . . . . . . . . . . . . . . . . . . . 15
3.2 Performance Evaluation of TSN amendments . . . . . . . . . . . . . . . . . . . . . 16
3.3 Simulating Time Sensitive Networking . . . . . . . . . . . . . . . . . . . . . . . . . 17

4 Method 19
4.1 System Development Research Method . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Simulation framework architecture 21

6 Evaluation of TSN 23
6.1 Scenario 1 - IEEE 802.1Qbv . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.2 Scenario 2 - IEEE 802.1Qbu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.3 Scenario 3 - IEEE 802.1Qbv and IEEE 802.1Qbu . . . . . . . . . . . . . . . . . . . 27
6.4 Scenario 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.5 Scenario 5 - Industrial use-case . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

7 Conclusion 34
7.1 Reflection on research questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

8 Future Work 36

References 38

Appendix A Installation guidelines 39


1.1 OMNeT++ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.2 INET framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
1.3 NeSTiNg framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Appendix B Getting started 41


List of Figures
1 Switched Ethernet topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Original Ethernet frame and Ethernet frame with 802.1Q tag . . . . . . . . . . . . 6
3 Enhancement for Scheduled Traffic . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 Example for IEEE 802.1Qav and IEEE 802.1Qbv amendments . . . . . . . . . . . 8
5 Priority-based frame preemption . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
6 a) simple module output gate, b) compound module output gate, c) simple module
input gate, d) compound module input gate . . . . . . . . . . . . . . . . . . . . . . 11
7 TSN switch and its sub-modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
8 A multi-methodological research approach [1] . . . . . . . . . . . . . . . . . . . . . 19
9 A research methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
10 Modules responsible for shapers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
11 Modules responsible for frame preemption . . . . . . . . . . . . . . . . . . . . . . . 21
12 Scenario 1 - network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
13 Scenario 1 - traffic latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
14 Scenario 1 - execution trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
15 Scenario 2 - network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
16 Scenario 2 - traffic latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
17 Scenario 2 - execution trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
18 Scenario 3 - network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
19 Scenario 3 - traffic latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
20 Scenario 4 - network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
21 Scenario 5 - network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
22 Projects inet and nesting checked under Projects . . . . . . . . . . . . . . . . . . . 40
23 Example NED file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
24 Simple network topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
25 Simulation window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
26 Selecting the input files for analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
27 Chart of end-to-end delay for the example . . . . . . . . . . . . . . . . . . . . . . . 42


List of Tables
1 802.1Q Header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Priority levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
3 Mapping traffic classes to queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 Model configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5 Scenario 1 - end-to-end latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6 Scenario 1 - utilization of each link . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
7 Scenario 2 - end-to-end latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
8 Scenario 2 - utilization of each link . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
9 Scenario 3 - end-to-end latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
10 Scenario 3 - utilization of each link . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
11 Scenario 4 - end-to-end latency with preemption . . . . . . . . . . . . . . . . . . . 29
12 Scenario 4 - end-to-end latency without preemption . . . . . . . . . . . . . . . . . 29
13 Gate Control List for the switch A . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
14 Gate Control List for the switch B . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
15 Scenario 5 - traffic characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
16 Scenario 5 - end-to-end latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
17 Scenario 5 - utilization of each link . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
18 Scenario 5 - end-to-end latency (with AVB only) [2] . . . . . . . . . . . . . . . . . 32


1 Introduction
Ethernet as a technology is as widely used in industry as it is popular in regular households. It was
first introduced in 1980, and after its first standardization in 1983 by a working group of the Institute
of Electrical and Electronics Engineers (IEEE), it replaced almost all other wired local area
network (LAN) technologies. The initial speed it could reach was 10 Mbit/s. Since then, many
improvements have been made and newer standards have been introduced. In 2017, Ethernet reached
a speed of 400 Gbit/s. The collection of standards that defines Ethernet is IEEE 802.3.
It specifies the physical layer and the data link layer's Media Access Control (MAC) sub-layer of wired
Ethernet. These two layers are also known as the first two layers of the Open Systems Interconnection
(OSI) model[3]. IEEE 802.3 is a working group of the IEEE 802 project, and all standards defined
within this project are dealing with Local Area Networks (LAN) and Metropolitan Area Networks
(MAN). A working group IEEE 802.1 defines LAN/MAN bridging and management. A part of
this group is the Time-Sensitive Networking (TSN) task group[4]. Audio Video Bridging (AVB) is a
former IEEE task group that was renamed to the Time-Sensitive Networking Task
Group in 2012 in order to extend its work.
The main reason for introducing TSN is that Ethernet-based implementations are gaining mo-
mentum and are widely considered because of their high bandwidth, scalability and compatibility
[5], but still lack predictability in data delivery. Real-time embedded systems put
strong limitations on data communication mechanisms. The amount of data exchanged between
components in distributed systems is constantly increasing, so it is becoming harder to satisfy all
the temporal constraints. The main application domain is process automation, where industry
requires real-time performance. To deliver this, the network has to be able to forward mes-
sages with bounded end-to-end latency. Since applications within this domain are safety-critical,
the latency has to be bounded deterministically. A set of standards specified by the TSN working
group provides deterministic real-time communication over standard Ethernet.
This research focuses on analyzing the performance of TSN networks by using a simulation tool
developed in the simulation framework OMNeT++. Several scenarios cover interesting network
topologies and traffic configurations, and the final one is an industrial use-case designed by BMW
group [6], which is further described in Section 6.5.

1.1 Motivation
TSN is an extension of Ethernet and a set of standards under the IEEE 802.1 Task Group1.
TSN is compliant with switched Ethernet, an Ethernet network with a switch instead of a hub, so
advanced traffic control is supported. IEEE 802.1Q is a part of this standard and it makes Virtual
Local Area Networks (VLANs) possible by adding the 802.1Q tag to the Ethernet frame [7]. It also
enables forwarding and queuing of messages in the network. This standard puts messages into
different classes based on their priority and uses traffic shapers in order to predict and prevent
overload of switch ports. In order to enable a priori scheduled message transmission and preemption of non-
critical messages, two amendments, 802.1Qbv and 802.1Qbu, are introduced. The first amendment
introduces scheduled traffic, while the second one introduces frame preemption. These two amend-
ments are key enablers of real-time communication in TSN networks [8]. The work in paper [9]
shows that the TSN standard can provide deterministic and low end-to-end latency compared
to standard Ethernet. This is confirmed with a simulation model based on OMNeT++.
The standards IEEE 802.1Qbv and IEEE 802.1AS were used to enable scheduled traffic and time
synchronization, respectively. The work in paper [8] examines only the IEEE 802.1Qbv amendment.
It discusses the functional parameters of 802.1Qbv devices and how those affect control of the
temporal behavior of traffic flow in the case of high-criticality traffic. However, no paper examines
the performance of amendments 802.1Qbv and 802.1Qbu when they are combined and that is the
goal of this thesis.
In order to test the performance of time-critical traffic in terms of end-to-end latency and link
utilization, a simulation tool based on OMNeT++ is used. OMNeT++ is an extensible, modular,
component-based C++ simulation library and framework, primarily for building network simulators2.
1 Time-Sensitive Networking (TSN) Task Group. [Online]. Available: https://1.ieee802.org/tsn/
Since there are several data transmission end-to-end latency definitions depending on the context,
the one considered in this thesis follows. Data transmission end-to-end latency implies the
time required for a message to be transmitted from its source to the final destination. Another
important performance metric is link utilization and it represents the percentage of the link
capacity which traffic consumes.
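To make these two metrics concrete, the following minimal sketch (not part of the thesis; the record layout, link rate and observation interval are illustrative assumptions) computes the worst-case end-to-end latency and the link utilization from a set of hypothetical frame records:

```cpp
// Minimal sketch of how the two performance metrics can be computed.
#include <algorithm>
#include <cstdio>
#include <vector>

struct FrameRecord {
    double creationTime;  // s, when the talker created the frame
    double arrivalTime;   // s, when the listener received it
    long   bits;          // frame size on the wire
};

int main() {
    std::vector<FrameRecord> records = {
        {0.000100, 0.000182, 12000},
        {0.000600, 0.000695, 12000},
    };
    const double observation = 0.001;  // s, length of the observed interval
    const double linkRateBps = 1e9;    // 1 Gbit/s link

    double worstLatency = 0.0, totalBits = 0.0;
    for (const auto& r : records) {
        worstLatency = std::max(worstLatency, r.arrivalTime - r.creationTime);  // end-to-end latency
        totalBits += r.bits;
    }
    // Link utilization: fraction of the link capacity consumed by the traffic.
    double utilization = totalBits / (linkRateBps * observation);

    std::printf("worst-case end-to-end latency: %.1f us\n", worstLatency * 1e6);
    std::printf("link utilization: %.1f %%\n", utilization * 100.0);
    return 0;
}
```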
This thesis is being done in collaboration with Arcticus Systems AB3 .

1.2 Problem formulation


The main goal of this master thesis is to analyze the performance of a TSN-based switched Ethernet
network by utilizing a simulation tool based on OMNeT++. The thesis aims at providing an
answer to the following research questions:
• RQ1 - How does a TSN-based switched Ethernet network perform in terms of end-to-end data
transmission latency?
• RQ2 - How does the combination of 802.1Qbv and 802.1Qbu amendments affect the network
performance for critical messages?
• RQ3 - How can a simulation framework help in evaluating the performance of TSN networks?
An OMNeT++-based simulation tool can be used to test network end-to-end transmission latency,
utilization, etc. The performance analysis will be based on the data extracted from the simulation tool
prototype, which will be described later. In order to do so, one must first review the existing literature
and state of the art. There are several available simulation frameworks for TSN, and they should
be reviewed and compared in order to use the most functional one. Before the model design, one
should systematically become familiar with OMNeT++ and understand the possibilities and
constraints of this platform.

1.3 Expected outcome


The expected outcome of the thesis is the performance analysis of the combination of two amend-
ments 802.1Qbv and 802.1Qbu, and examination of end-to-end latency behavior for scheduled,
time-critical messages. These two amendments provide a preemption mechanism which suspends
non-critical messages upon the emergence of a critical one. The thesis is expected to provide performance
benchmarks which could benefit the standardization process of TSN. These results could also
stimulate other researchers to investigate this topic, which is crucial for improving TSN perfor-
mance.

1.4 Thesis outline


Section 2 explores the relevant background and provides an explanation of the main concepts and
terms this thesis builds on. Further, Section 3 covers a wide range of related work. The chosen
research method is explained briefly in Section 4. A motivation for employing the method along
with an explanation of how the method is applied is given. Section 6 provides extensive evaluation
of TSN and its amendments. Several scenarios are designed in order to address the research questions
stated in Section 1.2. Lastly, the final conclusions are presented in Section 7 and potential future
work is suggested in Section 8.

2 What is OMNeT++. [Online]. Available: https://omnetpp.org/intro/


3 Arcticus Systems AB. [Online]. Available: https://www.arcticus-systems.com/


2 Background
In modern Ethernet networks, shared Ethernet is replaced with switched Ethernet because it pro-
vides an efficient and convenient way to extend the bandwidth of the network. TSN, an extension
of Ethernet, is used to achieve real-time properties of the network and to utilize its full potential.
Among many simulation tools, OMNeT++ is chosen as the simulation environment to provide the
performance results. In the following subsections, real-time systems, switched Ethernet, TSN and
network simulators are described in detail.

2.1 Real-time systems


Real-time systems are systems that react upon outside events, process input data and provide
output in a specific time interval according to the specified timing requirements. The correctness
of the result depends not only on the correctness of the function but also on the correctness of the response
time. Since the term real-time is a highly important part of this thesis, its German
industry standard (DIN 44 300, 1985)4 definition is stated below:
“The operating mode of a computer system in which the programs for the processing of data arriving
from the outside are permanently ready, so that their result will be available within predetermined
periods of time; the arrival times of the data can be randomly distributed or be already a priori
determined depending on the different applications.”
As the definition states, real-time systems must be reactive and time-correct, and their behavior must
be dependable and predictable. Typical temporal constraints in real-time systems are:
• hard real-time constraints - deadline miss can result in system failure, which may result in a
catastrophe, e.g., airbag system, pacemaker, aircraft control, etc.
• soft real-time constraints - execution of the task after the deadline still might have some
importance, e.g., consumer electronics, video buffering, etc.
This temporal criticality puts hard real-time constraints on embedded systems integrated into
real-time systems. End-to-end latency in the system must be short and deterministic, worst-
case response time must be bounded, in case of preemption the highest-priority message must be
transmitted first, etc.

2.2 Ethernet
Ethernet is a group of networking technologies for communication over a Local Area Network
(LAN). It was developed by XEROX Palo Alto Research Centre (PARC) in 1976, commercially
introduced in 1980 and first standardized in 1983 as IEEE 802.3. It supports several network
topologies such as bus, tree, star, line, ring, etc. In the OSI reference model, it corresponds to
the physical layer and data link layer [10]. The physical layer is the lowest layer of the OSI model
and it consists of electronic circuit transmission technologies of a network. In Ethernet, it can be
coaxial cable, twisted pair, or even optical fiber. The speed of the Ethernet is mostly dependent
on the physical layer and it can reach up to 400 Gbit/s. The data link layer consists of the MAC sub-
layer and the Logical Link Control (LLC) sub-layer. The LLC sub-layer provides synchronization, flow
control and error checking for the data link layer. Every node on the network has its own unique 6-
byte MAC address. Network arbitration is done at the MAC sub-layer. Carrier Sense Multiple
Access/Collision Detection (CSMA/CD) is an arbitration mechanism on which Ethernet is based.
Every node is continuously listening to the network state (Carrier Sense), multiple nodes can
begin transmission if they detect that the network is quiet (Multiple Access) and in the case of
multiple concurrent transmission, each of the nodes must detect the collision, stop its transmission
and try again after some random time interval (Collision Detection) [10]. If the collision of the
same framework occurs 16 times, that frame is withdrawn and will not be transmitted [11]. The
main disadvantage of this arbitration approach is frequent collision under heavy traffic which leads
to unbounded end-to-end transmission delay. Introducing switches in the LAN can upgrade the
arbitration mechanism since they work on the data link layer.
4 DIN 44 300 Informationsverarbeitung No. 9.2.11, 1985.


2.3 Switched Ethernet


Switched Ethernet exploits high bandwidth, scalability, and cost efficacy of Ethernet and provides
full-duplex links, temporal reliability, bounded delays and low jitter. The network topology of
conventional Ethernet and switched Ethernet are presented in Figure 1. The main difference
between shared Ethernet and switched Ethernet is the usage of a switch instead of a hub. A hub is a
node that transmits a message to all other nodes in the network, whereas a switch can determine the
address of the receiver node and forward the message only to it. This allows multiple concurrent
transmissions in the network if the receivers are different. The switch creates its own MAC address
table in which it stores the MAC addresses of all directly reachable nodes, so the price of using a
switch instead of a hub is the latency of the table look-up, which is acceptable. Another difference
between a switch and a hub is the type of connection they provide. A hub provides a half-duplex
link, whereas a switch is capable of handling a full-duplex link, so a node can transmit and receive
messages simultaneously [11]. These characteristics make switched Ethernet a suitable platform for
real-time traffic solutions such as AVB and TSN.
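The following simplified sketch (illustrative only, with hypothetical class and method names) mimics the MAC-address learning and forwarding behaviour described above, in contrast to a hub that would flood every frame:

```cpp
// Simplified sketch of a learning switch: forward to the known port, flood otherwise.
#include <cstdio>
#include <string>
#include <unordered_map>

class LearningSwitch {
    std::unordered_map<std::string, int> macTable;  // MAC address -> output port
public:
    void receive(const std::string& srcMac, const std::string& dstMac, int inPort) {
        macTable[srcMac] = inPort;                  // learn on which port the sender lives
        auto it = macTable.find(dstMac);
        if (it != macTable.end())
            std::printf("%s -> %s: forward on port %d only\n",
                        srcMac.c_str(), dstMac.c_str(), it->second);
        else
            std::printf("%s -> %s: unknown destination, flood all ports except %d\n",
                        srcMac.c_str(), dstMac.c_str(), inPort);
    }
};

int main() {
    LearningSwitch sw;
    sw.receive("AA:BB:CC:00:00:01", "AA:BB:CC:00:00:02", 1);  // destination unknown -> flood
    sw.receive("AA:BB:CC:00:00:02", "AA:BB:CC:00:00:01", 2);  // both addresses now learned
    sw.receive("AA:BB:CC:00:00:01", "AA:BB:CC:00:00:02", 1);  // forwarded only to port 2
    return 0;
}
```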

Figure 1: Switched Ethernet topology

2.4 Real-time Ethernet


Shared Ethernet is not suitable for use in the industrial environment, automation and process
control. Many standards are used to address the lack of determinism and to provide low latency.
These standards use a modified MAC sub-layer and are also known as Industrial Ethernet or Real-
time Ethernet standards. Typical Transmission Control Protocol (TCP), User Datagram Protocol
(UDP) and Internet Protocol (IP) are not suitable protocols since they do not offer real-time traffic
characteristics, but the hardware used by them could still be combined with some other real-time
protocols. Cost-efficacy of this approach resulted in several industrial real-time Ethernet solutions
such as EtherCAT5 , TTEthernet6 , EtherNet/IP7 , PROFINET8 , Ethernet POWERLINK9 , SER-
COS III 10 , etc. TSN switched Ethernet is also among them [12]. A set of amendments developed
by TSN Task Group provides secure data transmission and real-time capability. In addition to
these features, TSN supports high data transfer rates up to the gigabit range. However, TSN is
fully effective only if both end points - talker and listener - and the Ethernet switches support TSN functions
and standards [13]. All mentioned solutions utilize switched Ethernet since switches introduce a full-
duplex link which allows real-time features. TSN outperforms TTEthernet in terms of stability of
the transmission context, starvation of low-priority messages is lower, and TSN adapts more flexibly
to configuration modifications [14]. A TSN switched Ethernet network offers higher bandwidth and
is more cost-effective than the other mentioned real-time solutions, which indicates that further research
and development of this technology is reasonable.
5 EtherCAT - The Ethernet Fieldbus. [Online]. Available: https://www.ethercat.org/default.htm
6 Time-Triggered Ethernet. [Online]. Available: https://en.wikipedia.org/wiki/TTEthernet
7 EtherNet-IP. [Online]. Available: https://www.odva.org/Technology-Standards/EtherNet-IP/Overview
8 PROFINET. [Online]. Available: https://www.profibus.com/technology/profinet/
9 ETHERNET POWERLINK. [Online]. Available: https://www.ethernet-powerlink.org/
10 What is Sercos? [Online]. Available: https://www.sercos.org/technology/what-is-sercos/


2.5 Time Sensitive Networking


One of the most significant IEEE standards is the IEEE 802 group of standards. It is a family of IEEE
standards related to local area networks and metropolitan area networks. Most of the standards
deal with either wired (Ethernet, aka IEEE 802.3) or wireless (IEEE 802.11 and IEEE 802.16)
networks. IEEE 802.3 is a working group that defines the physical layer and the data link layer’s media
access control of wired Ethernet. It also supports the IEEE 802.1 network architecture. TSN is
a set of standards developed by the TSN task group of the IEEE 802.1 working group. This set
of standards allows the time-sensitive transmission of data over Ethernet networks. Most of
them are extensions to the IEEE 802.1Q standard for Virtual Local Area Networks (VLANs). The IEEE 802.1Q
standards work at OSI Layer 2 - the data link layer11.
As the name implies, TSN’s main focus is time sensitivity. Based on IEEE 802.1 and IEEE 802.3
standards, TSN guarantees critical real-time communication and provides deterministic messag-
ing on switched Ethernet. The foundation of TSN is AVB12 , a set of technical standards that
provide streaming services through IEEE 802 (Ethernet) networks whose characteristics are time-
synchronization and low latency. Standards developed under AVB Task Group are:
• 802.1BA: AVB systems,
• 802.1AS: Timing and Synchronization for Time-Sensitive Applications (gPTP),
• 802.1Qat: Stream Reservation Protocol (SRP),
• 802.1Qav: Forwarding and Queuing for Time-Sensitive Streams (FQTSS).
The goal of these standards was to allow the user to create plug-and-play ad hoc networks guar-
anteeing well-bounded latency and low jitter. Communication between a talker and a listener was
envisioned as follows. Using SRP, a listener would request a path through the network with specific
bandwidth, jitter and latency requirements. The request would be forwarded hop-by-hop to the
talker. Bandwidth, jitter and latency would be calculated while being forwarded to the talker.
After this, the flow would be established and data transmission would be able to begin. The traffic
would be shaped according to the Credit-Based Shaper (CBS) defined in 802.1Qav. During the
development of these standards, it was noticed that the capabilities of AVB could be useful in
industry. However, 802.1Qav was not robust enough, so it was decided to form a new group that
would fix the problems with the CBS and add more features. This new group was named the TSN Task
Group [15].
Published standards for TSN are:
• IEEE 802.1ASrev: Timing and synchronization - Timing and Synchronization for Time-
Sensitive Applications
• IEEE 802.1Qbu13 : Forwarding and Queuing - Frame preemption
• IEEE 802.1Qbv13 : Forwarding and Queuing - Enhancements for Scheduled Traffic
• IEEE 802.1Qca: Stream Reservation (SRP) - Path Control and Reservation
• IEEE 802.1CB: Stream Reservation (SRP) - Seamless Redundancy
• IEEE 802.1Qcc: Stream Reservation (SRP) - Enhancements and Performance Improvements
• IEEE 802.1Qci: Forwarding and Queuing - Per-Stream Filtering and Policing
• IEEE 802.1Qch: Forwarding and Queuing - Cyclic Queuing and Forwarding
• IEEE 802.1CM: Vertical - Time-Sensitive Networking for Fronthaul
• IEEE 802.1Qcr: Forwarding and Queuing - Asynchronous Traffic Shaping
11 Time-sensitive networking (TSN) task group. [Online]. Available: https://1.ieee802.org/tsn/
12 Audio video bridging task group. [Online]. Available: http://ieee802.org/1/pages/avbridges.html
13 Thesis’ main focus


• IEEE 802.1CS: Stream Reservation - Local Registration Protocol

In order to enable prioritizing and preemption of frames, 802.1Q14 added a 32-bit (4-byte) field
named 802.1Q Header between the source MAC address and the EtherType fields of the original
Ethernet frame (Figure 2).

Figure 2: Original Ethernet frame and Ethernet frame with 802.1Q tag

Two bytes are used for the tag protocol identifier (TPID), the other two bytes for tag control
information (TCI). TPID is used to identify the frame as an IEEE 802.1Q-tagged frame and is set
to a constant value (0x8100). The TCI consists of three sub-fields:
• Priority code point (PCP) - A 3-bit field which specifies the frame priority level.
• Drop eligible indicator (DEI) - A 1-bit field which indicates whether the frame may be dropped in the presence of congestion.

• VLAN identifier (VID) - A 12-bit field which specifies the VLAN to which the frame belongs.

Table 1: 802.1Q Header

802.1Q tag format
TPID (16 bits) = 0x8100 | TCI: PCP (3 bits) | DEI (1 bit) | VID (12 bits)
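As a small illustration of the tag layout above, the sketch below unpacks a 16-bit TCI value into its PCP, DEI and VID sub-fields; the struct and function names are assumptions made for this example:

```cpp
// Unpacking the TCI field of an 802.1Q tag (illustrative sketch).
#include <cstdint>
#include <cstdio>

struct Tci {
    uint8_t  pcp;  // 3-bit priority code point
    uint8_t  dei;  // 1-bit drop eligible indicator
    uint16_t vid;  // 12-bit VLAN identifier
};

Tci parseTci(uint16_t tci) {
    return { static_cast<uint8_t>(tci >> 13),           // top 3 bits
             static_cast<uint8_t>((tci >> 12) & 0x1),   // next bit
             static_cast<uint16_t>(tci & 0x0FFF) };     // low 12 bits
}

int main() {
    Tci t = parseTci(0xA00A);  // expected: PCP = 5, DEI = 0, VID = 10
    std::printf("PCP=%u DEI=%u VID=%u\n", (unsigned)t.pcp, (unsigned)t.dei, (unsigned)t.vid);
    return 0;
}
```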

Since PCP is a 3-bit field, up to eight different traffic classes can be defined. It is not defined how
the traffic is treated after being assigned to a specific class, but there are some recommendations
proposed by IEEE and they are given in Table 2.

Table 2: Priority levels

PCP value Priority Acronym Traffic Type


1 0 (lowest) BK Background
0 1 (default) BE Best effort
2 2 EE Excellent effort
3 3 CA Critical applications
4 4 VI Video (<100 ms latency and jitter)
5 5 VO Voice (<10 ms latency and jitter)
6 6 IC Internetwork control
7 7 (highest) NC Network control
14 IEEE 802.1Q-2011. [Online]. Available: http://www.ieee802.org/1/pages/802.1Q-2011.html


There can be up to eight queues, one corresponding to each traffic class. In the case of implementing
fewer than eight queues, several traffic classes can share a queue. There is a defined
way to map traffic classes, based on their priorities, to the available queues and it is represented in Table
3. For example, if only two queues are available, queue 0 contains traffic classes 0-3 and queue 1
contains traffic classes 4-7.

Table 3: Mapping traffic classes to queues

               Number of available queues
Priority     1  2  3  4  5  6  7  8
    0        0  0  0  0  0  1  1  1
    1        0  0  0  0  0  0  0  0
    2        0  0  0  1  1  2  2  2
    3        0  0  0  1  1  2  3  3
    4        0  1  1  2  2  3  4  4
    5        0  1  1  2  2  3  4  5
    6        0  1  2  3  3  4  5  6
    7        0  1  2  3  4  5  6  7
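The mapping of Table 3 can be expressed directly as a lookup table. The sketch below (illustrative; names and layout are not taken from any particular implementation) returns the queue for a given priority and number of available queues:

```cpp
// The priority-to-queue mapping of Table 3 as a lookup table.
#include <cstdio>

// QUEUE_MAP[numQueues - 1][priority] = queue index
static const int QUEUE_MAP[8][8] = {
    /* 1 queue  */ {0, 0, 0, 0, 0, 0, 0, 0},
    /* 2 queues */ {0, 0, 0, 0, 1, 1, 1, 1},
    /* 3 queues */ {0, 0, 0, 0, 1, 1, 2, 2},
    /* 4 queues */ {0, 0, 1, 1, 2, 2, 3, 3},
    /* 5 queues */ {0, 0, 1, 1, 2, 2, 3, 4},
    /* 6 queues */ {1, 0, 2, 2, 3, 3, 4, 5},
    /* 7 queues */ {1, 0, 2, 3, 4, 4, 5, 6},
    /* 8 queues */ {1, 0, 2, 3, 4, 5, 6, 7},
};

int queueFor(int priority, int numQueues) {
    return QUEUE_MAP[numQueues - 1][priority];
}

int main() {
    std::printf("priority 5 with 2 queues -> queue %d\n", queueFor(5, 2));  // prints 1
    std::printf("priority 1 with 8 queues -> queue %d\n", queueFor(1, 8));  // prints 0
    return 0;
}
```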

IEEE 802.1Qbv - Enhancements for scheduled traffic


Time-Aware Shaper (TAS) is specified in IEEE 802.1Qbv. Scheduled traffic introduces Gate Con-
trol Lists (GCLs) [9]. There is a transmission gate related to each queue. GCLs are used to control
the gate state. Each gate can be in one of the following states: open or closed. Transmission of
frames is allowed when a gate is in an open state. If the gate is closed, there is no transmission
of frames [16]. When the gate is open, frames from the corresponding queue are forwarded for
transmission. If two or more gates are open at the same time, the frame with the higher priority is
transmitted while lower-priority traffic is delayed [17]. Scheduling is usually synchronized by a
global clock to guarantee a real-time and predictable network. IEEE 802.1ASrev can be used for the
synchronization in time-sensitive networks [9].

Figure 3: Enhancement for Scheduled Traffic

A switch can have up to eight queues for different types of traffic. Each frame has a three-bit tag
called Priority Code Point (PCP). According to this tag, a frame is forwarded to the corresponding
queue where it is buffered. It can be transmitted over the port only if the gate corresponding to
the queue is open. GCL defines the schedule for gates. It is usually a list of bit vectors which
represent a configuration for each gate. It also contains the time duration for each bit vector. In

Figure 3, an example is shown where only the gate of queue 0 is open and the time duration is specified
with t_i1. In the next entry, only the gate of queue 1 is open. The configuration of gates for each entry
has to be made for one period. The gate control list is cyclic and is specified by industry, i.e. it
has to be assembled manually. Additionally, the traffic of queue 1 and queue 5 is shaped using
CBS, which is specified by amendment 802.1Qav [18].
Prioritizing by itself cannot guarantee that time-critical messages are going to be transmitted at
the right time. For example, if a control message arrives while a lower priority message is being
transmitted, the control message is going to be queued and transmitted after the current message.
However, in industry control messages are usually scheduled. GCLs can be created according to
their schedule and assure that only gates of the scheduled traffic are opened for a specific time
period. This way scheduled control messages are not delayed by the lower priority traffic, because
the time interval is reserved for the scheduled traffic.
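A minimal sketch of how such a cyclic gate control list could be evaluated is given below; the entry layout and all names are assumptions made for illustration and do not reproduce the NeSTiNg implementation:

```cpp
// Evaluating a cyclic gate control list: which gates are open at time t?
#include <bitset>
#include <cmath>
#include <cstdio>
#include <vector>

struct GclEntry {
    std::bitset<8> openGates;  // bit i set => the gate of queue i is open
    double duration;           // seconds this entry stays active
};

std::bitset<8> gatesAt(const std::vector<GclEntry>& gcl, double t) {
    double cycle = 0.0;
    for (const auto& e : gcl) cycle += e.duration;
    double offset = std::fmod(t, cycle);      // the list repeats every cycle
    for (const auto& e : gcl) {
        if (offset < e.duration) return e.openGates;
        offset -= e.duration;
    }
    return gcl.back().openGates;              // only reached due to rounding
}

int main() {
    // Two-entry schedule: the scheduled-traffic queue 7 alone for 20 us,
    // then all other queues for 80 us; the cycle length is 100 us.
    std::vector<GclEntry> gcl = {
        { std::bitset<8>("10000000"), 20e-6 },
        { std::bitset<8>("01111111"), 80e-6 },
    };
    std::printf("open gates at t = 150 us: %s\n",
                gatesAt(gcl, 150e-6).to_string().c_str());  // prints 01111111
    return 0;
}
```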

IEEE 802.1Qav - Forwarding and Queuing for Time-Sensitive Streams


IEEE 802.1Qav15 is one of the amendments developed by AVB Task Group. The main purpose
of this amendment is traffic shaping for media streams. Control messages require low end-to-end
latency and have to be transmitted as soon as possible. On the other hand, media streaming
does not have the same requirements. It is more important to provide a continuous stream of
audio-video frames and to transmit data more evenly.

Figure 4: Example for IEEE 802.1Qav and IEEE 802.1Qbv amendments


15 802.1Qav - Forwarding and Queuing Enhancements for Time-Sensitive Streams. [Online]. Available: http:

//www.ieee802.org/1/pages/802.1av.html


CBS smooths out the traffic for a stream; it distributes frames evenly in time. Each queue can have
a CBS assigned to it. Furthermore, those that have it also have a credit assigned. Transmission
of a frame is possible only if the credit is non-negative. If there are no messages in the queue and
the credit is positive, it is reset to zero. It is increased with a configurable rate called the idle
slope while frames are waiting for transmission in the queue, or while there are no messages and
the credit is negative. It is decreased with a configurable rate called the send slope while frame or
frames (depending on how much credit has been built up previously) are being transmitted [2].
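The credit rules described above can be summarized in a small sketch; the slope values assume a 25 % class reservation on a 1 Gbit/s link and the code is a simplification, not the shaper implementation used in the simulations later in the thesis:

```cpp
// Simplified credit evolution of a credit-based shaper.
#include <cstdio>

struct CreditBasedShaper {
    double credit;     // bits of credit currently available
    double idleSlope;  // bit/s, rate at which credit builds up
    double sendSlope;  // bit/s (negative), rate at which credit is spent while sending

    bool canTransmit() const { return credit >= 0.0; }

    // dt seconds passed without this queue transmitting.
    void waited(double dt, bool queueEmpty) {
        credit += idleSlope * dt;
        if (queueEmpty && credit > 0.0)
            credit = 0.0;              // positive credit is reset when the queue empties
    }

    // dt seconds spent transmitting frames of this queue.
    void transmitted(double dt) { credit += sendSlope * dt; }
};

int main() {
    CreditBasedShaper cbs{0.0, 0.25e9, -0.75e9};   // idleSlope 250 Mbit/s, sendSlope -750 Mbit/s
    cbs.transmitted(12e-6);                        // a 12000-bit frame on a 1 Gbit/s link
    std::printf("credit after sending: %.0f bits\n", cbs.credit);   // -9000
    cbs.waited(36e-6, false);                      // credit recovers at the idle slope
    std::printf("credit after waiting: %.0f bits (can transmit: %s)\n",
                cbs.credit, cbs.canTransmit() ? "yes" : "no");
    return 0;
}
```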
Figure 4 shows an example of the IEEE 802.1Qav and IEEE 802.1Qbv amendments. There are four
different traffic classes: Scheduled Traffic, Class A, Class B, and Best Effort Traffic. Message m3
would normally be transmitted right after messages m1 and m2, but it is not, because the gate for the scheduled
traffic is open, and the other gates are closed. Message m5 carries control data and it is known that
it could arrive within the time interval t_i2, so during that time the gate of the ST queue is scheduled
to be open. From this example, it is obvious that gating decreases the end-to-end latency of the
control message.

IEEE 802.1Qbu - Frame preemption


There are two methods of priority queuing, non-preemptive and preemptive. Switched Ethernet
uses non-preemptive priority queuing. A time-critical frame is not processed right away after its
arrival if there is a non-time-critical frame being processed. It is put at the beginning of the
queue and processed as soon as possible after the non-time-critical frame is transmitted. The
time-critical frame experiences a delay depending on the amount of traffic load. In order to reduce
delay, amendment IEEE 802.1Qbu introduces preemptive queuing. It minimizes delay for time-
critical frames, but also provides protection for non-time-critical frames to minimize the effect
on them. If a time-critical frame is received while a non-time-critical frame is being processed,
the processing of the non-time-critical frame is stopped. It continues once the processing of the
time-critical frame is done. The non-time-critical frame can be preempted multiple times until
the preempted burst limitation is reached. This limitation is specified by the amendment IEEE 802.3br.
The purpose is to prevent the starvation of non-time-critical frames, while the purpose of IEEE
802.1Qbu is to reduce latency transmission for time-critical frames [19].

Figure 5: Priority-based frame preemption

The process of priority-based frame preemption is illustrated in Figure 5. A non-time-critical frame
NTCF is preempted two times by time-critical frames TCF1 and TCF2. They are processed right
upon their arrival. Two reserved control symbols, HOLD (K28.2) and RETRIEVAL (K28.3), are
used as indicators of the preemptive insertion of a time-critical frame into a non-time-critical frame.
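The following sketch illustrates a possible preemption decision: an express (time-critical) frame may interrupt a preemptable frame only if both the part already sent and the remainder can form valid fragments. The 64-byte minimum fragment size is an assumption used for illustration; the exact fragmenting rules are given by the related IEEE 802.3br amendment:

```cpp
// Illustrative decision of whether an in-progress preemptable frame can be interrupted.
#include <cstdio>

bool canPreempt(int bytesAlreadySent, int totalFrameBytes, int minFragmentBytes = 64) {
    int remaining = totalFrameBytes - bytesAlreadySent;
    // Both the transmitted part and the remainder must be large enough to form fragments.
    return bytesAlreadySent >= minFragmentBytes && remaining >= minFragmentBytes;
}

int main() {
    // A 1500-byte preemptable frame with different amounts already on the wire:
    std::printf("after 200 B sent:  %s\n", canPreempt(200, 1500) ? "preempt" : "wait");
    std::printf("after 30 B sent:   %s\n", canPreempt(30, 1500) ? "preempt" : "wait");   // too little sent
    std::printf("after 1460 B sent: %s\n", canPreempt(1460, 1500) ? "preempt" : "wait"); // let it finish
    return 0;
}
```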

2.6 Network simulators


In the last couple of years, many network simulation tools have emerged, but not all of them have
been accepted by the research community. There are many commercial and non-commercial
network simulation tools such as Ns216 , Ns317 , J-Sim18 , OPNET19 , QualNet20 , etc. The general
goal of these tools is to create a simulator platform on which different simulation frameworks can
be implemented. Since OMNeT++ provides many network topology components, is open-source,
and is based on C++, it was an easy choice for this type of research.

2.6.1 Comparison of network simulators


Ns2, Ns3, OPNET, and OMNeT++ are the network simulators selected for the comparison. These
four simulators are typically used in the academic community, as well as in industry. OPNET is a
commercial simulator, while the other three simulators are non-commercial. The main advantage
of commercial simulators is that they have complete documentation and are maintained by the
company. However, open-source simulators have many users and everyone can contribute to the
development of it [20]. Ns2 uses two programming languages, C++ and OTcl. C++ is used to
implement protocols, while OTcl is used to start the event scheduler, set up the topology of the
network and control the traffic through the scheduler. Ns3 uses C++ for the implementation and
core of the simulation models. It does not rely on OTcl. It has a Python scripting interface [21].
The main language in OPNET is C, but it supports C++ development too. The main feature is that
the configuration of the network can be initialized by using a Graphical User Interface (GUI), while
simulation scenarios require writing code. On the other hand, OMNeT++ is not a network
simulator. It is a C++, component-based simulation framework with GUI support, so the main
usage of OMNeT++ is for building network simulators [22]. That is the main difference between
OMNeT++ and the other simulators. In the paper [22], Ns2, Ns3 and OMNeT++ were tested
on an identical simulation model. The results show that Ns3 and OMNeT++ can efficiently
carry out large-scale network simulations. The advantage of OMNeT++ is its GUI support, while
the other simulators rely on pure coding. Also, many useful network frameworks such as INET21
and NeSTiNg22 can be easily imported into the analysis model, which puts the focus of this
study on TSN performance analysis rather than TSN protocol implementation. Taking
everything stated above into consideration, OMNeT++ seems like the most reasonable choice for
this research.

2.7 OMNeT++
OMNeT++ is an open-source discrete event simulation environment, based on C++, available
since 1997. It has been well accepted by the academic community ever since, mostly because of its
general applicability. It is available on all commonly used platforms such as Windows, Mac OS, and
Linux. Its main advantage lies in the fact that it provides basic logic and functional components
that can be used to develop more advanced network simulation frameworks. OMNeT++ offers
high scalability which implies that the software is modular and inter-operable, simulation models
can be visualized and debugged, it has its own Integrated Development Environment (IDE), and
data interfaces are as open and general as possible [23]. In this research, it is used to develop a
simulation model for TSN network and all performance analysis is based on it.
The model structure of OMNeT++ consists of modules, called simple modules and compound modules, which
communicate with each other by messages. Simple modules are written in C++ and are the lowest-
level structural component of OMNeT++. Compound modules are created by combining several simple
modules. These module types exchange messages either directly or through gates, which is shown
in Figure 6. A simple module’s gate is described by a single pointer towards “next” or “prev”, while
a compound module’s gate requires an additional pointer to the inner simple module the compound consists
of. The user defines the model structure through the OMNeT++ Network Description language (NED). In
NED one can declare simple modules, define compound modules, configure the network, etc. In a
16 The Network Simulator - NS-2. [Online]. Available: https://www.isi.edu/nsnam/ns/
17 NS-3. [Online]. Available: https://www.nsnam.org/
18 JSim Home Page. [Online]. Available: https://www.physiome.org/jsim/
19 Riverbed. [Online]. Available: https://www.riverbed.com/se/products/steelcentral/opnet.html
20 QualNet Network Simulator Software. [Online]. Available: https://www.scalable-networks.com/
qualnet-network-simulator-software
21 What Is INET Framework? [Online]. Available: https://inet.omnetpp.org/Introduction
22 NeSTiNg. [Online]. Available: https://gitlab.com/ipvs/nesting


general scenario, a network is configured and initialization files are not part of the NED since they
can change on every run. Initialization data is stored in the INI files [23]. OMNeT++ offers its own
IDE which contains a graphical editor. The graphical editor allows changing the network topology
graphically or directly through the NED source view. Messages are the central concept in OMNeT++
and in the model they can represent events, packets, commands, or other kinds of entities [24].
They can be configured by using MSG files. OMNeT++ has a data analysis tool integrated into the
Eclipse environment. The results of the simulation can be stored as scalar values, vector values, or
histograms. Different statistical methods can be used to extract useful data to draw conclusions,
and this process is automated by using Analysis Files (ANF)23.

Figure 6: a) simple module output gate, b) compound module output gate, c) simple module input gate, d) compound module input gate24
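As an illustration of these concepts, the minimal simple module below reacts to messages arriving on its gates and schedules a periodic self-message. It is a sketch only: a matching NED declaration of the module (with an "out" gate) and an omnetpp.ini configuration would still be needed to run it, and all names are assumptions of this example.

```cpp
// Minimal OMNeT++ simple module: periodic self-message plus message handling.
#include <omnetpp.h>
using namespace omnetpp;

class PeriodicSender : public cSimpleModule {
  protected:
    virtual void initialize() override {
        // Start the activity with a self-message, i.e. a scheduled event.
        scheduleAt(simTime() + 0.1, new cMessage("tick"));
    }
    virtual void handleMessage(cMessage *msg) override {
        if (msg->isSelfMessage()) {
            send(new cMessage("hello"), "out");   // send out over the gate named "out"
            scheduleAt(simTime() + 0.1, msg);     // re-arm the periodic tick
        } else {
            EV << "received " << msg->getName() << " after "
               << (simTime() - msg->getCreationTime()) << " s\n";
            delete msg;
        }
    }
};

Define_Module(PeriodicSender);
```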

2.7.1 The NED language


The NED language is used to describe the structure of the simulation model. NED is utilized
in NED files, where the user can declare simple modules, combine them into compound modules,
define channels, etc. The NED language has high scalability because of the following features:
• Hierarchical - all complex modules can be broken down into simpler entities,
• Component-Based - simple modules and compounds are completely reusable and this allows
usage of component libraries such as INET,
• Interfaces - instead of using the module or channel types, module and channel interfaces can
be used as placeholders, concrete module or channel types have to implement the interface
they substitute and are determined at network setup time,
• Inheritance - modules and channels can be subclassed, derived modules and channels may
add new parameters, gates, and new submodules and connections,
• Packages - to reduce the risk of name clashes between different models NED language features
similar packages to those in Java,
• Inner types - to reduce the namespace pollution, channel and module types used locally can
be defined within compounds,
• Metadata annotations - metadata can be annotated, it carries extra information for various
tools.
The mentioned features have been part of the NED language only since version 4.0. The NED language
has developed a lot since it was created. Even the basic syntax has been updated, so old NED
files need to be converted to the new syntax in order to be used. NED files can be converted to
XML files and back because they use an identical tree representation. Even the comments can be
converted, and this feature makes NED easily manipulable: more information can be extracted
from the data and NED files can be generated from information stored on other systems [24].
23 OMNeT++ - Simulation Manual. [Online]. Available: https://doc.omnetpp.org/omnetpp/manual/#sec:
simple-modules:simple-modules-in-opp


2.7.2 Messages and packets


As previously said, messages are the central concept in OMNeT++. Messages are repre-
sented with the cMessage class and its cPacket subclass. The latter is used for network packets such
as frames, datagrams, transport packets, etc. The former is used for everything else. cMessage
has the following fields: name, message kind, scheduling priority, send time, arrival time, source mod-
ule, source gate, destination module, destination gate, time stamp, parameter list, control info and
context pointer. The cPacket subclass extends the field list with the following fields: packet length, encap-
sulated packet, bit error flag, duration, is-reception-start and deliver-on-reception. In practice, a large
number of fields needs to be declared for a packet/message to be useful, and writing the required
C++ code can prove to be tiresome. OMNeT++ introduced a new, more effective way of doing so,
using message definitions. Message definitions offer a concise syntax to set the message contents,
and C++ code is automatically generated from the definitions. In the OMNeT++ IDE, messages and
packets are handled through MSG files [24].
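A short sketch of how the cPacket class can be used inside a simple module is shown below; the module name, gate name and field values are illustrative assumptions:

```cpp
// Building and sending a frame with cPacket (illustrative sketch).
#include <omnetpp.h>
using namespace omnetpp;

class FrameSource : public cSimpleModule {
  protected:
    virtual void handleMessage(cMessage *trigger) override {
        cPacket *payload = new cPacket("sensorData");
        payload->setByteLength(100);          // application payload of 100 bytes

        cPacket *frame = new cPacket("ethFrame");
        frame->setByteLength(18);             // assumed header/trailer overhead
        frame->encapsulate(payload);          // total length becomes 118 bytes
        frame->setTimestamp();                // stamp with the current simulation time

        send(frame, "out");                   // transmit over the gate named "out"
        delete trigger;
    }
};

Define_Module(FrameSource);
```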

2.7.3 Configuring the simulation


Configuration and the input data for the simulation are defined in the INI file usually named
omnetpp.ini. The configuration file is an ASCII text file, but non-ASCII characters are allowed
in comments and string literals. File size and line length are not limited. Comments are
placed at the end of any line after the hash mark “#”; they extend to the end of the line and
are ignored during processing [24]. The INI file is line-oriented and consists of section heading lines,
key-value lines and directive lines. An INI file can contain a [General] section and several [Config
<configname>] sections, and their order does not matter. In the [General] section one usually
selects the model to be run by setting the network option, and the simulation length is defined by
setting the sim-time-limit or cpu-time-limit options. INI files can contain several named
configurations, which are the sections of the form [Config <configname>]. Tkenv lets the user
select the configuration of interest in a separate dialog window before starting the simulation.

2.7.4 Result analysis


Result analysis in OMNeT++ can be done in several ways. The simulation IDE offers its own analysis
tool in which one can quickly select the data of interest, browse and plot them to get better insight
and draw a conclusion. This is done through Analysis Files (ANF). Data can be formatted as
scalars, vectors, or histograms. It can be filtered, processed and graphically presented in a chart
or plot form. All of these steps can be stored as “recipes”, so the same treatment can be applied
after every run automatically. The recipe for extracting information out of the data is recorded in
an ANF file and this process is reproducible. In order to use this analysis tool, one must first select
the input files. Input files can either be scalar files with the extension .sca or vector files with the
extension .vec. After this step, one can browse the input and use the filtering expressions to filter
the raw data. The result of the first two steps are the Datasets and Charts, which can also be edited [25].
Result analysis can become more sophisticated and advanced by using some other program besides
the OMNeT++ tool. Python, R, Octave or Matlab can be used for producing more elaborate reports,
and in order to do so, one must export the results in a format understandable to those programs.
This task can be elegantly solved by using scavetool. This tool is part of OMNeT++ and it
has four commands: query, export, index and help. It processes the result files written by
simulations.
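The sketch below illustrates the two usual ways a module records results that the analysis tool (or scavetool) later processes: a vector of end-to-end delays recorded through a signal and a scalar recorded in finish(). The signal also has to be declared with @signal/@statistic in the NED file for automatic recording (an assumption of this sketch); the names are illustrative.

```cpp
// Recording simulation results from a module: a signal-based vector and a scalar.
#include <omnetpp.h>
using namespace omnetpp;

class Sink : public cSimpleModule {
    simsignal_t delaySignal;
    long received = 0;
  protected:
    virtual void initialize() override {
        delaySignal = registerSignal("endToEndDelay");
    }
    virtual void handleMessage(cMessage *msg) override {
        emit(delaySignal, simTime() - msg->getCreationTime());  // one vector sample
        received++;
        delete msg;
    }
    virtual void finish() override {
        recordScalar("framesReceived", received);               // one scalar value
    }
};

Define_Module(Sink);
```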

2.8 INET Framework


The INET25 framework is an open-source model library for the OMNeT++ simulation environment.
It serves as a base for numerous other simulation frameworks such as NeSTiNg because it provides
protocols, agents and other models for researchers working with communication networks. It
is built around the concept of message passing; new components can be easily created by using the
contents of this framework, and the existing contents are written in an understandable manner.
25 What Is INET Framework? [Online]. Available: https://inet.omnetpp.org/Introduction


This framework offers many useful features, but the ones of interest for this research are the
following:

• Implemented OSI layers,


• Pluggable protocol implementations for various layers,
• Wired/wireless interfaces (Ethernet, PPP, IEEE 802.11, etc.),

• Wide range of application models.


The OSI model is a reference tool that divides the communication mechanism into seven different layers.
Each layer uses the services of the layer below it and supports the layer above it by performing the necessary
functions [3]. These seven layers, starting from the lowest towards the highest, are:
• Physical layer - defines medium requirements, connector, and interface specifications.

• Data link layer - allows a device to access the network, offers it a physical address, works
with device’s networking software and provides error detection.
• Network layer - provides logical addressing system, so the data can be routed through several
lower layer networks.

• Transport layer - offers end-to-end network communication between devices and can supply
the upper layers with connection-oriented or connectionless best-effort communication.
• Session layer - offers many functionalities, but the most general one is allowing applications on
devices to establish, manage and terminate a dialog through the network.

• Presentation layer - formats the data the application sends and receives through the network.
• Application layer - provides the interface for the end user that operates the device connected
to the network.
The first three layers are mostly hardware-related, while the upper four layers are more abstract and
software-related. All of these layers except the session and presentation layers are implemented in the
INET framework. Many protocols for various layers are included in the framework. Some of them
are: Address Resolution Protocol (ARP) and Ethernet on the data link layer, Internet Protocol (IP)
and Internet Control Message Protocol (ICMP) on the network layer, Transmission Control Protocol
(TCP) and User Datagram Protocol (UDP) on the transport layer, Simple Mail Transfer Protocol
(SMTP) and Hypertext Transfer Protocol (HTTP) on the application layer, etc.26 The INET framework
is very comprehensive, so it is frequently used as a base for some new network frameworks, which
is the case with the Network Simulator for Time-Sensitive Networking (NeSTiNg) too.

2.9 Network Simulator for Time-Sensitive Networking - NeSTiNg


The IEEE 802.1 Working Group provides a list of simulation tools and simulation models that are
available and support the protocols developed by it. NeSTiNg, a Network Simulator for Time-
sensitive Networking [26], is on that list as well. The simulation framework OMNeT++ is used as the
core for building the network simulator NeSTiNg. It utilizes many components and modules from
the INET framework. NeSTiNg extends INET with the three most important TSN features:
• Time-aware shaper (IEEE Std 802.1Qbv),

• Credit-based shaper (IEEE Std 802.1Qav),


• Frame preemption (IEEE Std 802.1Qbu).
26 List of network protocols (OSI model). [Online]. Available: https://en.wikipedia.org/wiki/List_of_
network_protocols_(OSI_model)


A TSN switch is the main component of NeSTiNg. It is mostly implemented based on existing
modules from INET. Figure 7 gives an insight into the modules and sub-modules of the TSN switch.
The number of ports is configurable. Each port consists of three modules: eth, processingDelay
and relayUnit. An incoming frame first goes through the eth module. Since it is an incoming
frame, it is forwarded from the mac module through the express queue. The frame is delayed by
the processingDelay component. The relayUnit module routes the frame to the correspond-
ing output port. The frame then goes through the processingDelay component of the output
port. As an outgoing frame, it first goes through the queuingFrames component. It evaluates the
priority of the frame based on the PCP field of the frame and maps it to a corresponding queue
according to the matrix presented in Table 3. The transmissionSelection component selects
the next frame for transmission according to the priorities and states of the queue gates. Gates
are controlled by the gateController component which stores a gate control list. The frame
is then transmitted according to strict priority or credit-based algorithm. For each queue, the
tsAlgorithms component is configurable by the user. Finally, the frame is forwarded to the mac
module through the preemptable or express queue.
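A hedged sketch of the transmission selection step described above is given below: among the non-empty queues whose gates are open, the highest-priority one is chosen (strict priority only; the credit-based algorithm is omitted). All names are illustrative and do not reproduce the NeSTiNg code.

```cpp
// Strict-priority transmission selection among open, non-empty queues.
#include <bitset>
#include <cstdio>
#include <deque>
#include <vector>

// Returns the index of the queue to transmit from next, or -1 if none is eligible.
int selectQueue(const std::vector<std::deque<int>>& queues, const std::bitset<8>& openGates) {
    for (int q = static_cast<int>(queues.size()) - 1; q >= 0; --q)   // highest index = highest priority
        if (openGates[q] && !queues[q].empty())
            return q;
    return -1;
}

int main() {
    std::vector<std::deque<int>> queues(8);
    queues[2].push_back(1);               // a frame is waiting in queue 2
    queues[6].push_back(2);               // a frame is waiting in queue 6
    std::bitset<8> gates("00000100");     // only the gate of queue 2 is open
    std::printf("selected queue: %d\n", selectQueue(queues, gates));  // prints 2
    return 0;
}
```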

Figure 7: TSN switch and its sub-modules


3 Related Work
Since a number of different research areas are closely related to this thesis, the related work is divided into subsections. Firstly, the fact that TSN is not the only solution for real-time Ethernet has to be taken into consideration; Section 3.1 provides an overview of existing solutions, their strongest features, advantages and disadvantages. Many papers focus only on TSN and its evaluation, and Section 3.2 provides a summary of the extensive research done so far regarding TSN. A few papers examine the key components of TSN individually, while others evaluate TSN as a communication system reaching its full potential with all standards included. Many different ideas for evaluation have been proposed, and simulation of TSN is one of them. Section 3.3 presents the TSN simulators developed so far using different simulation tools. This section is of utmost importance for the thesis.

3.1 Comparison of solutions for real-time Ethernet


General applicability, compatibility and cost efficiency are just some of the many beneficial features of Ethernet. However, its non-deterministic behaviour due to the CSMA/CD arbitration mechanism makes it unsuitable for real-time communication. There are several ways to deal with this issue, as shown in the following publications.
P. Doyle in [27] specifies the properties of typical Ethernet which limit its utilization to non-deterministic applications. Collision avoidance can be achieved by utilizing full-duplex links and including switches in the network topology. The switched Ethernet concept can be combined with real-time protocols to ensure determinism. In the next article issue [28], several existing real-time solutions that rely on the mentioned concepts are examined. EtherNet/IP, PROFInet, EtherCAT and ETHERNET Powerlink are covered, and the advantages and disadvantages of every approach are shown. The typical network topology and frame format of every solution are presented. This publication is one of the first research papers that deal with real-time Ethernet, and it served as a base for many subsequent research papers. It also provides an introduction to the IEEE 1588 standard, which ensures sub-microsecond synchronization accuracy of distributed clocks.
A white paper by Kingstar [29] examines five different protocols for real-time Ethernet. The compared real-time protocols are EtherCAT, EtherNet/IP, Ethernet Powerlink, PROFINET IRT, and SERCOS III. This work favors EtherCAT as the solution with the best performance and price. All of the examined protocols are open standards, and usage of the originating vendor's technology is not required. The paper provides a good state-of-the-practice overview for real-time Ethernet. However, the approach used in the paper is not suitable for the academic research community: the experiment design is not explained, and therefore the results are not reproducible.
T. Steinbach et al. [30] examine the applicability of switched Ethernet to in-car networking. Numerous existing in-car communication solutions, such as Controller Area Network (CAN) and FlexRay, are not highly scalable, which is an important network feature since the number of network nodes constantly increases. Switched Ethernet combined with real-time protocols can satisfy all the temporal constraints. In this paper, the researchers examine TTEthernet as a possible future in-car network backbone. TTEthernet offers three classes of traffic: time-triggered, rate-constrained and best-effort. The use-case shows that the time constraints can be met, even though the jitter of the best-effort class did not meet the requirements. The paper, however, does not provide the simulation tool or the experiment configuration used for this analysis, so the attained results cannot be reproduced.
M. Ashjaei et al. in [2] analyze the performance of a switched Ethernet vehicle network that utilizes the Audio-Video Bridging (AVB) protocol. AVB is the predecessor of TSN and consists of the following four standards: Audio Video Bridging Systems (IEEE 802.1BA), Timing and Synchronization for Time-Sensitive Applications (IEEE 802.1AS), Stream Reservation Protocol (IEEE 802.1Qat) and Forwarding and Queuing for Time-Sensitive Streams (IEEE 802.1Qav)27 . It defines two classes of traffic: high-priority class A and low-priority class B. The standard defines the Credit-Based Shaper (CBS) algorithm that prevents traffic bursts. The use-case model is the same as the one used in paper [6] by BMW Group Research and Technology. The obtained results showed that the task
27 Audio video bridging task group. [Online]. Available: http://ieee802.org/1/pages/avbridges.html


set is schedulable with AVB. This thesis heavily relies on the work done by M. Ashjaei et al. by using the same use-case and an identical model configuration.

3.2 Performance Evaluation of TSN amendments


Performance analysis of the TSN standard implies performance evaluation of all the amendments it consists of. A thorough analysis of AVB has already been conducted, and since TSN extends that standard, this thesis focuses on scheduled traffic (IEEE 802.1Qbv) and frame preemption (IEEE 802.1Qbu).
Luxi Zhao et al. [17] provide a thorough worst-case analysis of IEEE 802.1Qbv for TSN using network calculus. This paper solely focuses on scheduled traffic and excludes frame preemption, as it would make the analysis more complex. The first part of the paper describes the functionality of an IEEE 802.1Qbv-capable switch with three input ports and one output port. After the switch consults the routing table, it forwards the input traffic towards the addressed output port. Priority filtering is done based on the PCP, and the traffic is redirected towards the appropriate queue. GCLs control the gates of each priority queue. If the gate is closed, the traffic is buffered in the queue; otherwise, it is forwarded towards the output port. If multiple gates are open simultaneously, traffic selection is based on the queue priority. The worst-case analysis of the end-to-end latency of scheduled traffic in this paper assumes that GCLs are not restricted, i.e., the scheduled traffic windows are not overlapping. This allows multiple scheduled traffic classes with different priorities. The accuracy of the analysis is tested in two test cases: a synthetic one with six nodes and two switches, and a realistic test case that consists of 31 nodes and 15 switches. In both cases, the real-time constraints of the scheduled traffic are met. The provided analysis guarantees the real-time properties of TSN and can be used for the synthesis of GCLs, which makes TSN highly scalable.
The utilization of the amendment IEEE 802.1Qbv raises the problem of creating a valid schedule for the time-triggered traffic. Silviu S. Craciunas et al. [8] address this problem by identifying the functional parameters of the amendment and defining its constraints together with deterministic Ethernet constraints. Based on these constraints, they propose an offline scheduler which guarantees low and deterministic latency for the time-triggered traffic. Two key parameters identified in the paper are device capabilities and queue configuration. Device capabilities refer to whether switches and end systems are scheduled or not. The focus is put on deterministic networks, which implies that both switches and end systems are scheduled. The number of queues is one of the functional parameters; it represents the number of priorities that can be handled. The gate associated with a queue can operate in different modes: it can be always open and follow the strict-priority policy, or it can be a timed gate. Utilization of gating leads to a fully deterministic latency of scheduled traffic. Several scheduling constraints are defined. The frame constraint requires that the offset and the frame duration fit within the frame period. According to the link constraint, frames that use the same link cannot overlap. The frame transmission constraint implies that a flow must follow a sequential order. The arrival time and the sending time define the end-to-end constraint. The amendment also imposes flow-isolation and frame-isolation constraints. According to the previously named constraints, the scheduler finds an offset and a queue assignment for each frame. Satisfiability Modulo Theories (SMT) solving is used for the scheduling problem. Based on the experiments, they conclude that the problem becomes more difficult as the utilization of the network increases. Several optimization directions are proposed.
A performance evaluation of IEEE 802.1Qbu is conducted by W. Jia et al. [19]. Time-critical traffic (TCT) must have a deterministic end-to-end latency, which is impossible without frame preemption. This can cause deadline misses, and this issue is solved by introducing a frame preemption mechanism. In the case of an ongoing non-time-critical traffic (NTCT) transmission, TCT would experience an MTU transmission delay. IEEE 802.1Qbu allows TCT to preempt NTCT as soon as it arrives at a frame-preemption-capable switch, and after the TCT transmission is done, the NTCT transmission is continued. The MAC layer of a switch recognizes the priority of a frame by the 3-bit priority field added to the standard Ethernet header. Two control symbols, HOLD and RETRIEVAL, are used to inform the receiver that frame preemption occurred. Once the receiver reads the HOLD symbol, it realizes that the reception of the current traffic is preempted by higher-priority traffic and starts receiving the new traffic. Once the receiver reads the RETRIEVAL symbol, it resumes receiving the previous traffic. In the simulation part, three different scheduling schemes are applied: FIFO, non-preemptive priority scheduling (NPPS) and preemptive priority scheduling (PPS). The results show that the average jitter


and delay are reduced significantly by using PPS. The beneficial characteristics of IEEE 802.1Qbu are accurately pinpointed by this paper, and it is shown that real-time communication over an Ethernet network is significantly improved with this amendment.
Lin Zhao et al. [14] provide a comparative analysis of TSN and TTEthernet. One of several similarities between the mentioned technologies is that both implement a Time-Division Multiple Access (TDMA) strategy, i.e., traffic collisions are avoided by precisely dividing the transmission time among the traffic. After introducing both technologies, the authors also explain their network architectures. In terms of bandwidth allocation, TSN outperforms TTEthernet: in TTEthernet, low-priority flows might starve in the case of large high-priority traffic transmissions, whereas the Stream Reservation Protocol (SRP) in TSN provides fairness in bandwidth allocation for low-priority traffic. The researchers then compare TSN and TTEthernet in terms of delay analysis. The TSN mechanisms CBS and frame preemption allow high bandwidth, so the end-to-end delay is minimal. TTEthernet can combine Time-Triggered (TT) and Rate Constrained (RC) traffic in three ways: shuffling, in which TT traffic is simply delayed until the RC transmission finishes; preemption, in which RC traffic is suspended, TT traffic is forwarded, and the RC traffic then continues its transmission; and timely block, in which an RC frame is not transmitted before a TT frame if it might affect the TT transmission. TSN also outperforms TTEthernet in terms of the redundancy approach: TSN avoids frame loss by sending packets along several paths, while TTEthernet does not have a concrete mechanism to deal with packet loss. In general, TSN is more flexible and adaptive to modifications in the topology, while TTEthernet requires an entirely new schedule. This paper once again shows the enormous potential of TSN and justifies the research goals of this thesis.

3.3 Simulating Time Sensitive Networking


In paper [9], the authors present a TSN simulation model developed using OMNeT++. Among the numerous frameworks that extend the OMNeT++ functionalities, they employ the INET framework and Communication over Real-time Ethernet (CoRE4INET) to develop their TSN simulator. On top of these frameworks, the amendments IEEE 802.1Qbv and IEEE 802.1AS are implemented; they provide the scheduling of network traffic in a time-based manner. The purpose of the simulation is to confirm that TSN can guarantee a deterministic end-to-end latency. The development process of the TSN simulation model is described. The first part consists of building a TSN switch that allows prioritizing and gating. Traffic is assigned to a queue according to its priority, and transmission is controlled by Gate Control Lists (GCLs). The second part consists of building a network topology to verify the mentioned functionalities of the switch. The paper also provides a detailed explanation of how to derive GCLs for a given network topology and its parameters. They use GCLs to provide different transmission modes: protected transmission of time-triggered messages, unprotected transmission of other messages, and the guard band. By considering the worst case, the guard band is set to the maximum frame transmission time. This guarantees that the last unprotected message will not interfere with the protected messages. The simulation results show that TSN and its two amendments can guarantee a deterministic and low end-to-end latency for protected traffic. The best-case, average-case and worst-case response times of the protected traffic are identical, while the latency of the unprotected traffic keeps rising, making it non-deterministic.
P. Heise et al. present TSimNet, a simulation framework that implements all non-time-based TSN standards, in [31]. Since time synchronization is not suitable for avionic networks due to certification reasons, its implementation was not necessary. TSimNet is implemented using the OMNeT++ simulation tool and its INET framework extension. Frame preemption, frame replication and recovery, and per-stream filtering are the amendments that they have built on top of the existing framework. They provide a performance evaluation of TSN. First, they use a simple topology to test the performance of frame preemption, comparing the minimum, maximum and average end-to-end latencies with and without frame preemption. It can be concluded that frame preemption reduces the latency to a high degree. They use an industrial topology for the second evaluation of TSN. The results show that frame preemption reduces the latency significantly, but that this highly depends on the configuration of the network.
The thesis report by H. Laine [18] provides a concise overview of the TSN amendments and focuses on three of them: the Time-aware Shaper, the Credit-based Shaper and frame preemption. The author implements a TSN simulator in Java. It does not have a graphical user interface, so in order to


simulate different system setups, files that define the messages and specify the topology, virtual links and simulation configuration have to be provided. The simulation results confirm that time-critical frames are transmitted through the network as planned and that their latency is bounded. It is noticed that the amendments for frame preemption and the Credit-based Shaper favor audio-video frames and best-effort traffic.
Most of the work in this thesis is based on the NeSTiNg simulator implemented by J. Falk et al. [26]. The technical details of this simulator are explained in Section 2.9. The simulator is available to the research community, and our contribution to their work is an extensive evaluation of the TSN functionalities implemented in the network simulator. The results provided in the paper concern the network simulator itself; the average simulation runtime and the memory consumption are measured. No information is given about the performance of TSN, only about the performance of the simulator. The work of this thesis is oriented more towards that part.


4 Method
In this section, the scientific research method applied within this thesis is briefly described. Two main parts are discussed: the motivation for employing this particular method and an overview of how the method applies to the thesis work.

4.1 System Development Research Method


The legitimacy of system development as a research method has been questioned over the years. Nunamaker, Chen and Purdin [1] describe and defend the use of system development as a scientific research method. In their paper, a multi-methodological approach is proposed. It incorporates the four research strategies presented in Figure 8:
• Theory building,
• Experimentation,
• Observation,
• Systems development.

Figure 8: A multi-methodological research approach [1]

Theory building is usually used to construct the research questions and to justify their significance to the research community. In some cases, research problems cannot be solved mathematically or tested empirically; instead, the development of a system can be used to provide answers to the research questions. System development usually leads to new theories and to the improvement of existing ones. Once the system is built, researchers can use it for experimentation. Experiments are usually driven by theories and affected by the system development itself, and the results provided by the experiments should be evaluated. Observation includes case studies, field studies, and similar methodologies. It helps with the formulation of the research questions that are going to be tested and answered through experimentation. No research method alone is sufficient


within the field of computer science. Usually, multiple methods are applicable, and valuable feedback can be achieved by combining them.
System development is an essential strategy of this research method, and it is interconnected with the other strategies. A partial goal of this thesis is developing simulation models for TSN that include the 802.1Qbu and 802.1Qbv amendments; this part can be considered system development. The purpose of developing a simulation model of TSN is to examine the performance of the amendments, individually and when they are combined, in order to provide a proof-by-demonstration that TSN can deliver real-time data transmission and still provide excellent transmission for the remaining traffic classes. In order to confirm the ideas and concepts developed about TSN as part of the theory building, several case studies are created; this methodology is included in observation. Experimentation finds its place between theory building and observation. It includes computer and experimental simulations and has the purpose of validating the proposed theories. In this thesis, simulation allows us to perform a performance analysis of TSN and answer the research questions stated in Section 1.2. In order to test the performance of the network, several performance metrics can be tracked, such as end-to-end transmission latency and utilization [32]. These are the dependent variables in the research because they are being observed; their change represents the result of experimental manipulation of the independent variables [33]. The independent variables are the network topology, data transmission rate, port bandwidth, etc. These are the values that are manipulated in an experiment and are unaffected by the other variables [33].
The research methodology consists of several milestones that are presented in Figure 9. In order to understand the state of the art, a literature review is conducted as the first stage. In the second stage, the existing simulation tools are reviewed, with OMNeT++ in focus. Since the goal of this thesis is not “reinventing the wheel”, developing an entirely new simulation framework would be unnecessary. Multiple existing simulation frameworks for TSN networks can be found, but NeSTiNg is the most promising one; this framework is altered to fit the scope of the research. The third stage is the design stage, and it implies developing the simulation models that are used to demonstrate the functionality and efficiency of the two amendments of interest. The fourth stage of the research is the implementation, which implies a synthesis combining the requirements elicitation, analysis and model checking phases. It provides a solution that can later be tested and improved. The final stage of the research is the evaluation, in which the performance analysis of the TSN network is conducted by comparing it with the AVB network. The last three stages are closely related, so in order to explain them, a holistic approach is used. Section 6 covers these three stages.

Figure 9: A research methodology


5 Simulation framework architecture


This section provides further explanation of the architecture of the framework. The main focus is on the following components:
• Time-aware shaper,
• Credit-based shaper,
• Frame preemption mechanism.
TAS is partly explained in Section 2. Scheduled traffic transmission must not be disturbed because its end-to-end latency must be bounded. This is ensured by using GCLs, which are handled by the gateController module, the gates represented by tGates, and the global clock of the switch. The GCLs are stored within an XML file, and they control the behaviour of the gates. These modules have been sufficiently tested and are compliant with the standard. CBS can be selected by changing the transmission selection algorithm to “CreditBasedShaper”, which is done through the tsAlgorithms module. Figure 10 contains the mentioned modules: the modules emphasized by an ellipse are responsible for TAS, while the modules within the rectangle implement CBS.
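To illustrate how a gate control list drives the gate states over time, the following Python sketch cycles through GCL entries given as pairs of open-queue sets and durations, in the style of the window tables used in Section 6. It is only a conceptual model; the concrete XML schedule format consumed by the gateController module is not reproduced here.

```python
def open_gates_at(gcl, t_us):
    # gcl: list of (set_of_open_queues, duration_us) entries, repeated cyclically.
    # Returns the set of queues whose gates are open at simulation time t_us.
    cycle = sum(duration for _, duration in gcl)
    t = t_us % cycle
    for open_queues, duration in gcl:
        if t < duration:
            return set(open_queues)
        t -= duration
    return set()

# Example cycle in the style of Scenario 1: a 17 us protected window for the
# scheduled queue, a 360 us unprotected window and a 123 us guard band in
# which all gates are closed.
gcl = [({"scheduled"}, 17), ({"best-effort"}, 360), (set(), 123)]
print(open_gates_at(gcl, 5))     # {'scheduled'}  -> protected window
print(open_gates_at(gcl, 450))   # set()          -> guard band, all gates closed
```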

Figure 10: Modules responsible for shapers

Figure 11 presents the modules responsible for frame preemption. After the frame goes through the gate, it exits the queuing module and can use either the express or the preemptable path. Before it exits the switch, the traffic needs to be encapsulated again. The etherEncap module adds the missing Ethernet frame fields to the payload, and vlanEncap adds the 802.1Q header.

Figure 11: Modules responsible for frame preemption


It needs to be mentioned that the original NeSTiNg had some issues that prevented the simulations from running. One of them was an error with spending the credit in the CreditBasedShaper module. It was resolved by canceling the credit spending before rescheduling, which allowed the usage of arbitrary slope factors, something that was not possible before. Since dealing with these errors was not systematic, i.e., it was done by trial and error, not all of the changes can be mentioned in this report. This updated version of NeSTiNg was uploaded to GitHub in order to allow researchers to investigate TSN further.


6 Evaluation of TSN
Performance analysis of TSN is conducted through five scenarios. The first scenario tests the functionality and performance of the enhancements for scheduled traffic (IEEE 802.1Qbv). The second scenario aims to demonstrate the functionality and test the performance of frame preemption (IEEE 802.1Qbu). The third scenario is the unification of the first two and demonstrates the functionality of the combination of the two mentioned amendments. The fourth scenario demonstrates the efficacy of frame preemption when combined with scheduled traffic. The final scenario combines all of the examined TSN amendments and is based on the industrial use-case model proposed in [2] and [6]. The following sections describe the model configurations and present the simulation results. It needs to be mentioned that most of these configuration parameter values are defaults; the data rate value was adopted from the work of M. Ashjaei et al. [2]. The simulation time of all scenarios is 300s, and from those simulation results, relevant data is extracted and presented in the following tables. However, the graphs show only the simulation time intervals of interest. All switches share the same buffer size of 100 Maximum Transmission Units (MTU). Another important reminder related to all scenarios is that the simulation framework does not provide data about the latency of packets that are lost. Whenever a switch buffer is full, packet loss occurs. That explains the saturation point of some graphs featured in the following sections.

Table 4: Model configuration

Configuration parameter: Value:

Processing delay 5µs
Data rate 100 Mbps
Simulation time resolution 1ps
Simulation time 300s
Buffer capacity 100 · MTU

The message sizes discussed in the following sections refer to the message payload size. Since the Ethernet frames used carry an 802.1Q tag, every message gains 22B intended for the destination MAC, source MAC, 802.1Q header, EtherType field and CRC. The minimum payload size is 42B, while the maximum is 1500B. After adding 7B for the Ethernet preamble and 1B for the Start of Frame Delimiter (SFD), the transmitted message size ends up in the 72B-1530B range.
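As a small illustration of this framing overhead, the following Python sketch computes the on-wire size and the transmission time of a frame at the 100 Mbps data rate from Table 4; the helper names are chosen here for illustration only.

```python
def on_wire_bytes(payload_bytes):
    # Pad to the 42 B minimum payload, add 22 B of header/tag/CRC and
    # 8 B of preamble + SFD, as described above.
    return max(payload_bytes, 42) + 22 + 8

def tx_time_us(payload_bytes, rate_bps=100e6):
    # Transmission time of one frame on a 100 Mbps link, in microseconds.
    return on_wire_bytes(payload_bytes) * 8 / rate_bps * 1e6

print(on_wire_bytes(20), on_wire_bytes(1500))                  # 72 1530
print(round(tx_time_us(20), 2), round(tx_time_us(1500), 2))    # 5.76 122.4
```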

6.1 Scenario 1 - IEEE 802.1Qbv


The goal of this scenario is to present the functionality of gating to ensure low and deterministic
end-to-end latency for scheduled traffic. Network topology is presented in Figure 12.

Figure 12: Scenario 1 - network topology

The model consists of three end systems and one switch. The ST node generates scheduled traffic, the BE node constantly generates best-effort traffic, and the Sink node is the recipient of all the generated traffic. The ST node transmits control traffic periodically every 500µs. The gating mechanism ensures a protected


window in which scheduled traffic transmits, an unprotected window in which best-effort data transmits, and a guard band window in which all the gates are closed in order to guarantee that the BE traffic transmission will not interfere with the ST traffic transmission [31]. The end-to-end latency is presented in Figure 13. It shows that scheduled traffic has a constant latency of 16.72µs. It is not possible to show how the end-to-end latency stabilizes in the full simulation time graph, so the first graph shows only the first 100ms of the simulation. The end-to-end latency of best-effort traffic increases linearly until the buffer of the switch gets full. The buffer capacity of the queue is set to 100 MTU packets. When the queue is completely full, the end-to-end latency of best-effort traffic becomes stable. However, best-effort traffic experiences packet loss because of the buffer overflow: it cannot be forwarded to the output port as fast as it is being generated. Table 5 presents the minimum, maximum and average latency for both types of traffic.

Figure 13: Scenario 1 - traffic latency

Table 5: Scenario 1 - end-to-end latency

End-to-end latency (ms)


Traffic type: Min Max Avg
Scheduled traffic 0.01672 0.01672 0.01672
Best-effort 0.24984 25.25000 25.18600

The utilization of each link is presented in Table 6. Scheduled traffic is generated every 500µs and its payload is 20B, so the low utilization of that link is understandable. The BE node constantly generates MTU packets, so the utilization of its link is high.


Table 6: Scenario 1 - utilization of each link

Link Utilization (%)


ST → Switch 1.1720
BE → Switch 99.1602
Switch → Sink 50.1400

Figure 14 graphically presents the traffic flow on its way to the destination. Scheduled traffic goes directly towards the switch input port eth[0], after which it is forwarded to the appropriate output port eth[2]. It is queued in the highest-priority queue, which checks the setup of the Transmission Selection Algorithm (TSA) and the state of the gate. The TSA is set to StrictPriority and the gate state is open, so the scheduled traffic is forwarded towards Sink. Best-effort traffic is sent towards the switch input port eth[1], where it is queued. If the gate is closed, the traffic waits in the queue along with all the other traffic sent to that queue. As soon as the gate opens, the traffic is forwarded towards Sink. The gating mechanism is configured to ensure a protected window length of 17µs, an unprotected window length of 360µs and a guard band window of 123µs [31]. The guard band ensures that best-effort traffic finishes its transmission before the protected window starts, and it is set to its maximum value, i.e., the time needed to transmit one MTU-sized frame.
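A quick back-of-the-envelope check of this configuration, under the framing assumptions of Section 6 (30 B of per-frame overhead including preamble and SFD): the guard band corresponds to the transmission time of one maximum-size frame at 100 Mbps, and the three windows together fill a cycle that matches the 500 µs period of the scheduled traffic.

```python
MTU_ON_WIRE_B = 1500 + 22 + 8        # payload + header/tag/CRC + preamble/SFD
guard_band_us = MTU_ON_WIRE_B * 8 / 100e6 * 1e6
print(round(guard_band_us, 1))       # 122.4 -> rounded up to 123 us in the GCL
print(17 + 360 + 123)                # 500   -> the windows fill the 500 us cycle
```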

Figure 14: Scenario 1 - execution trace

6.2 Scenario 2 - IEEE 802.1Qbu


In this scenario, the functionality of frame preemption is presented. The purpose of preemption
is to allow higher priority frames to start their transmission immediately by preempting the lower
priority frames, which leads to reduced latency. In this case, it is used in conjunction with the
credit-based shaper. Network topology for this scenario is shown in Figure 15.

Figure 15: Scenario 2 - network topology

There are two types of traffic transmitted through this network. Audio-video traffic generated by the AV node can preempt the best-effort traffic generated by the BE node. The Sink node is the recipient of all the frames that are generated. The model configuration is the same as for Scenario 1 (Table 4). The TSA for audio-video traffic is set to CreditBasedShaper, and the IdleSlopeFactor of the shaper is set to


70 Mbps. The TSA for best-effort traffic is set to StrictPriority.


The end-to-end latency is presented in Figure 16. The first graph shows the traffic end-to-end latency for the first 100ms. The latency of both traffic types constantly increases, as expected, until the switch buffers reach the maximum capacity of 100 MTU packets. Since frames are generated constantly, the switch cannot process and forward them instantly; because of that, frames are queued. Audio-video frames are forwarded as soon as the credit reaches zero, and they preempt best-effort frames if necessary. Audio-video traffic has a lower latency than best-effort traffic. Even though audio-video frames are delayed by the credit-based shaper, the credit is replenished fast enough to allow their transmission and, as a consequence, to delay the best-effort traffic. Table 7 presents the minimum, maximum and average latency for both types of traffic.
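A minimal Python sketch of the credit-based shaping behaviour described above, under the parameters of this scenario (a 100 Mbps port and an idle slope of 70 Mbps, so the send slope is -30 Mbps): a frame of the shaped class may start only when the credit is not negative, the credit drains at the send slope while the frame is transmitted, and it is replenished at the idle slope afterwards. The helper names are illustrative, not NeSTiNg APIs.

```python
PORT_RATE_BPS = 100e6
IDLE_SLOPE_BPS = 70e6                              # IdleSlopeFactor of this scenario
SEND_SLOPE_BPS = IDLE_SLOPE_BPS - PORT_RATE_BPS    # -30 Mbps while transmitting

def credit_after_frame(frame_bytes):
    # Credit (in bits) right after transmitting one shaped frame,
    # assuming the transmission started with a credit of zero.
    tx_time_s = frame_bytes * 8 / PORT_RATE_BPS
    return SEND_SLOPE_BPS * tx_time_s

def recovery_time_us(credit_bits):
    # Time until the negative credit climbs back to zero at the idle slope.
    return -credit_bits / IDLE_SLOPE_BPS * 1e6

credit = credit_after_frame(1400 + 22 + 8)         # one 1400 B AV payload on the wire
print(round(credit, 1), round(recovery_time_us(credit), 2))   # -3432.0 49.03
```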

Figure 16: Scenario 2 - traffic latency

Table 7: Scenario 2 - end-to-end latency

End-to-end latency (ms)


Traffic type: Min Max Avg
Audio-Video 0.23384 19.904 19.834
Best-effort 0.47369 34.253 34.073

The utilization of the links is shown in Table 8. The payload of the audio-video traffic is 1400B, while the BE node constantly generates MTU packets (1500B). This explains the high utilization of the links from the end systems to the switch. The utilization from the switch to the Sink is 98.5128%.


Table 8: Scenario 2 - utilization of each link

Link Utilization (%)


AV → Switch 98.4012
BE → Switch 99.1602
Switch → Sink 98.5128

Figure 17 presents the execution trace for this scenario. Audio-video traffic uses port eth[0] as its input port. The switch forwards the packets to the output port eth[2]. Its PCP is set to 7, so this traffic uses queue[1] according to the mapping matrix (Table 3). The input port for best-effort traffic is eth[1], and it uses the same output port as the audio-video traffic. Its PCP value is set to 0, so best-effort traffic is queued in queue[0]. The gates associated with these queues are constantly open. If the transmission selector requests a packet from queue[1] and the credit is zero, the frame is instantly forwarded to the upper layers through the express queue, preempting the transmission of any other traffic. Best-effort frames are transmitted through the preemptable queue.
In this scenario, the maximum end-to-end latency of AV traffic is 19.904ms. Audio-video traffic is delayed by the credit-based shaper, so frame preemption does not reduce the latency of the higher-priority frames.

Figure 17: Scenario 2 - execution trace

6.3 Scenario 3 - IEEE 802.1Qbv and IEEE 802.1Qbu


The Scenario 3 topology is presented in Figure 18. It consists of four end nodes: the AV node, which constantly generates 1400B traffic, the BE node, which generates MTU packets throughout the entire simulation, the ST node, which generates 20B traffic periodically every 500µs, and the Sink node, which receives all the traffic. The PCP of audio-video traffic is set to 7, the PCP of scheduled traffic is set to 5, and best-effort traffic has a PCP of 0. Audio-video traffic goes through the credit-based shaper, which has an IdleSlopeFactor of 70 Mbps, and it can preempt frames that have a lower priority. The gating mechanism ensures a protected window length of 17µs when only the ST gate is open, an unprotected window length of 360µs when the AV and BE gates are open, and a maximum guard band length of 123µs when all the gates are closed.

Figure 18: Scenario 3 - network topology


Figure 19 shows the trend of the end-to-end latency for every traffic type throughout the simulation time. The best-effort and audio-video traffic latencies increase linearly until their buffers get full, after which the end-to-end latency becomes approximately constant. The credit-based shaper allows best-effort traffic to be transmitted even though it has the lowest priority. Scheduled traffic transmission is fully deterministic and has a constant latency of 16.72µs. Because of the gating mechanism, it is never preempted by audio-video traffic. Table 9 presents the concrete minimum, maximum and average end-to-end latency values of all three traffic types.

Figure 19: Scenario 3 - traffic latency

Table 9: Scenario 3 - end-to-end latency

End-to-end latency (ms)


Traffic type: Min Max Avg
Audio-Video 0.23384 27.05200 26.91600
Scheduled traffic 0.01672 0.01672 0.01672
Best-effort 0.35712 50.48300 50.31600

Increasing the number of sources affects the utilization of the link towards the Sink. The gating mechanism, credit-based shaping and frame preemption cause traffic queuing, so the utilization of the mentioned link is 71.3415%. The utilization of the remaining links is as expected, as presented in Table 10.

Table 10: Scenario 3 - utilization of each link

Link Utilization (%)


AV → Switch 99.2210
ST → Switch 1.1520
BE → Switch 99.1657
Switch → Sink 71.3415


6.4 Scenario 4
The previous scenario demonstrates how gating, frame preemption and credit-based shaping work when they are combined. The obtained results show that frame preemption does not decrease the end-to-end latency significantly. Scenario 4 combines frame preemption and gating. The topology is presented in Figure 20. There are two types of control frames: time-triggered (TT) and event-triggered (ET). Event-triggered frames have the highest priority and are able to preempt best-effort traffic, which has the lowest priority. Frame preemption guarantees that these frames are transmitted as soon as they arrive, providing a low end-to-end latency. The gating mechanism assures that time-triggered frames are scheduled and not preempted by any other traffic, not even higher-priority frames. The PCP of event-triggered traffic is set to 7, the PCP of time-triggered traffic (equivalent to the scheduled traffic from the previous scenario) is set to 5, and best-effort traffic has a PCP of 0. The ET node generates 40B frames, the TT node generates 20B frames every 500µs, and the BE node generates MTU packets throughout the entire simulation. The receiver of these frames is the Sink node. The gating mechanism is regulated the same way as in the previous scenario.

Figure 20: Scenario 4 - network topology

Table 11 presents the minimum, maximum and average end-to-end latency values for all three types of traffic. In order to evaluate the performance of frame preemption, the same results are gathered for the scenario with frame preemption disabled. These results are presented in Table 12.

Table 11: Scenario 4 - end-to-end latency with preemption

End-to-end latency (ms)


Traffic type: Min Max Avg
Event-triggered 0.01672 0.02172 0.01986
Time-triggered 0.01672 0.01672 0.01672
Best-effort 0.25788 25.25800 25.07000

Table 12: Scenario 4 - end-to-end latency without preemption

End-to-end latency (ms)


Traffic type: Min Max Avg
Event-triggered 0.01672 0.02672 0.02423
Time-triggered 0.01672 0.01672 0.01672
Best-effort 0.24984 25.25000 25.03500

By comparing these two tables, it can be concluded that frame preemption decreases the average end-to-end latency of event-triggered traffic by 4.366µs. For time-critical packets, this is a very significant improvement in end-to-end transmission latency. Even though the latency is low, it is not deterministic: it depends on whether a frame is being transmitted through the preemptable queue or not. If not, the end-to-end latency is lower. Preemption also takes some time; the transmission of the preemptable frame has to be stopped and the indicators of preemption have to be inserted.


Only then can the express frame be transmitted. When no frames are being transmitted through the preemptable queue, the end-to-end latency of event-triggered traffic is 0.01672ms; otherwise, it is 0.02172ms. When frame preemption is not enabled, the end-to-end latency is the same for frames that are not queued. However, frames that are queued and transmitted as soon as the transmission of the previous frame is completed experience an end-to-end latency of 0.02672ms. The end-to-end latency of scheduled traffic is low and deterministic, as shown in the previous scenarios. Best-effort traffic experiences a higher end-to-end latency when preemption is enabled, as expected, because it is being preempted.

6.5 Scenario 5 - Industrial use-case


The network topology of this scenario is presented in Figure 21. It is an industrial use-case model that represents an in-car network. The topology consists of 14 end nodes and 2 TSN switches: Switch A is in the rear part of the vehicle, while Switch B is located in the front part. The Head Unit is the Electronic Control Unit (ECU) of the vehicle, and it collects the data from the nodes. AV Sink is the node that collects the data from the Audio and Video nodes to provide rear-seat entertainment. The Control nodes transmit control data that is time critical, such as sensor data. The Bulk node is connected to the Internet and forwards data to the Head Unit. The Head Unit combines the data from three video cameras and calculates the bird’s eye view [6]. FCAM generates video data for the front view system terminated by PUCAM.

Figure 21: Scenario 5 - network topology

The traffic generated by all source nodes is presented in Table 15. Control traffic is labeled “ST” and should have a minimal and deterministic end-to-end latency. The video cameras generate traffic of identical size.
The configuration of the GCL for switch A is given in Table 13. Even though this switch has two output ports, only the GCL for the output port towards switch B is given. The protected window assures that the scheduled traffic transmission is not disturbed by other types of traffic. The duration of the protected window is the time required for the control messages from nodes Control 3 and Control 4 to reach switch B. The other output port transmits only one type of traffic, so there is no need to change the states of its gates, i.e., all its gates are always open.

Cycle - 10ms
Window Gate configuration (bit vector) Duration
Protected window 0001 24µs
Unprotected window 1110 9853µs
Guard band 0000 123µs

Table 13: Gate Control List for switch A

Table 14 contains the configuration of the switch B output port towards the Head Unit. The duration of the protected window is a few microseconds longer than the one mentioned above. Messages from nodes Control 1 and Control 2 are transmitted first, followed by the messages from nodes Control 3


and Control 4, which are transmitted upon their arrival from switch A. The output port towards
PUCAM does not require any gate state changes.

Cycle - 10ms
Window Gate configuration (bit vector) Duration
Protected window 0001 37µs
Unprotected window 1110 9840µs
Guard band 0000 123µs

Table 14: Gate Control List for switch B

Table 15: Scenario 5 - traffic characteristics

Type of traffic Source Destination Size (B) Period


ST Control 1 Head Unit 20 10ms
ST Control 2 Head Unit 20 10ms
ST Control 3 Head Unit 20 10ms
ST Control 4 Head Unit 20 10ms
AV Camera 1 Head Unit 31440 10ms
AV Camera 2 Head Unit 31440 10ms
AV Camera 3 Head Unit 31440 10ms
AV FCAM PUCAM 31440 10ms
AV Audio AV Sink 1472 10ms
AV Video AV Sink 1472 10ms
BE Bulk Head Unit 1400 10ms

The simulation time of this scenario is 5 minutes. Table 16 presents the achieved maximum end-to-end latency of all messages when frame preemption is active. All control messages are delivered in under 36.88µs, which is satisfying for critical applications. Audio-video data can preempt Bulk data. The IdleSlopeFactor is set to 95 Mbps.

Table 16: Scenario 5 - end-to-end latency

Message End-to-end latency (ms)


Control1 0.01672
Control2 0.02344
Control3 0.03016
Control4 0.03688
Camera1 8.45036
Camera2 8.58914
Camera3 9.15941
FCAM 3.12860
Audio 0.39390
Video 0.25243
Bulk 1.84834

The utilization of every link is presented in Table 17, and the tests show that it is not altered by activating frame preemption. The Head Unit link has the highest utilization, as expected, because it receives packets from eight nodes.


Table 17: Scenario 5 - utilization of each link

Link Utilization (%)


Control 3 → Switch A 0.0586
Control 4 → Switch A 0.0586
Camera 3 → Switch A 25.6392
Audio → Switch A 1.2008
Video → Switch A 1.2008
Bulk → Switch A 1.1442
Switch A → AV Sink 2.4016
Switch A → Switch B 26.8986
Control 1 → Switch B 0.0576
Control 2 → Switch B 0.0576
Camera 1 → Switch B 25.6392
Camera 2 → Switch B 25.6392
FCAM → Switch B 25.6392
Switch B → PUCAM 25.6392
Switch B → Head Unit 78.2922
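The camera and control link figures can be roughly cross-checked from the traffic characteristics in Table 15. The sketch below assumes that each periodic payload is fragmented into maximum-size frames with 30 B of per-frame overhead (as in Section 6) and ignores inter-frame gaps, which explains the small deviation from the simulated camera value of 25.6392%; the control-link estimate matches the 0.0576% of the Control 1 and Control 2 links.

```python
def link_utilization_pct(payload_bytes, period_s, rate_bps=100e6, mtu=1500):
    # Fragment the periodic payload into maximum-size frames; every frame
    # carries 30 B of overhead (header, tag, CRC, preamble, SFD) and a
    # remainder payload below 42 B is padded. Inter-frame gaps are ignored.
    full, rest = divmod(payload_bytes, mtu)
    wire_bytes = full * (mtu + 30) + ((max(rest, 42) + 30) if rest else 0)
    return wire_bytes * 8 / (period_s * rate_bps) * 100

print(round(link_utilization_pct(31440, 0.010), 2))   # ~25.66 % per camera link
print(round(link_utilization_pct(20, 0.010), 4))      # 0.0576 % per control link
```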

In paper [2], this model was used for the response-time analysis of the Ethernet AVB protocol. The results that the researchers obtained are shown in Table 18. It is important to point out that the model configuration is different: in this case, control frames are considered Class A and are forwarded through a credit-based shaper with an idle slope of 10 Mbps. Camera frames represent Class B, and the idle slope of the shaper is 85 Mbps. Best-effort traffic (Bulk) is transmitted as non-real-time frames.
Comparing these results with the results stated above, we can see that control frames have a much lower end-to-end latency if transmitted through the TSN switch. When frames are transmitted as Class A, they are delayed by the credit-based shaper; if they are transmitted as scheduled traffic, they can be transmitted immediately. Camera and audio-video frames also have a lower end-to-end latency when transmitted through the TSN switch, because the idle slope is 95 Mbps. Control frames and bulk data do not take more than 5% of the overall network utilization, so there is no need to decrease the data rate of camera and audio-video frames by assigning a lower idle slope to the credit-based shaper.

Table 18: Scenario 5 - end-to-end latency (with AVB only) [2]

Message End-to-end latency (ms)


Control1 0.336
Control2 0.336
Control3 0.569
Control4 0.569
Camera1 12.313
Camera2 12.313
Camera3 15.490
FCAM 6.094
Audio 0.416
Video 0.416

6.6 Discussion
This subsection summarizes all observations related to the performance of TSN amendments.
Obtained simulation results are discussed, and two main amendments are evaluated according to
them. Enhancement for scheduled traffic and frame preemption are analyzed individually and in
conjunction.
Scenario 1 analyzes enhancement for scheduled traffic individually. Results obtained utilizing


this network topology show that this amendment can guarantee low and deterministic latency for scheduled traffic, but only if the gating schedule is created correctly. The main disadvantage is that the schedule has to be created offline, i.e., before execution. It has to be precise in order to achieve maximum utilization. However, network delays have to be considered as well: even if precise transmission times are calculated, a safety margin should be added. This is a trade-off; in order to maximize utilization, safety margins have to be decreased, and vice versa. The guard band decreases the utilization of network links because it stops the transmission of all types of traffic. Setting the guard band to the time interval needed for the transmission of an MTU packet guarantees a deterministic end-to-end latency of scheduled traffic. However, this is sometimes redundant. A better approach would be to determine the size of the packet that is being transmitted just before the scheduled traffic and then calculate the guard band length based on that size. This could decrease the guard band length and increase utilization. The utilization of the Switch to Sink link with the guard band is 50.14%; it increases to 76.796% without the guard band. For this topology, the scheduled traffic is unharmed, and its end-to-end latency is the same. However, it is more important to guarantee that time-critical and scheduled packets are transmitted in a predictable manner. The end-to-end latency of best-effort traffic continuously increases until the buffer of the queue gets full; the latency then reaches its maximum value and becomes stable. Once the buffer gets full, packet loss occurs.
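As an illustration of the adaptive guard band suggested above, the following hedged Python sketch sizes the guard band for the largest frame that can still be pending on the unprotected queues just before the protected window opens, instead of always reserving a full MTU transmission time; the helper is illustrative only and reuses the framing overhead from Section 6.

```python
def guard_band_us(largest_pending_payload_bytes, rate_bps=100e6):
    # Guard band sized for the largest frame that can still be pending on
    # the unprotected queues right before the protected window opens.
    wire_bytes = max(largest_pending_payload_bytes, 42) + 30
    return wire_bytes * 8 / rate_bps * 1e6

print(round(guard_band_us(1500), 1))   # 122.4 us -> the static worst case used above
print(round(guard_band_us(100), 1))    # 10.4 us if only small frames are pending
```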
There are time-critical packets that are not scheduled, but it is essential that they are transmitted as soon as they arrive. Since creating an offline schedule for gating is not an option in this case, frame preemption is used. Scenario 2 analyzes the amendment for frame preemption. Audio-video traffic should be transmitted evenly, and the credit-based shaper promotes this. Frame preemption allows the immediate transmission of audio-video traffic right after the credit reaches zero. This reduces the end-to-end latency of audio-video traffic, and these frames are evenly distributed in time. However, it does not bring a significant improvement in this scenario. Preemption should be used only for frames that do not require determinism in their latency but do require immediate transmission. It is not suitable to use preemption instead of gating, because the latency would be completely unpredictable.
Scenario 3 is a combination of the first two scenarios. Even though scheduled traffic does not have the highest priority, the gating assures that nothing disrupts its transmission, not even the AV traffic, which is able to preempt packets of lower priority. The credit-based shaper favors the best-effort traffic and prevents its starvation. Preemption favors the audio-video traffic and slightly increases the latency of the preempted traffic.
Scenario 4 is similar to the previous one, except that it does not include credit-based shaping. Two types of critical frames are transmitted through the network: time-triggered and event-triggered. The gating mechanism assures a low and deterministic end-to-end latency of time-triggered frames. Frame preemption assures a low, but not deterministic, end-to-end latency of event-triggered frames. This scenario shows that frame preemption is not instantaneous; it takes some time. Because of that, the end-to-end latency of a frame that preempts another frame is increased. However, the latency is much lower with frame preemption than without it. The end-to-end latency of the preempted traffic (best-effort traffic) is increased, as expected.
The purpose of scenario 5 is to demonstrate the differences between a TSN and an AVB switch. An AVB switch can assure a low end-to-end latency of time-critical frames, but it cannot be considered deterministic: non-time-critical frames could interfere with them, while that is not the case when they are transmitted through a TSN switch. If gating is implemented correctly, no other traffic class can disrupt the scheduled traffic. Frame preemption assures that AV frames are distributed evenly in time and transmitted as soon as the credit reaches zero. However, for this scenario, it does not improve the overall network performance.


7 Conclusion
The main goal of this thesis is to conduct a performance analysis of TSN by providing concrete and valid results in a network simulation framework. TSN consists of numerous sub-standards, but only the enhancements for scheduled traffic (IEEE 802.1Qbv) and frame preemption (IEEE 802.1Qbu) are in the scope of the research. In order to address this topic, a thorough background has been provided, starting with the network architecture, followed by the TSN features and an OMNeT++ introduction. The background was finalized with the concrete TSN simulation framework NeSTiNg, which was used to build the models of interest. The related work summarizes important research papers that investigate TSN networks and justifies the thesis goal. The evaluation has been performed by creating five scenarios that each provide simulation results. The first two scenarios demonstrate the functionality of the mentioned amendments, while the third and fourth provide interesting results on combining the two. The fifth scenario is a real-world use-case that represents a vehicle network. The discussion that concludes the evaluation provides a detailed analysis of the mentioned scenarios, based on which the research questions can be answered.

7.1 Reflection on research questions


The conducted analysis and the provided scenarios allow answering the research questions stated in Section 1.2. The first question, how does a TSN-based switched Ethernet network perform in terms of end-to-end data transmission latency, is answered in all five scenarios of Section 6. The end-to-end latency of the first scenario is given for both traffic types: scheduled traffic has a low and deterministic latency that is constant during the simulation time, while best-effort traffic experiences a greater end-to-end latency due to the fact that it is being generated during 99.1602% of the simulation time. This causes traffic queuing, which increases the latency. The second scenario provides similar results, since both the best-effort and the audio-video traffic latency increase linearly. The third scenario combines the first two and shows that the end-to-end latency of scheduled traffic remains the same regardless of the other traffic types. The fourth scenario is similar to the previous one. One part of the control frames is transmitted as scheduled traffic, while the other part utilizes frame preemption. All of the frames are transmitted as soon as they arrive in the queues; their end-to-end latency is low. The frames are not disrupted by any other type of traffic. Scheduled traffic has a constant end-to-end latency, while the end-to-end latency of frames that preempt other frames varies; it is still lower than in the case without preemption. The final scenario provides the most interesting results, as it is an industrial use-case that can be implemented. Scheduled traffic experiences a deterministic end-to-end latency, which is achieved with the gating mechanism. The video camera data that goes through the CBS towards the Head Unit has the greatest end-to-end latency, which is expected since its data size is the greatest. The audio and video data transmitted towards the rear-seat screen experience a low end-to-end latency, whereby the video data is received before the audio data. Bulk data experiences a latency in the millisecond range and is not starving. This has been discussed in Section 6.6.
The following research question, how does the combination of the 802.1Qbv and 802.1Qbu amendments affect the network performance for critical messages, has been partially answered in the previous paragraph. Critical messages are messages that must be delivered under a certain time limit and are an important part of safety mechanisms, as mentioned in Section 2.1. Section 6 has five scenarios, out of which four have critical messages, named scheduled traffic. In all four scenarios, the scheduled traffic has deterministic behaviour. For scenarios 1 and 3, the end-to-end latency of critical messages is constant and in the microsecond range, which is acceptable for most critical applications. In the fifth scenario, the critical messages, named control traffic, are delivered in under 36.88µs, which is also satisfying. Both of the mentioned amendments are utilized in the last two scenarios. The gating mechanism guarantees deterministic behaviour of scheduled traffic, while frame preemption does not affect scheduled traffic. When utilized properly, frame preemption can lower the latency of traffic classes that do not require determinism.
The third significant question this thesis aims to answer is, how can a simulation framework help in evaluating the performance of TSN networks. A simulation framework such as NeSTiNg, mentioned in Section 2.9, covers the amendments of interest of this research. All of the packages conform to the standard, and the framework can help with understanding and efficiently utilizing its characteristics. OMNeT++ was an easy choice as the platform for the framework, as it is open source and utilized

34
Haris Suljić, Mia Muminović Performance study and analysis of TSN

by the research community all around the world. The advantage of using a simulation framework is its ability to be easily reconfigured; all the parameters are controlled, and researchers can focus on the analysis. On the other hand, performing a physical experiment could provide more accurate results, but such an experiment is not easily scalable. The process of wiring and configuring the network mentioned in Section 6.5 would be too exhausting, and collecting the data from the nodes would not be an easy task. NeSTiNg can provide the latency, utilization, queuing time, express and preemptable frame delays, etc., needed to understand TSN. Also, the animation speed can be configured so that the traffic flow can be easily tracked.
The initial goal of this thesis was to perform a performance analysis of TSN networks by developing a simulation framework based on OMNeT++, but since several were already implemented, such as NeSTiNg and CoRE4INET28 , the focus of the thesis shifted towards a deeper performance analysis of the standard. With the increase of data being transmitted in the automotive industry, this technology is predicted to be the next step in in-car networking. With this being said, the authors of this thesis believe that some interesting results have been shown by examining these two amendments; the potential and utility of TSN, in general, are unquestionable.

28 CoRE4INET. [Online]. Available: https://core4inet.core-rg.de/trac/wiki/CoRE4INET_Background


8 Future Work
While creating the gate control lists for each of the scenarios, one can conclude that it is a tiring process which requires much planning in advance. However, it is not impossible to implement an automatic offline gate scheduler with respect to the traffic characteristics. It requires taking a lot of information about a single frame into consideration, such as period, release time, priority, destination, packet size, and the size of the previous and the next frame, as well as information about the network configuration, such as data rate, cycle period, number of queues that are used, processing delay of the switches, etc. Based on these inputs, GCLs that provide minimal protected windows and guard bands could be easily generated and then tested using simulation tools, as sketched below.
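A highly simplified sketch of this idea, assuming a single output port, a 100 Mbps link and scheduled streams that share one cycle: the protected window covers the transmission of all scheduled frames plus a margin, the guard band covers one largest unscheduled frame, and the rest of the cycle stays unprotected. A real scheduler would also have to handle offsets, multiple hops, queue assignment and per-stream deadlines.

```python
def tx_us(payload_bytes, rate_bps=100e6):
    # On-wire transmission time with the framing overhead used in Section 6.
    return (max(payload_bytes, 42) + 30) * 8 / rate_bps * 1e6

def build_gcl(cycle_us, scheduled_payloads, max_unscheduled_payload, margin_us=2.0):
    # Protected window: transmission of all scheduled frames plus a margin.
    # Guard band: one largest unscheduled frame. Rest of the cycle: unprotected.
    protected = sum(tx_us(p) for p in scheduled_payloads) + margin_us
    guard = tx_us(max_unscheduled_payload)
    unprotected = cycle_us - protected - guard
    assert unprotected > 0, "cycle too short for this traffic"
    return [("protected", protected), ("unprotected", unprotected),
            ("guard band", guard)]

# Example: one 20 B scheduled stream per 500 us cycle, MTU best-effort traffic.
for name, duration in build_gcl(500, [20], 1500):
    print(f"{name:12s} {duration:7.2f} us")
```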
The utilized simulation framework required some modifications, such as solving errors with credit spending or canceling the self-messages of other modules. The mentioned issues would not allow one to run simulations longer than several seconds, and there are still some unresolved errors. Future work could be related to developing a more stable simulation framework, and a comparison with another simulation framework could lead to greater credibility.
Future work could also provide a performance analysis of TSN in more complex networks in order to determine the limits of its efficient utilization. That kind of research could demonstrate TSN's scalability.


References
[1] J. F. Nunamaker Jr, M. Chen, and T. D. Purdin, “Systems development in information
systems research,” Journal of management information systems, vol. 7, no. 3, pp. 89–106,
1990.
[2] M. Ashjaei, S. Mubeen, J. Lundbäck, M. Gålnander, K.-L. Lundbäck, and T. Nolte, “Modeling
and timing analysis of vehicle functions distributed over switched Ethernet,” in IECON 2017-
43rd Annual Conference of the IEEE Industrial Electronics Society. IEEE, 2017, pp. 8419–
8424.

[3] P. Simoneau, “White paper: The OSI Model: Understanding the Seven Layers of Computer
Networks,” Global Knowledge Training LLC, Tech. Rep., 2006.
[4] IEEE Computer Society, "IEEE 802.1Q Standard." IEEE, 2014.
[5] IEEE Computer Society, "IEEE 802.3 Standard for Ethernet." IEEE, 2016.

[6] H.-T. Lim, K. Weckemann, and D. Herrscher, “Performance study of an in-car switched Eth-
ernet network without prioritization,” in International Workshop on Communication Tech-
nologies for Vehicles. Springer, 2011, pp. 165–175.
[7] IEEE LAN/MAN Standards Committee et al., "IEEE standard for local and metropolitan area networks," 2014.

[8] S. S. Craciunas, R. S. Oliver, M. Chmelík, and W. Steiner, "Scheduling real-time communication in IEEE 802.1Qbv time sensitive networks," in Proceedings of the 24th International Conference on Real-Time Networks and Systems. ACM, 2016, pp. 183–192.
[9] J. Jiang, Y. Li, S. H. Hong, A. Xu, and K. Wang, “A Time-sensitive Networking (TSN) Simu-
lation Model Based on OMNET++,” in 2018 IEEE International Conference on Mechatronics
and Automation (ICMA). IEEE, 2018, pp. 643–648.
[10] Z. Ullah, “Use of Ethernet Technology in Computer Network,” Global Journal of Computer
Science and Technology, 2012.
[11] K. C. Lee and S. Lee, “Performance evaluation of switched Ethernet for real-time industrial
communications,” Computer standards & interfaces, vol. 24, no. 5, pp. 411–423, 2002.

[12] Z. Lin, S. Pearson et al., “An inside look at industrial Ethernet communication protocols,”
White Paper Texas Instruments, 2013.
[13] T. Brand, “Time Sensitive Networks: Real-Time Ethernet,” 2018.
[14] L. Zhao, F. He, E. Li, and J. Lu, “Comparison of Time Sensitive Networking (TSN)
and TTEthernet,” in 2018 IEEE/AIAA 37th Digital Avionics Systems Conference (DASC).
IEEE, 2018, pp. 1–7.
[15] G. A. Ditzel and P. Didier, “Time sensitive network (TSN) protocols and use in Ethernet/IP
systems,” in 2015 ODVA Industry Conference & 17th Annual Meeting, 2015.

[16] T. Wan and P. Ashwood-Smith, "A performance study of CPRI over Ethernet with IEEE 802.1Qbu and 802.1Qbv enhancements," in 2015 IEEE Global Communications Conference (GLOBECOM). IEEE, 2015, pp. 1–6.
[17] L. Zhao, P. Pop, and S. S. Craciunas, "Worst-case latency analysis for IEEE 802.1Qbv time sensitive networks using network calculus," IEEE Access, vol. 6, pp. 41803–41815, 2018.

[18] H. Laine, “Simulating Time-Sensitive Networking,” 2015.


[19] W.-K. Jia, G.-H. Liu, and Y.-C. Chen, "Performance evaluation of IEEE 802.1Qbu: Experimental and simulation results," in 2013 IEEE 38th Conference on Local Computer Networks (LCN). IEEE, 2013, pp. 659–662.

[20] J. Pan and R. Jain, "A survey of network simulation tools: Current status and future developments," vol. 2, no. 4, p. 45, 2008.

[21] S. Siraj, A. Gupta, and R. Badgujar, “Network simulation tools survey,” International Journal
of Advanced Research in Computer and Communication Engineering, vol. 1, no. 4, pp. 199–
206, 2012.
[22] E. Weingartner, H. Vom Lehn, and K. Wehrle, “A performance comparison of recent network
simulators,” in 2009 IEEE International Conference on Communications. IEEE, 2009, pp.
1–5.
[23] A. Varga and R. Hornig, "An overview of the OMNeT++ simulation environment," in Proceedings of the 1st international conference on Simulation tools and techniques for communications, networks and systems & workshops. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2008, p. 60.

[24] A. Varga, “OMNeT++ Simulation Manual Version 5.4.1.” OpenSim Ltd, 2016, pp. 123, 425,
133, 280.
[25] A. Varga, “OMNeT++ User Guide Version 5.4.1.” OpenSim Ltd, 2016, p. 128.
[26] J. Falk, D. Hellmanns, B. Carabelli, N. Nayak, F. Dürr, S. Kehrer, and K. Rothermel, “NeST-
iNg: Simulating IEEE time-sensitive networking (TSN) in OMNeT++,” in Proceedings of the
2019 International Conference on Networked Systems (NetSys), Garching b. München, Ger-
many, Mar. 2019.
[27] P. Doyle, "Introduction to real-time Ethernet I," The Extension: A Technical Supplement to Control Network, vol. 5, no. 3, 2004.

[28] P. Doyle, "Introduction to real-time Ethernet II," The Extension: A Technical Supplement to Control Network, vol. 5, no. 4, 2004.
[29] W. S. S. APART, “5 Real-Time, Ethernet-Based Fieldbuses Compared.”
[30] T. Steinbach, F. Korf, and T. C. Schmidt, “Real-time Ethernet for automotive applications:
A solution for future in-car networks,” in 2011 IEEE International Conference on Consumer
Electronics-Berlin (ICCE-Berlin). IEEE, 2011, pp. 216–220.
[31] P. Heise, F. Geyer, and R. Obermaisser, “TSimNet: An industrial time sensitive networking
simulation framework based on OMNeT++,” in 2016 8th IFIP International Conference on
New Technologies, Mobility and Security (NTMS). IEEE, 2016, pp. 1–5.

[32] J. Farkas, “Introduction to IEEE 802.1 - Focus on the Time-Sensitive Networking Task
Group.” IEEE, 2017.
[33] C. Wohlin, M. Höst, and K. Henningsson, “Empirical research methods in software engineer-
ing,” in Empirical methods and studies in software engineering. Springer, 2003, pp. 7–23.

A Installation guidelines
This section provides guidelines and technical tips for installing OMNeT++, the INET framework and the NeSTiNg simulation framework. It is important to point out that the simulations have been tested with the latest version of the NeSTiNg simulation framework, OMNeT++ version 5.4.1 and INET version 4.1.0 under Ubuntu Linux 18.04.

A.1 OMNeT++
If you are installing OMNeT++ on a Linux distribution or on macOS, prerequisite packages have to be installed first; no prerequisite packages are necessary on Windows. The next step is downloading OMNeT++ from the official web-page29 . You have to manually copy the downloaded OMNeT++ archive to the directory where you want to install it. It is important that you choose a directory whose full path does not contain any spaces (do not put OMNeT++ under Program Files). The OMNeT++ Installation Guide30 describes how to install OMNeT++ on various platforms, and the instructions provided in that document are sufficient for a successful installation. A Java Runtime Environment must be installed before you can use the IDE; it is strongly recommended to use OpenJDK, version 8.0 or later.
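For orientation, the whole procedure on Ubuntu 18.04 looks roughly as follows; the prerequisite package list is only an approximation of the one given in the Installation Guide, and the archive name depends on the version that was downloaded.

# Prerequisites (approximate list; consult the Installation Guide for the exact one):
sudo apt-get install build-essential gcc g++ bison flex perl python qt5-default \
    libqt5opengl5-dev libxml2-dev zlib1g-dev default-jre doxygen graphviz

# Unpack and build OMNeT++ in a directory whose path contains no spaces:
tar xvfz omnetpp-5.4.1-src-linux.tgz
cd omnetpp-5.4.1
. setenv        # adds the OMNeT++ bin/ directory to the PATH for this shell
./configure
make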

A.2 INET framework


The latest version of the OMNeT++ IDE can download and install the latest stable version of INET for you. However, this feature does not always work. It is possible to install the INET framework manually, and the procedure is as follows:
1. Download the framework from the official web-page31 .
2. Unpack the archive into the directory of your choice (called the <workspace> directory later on).
3. Start the OMNeT++ IDE.
4. Set the <workspace> directory as the workspace and launch the IDE.
5. Import the project via File > Import > Existing Projects into Workspace. Select inet.
6. The project folder should now appear and be checked under Projects.
7. Build the project (a command-line alternative is sketched right after this list).
8. Launch the example simulations.
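As an alternative to step 7, INET can also be built from a terminal with the same two commands that are used in Appendix B; this assumes the OMNeT++ bin directory is on the PATH (for example by sourcing setenv as shown above).

cd <workspace>/inet
make makefiles   # generates the Makefiles for the project
make             # builds the INET framework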

A.3 NeSTiNg framework


The original version of the NeSTiNg framework can be downloaded from GitLab32 . The modified version with the simulation scenarios used in this thesis is available on GitHub33 .
In order to add the framework to OMNeT++, perform the following steps:
1. Download the framework from GitLab/GitHub (a git-based alternative is sketched at the end of this list).
2. Unpack the archive into the <workspace> directory where INET is.
3. Start the OMNeT++ IDE.
4. Set the <workspace> directory as the workspace and launch the IDE.
5. Import the project via File > Import > Existing Projects into Workspace. Select nesting.
29 OMNeT++. [Online]. Available: https://omnetpp.org/
30 OMNeT++ Installation Guide. [Online]. Available: https://doc.omnetpp.org/omnetpp/InstallGuide.pdf
31 INET framework. [Online]. Available: https://inet.omnetpp.org/
32 NeSTiNg repository. [Online]. Available: https://gitlab.com/ipvs/nesting
33 NeSTiNg repository (modified). [Online]. Available: https://github.com/miamuminovic/tsn_thesis

6. The project folder should now appear and be checked under Projects, together with the inet project folder.
7. Build the project.
8. Launch the example simulations.
It is important to point out that the simulation scenarios can be run only with the modified version of NeSTiNg.
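As a git-based alternative to downloading the archives in step 1, both repositories can be cloned directly into the workspace (the URLs are the ones given in the footnotes above); the remaining steps are unchanged.

cd <workspace>
git clone https://gitlab.com/ipvs/nesting.git              # original framework
git clone https://github.com/miamuminovic/tsn_thesis.git   # modified version used in this thesis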

Figure 22: Projects inet and nesting checked under Projects

B Getting started
In this section, the use of NeSTiNg is demonstrated on a simple example model. It guides the user through some concrete steps and provides practical tips in order to encourage experimentation with the framework.
• After starting OMNeT++, we need to select the desired workspace. The Welcome screen offers several options, such as Overview, What's New, Tutorials, Migrate and Workbench. We will not use them in this guideline, so simply close the Welcome screen.
• A pop-up will notify you that your workspace is empty and ask whether you would like to install or import some projects. However, clicking OK leads to an error screen because the web source cannot be found, so skip this step by clicking Cancel.
• The IDE is usually slow unless we create the makefiles for INET through the terminal. First navigate to the INET folder and run the commands make makefiles and make, in that order.
• Instead of creating a new project, which is straightforward, we will just add a new folder to the NeSTiNg project in order to minimize the importing from this framework.
• We can add the HelloWorld folder to the nesting->Simulations folder by right-clicking on it and selecting New->Folder. Next, we need to add a NED file to that folder; to do that, right-click on the created folder, click New->Network Description File (NED), name it, and select a NED file with one item for the easiest start.
• The next step is to add an INI file to the same folder in a similar fashion: right-click on the folder and select New->Initialization File (INI), customized to be empty. This results in a NED file similar to the one in Figure 23.

Figure 23: Example NED file

• In the submodules window we can see the nodes that are included in the NeSTiNg package. Let us add some of them to create a simple network such as the one in Figure 24. In order to make it runnable, we need to define the routing table and the schedule for the traffic that the source needs to transmit.

Figure 24: Simple network topology

• We will add an xml folder to the HelloWorld folder and create two XML files in it, Routing.xml and Schedule.xml.
• Now we can start the simulation and analyze the results. The simulation window is presented in Figure 25.

Figure 25: Simulation window

• In order to analyze the data, we need to add an ANF file to the HelloWorld folder. In the INI file we stated the directory for storing the results; in it we can find the sca, vec and vci files (a minimal INI sketch is given after Figure 26). We can drag them to the input files of the ANF file. This is presented in Figure 26.

Figure 26: Selecting the input files for analysis
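The result files end up in the directory configured in the INI file. A minimal configuration along the following lines is enough for this example; the option names are standard OMNeT++ ones, the network name HelloWorld is the one assumed in this walkthrough, and the NeSTiNg-specific parameters (pointing to Routing.xml and Schedule.xml) are omitted here and are best copied from the example simulations shipped with the framework.

[General]
network = HelloWorld          # name of the network defined in the NED file
sim-time-limit = 1s           # keep runs short while experimenting
result-dir = results          # where the .sca, .vec and .vci files are written
**.vector-recording = true    # record vectors such as the end-to-end delay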

• Now we can browse the data, filter it, plot it, and much more. In Figure 27 we can see the chart of the end-to-end delay of the traffic in the example scenario.

Figure 27: Chart of end-to-end delay for the example
