Computer Science 1812415 AM6-Proposed RD
Project by:
Emmanuel Jeremie Daniel (1812415)
UNIVERSITY OF MAURITIUS
FACULTY OF INFORMATION, COMMUNICATION AND
DIGITAL TECHNOLOGIES (FoICDT)
Supervised by:
DR. Avinash Utam Mungur
June 2021
Dissertation Declaration Form
On submission of my dissertation to the UoM, I solemnly declare that:
28/06/21
Turnitin Digital Receipt
Dissertation Supervisor Statement
Table of Contents
Acknowledgement
Abstract
List of Tables
List of Abbreviations
Preface
1 INTRODUCTION
1.4 Methodology
2 BACKGROUND STUDY
Research paper 2.2.8: The QUIC Transport Protocol: Design and Internet-Scale Deployment
Research paper 2.2.9: Game of Protocols: Is QUIC Ready for Prime Time Streaming?
Research paper 2.2.12: QUIC: Better for what and for whom?
3 ANALYSIS
3.5.3 Tools
4 DESIGN
4.1.1 Portability
4.1.2 Security
4.1.3 Performance
4.1.4 Robustness
4.1.5 Flexibility
5 IMPLEMENTATION
5.6 Building Issue with quiche
6 TESTING
7 EVALUATION
7.4.1 Strengths
7.4.2 Weaknesses
8 CONCLUSION
8.1 Achievements
8.2 Difficulties
LIST OF REFERENCES
Acknowledgement
I sincerely wish to thank my supervisor Dr. Avinash Utam Mungur for allowing me to propose this title as my Final Year Project and for accepting to supervise it. I would like to extend my sincere gratitude to him for the continuous support and assessment at every step of this project's timeline.
Abstract
In this project, QUIC is described, along with how it may play a better role than TCP in the future. An implementation of QUIC, named quiche, is used to implement three new modules, and an experiment is provided to show how they affect quiche. The new modules provide better performance in terms of speed while keeping the security aspect of quiche untouched; since quiche uses TLS 1.3, the security properties of TLS 1.3 are preserved. The three modules implemented are Certificate Compression, the Key Update Mechanism and the Speed up Handshake Mechanism. These modules help quiche achieve better performance in time. After implementing the three modules, an experiment was carried out to test whether the goals of this project were achieved.
The results obtained from the experiment show that the goals were achieved. Afterwards, an explanation of each module and how it affected the functional and non-functional requirements is provided. Moreover, a domain recommendation is given so that other implementations of QUIC can use the modules implemented. Following the evaluation, the achievements and difficulties of the project are discussed, as well as the future work that may give quiche even better performance.
List of Figures
Fig 7.3: Key Update Traffic
Fig 7.4: Speed up Handshake Traffic
Fig 7.5: Chart representing the difference between each test case and the original traffic
Fig 7.6: L1
Fig 7.7: C1

List of Tables
List of Abbreviations
Abbreviation Description
CC Congestion Control
IP Internet Protocol
Preface
This project is divided into eight main phases.
Phase 1: Introduction
This phase gives an overview of the project issues that are tackled. It also
includes the aims and objectives, the scope of this project and the project
timeline.
Phase 2: Background Study
This phase consists of the research made on QUIC, and the criteria obtained will be compiled into a table of criteria.
Phase 3: Analysis
This phase consists of the functional and non-functional requirements, the proposed solution and the tools and technologies to be used for the system.
Phase 4: Design
This phase consists of the design issues and the architecture of the system.
The structure, architecture and other architectural design will be explained.
Phase 5: Implementation
This phase provides the hardware and software specifications, and the standards and conventions used are discussed. The modules implemented are presented and explained.
Phase 6: Testing
This phase consists of the tests to be carried out on the modules implemented
and how the tests were done.
Phase 7: Evaluation
This phase consists of the results that were obtained after the testing process, together with an explanation of those results. The results obtained are compared against the functional and non-functional requirements written in Phase 3. Moreover, the criteria obtained in Phase 2 are assessed against the newly implemented modules. The strengths and weaknesses are also discussed, and a domain recommendation is given at the end of this phase.
Phase 8: Conclusion
This phase consists of the achievements and difficulties of the project and how the project concluded with the results obtained. Afterwards, future work for improving quiche is discussed.
1 INTRODUCTION
QUIC is a new transport layer network protocol designed and implemented by Google and the IETF, each of which has its own version of QUIC. QUIC was designed to improve performance and to outperform TCP in connection establishment and with web applications. In this chapter, the problems with the currently used transport protocol, namely TCP, are explained, along with how QUIC may replace it. The aims and objectives of this project, and how these objectives will be accomplished, are also discussed. A project timeline is given at the end of this chapter.
Over the past years, QUIC has been designed to be faster than TCP, and this is supported statistically by nearly all the research papers reviewed in Chapter 2 (Background Study). QUIC is a new transport layer network protocol that takes TCP, adds the latest TLS 1.3 encryption and establishes a better, more secure and faster connection. QUIC was originally created to replace TCP/IP, as it has several advantages over TCP/IP. QUIC's connection establishment is faster and, most of the time, will not require any acknowledgement from the server if a connection has already been established before. Even though QUIC is mostly faster than TCP, work continues on making it faster still without reducing its security properties. Nowadays, webpages are becoming heavier and heavier with all the items they contain, which means a longer wait time to open a webpage. Our solution to this problem, and to future problems with waiting time, is to use QUIC and to implement new extensions to QUIC that make it even quicker.
Aims
The aim of this project is to provide quiche with the missing functions that make QUIC send and receive packets at a faster rate, and to implement them so that they are reliable and efficient within the system.
Objectives
1.4 Methodology
can be written using TLS’s KDF function (used to update write keys with
TLS 1.3).
For a project to succeed properly, the project manager has to schedule all the
steps from start to finish. Below is a detailed timeline for the project’s schedule
that was set up so that the project could move along at a constant speed.
2 BACKGROUND STUDY
This is the section where all the research work is discussed, with an emphasis on the research done on QUIC. The 14 research papers are broken down and studied below, and a critical analysis is provided for each one. This will help us understand the project better and help us evaluate the implementation later on.
In this part, several research papers, which are expanded essays consisting of explanation, analysis and argument on QUIC, are summarised. The methodology used, the results obtained and a critical analysis are provided in summarised form.
In this research paper, Nepomuceno et al. write about how TCP differs from QUIC and test both protocols by comparing the page load times of several websites. From this experiment, they also observe whether the new protocol meets the demands of the internet at that time and whether it works as originally designed.
They took a set of popular websites from the Alexa top sites and ran several combinations to test different configurations, such as packet loss ratios between 0% and 2% or RTT values between 20ms and 200ms. They then observed the HTTP archive record information to determine what actions were performed while loading the pages. The disk cache was also turned on and off for the testing.
For an average result, each combination was done 30 times for every single
page(Nepomuceno et al., 2018). Table 1 shows all the configurations used for the
experiment.
The results obtained from the experiment are given and discussed by Nepomuceno
et al. The main goal was to observe the difference between the QUIC and TCP
protocols, so they plotted graphs of the different combinations. They derived an equation of the form y = mx + c to plot the graphs. They drew two graphs for the onLoad event, with and without cache, and two more graphs for the onContentLoad event, with and without cache. From the four graphs, they observed that TCP performed better in at least 60% of the cases, although QUIC still achieved better page load times in some of them. They observed that the configurations used have a great impact on the results obtained. For some pages, with varying cache and for onContentLoad events, they observed that QUIC performed better than TCP. They made a boxplot from the results using the onLoad event without cache as the metric. From the boxplot, they saw that the RTT variation has a significant impact on the page load time metric. They used the RTT and packet loss ratio factors to observe the difference. By fixing the packet loss ratio and varying the RTT over 20ms, 100ms and 200ms, they observed that the page load time varies significantly, but when the RTT is fixed and the packet loss is varied over 0%, 0.5%, 1% and 2%, the results were nearly the same. This means that RTT has a great impact on the page load time while the packet loss ratio is nearly negligible(Nepomuceno et al., 2018).
Critical Analysis
This study has compared the performance of QUIC vs TCP to note the better one.
The most accessed pages were used under several configurations to evaluate the
performance of the protocols. QUIC has the worst performance under onLoad and
onContentLoad events for several scenarios. As discussed above, varying the RTT
values has a great influence on the experiment for QUIC while varying the packet
loss ratio value was nearly not noticeable. Despite the fact that QUIC is already being
used on part of the internet, there are still some improvements needed. Still, QUIC is faster than TCP when creating connections to the server, which is time-consuming for TCP, but QUIC loses time when reading the server's answer. Also, the reason why TCP's overall results were better than QUIC's is that the pages accessed in this experiment were not structured in a way that allows QUIC to perform at its best. Most pages are not necessarily optimized for QUIC yet, and QUIC is still adapting.
In this research paper, Rüth et al. discuss how hard it has been to update the internet's transport layer protocol, and how improvements such as optimizing latency and providing encryption have been very hard to deploy. They also talk about how Google's Quick UDP Internet
Connections (QUIC) protocol is similar to TCP but yet it still has other features better
than TCP such as stream multiplexing over a single connection. Also, QUIC will fully
encrypt at the transport layer level, providing security and excluding middlebox
optimizations. Yet there are still a limited number of tools supporting the analysis of
QUIC. Rüth et al. carry out an experiment about the QUIC deployment and its traffic
share. In this experiment, they identified QUIC hosts that are QUIC capable IPs
meaning that a valid version negotiation packet or a QUIC public reset packet is
received. QUIC hosts can also support multiple versions. They developed a tool that completes the handshake and enables them to further classify hosts and infrastructures. After
that, they studied how QUIC support reflects on actual traffic shares. First, they have
analyzed 3 traces that represent different vantage points and choose 3 different
factors to observe the traffic shares namely HTTPS, HTTP and QUIC. Then they
observed Netflow traces from European Tier-1 ISP for the different factors on Google,
Akamai and other servers. They also gained information on a large European IXP
with the same factors. Lastly, they published tools that enumerate QUIC hosts and grab and decode QUIC protocol parameters(Rüth et al., 2018).
The result of the analysis of the deployment of QUIC is that a small number of providers are using and experimenting with QUIC, only a modest number of domains are QUIC capable, and an even lower number of domains return valid certificates.
Even so, there is an increasing number of domains having QUIC support. Then they
have searched for QUIC support on the IPv4 space which resulted in a growing
adoption on a large number of IPs supporting QUIC since October 2017. They also
searched on different domains as well as Alexa top sites. Then for the result for the
view on traffic shares from 3 different vantage points for QUIC, they found that within
nine months after QUIC was activated by Google, QUIC traffic share was increasing
slowly. QUIC traffic shares do not yet reflect server support. QUIC traffic is mostly
served by Google servers(Rüth et al., 2018).
Critical Analysis
As explained above, nine months after QUIC was deployed by Google, QUIC was being used and experimented with. When they searched the entire IPv4 address space, they found an increasing number of QUIC-enabled IPs, and this number kept growing. It can also be deduced that this growth was driven by Google and Akamai servers. Google accounts for more of the QUIC traffic share than Akamai, despite Akamai having a large number of QUIC-capable hosts. It can be deduced that QUIC
traffic share will obviously increase in the future as shown in this research paper.
QUIC is still under development but it can be observed that it is increasing in
popularity.
In this research paper, Rüth et al. talk about how QUIC was made to improve on TCP in performance, security and evolvability. They also mention that Chromium's Google QUIC uses packet pacing and an initial congestion window of 32 segments. They also say that QUIC gives new levels of protocol customization and
progress. So they ask themselves if QUIC impacts the Quality of Experience and if
changing from TCP to QUIC is noticeable for humans. In this paper, they will answer
these questions in two user studies. First, they modified the Mahimahi framework to
have control on network parameters. Then they used their testbed using Chromium
browser with an empty cache. They took sites from Alexa top sites, chose four
different network settings namely DSL, LTE, DA2GC and MSS, and they recorded
videos of their test at least 31 times to replay them and derive technical and visual
metrics for their experiment. For the first study, they ask whether users notice the protocol switch, so they showed participants two recordings of the same website loading with different protocol configurations. For the second study, they ask whether users still care about the quality of the loading process of a webpage, so they let the participants rate the recorded videos to indicate whether they were satisfied with the loading speed and the quality of the loading process(Rüth et al., 2019).
When they used the Chromium browser with an empty cache, there was no support for TLS 1.3 (and its 0-RTT connection establishment) in Chromium, so QUIC's 1-RTT connection establishment was an advantage. This advantage is a primary factor in QUIC outperforming the traditional Web stack based on TCP. To answer the first study's question, most participants do not see a difference between QUIC and TCP for the DSL setting, nor for LTE. For DA2GC, QUIC was faster, so they concluded that people saw a difference and that QUIC was faster than TCP. For the second study's question, they saw only little variance between the protocols but observed that QUIC+BBR is more satisfying than TCP+BBR. They also specify that QUIC without BBR is generally faster than TCP. Afterwards, they discuss the studies, noting that the participants did not care about the protocol even though QUIC was the preferred protocol for some(Rüth et al., 2019).
Critical Analysis
After reading this research paper, it can be seen that QUIC was the preferred protocol, even if only for a small percentage of participants. For the first study, QUIC outperformed TCP, meaning that the participants eventually saw the difference in speed and performance. For the second study, they observed that the difference is negligible and that users cannot distinguish which protocol is faster. Still, some people prefer QUIC. Even if QUIC evolves further, user satisfaction will not radically increase. QUIC is not only about speed but also about enhanced security and a future-proof design that will perform better in the future.
In this research paper, Megyesi, Krämer and Molnár explain how widely the Internet is used nowadays and how service providers are working hard to provide better web page transfers. Newer technologies have been created, from HTTP 1.1 to HTTP/2
and now QUIC which uses UDP in the transport layer. The goal of this research paper
is to find out the performance of QUIC compared to HTTP 1.1 and SPDY. They
explain the limitations of HTTP 1.1 with example and also talk about the studies they
made on SPDY performance vs HTTP and QUIC vs HTTP and finally QUIC vs SPDY.
Afterwards, they explain the limitations and how TCP outperforms in certain areas.
Then, they talk about an important goal for QUIC, which is reducing the latency of web traffic. QUIC uses a new encryption mechanism that saves round trips between client and server during connection establishment and replaces SSL or TLS without any loss of security. QUIC allows the client to achieve 0-RTT when a connection with the server has been made before, meaning that the extra round trip that TLS takes is not required; QUIC implements a new encryption scheme that resembles DTLS. For congestion control, QUIC introduced a new feature called packet pacing, which can reduce congestion but can hinder performance in a low-loss network environment(Megyesi, Krämer and Molnár, 2016).
After the experiment of QUIC against SPDY and HTTP, they shared the results regarding the speed of downloading pages installed on the Google Sites server.
They shared some scenarios with the best results that capture the pros and cons of the three protocols. There were six different cases, where the bandwidth, packet loss, RTT and object number values were varied, to test the page load time for QUIC, SPDY and HTTP, and this data was plotted in a graph. For a smaller number of large objects and for a high number of small objects, it is shown that the three protocols were comparable. For large websites and with high bandwidth, it can clearly be seen that QUIC performs worse than SPDY and HTTP. With a higher percentage of packet loss, SPDY performs worse than the other two. Afterwards, they shared the
results with the settings that QUIC outperforms both SPDY and HTTP. The common
settings used for the three cases were high RTT and a high number of small
objects(Megyesi, Krämer and Molnár, 2016).
Critical Analysis
For low numbers of large objects and high numbers of small objects, the three
protocols give similar results. For higher bandwidth, QUIC performed badly because
of the packet pacing mechanism. QUIC performs badly when facing very high-speed
links and large amounts of download data. But when there are a high number of small
objects and a high RTT, QUIC performs the best and is the fastest protocol being
roughly 25-30% faster than SPDY and 35-40% faster than HTTP. It can also be said
that QUIC works best with low bandwidth.
In this research paper, Alvise De Biasio et al. talk about how QUIC addresses some
issues that the TCP protocol had. As shown in other research papers, QUIC is said to
use UDP to avoid issues regarding middleboxes in the network. They also mention
that QUIC is not integrated into the OS, so the user does not have to update the OS several times; QUIC actually lives in userspace. Another advantage that was
mentioned in other research papers is that it reduces the initial handshake delay. The
main goal of this research paper is to implement QUIC for ns-3, to show how easy it is to use, and to show how the different congestion control designs of different protocols can be compared. They also describe some of the main features of QUIC, such as its support for multiplexing several data streams over a single connection, meaning that data on the other streams can still be delivered even if one of the streams is affected by packet loss. QUIC also uses the functionalities of TLS 1.3. Additionally,
they tell us about the main features of QUIC that will be supported in ns-3, after that
they describe the code used to implement QUIC in ns-3 and define any classes and
functions they wrote. For QUIC Headers, Packets and Frames, they already made the
necessary class and used some built-in functions. For Congestion Control, they made
use of TCP Congestion functionalities. Before testing, since some features for QUIC
are not yet available, they decided not to implement any missing features that are in
the QUIC internet drafts(Alvise De Biasio et al., 2019).
They then tested the simulation with QUIC implemented. They checked the correctness of the behavior of the socket and stream transmission buffers. There were some problems with the insertion of events from the buffer. In another test, they observed the performance of addition and removal operations on the reception buffers of the socket and stream. Lastly, the third test was to observe the
correctness of the QUIC header and subheader implementations. Furthermore, they
tested the functionality of stream multiplexing. Additionally, they found out that the
QUIC congestion control and the NewReno’s one shared similar trends, both having
a faster window ramp-up during the congestion avoidance phase(Alvise De Biasio et
al., 2019).
Critical Analysis
In this research paper, it is shown how QUIC is implemented in ns-3 and how QUIC works with stream multiplexing, a low-latency initial handshake, and improved SACK through ACK frames. On top of that, they designed the QUIC socket implementation to plug in both the TCP and the new QUIC congestion control algorithms.
Research paper 2.2.6: 0-RTT Attack and Defense
of QUIC Protocol
Identification Process
In this research paper, Cao, Zhao and Zhang talk about a new transport layer
protocol named QUIC which provides security functionalities similar to TLS. QUIC
also provides the shortest round-trip delay for connection establishment and for
recovery and packet loss retransmission. They also tell us that most research done
on QUIC is mostly concerned with performance than security. QUIC has implemented
measures to prevent replay attacks, a dynamic mechanism that confirms the peer
identity and a timestamp. Then, they provide more information about the security part
of the protocol. They investigate the differences between QUIC and other protocols to find out the vulnerabilities of QUIC across the different version updates. They compared
QUIC with TCP and UDP on different aspects namely packet structure, connection
establishment, connection release, congestion control and version negotiation. They
propose a new attack named 0-RTT attack that will refuse service to a client that has
not established a connection by forging the data packet. Afterwards, they try three
different experimental environments for the attack scenarios. Furthermore, they
experiment on types of attacks namely QUIC RST and Version Forgery, and they
explain the steps taken to reach that goal(Cao, Zhao and Zhang, 2019).
They observed that QUIC does not provide protection for the client during connection establishment, but after the connection is established, QUIC gives adequate protection. Since QUIC uses UDP, it needs fewer handshake messages, and thus the client is vulnerable to attacks. They provide advice for preventing this problem and note that a LAN environment is required for the attack. They also mentioned that QUIC had a defect that could make the client disconnect, meaning the forged-packet attack can easily succeed.
Critical Analysis
As shown, this research paper focuses mostly on the security aspect of QUIC, and the authors prove that the 0-RTT attack is feasible against the client's first connection establishment. For the server side, however, the connection is safe.
In this research paper, Kharat et al. talk about the future for IP traffic usage and how
IP-enabled devices grow fast. QUIC over UDP will be used as an alternative for
HTTP/2 over TCP/IP and the QUIC protocol is still left unexplored for the wireless
connection domain. So, the aim of Kharat et al. is to explore the wireless connection
domain and the properties that lie in QUIC and they also provide a proposal for the
Forward Error Correction to ameliorate the congestion control, retransmission
latencies and throughput of the connection. Before the testing section, they provided
us with a background study on HTTP/2 over TCP/IP and QUIC over UDP. They also
give an insight on UDP fundamentals and on the QUIC protocol. An amazing
advantage of QUIC is that after a successful connection between a server and a
client, the client can achieve 0-RTT by using a connection ID that is found in every
packet. Afterwards, the congestion control mechanism is also explained as well as
the Forward error correction and retransmissions. After the description of all the
functionalities above they did an experiment to examine the improvement of QUIC
when put in a low BCP network and fairness in a wireless environment. They used a
MacBook Pro laptop to run the experiment to test competing flows serviced by
both QUIC and HTTP/2(Kharat et al., 2018).
After the experiments, they found out that QUIC is better than TCP/IP, especially for a single dominant QUIC flow. Since QUIC uses multiplexed UDP streams, this causes a fall in speed, measured in throughput and speedup of QUIC. However, they found that QUIC achieves lower throughput with multiple competing flows, where TCP/IP achieves better throughput. They also concluded that a dominant QUIC flow improves the throughput and overall speedup of a wireless connection. Since the experiment was in a wireless environment, the percentage of packet loss was significant. QUIC
showed that it had potential since the UDP protocol does not require changes(Kharat
et al., 2018).
Critical Analysis
Nowadays, it can clearly be seen that video streaming is heavily used, so the QUIC protocol is critically important here. The experiment they made showed that QUIC is nearly 50% better than TCP/IP in unrestricted flows for wireless video download. Since QUIC uses 0-RTT for a client already connected to the server, a wireless connection achieves a low response time.
In this research paper, Langley et al. talk about how QUIC replaces most of the HTTPS stack, such as HTTP/2, TLS and TCP. They also mention that UDP allows QUIC packets to go through middleboxes, and this prevents modification and limits ossification by
middleboxes. As described in most of the research papers, they also talk about the
minimization of handshake latency and about the multiplexed streams over a single
connection. In this paper, they make use of the pre-IETF QUIC design and
deployment. The authors explain in detail why QUIC is used, with some main points such
as Protocol Entrenchment, Implementation Entrenchment, handshake delay and
head-of-line blocking delay. Afterwards, they talk about the QUIC design and
implementation. First, they explain carefully how the connection is established for the
first time and after a connection has already been established. Then, they explain
how stream multiplexing works in QUIC and how it is better than TCP’s. After that,
they explain the authentication and encryption for a packet in QUIC and they talk
about the loss recovery and how the packet is retransmitted. The authors also explain
the flow and congestion control in QUIC as well as NAT rebinding and connection
migration, QUIC discovery for HTTPS and finally the open-source implementation. After all these explanations, they experimented heavily on QUIC's
features and various parameters. They mentioned that they used Chrome to undergo
the QUIC experimentation and they also mentioned that they added the QUIC
support to their mobile video and search apps. After the experimentation, they
described the road they took to deploy QUIC globally since 2013. Then, they shared
the performance of QUIC with the YouTube mobile app and Google searches, such as
the search and video latency.
The authors shared the results of the experiments, such as the usage of forward error correction (FEC) and the deployment of QUIC. The default packet size was determined to be 1350 bytes. FEC had a significant impact on search and video latency and also on the video rebuffer rate. Since QUIC is deployed in user space and not in the kernel, people can freely interact with other systems on the server. Updates to QUIC, such as security fixes, are easier to deploy and to experiment with.
Critical Analysis
It can be said that one of QUIC’s best features is that it can be used as a platform for
experimental purposes for both server and client. It is found out that working and
deploying in the userspace is very beneficial as it makes testing faster and easier. It
can also be concluded that FEC works well with search latency as well as video
latency and video rebuffer rate. The IETF work on QUIC will replace QUIC’s
cryptography handshake with TLS 1.3 which will benefit the latency of QUIC’s
handshake.
In this research paper, Arisu, Yildiz and Begen explain that the aim of this new
protocol named QUIC is to reduce the connection establishment latency, improve the
congestion control, give us multiplexing streams and encryption at the transport level.
They also note that QUIC is replacing most of the HTTPS stack, such as HTTP/2, TLS and TCP. In this paper, they study how the stream multiplexing feature is beneficial, especially since nowadays people mostly watch videos and do not like buffering. They also measure the fairness of TCP and QUIC clients when competing
with other clients. The first question they want to answer is how QUIC’s multiplexing
streams perform against HTTP/1.1 over several connections and HTTP/2 over a
single connection under random losses and network delay. The second question they attempt to answer is how fair TCP and QUIC clients are when competing with other clients under different amounts of loss and delay. Beginning the experiment, they first made some modifications to the Python-based player for a fair comparison of QUIC against TCP. They explained the setup they used to test QUIC, an example being the QUIC server provided by Google. Afterwards, they
explained the approach they use to evaluate the performance of HAS applications
over QUIC and TCP for several frame-seek events then they installed a script at the
client machine for the evaluation of QUIC’s performance against network
disconnection and reconnection. Furthermore, they evaluated the QUIC’s multiplexing
feature and finally how QUIC provides fairness in a controlled environment(Arisu,
Yildiz and Begen, 2020).
For each test, they repeated the experiment 10 times to rule out anomalies. After testing the frame-seek scenario, they found that QUIC has a shorter average wait time after seeking and that the media streams started faster. Compared to TCP, QUIC reduced the wait time by nearly 50% and also reduced the rebuffer rate. For the connection-switch scenario, they tested what happens when a user's WiFi connection fades away: the mobile device drops the active WiFi connection, then creates a new one through an LTE or 3G mobile network. They found that when a long-
lived connection that had a large congestion window is lost, the client will endure a
low throughput till the handshake and any slow-start phases are completed. Based on
their experiments, they concluded that QUIC increased the congestion window faster
and that it is able to achieve higher throughput. QUIC provides faster download for
the segments, meaning that media streams faster. Now for multiplexing, QUIC was
found mostly better than HTTP/1.1 and HTTP/2, especially for the large delay and
typical loss scenarios. For the evaluation of fairness, they found out that QUIC
provides fairness among multiple QUIC flows in every case.
Critical Analysis
It can be concluded that QUIC, when streaming, is faster than TCP. QUIC's performance is better when IP addresses change frequently, since QUIC makes use of a unique Connection Identifier (CID), which makes switching IPs fast. With this feature, the user will not even notice that the device has changed networks. Even though the CID was not used in the authors' test, the results showed that QUIC outperformed TCP in average playback bitrate and rebuffer rate during IP changes. It is also found that for
long delays in the network, QUIC provided higher playback bitrate and lower rebuffer
rates. Finally, it can be said that QUIC provided fairness among QUIC flows. QUIC
will eventually become a prime interest for video streaming.
In this research paper, Corbel et al. explain the fairness issue that several protocols
still raise. They mention that network operators want to satisfy their customers by
sharing the available resources fairly according to their needs. They also say that
QUIC is much better than TCP when deploying a modification for QUIC since QUIC is
within user-space and at the application layer. They also shared a diagram that
demonstrates how QUIC transports multiple streams in a single UDP flow. In this
paper, the authors want to find out the difference between QUIC and TCP flows by
varying the number of TCP emulated connections within QUIC in order to assess the
impact of this number on network fairness. They also give us an insight into the
background of QUIC and CUBIC. They mentioned that the behavior of the congestion
window is impacted by the number of emulated TCP connections which then gives us
the allowed data rate. They vary the number of emulated TCP connections to gain
different results on congestion window size. They also mentioned that the larger the
number of emulated TCP connections, the faster the congestion window size
increases(Corbel et al., 2019).
First, they assumed that both QUIC and TCP use CUBIC as CCA for the experiment.
They used IETF QUIC for their testing. First, they found out that buffer size and
network latency do not affect session fairness. They considered two cases in which they analyzed the impact on fairness: first, a single TCP connection competing with a single QUIC connection emulating N TCP connections; then, N TCP connections competing with a single QUIC connection emulating N TCP connections. For the first case, when the number of emulated TCP connections increases, there is more unfairness; this is due to how CUBIC is implemented in QUIC, which increases the data rate faster for a large number of emulated TCP connections. For the second case, QUIC is not able to increase the window size once the maximum is reached, due to a limitation in QUIC's CUBIC algorithm.
Critical Analysis
It is found that latency, packet loss and buffer size do not significantly alter fairness. However, it is also found that the number of TCP connections emulated by QUIC's CUBIC alters fairness the most. Furthermore, the throughput reached by independent TCP connections is larger than that of a single QUIC connection.
In this research paper, Piraux, De Coninck and Bonaventure talk about how internet
transport protocols are evolving such as TCP getting new extensions or QUIC being
developed for a better protocol. An advantage they mention for QUIC is that it is
implemented in the user space library so that updating is easy. They also mention
that QUIC prevents middlebox interferences. They explain that the IETF QUIC has
published fourteen versions. In this paper, they explain that their objective is to publicly provide details of a test suite for QUIC. Firstly, they give a description of
the approach and architecture of the test suite. They want to test how QUIC works
and they observe the external behaviors such as the packets sent or received. Their
other objective is also to observe all the specifications of QUIC server-side
implementations. Afterwards, they describe the architecture of the test suite then they
test all the implementation drafts for QUIC(Piraux, De Coninck and Bonaventure,
2018).
The authors did the testing over a 6-month period and present the results in three parts. First, they present the three metrics extracted from the data collected. The metrics are the deployment of QUIC versions, handshake success and the test outcome percentage. For the deployment of QUIC, they found that when a new draft is published, implementations stop supporting the older versions and do not even maintain backward compatibility. They concluded that QUIC needs 15 days to a month for changes to be published in a new version. For handshake success, they found that there are fluctuations in the graph they plotted; these fluctuations indicate the rapid pace at which changes are deployed. For the test suite outcome percentage, they found that implementations that are slower to evolve will have a lower ratio and that the most active implementations will have a positive
impact on the ratio. Secondly, they report the case studies. They reported several test
scenarios about the flow control and stream transitions reordering of QUIC. Finally,
they generated a result grid from the available implementations. They observed that
most of the connection migration tests were unsuccessful or were not executed(Piraux,
De Coninck and Bonaventure, 2018).
Critical Analysis
In this paper, it can be seen that the authors have proposed a test suite for QUIC and have given its architecture and the test scenarios. For flow control, they found that the implementation was incorrect and that the developer was not active during that period. For stream transitions reordering, there were missing packets due to overflow, but this remained a hypothesis as the implementation source code was not released. After each new draft, the previous drafts of QUIC do not get any updates, and the implementers do not provide backward compatibility.
In this research paper, Cook et al. explain the advantages of HTTP/2 over HTTP/1.1, how TCP traffic is encrypted and how the TCP handshake works. Afterwards, they introduce QUIC, explaining how QUIC's encryption is comparable to TLS and how it establishes the handshake in only 1-RTT for the first connection. Furthermore, they give more details about
QUIC, the congestion control and they talk about other evaluations of QUIC. In this
paper, the authors give an analysis of QUIC for end-users, network and service
providers. They provide a fast overview of QUIC, then they compare QUIC with
HTTP/1.1 and HTTP/2 on a local testbed so that any network settings can be
changed easily. Their main goal is to evaluate the page load time of websites under
several network settings for the three protocols(HTTP/1.1, HTTP/2 and QUIC). There
are two types of client-server connections for handshaking that they test, namely the first connection and the repeat connection. For the first connection, they observe how many RTTs are needed for each protocol. For the repeat connection, they again observe the loading time and how many RTTs are needed for each protocol. They then test the three protocols on wireless or similarly lossy networks with their remote testbed, using public 4G and ADSL. Afterwards, they tested how the page load time varied with the time of day at which network load was greatest on ADSL. These tests were done on the YouTube website. Furthermore, they did testing on a complex website (YouTube) and a simple website (Doctor) to
observe the page load time(Cook et al., 2017).
For the testing, the authors analyzed QUIC's performance depending on the access network type, packet loss and delay generation. The tests were performed on the public YouTube website (local testbed). Different values of delay were tested. For the
first connection, the page load time of QUIC was higher than that of the other protocols, but for the repeat connection, QUIC's page load time increased by only 400ms. For the HTTP/2 repeat connection, it was 5 times higher as the delay increased. This test showed that HTTP/2 connections were more sensitive to delay while QUIC's were much less sensitive. For different values of loss, QUIC's first connection was quite sensitive as the loss percentage increased, but the repeat connection was less sensitive to the different values of loss. The HTTP/2 repeat connection was significantly more affected than QUIC's. The tests with different values of loss showed that QUIC connections are less sensitive than HTTP/2 ones. For the ADSL links, QUIC and
HTTP/2 results were similar and both were better than HTTP/1.1. For the 4G links,
HTTP/1.1 and HTTP/2 results were similar but the QUIC result was better. For the
network load impact on page load time, QUIC and HTTP/2 had a similar result and
were better than HTTP/1.1. QUIC outperformed HTTP/2 in lossy links and this is
mostly because QUIC uses UDP which avoids HOL(Head-Of-Line) blocking issues.
For complex and simple websites, page load time performance is better for a less
complex site(Cook et al., 2017).
Critical Analysis
It can be observed that QUIC outperforms the other protocols in certain conditions, such as in wireless or lossy networks. The advantage that QUIC provides to end-users is not so obvious, since HTTP/2 is nearly comparable to QUIC regarding the page load time metric. The main difference between QUIC and the other protocols is that QUIC uses 0-RTT in most cases for repeat connections. However, the time difference is not that obvious to end-users.
In this research paper, Wang et al. talk about the history of TCP giving details about
how it was designed and how TLS came into place with TCP. They provide a lot of
details on TCP and afterwards, they also give details about the history of QUIC. The
authors provide advantages of QUIC over TCP and also provide some experiments
that were in other research papers. They also mention that TCP is implemented in the
kernel space and QUIC is implemented in user-space. So, in this paper, the authors
implemented QUIC in the Linux kernel and compare TCP and QUIC in the kernel
space. Afterwards, they provide the details about the Linux kernel network subsystem
and also about the steps taken to implement QUIC in the Linux kernel. Firstly, they
register the protocol in the ipv4 files which is quite easy, then they added functions for
QUIC. After that, they defined ports, addresses and sockets so that handshake can
be done or a connection can be established. The virtual box was used on both client
and server machines and the measurements were done by deploying the program in
both virtual boxes. For QUIC and TCP, the experiments were carried out with different
network settings. They also compared the protocols under a wireless environment
using WiFi with different network settings such as RTT and packet loss rate(Wang et
al., 2018).
For the measurement using virtual machines, there are two tests they made. One is
the network throughput against packet loss rate and the other is the friendliness of
QUIC and TCP. For the first one, they found that QUIC always has better performance than TCP, although QUIC may also have higher jitter at low packet loss rates. Even though both use the same congestion control algorithm, QUIC has shown that it reaches its peak transmission rate faster than TCP. They mention that QUIC recovers more quickly than TCP when the congestion window collapses, even though TCP has a larger initial window than QUIC. Now for the QUIC and TCP friendliness, they found out that both QUIC
and TCP can share a network bandwidth together when they achieve an average
throughput of nearly 1MBps. For the measurement in a wireless network, they have
two types of experiments they made. The first one is the comparison of the
performance when the packet loss rate is varied in a wireless network and the second
one is QUIC and TCP friendliness in a wireless network. For the first one, they showed that, for the first connection and for rate management, QUIC is far better than TCP for small data transfers. For low latency and low packet loss rates, QUIC has the fastest handshake. They also found that QUIC may resend retransmitted packets once or twice. For large data rates, TCP outperformed QUIC. For the QUIC and TCP friendliness in a wireless network, the performance of QUIC and TCP is not stable, and TCP shows a higher throughput than QUIC at packet loss rates of 0% and 0.1%. However, at a 1% packet loss rate, QUIC is a bit better than TCP(Wang et al., 2018).
Critical Analysis
It is found that QUIC outperforms TCP with all the different packet loss rates they
tested and the average achieved throughput of QUIC is always higher. QUIC also has
better performance with low RTT and high packet loss. Moreover, QUIC and TCP can
share network bandwidth with a high degree of fairness. It is found that QUIC
outperformed TCP with short data packets and it reaches peak data rate faster than
TCP.
In this research paper, Burghard and Jaeger explain how QUIC was designed after
the SPDY protocol to improve TCP and TLS. They mention that the goal of QUIC is to
reduce latency and improve performance. Moreover, they provide some details about
the history of QUIC such as who submitted the first draft of the IETF QUIC. They also
mention the difference between IETF QUIC and Google’s QUIC. In this paper, the
authors are focused on the IETF QUIC and for real-world data usage, Google’s
QUIC (gQUIC) is used, since it is the only large-scale deployment. Afterwards, they
evaluate the strengths and weaknesses of the goals set by the QUIC working group.
The goals are secure transport using TLS 1.3, enable deployment without any need
to change the network equipment, multiplexing without HOL blocking, minimize the
connection establishment and transport latency and finally enable extensions for FEC
and multipath connections. For secure transport, the authors explain how the initial
design of QUIC predated TLS 1.3. They also mention that it can provide better security, but that TLS 1.3, being very strict, still constrains banking companies. For enabling
future changes to QUIC, the authors explain that QUIC is built on top of UDP and
most network equipment already supports UDP; also, QUIC is implemented in user space, so it can be deployed easily. To prevent ossification, QUIC encrypts
more data to hide it from network equipment. For the HOL blocking, the authors
mention that QUIC supports multiple streams over a single connection, and when packets are lost, recovery only happens on the streams in which the packets were lost, so the other streams are not blocked. Also, when a packet is lost, QUIC will not simply retransmit the packet but will check every stream to see if the data is still needed. For connection establishment and transport latency, QUIC improves them by reducing the RTT: only a 1-RTT handshake is needed for a new connection, and if there was already a connection, QUIC supports a 0-RTT handshake. Finally, for multipath and Forward Error Correction, the multipath extension for QUIC was not scheduled during the time period covered by this research paper, FEC support was removed from gQUIC, and for IETF QUIC, FEC is out of scope(Burghard and Jaeger, 2019).
For secure transport, QUIC is vulnerable when using 0-RTT since there might be
application layer replay attacks. For HOL blocking, they mention that QUIC eliminates
HOL blocking for better performance in lossy networks. The authors also mention that
QUIC in a mobile environment is far better because of the 0-RTT handshake for
network switching. The 0-RTT handshake results in better video latency when users are on the YouTube app, since the handshake is performed in the background. They mention that QUIC has better performance under the following network settings: high delay, low bandwidth and lossy networks. Another evaluation they mention is how QUIC performs better in terms of Page Load Time in lossy networks. For mobile environments, QUIC is also better on the Page Load Time metric. Multipath and FEC are still under consideration and will eventually be implemented(Burghard and Jaeger,
2019).
Critical Analysis
It can be concluded that the performance of QUIC is better than TCP's in mobile environments, and since TCP suffers from HOL blocking, QUIC performs better by eliminating HOL. For QUIC's 0-RTT handshake, it is observed that even if it is faster, there are still some vulnerabilities to it, such as the application layer replay attack. Also, to prevent ossification, QUIC encrypts as much data as possible so that middleboxes cannot interfere with future changes to the protocol. In high-latency and lossy networks, QUIC is well suited to take over from TCP.
In this section, the data given in the research papers summarised above will be identified, evaluated and explained, as well as the different metrics found in each research paper. Then, in the Background Qualitative Evaluation, a table of criteria will be provided, along with the definitions of each metric. Some additional metrics, proposed outside of the research papers, will also be given.
Key 1:
Fig 2.2: Research Paper Names - Key
Key 2(Metrics):
Table of Criteria:
Table 2.1: Table of Criteria, rating metrics such as Page Load Time and Throughput against each research paper.
Note: As shown in table 2.1, many research papers did not do an experiment
on several criteria and this means that a graph could not be plotted.
1) Round-Trip Time(RTT)
RTT is the time taken for a packet to be sent by a client plus the time it takes for the acknowledgement of that packet to come back to the client.
3) Congestion Control
This is a method used to keep packet traffic at levels where it does not significantly affect network performance. It manages the total number of frames entering the network.
5) Bandwidth
This is the measurement of the maximum amount of packets sent and
received in a timeframe.
6) Throughput
This is the measurement of the total amount of packets that have successfully arrived at the destination in a timeframe.
7) Fairness
It is used to determine if all the users in the network are receiving a fair
share of the system resources.
8) Multiplexing
This is a method of combining multiple logical streams into a single
shared transport medium.
1) Latency
This is the time taken for a packet to be transferred and received from
sender to destination.
2) Competing Flows
This is a method that has additional flows in the same direction as a
QUIC flow. A QUIC flow is an end-to-end connection and how the data
flows through the network.
3) Bottleneck Capacity
A bottleneck capacity means the amount of data that can be stored
before having a bottleneck when a high volume of data/frames is being
received.
3 ANALYSIS
For this section, the requirements of the system will be discussed and several
investigations which will help to analyse the structure of the system will be
described. Moreover, this section contains functional and non-functional
requirements, proposed solution and the different tools and technologies that
will be used for this system.
Functional Requirements:
These describe the inputs and features that affect the system and define how the system should or should not behave.
Non-Functional Requirements:
These describe the quality attributes and constraints of the system, such as security, scalability, performance and reliability.
3.2.1 Functional Requirements
Table 3.1 below displays all the functional requirements of this system.
QUIC Protocol
FR 5: The system shall allow the data to be collected and put in a log file, recording any information after a successful or unsuccessful connection.
FR 6: The system shall send and receive packets faster with respect to the 4 extensions added.
QUIC Protocol
Security: The system shall be able to encrypt the data as usual and establish a safe connection.
Scalability: The system shall be able to accept packet sizes from 0 up to 128 bytes (128 bytes since Wireshark is used to capture frames).
Testability: The system shall be able to test the different extensions given by quiche and the four extensions that are implemented in this project.
Maintainability: The system shall be able to correct any rising problem at a fast rate.
Performance Requirements: The system shall be able to achieve high throughput for better performance.
Reliability: The system shall be robust against errors and shall not be vulnerable to problems.
The first step is to create the key update label and add it to the list where all the other labels are defined. Then, a function for the key update is created, adding the label "quic ku" together with the key length and the secret. Afterwards, the secrets for protecting client and server packets are derived; a further step for protecting packets that use ChaCha20-Poly1305 can also be implemented. In quiche, there are several test functions for packet protection.
In this chapter, the tools and technologies that are available will be discussed, as well as the programming language used to implement the extensions in quiche.
Table: Comparison of candidate programming languages, listing their pros (such as compatibility with C, support for memory allocation, platform independence, integration with other .NET frameworks, no brackets, automatic memory allocation, many supported libraries, and being a compiled language) and cons (such as slow compilation and manual memory management).
3.4.2 Integrated Development Environment (IDE)
Here, the different Integrated Development Environment will be evaluated to
use in this system.
Table: Comparison of candidate IDEs, listing advantages such as good debugging support, cross-platform availability, association with GitHub, ease of use, and command-line tools that are easy to set up.
In this section, the reasons why the chosen tools and the programming language used during this project are appropriate will be discussed.
3.5.3 Tools
Here, the tools to be used will be discussed as well as what the tools are
originally designed to do.
3.5.3.2 Quiche
Quiche is an implementation of the QUIC protocol from Cloudflare. Since several extensions needed to be added to QUIC, this implementation was chosen for testing the different extensions, as it is the easiest way to experiment with the extensions that will be added to quiche.
3.5.3.3 Git
Git is an open-source version control system that is used to track changes in files and manage them. It is used to download quiche from GitHub using the "git clone" command in the terminal, and Git will be used to update, commit and push the changes to the quiche GitHub repository. "git diff" will also be used to observe the changes made in quiche.
3.5.3.4 Wireshark
Wireshark is a free network protocol analyzer; it can be used to capture and analyze frames. Wireshark will be used to capture QUIC packets and to observe whether any changes occurred in them.
3.5.3.5 Docker
Docker is a tool that uses containers to create, deploy and run the application.
Docker will be used to deploy the quiche package.
Here, a use case diagram will be used to describe how the user can use this system.
Fig 3.1: Use case diagram showing the interactions of the user with the system
4 DESIGN
In this section, an idea of the architectural design of the system will be
provided, and the focus will be put on the structure of the system as well as
the different extensions it contains. Different kinds of design issues that can
happen with this project will be provided and different diagrams will be given to
explain the architecture of this system and how it works.
4.1.1 Portability
The system should be portable to almost every platform. Quiche uses Rust, a relatively new programming language that is available on all platforms. The primary platform that I recommend for working with quiche is Linux.
4.1.2 Security
Security is the most essential aspect of this system. Before setting up any
communication, a tunnel is established. Private/public keys will be
authenticated by the system. Data from the network or from the client is
authenticated and handled by quiche. Quiche makes use of TLS 1.3, which is
part of the connection handshake, thus meaning that the security layer for the
QUIC protocol is the best up to now. Quiche uses a very popular library in
Rust named “ring”, which provides safe and fast cryptographic primitives.
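As a hedged illustration of the kind of primitive quiche obtains from ring, the sketch below seals a packet payload with ChaCha20-Poly1305, authenticating the header as associated data. The function and its parameters are illustrative assumptions only; quiche's real packet protection additionally manages key phases, nonce construction and header protection internally.

```rust
use ring::aead::{Aad, LessSafeKey, Nonce, UnboundKey, CHACHA20_POLY1305};

// Hedged illustration of sealing a packet payload with ring's
// ChaCha20-Poly1305; not quiche's actual packet protection code.
fn seal_payload(key: &[u8; 32], nonce: [u8; 12], header: &[u8], payload: &[u8]) -> Vec<u8> {
    let key = LessSafeKey::new(UnboundKey::new(&CHACHA20_POLY1305, key).expect("bad key length"));

    // The packet header is authenticated as associated data but not encrypted.
    let mut in_out = payload.to_vec();
    key.seal_in_place_append_tag(Nonce::assume_unique_for_key(nonce), Aad::from(header), &mut in_out)
        .expect("sealing failed");

    in_out // ciphertext followed by the 16-byte authentication tag
}
```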
4.1.3 Performance
To improve performance, the Rust language is used, which provides immutability, aliasing rules, inline functions, iterators that can be combined into chains, efficient reads and writes, and more. Rust is efficient and fast compared with other low-level languages. The different extensions that will be implemented will provide
better overall performance in quiche.
4.1.4 Robustness
The system shall report proper errors using Rust's error-handling mechanisms. Rust makes the system more robust, as many errors are caught and handled before the deployment of the system. Any problem that the system encounters will be reported with a proper message.
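A minimal sketch of this style of error handling is shown below; the connect_to_server helper and the error variants are hypothetical, but they illustrate how Rust's Result type surfaces every failure with a proper message instead of relying on exceptions.
use std::fmt;

#[derive(Debug)]
enum ClientError {
    InvalidAddress(String),
    ServerUnreachable,
}

impl fmt::Display for ClientError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ClientError::InvalidAddress(a) => write!(f, "invalid address: {}", a),
            ClientError::ServerUnreachable => write!(f, "the server has stopped working"),
        }
    }
}

// Hypothetical helper: validates the address before "connecting".
fn connect_to_server(addr: &str) -> Result<(), ClientError> {
    if !addr.contains(':') {
        return Err(ClientError::InvalidAddress(addr.to_string()));
    }
    Ok(())
}

fn main() {
    // Every error is reported with a clear message instead of crashing.
    if let Err(e) = connect_to_server("127.0.0.1") {
        eprintln!("error: {}", e);
    }
}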
4.1.5 Flexibility
The system should be able to support any IP address and port that uses the QUIC protocol. The user shall be alerted when a server has stopped working.
In this section, the different architecture diagrams that will be used to satisfy the requirements of this project will be provided, along with a short description of each diagram.
4.2.1 QUIC Layers of OSI Model
QUIC makes use of a UDP-based approach. The architecture is based on the latest IETF QUIC, with HTTP/3 as the application layer and TLS 1.3 for encryption.
Fig 4. 2: QUIC Structure
TLS – Provides keying material for protecting part of the QUIC traffic. The TLS record layer is included.
Public Reset – This is used to terminate a QUIC connection when an
intermediary sends an unprotected packet as a request.
Envelope – This is the common packet layout. It includes markers that allow framing, version negotiation, connection identification and public reset packets to be identified.
Fig 4. 4: QUIC and TLS interactions
Here, as shown in Fig 4.4, there are two main relations between QUIC and TLS:
1) While QUIC provides a reliable stream abstraction to TLS, TLS uses the QUIC component to send and receive messages.
2) There is a series of updates that TLS provides to QUIC, such as:
a. Installation of new packet protection keys.
b. State changes (server certificate, handshake completion, etc.).
4.3.1 Authentication Flowchart Diagram
This is a typical server-client session.
4.3.2 Sequence Diagram
In QUIC-TLS, the connection establishment and the TLS handshake are combined into one.
Fig 4. 6: 1-RTT
The handshake starts when the client sends a “Hello” packet to the server, which contains the connection identifiers and the QUIC version for establishing a connection. The payload of the packet contains data used by TLS for the negotiation of encryption. After the server receives the “Client Hello” packet, it first checks the QUIC version. Two things can happen: either the client's version is found in the list of supported versions or it is not. If the client's version is not found, the server will create and send a version negotiation packet to the client. If the client's version is found, the server will respond with a negotiation packet containing the “Hello” from the client and generate its own “Hello”. The negotiation is then finished. After the client receives the “Server Hello”, the handshake is completed on the client side and it can derive keys from its TLS stack to protect the packets' payloads. Furthermore, the client needs to let the server know that the handshake was successful. A negotiation packet is sent back to the server containing data from the client's TLS stack, so that all data sent after this packet is protected as 1-RTT packets. When the server receives this packet, which is the last handshake packet, it knows that the negotiation with the client is done. The server will then send a session ticket so that the client can resume the session at the next connection made with the server.
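The version check described above can be illustrated with the short sketch below. It is illustrative only and does not use quiche's API; the types are hypothetical, and the two version numbers shown (QUIC v1 and draft-29) are simply examples of supported versions.
// Example supported versions: QUIC v1 and draft-29.
const SUPPORTED_VERSIONS: &[u32] = &[0x0000_0001, 0xff00_001d];

enum ServerReply {
    // Sent when the client's version is unknown to the server.
    VersionNegotiation(Vec<u32>),
    // Sent when the version is supported and negotiation can continue.
    ServerHello,
}

fn handle_client_hello(client_version: u32) -> ServerReply {
    if SUPPORTED_VERSIONS.contains(&client_version) {
        ServerReply::ServerHello
    } else {
        ServerReply::VersionNegotiation(SUPPORTED_VERSIONS.to_vec())
    }
}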
Fig 4. 7: 0-RTT
QUIC provides a nice improvement here: a client can connect to a server without waiting a single RTT for the handshake. This happens when the client has already established a connection to the server previously, so the new connection is made with the additional information from the last session. The payload can be encrypted with the encryption keys derived from the information of the last session. The server uses the payload received in the “Client Hello” packet to derive the same encryption keys. Finally, the server decrypts the payload using these early data keys, and the client can start the session again.
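The 0-RTT decision can be sketched as below; the types are hypothetical and do not correspond to quiche's internals, but they capture the idea that a cached session ticket allows early data, while its absence falls back to the normal 1-RTT handshake.
struct SessionCache {
    // Session ticket saved from a previous connection, if any.
    ticket: Option<Vec<u8>>,
}

enum FirstFlight {
    ZeroRtt { ticket: Vec<u8>, early_data: Vec<u8> },
    OneRtt,
}

fn first_flight(cache: &SessionCache, early_data: Vec<u8>) -> FirstFlight {
    match &cache.ticket {
        // A previous session exists: send early data protected with the
        // keys derived from the resumption information.
        Some(t) => FirstFlight::ZeroRtt { ticket: t.clone(), early_data },
        // No previous session: a full 1-RTT handshake is needed first.
        None => FirstFlight::OneRtt,
    }
}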
4.3.3.1 Client
4.3.3.2 Server
Fig 4. 9: TLS server handshake
5 IMPLEMENTATION
This section covers the phase where all the components and extensions that were discussed in chapters 1 and 3 are built, either from scratch or by composition. Quiche will be modified to add the different extensions that were discussed in chapter 3, using the architecture documented in chapter 4.
In this section, the hardware and software tools used to implement the
different modules of this project will be discussed.
Platform Specifications
5.1.2 Software Specifications
The following software tools were used for the implementation of the different modules.
Specifications
Cargo 1.52.0
Make 4.1
Cmake 3.10.2
Docker 20.10.2
Wireshark 3.4.2
Git 2.17.1
In this section, the standards and conventions that were used to implement the extensions in quiche will be discussed; they make the code simpler and easier for other programmers to read and understand. Moreover, they make any maintenance of each extension easier.
5.2.2 Coding Conventions
For software development, a coding convention is needed to improve the code
readability and to make any maintenance for the software a lot easier.
5.2.2.1 Indentations
During software development, indentation is a must, as it provides better readability and makes the code clearer.
5.2.2.2 Declarations
A declaration defines a variable; it can also initialise the variable with a value or some text.
5.2.2.3 Comments
Comments are really useful to help to understand part of the code or to help
when debugging.
In this part, the code implemented will be discussed, with a small description of each snippet and what it is supposed to do.
The code below defines the CBB struct that is used in the compression function.
#[allow(non_camel_case_types)]
#[repr(transparent)]
struct CBB(c_void);
The function below is created so that the certificate compression API is enabled in the tls.rs file, by registering the compression and decompression callbacks.
#[cfg(any(feature = "brotlienc", feature = "brotlidec"))]
map_result(unsafe {
    SSL_CTX_add_cert_compression_alg(
        self.as_ptr(),
        2, // TLSEXT_cert_compression_brotli
        #[cfg(feature = "brotlienc")]
        Some(compress_brotli_cert),
        #[cfg(not(feature = "brotlienc"))]
        None,
        #[cfg(feature = "brotlidec")]
        Some(decompress_brotli_cert),
        #[cfg(not(feature = "brotlidec"))]
        None,
    )
})?;

Ok(())
}
The two functions below are the compression and decompression callbacks; both use the Brotli algorithm.
#[cfg(feature = "brotlienc")]
extern fn compress_brotli_cert(
    _ssl: *mut SSL, out: *mut CBB, in_buf: *mut u8, in_len: usize,
) -> c_int {
    // Worst-case size of the compressed certificate.
    let mut out_len = unsafe { BrotliEncoderMaxCompressedSize(in_len) };
    if out_len == 0 {
        return 0;
    }

    // Reserve space in the output CBB and get a pointer to write into.
    let mut out_buf: *mut u8 = std::ptr::null_mut();
    if unsafe { CBB_reserve(out, &mut out_buf, out_len) } == 0 {
        return 0;
    }

    let rc = unsafe {
        BrotliEncoderCompress(5, 17, 0, in_len, in_buf, &mut out_len, out_buf)
    };
    if rc == 0 {
        return 0;
    }

    // Record how many bytes were actually written into the CBB.
    if unsafe { CBB_did_write(out, out_len) } == 0 {
        return 0;
    }

    1
}
#[cfg(feature = "brotlidec")]
extern fn decompress_brotli_cert(
    _ssl: *mut SSL, out: *mut *mut CRYPTO_BUFFER, uncompressed_len: usize,
    in_buf: *mut u8, in_len: usize,
) -> c_int {
    // Allocate a CRYPTO_BUFFER large enough for the uncompressed certificate.
    let mut out_buf: *mut u8 = std::ptr::null_mut();
    let decompressed =
        unsafe { CRYPTO_BUFFER_alloc(&mut out_buf, uncompressed_len) };
    if decompressed.is_null() {
        return 0;
    }

    // Decompress and check that the advertised length matches.
    let mut out_len = uncompressed_len;
    let rc = unsafe {
        BrotliDecoderDecompress(in_len, in_buf, &mut out_len, out_buf)
    };
    if rc != 1 || out_len != uncompressed_len {
        return 0;
    }

    // Hand the decompressed certificate back through the output parameter.
    unsafe { *out = decompressed };

    1
}
#[allow(dead_code)]
fn SSL_CTX_add_cert_compression_alg(
    ctx: *mut SSL_CTX, alg_id: u16,
    compress: Option<
        extern fn(
            ssl: *mut SSL, out: *mut CBB, in_buf: *mut u8, in_len: usize,
        ) -> c_int,
    >,
    decompress: Option<
        extern fn(
            ssl: *mut SSL, out: *mut *mut CRYPTO_BUFFER,
            uncompressed_len: usize, in_buf: *mut u8, in_len: usize,
        ) -> c_int,
    >,
) -> c_int;
The code below will declare the functions and variables to be used for
compression and decompression.
#[allow(dead_code)]
fn CRYPTO_BUFFER_alloc(
    out_data: *const *mut u8, len: usize,
) -> *mut CRYPTO_BUFFER;

// CBB
#[allow(dead_code)]
fn CBB_reserve(cbb: *mut CBB, out_data: *const *mut u8, len: usize) -> c_int;

#[allow(dead_code)]
fn CBB_did_write(cbb: *mut CBB, len: usize) -> c_int;
The code below declares the Brotli encoder and decoder functions.
// Brotli
#[cfg(feature = "brotlienc")]
fn BrotliEncoderMaxCompressedSize(input_size: usize) -> usize;

#[cfg(feature = "brotlienc")]
fn BrotliEncoderCompress(
    quality: c_int, lgwin: c_int, mode: c_int, input_size: usize,
    input_buffer: *const u8, encoded_size: *mut usize,
    encoded_buffer: *mut u8,
) -> c_int;

#[cfg(feature = "brotlidec")]
fn BrotliDecoderDecompress(
    encoded_size: usize, encoded_buffer: *const u8, decoded_size: *mut usize,
    decoded_buffer: *mut u8,
) -> c_int;
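Assuming the Brotli library is linked (the brotlienc and brotlidec features are enabled) and the declarations above are in scope, a small unit test along the lines of the sketch below could be used to sanity-check the compression round trip; it is illustrative and not part of quiche's test suite.
#[cfg(all(feature = "brotlienc", feature = "brotlidec"))]
#[test]
fn brotli_round_trip() {
    let input = b"hello certificate compression".to_vec();

    // Reserve the worst-case output size reported by the encoder.
    let mut encoded =
        vec![0u8; unsafe { BrotliEncoderMaxCompressedSize(input.len()) }];
    let mut encoded_len = encoded.len();

    let rc = unsafe {
        BrotliEncoderCompress(
            5, 17, 0, input.len(), input.as_ptr(), &mut encoded_len,
            encoded.as_mut_ptr(),
        )
    };
    assert_eq!(rc, 1);

    // Decompress into a buffer of the original size and compare.
    let mut decoded = vec![0u8; input.len()];
    let mut decoded_len = decoded.len();
    let rc = unsafe {
        BrotliDecoderDecompress(
            encoded_len, encoded.as_ptr(), &mut decoded_len,
            decoded.as_mut_ptr(),
        )
    };
    assert_eq!(rc, 1);
    assert_eq!(&decoded[..decoded_len], &input[..]);
}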
5.3.2 Key Update Module
This module will help packets to be sent and received faster by updating the
write keys.
The code below is the declaration of each function for seal and open.
The code below is the declaration of each function in the initial key function.
The code below is the creation of the update_key function, where the label is set to "quic ku".
pub fn update_key(
    aead: Algorithm, secret: &[u8], out: &mut [u8],
) -> Result<()> {
    const LABEL: &[u8] = b"quic ku";
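For context, the "quic ku" label is used with HKDF-Expand-Label to derive the next packet-protection secret. The sketch below shows how such a derivation could be written with the ring crate; it is a minimal sketch, not quiche's actual implementation, and the hkdf_expand_label and next_secret helpers are illustrative.
use ring::hkdf;

// Wrapper so ring can be asked for an arbitrary output length.
struct OutLen(usize);

impl hkdf::KeyType for OutLen {
    fn len(&self) -> usize {
        self.0
    }
}

fn hkdf_expand_label(
    prk: &hkdf::Prk, label: &[u8], out: &mut [u8],
) -> Result<(), ring::error::Unspecified> {
    const LABEL_PREFIX: &[u8] = b"tls13 ";

    // HkdfLabel = output length (u16) || label length || "tls13 " + label ||
    // context length (0, i.e. an empty context).
    let out_len = (out.len() as u16).to_be_bytes();
    let label_len = [(LABEL_PREFIX.len() + label.len()) as u8];
    let context_len = [0u8];

    let info = [
        &out_len[..], &label_len[..], LABEL_PREFIX, label, &context_len[..],
    ];

    prk.expand(&info, OutLen(out.len()))?.fill(out)
}

fn next_secret(
    current_secret: &[u8], out: &mut [u8],
) -> Result<(), ring::error::Unspecified> {
    // The current secret is treated as a PRK and expanded with "quic ku".
    let prk = hkdf::Prk::new_less_safe(hkdf::HKDF_SHA256, current_secret);
    hkdf_expand_label(&prk, b"quic ku", out)
}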
The test vectors below for the client and server are for the test of initial_secrets_v1.
The test vectors below for the client and server are for the test of initial_secrets_draft29.
    0x17, 0x52, 0x57, 0xa3, 0x1e, 0xb0, 0x9d, 0xea, 0x93, 0x66, 0xd8,
    0xbb, 0x79, 0xad, 0x80, 0xba,
];
assert_eq!(&pkt_key, &expected_client_update_key);
The test vectors below for the client and server are for the test of initial_secrets_draft27.
The test vectors below for the client are for the test of chacha20_secrets.
assert!(update_key(aead, &secret, &mut pkt_key).is_ok());
let expected_update_key = [
0x12, 0x23, 0x50, 0x47, 0x55, 0x03, 0x6d, 0x55, 0x63, 0x42, 0xee,
0x93, 0x61, 0xd2, 0x53, 0x42, 0x1a, 0x82, 0x6c, 0x9e, 0xcd, 0xf3,
0xc7, 0x14, 0x86, 0x84, 0xb3, 0x6b, 0x71, 0x48, 0x81, 0xf9,
];
assert_eq!(&pkt_key, &expected_update_key);
connection_.SetEncrypter(ENCRYPTION_INITIAL,
make_unique<TaggingEncrypter>(0x01));
connection_.SetDefaultEncryptionLevel(ENCRYPTION_INITIAL);
// Send INITIAL 1.
connection_.SendCryptoDataWithString("foo", 0,
ENCRYPTION_INITIAL);
QuicTime expected_pto_time =
connection_.sent_packet_manager().GetRetransmissionTime();
clock_.AdvanceTime(QuicTime::Delta::FromMilliseconds(5));
connection_.SetEncrypter(ENCRYPTION_HANDSHAKE,
make_unique<TaggingEncrypter>(0x02));
connection_.SetDefaultEncryptionLevel(ENCRYPTION_HANDSHAKE);
EXPECT_CALL(visitor_, OnHandshakePacketSent()).Times(3);
frames.push_back(QuicFrame(QuicPaddingFrame(3)));
ProcessFramesPacketAtLevel(31, frames, ENCRYPTION_HANDSHAKE);
EXPECT_EQ(clock_.Now() + kAlarmGranularity,
connection_.GetAckAlarm()->deadline());
if (GetQuicReloadableFlag(quic_retransmit_handshake_data_early))
{
// Verify handshake data gets retransmitted early.
EXPECT_FALSE(writer_->crypto_frames().is_empty());
} else {
EXPECT_TRUE(writer_->crypto_frames().is_empty());
}
}
In this part, all the problems that were encountered during the implementation
chapter will be discussed:
● The first problem was with the compilation of quiche. Some of the errors reported were incorrect and some referred to things that did not exist, because Cargo had been updated and the error messages had changed. Since Cargo is new and always evolving, the version on my machine was lagging behind, so an update was required. After the update, new errors were shown, but they were more descriptive and it was easier to understand what the actual error was.
● Rust being a new language, with some features that are hard to understand, it was necessary to learn more Rust before starting to code and to understand existing code.
● Getting the Certificate Compression code to work was a bit difficult, as RFC 8879, which is about TLS Certificate Compression, was not easy to understand. To deal with this, I looked at the BoringSSL bindings to see whether there was any connection between quiche and BoringSSL, and luckily there was.
● There was a problem where my forked repository of quiche was not up to date, so I had to update it directly on GitHub from the upstream quiche repository and then use the "git pull" command to bring the changes down locally.
In this part, all the problems that were encountered when implementing the code for the different modules will be discussed:
● Initially, Rust was difficult to cope with because of the new syntax, and a type could not be defined inside an extern block. To get past this problem, the type had to be created before the extern block.
● A function had to be passed as a parameter to another function, but an error always occurred. To counter this, an Option type had to be created first as the parameter type, and the function was then wrapped in the Option, as illustrated in the sketch below.
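A minimal sketch of this workaround is shown below (not quiche code; the callback and register_callback function are hypothetical): the callback parameter is declared as an Option of an extern function type, so a function can be wrapped in Some(...) and None can be passed when a feature is disabled.
use std::os::raw::c_int;

// Hypothetical callback with a C-compatible signature.
extern "C" fn my_compress(_input: *const u8, _len: usize) -> c_int {
    1
}

// The function is passed wrapped in an Option, which also allows None.
fn register_callback(cb: Option<extern "C" fn(*const u8, usize) -> c_int>) {
    if let Some(f) = cb {
        let rc = f(std::ptr::null(), 0);
        println!("callback returned {}", rc);
    }
}

fn main() {
    register_callback(Some(my_compress));
    register_callback(None);
}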
To re-create this issue, the steps taken to reproduce the error and how it was fixed are provided.
(Where jeremie1112 is my username on GitHub)
2) Enter the quiche directory
3) Then, compile quiche with this command:
git submodule update --init
6) Again, quiche was compiled and it was a success as shown in Fig 5.3
6 TESTING
In this section, the testing process of this project will be discussed and the tests of the different extensions that were implemented will be explained. The requirements that were described in chapter 3 will be checked to see whether they function correctly in the implementation.
Unit testing is the process of breaking a whole down into small individual components and validating each of them to check that the code performs as required. White-box and black-box testing will be used to identify test cases and validate the functional and non-functional requirements.
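A minimal sketch of such a unit test, in the same style as the assertion-based tests shown in the implementation chapter, is given below; the double_packet_number helper is hypothetical and only serves to show the structure of a Rust #[test] function.
// Hypothetical component under test.
fn double_packet_number(pn: u64) -> u64 {
    pn * 2
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn doubles_packet_number() {
        // Validate the small component in isolation with assertions.
        assert_eq!(double_packet_number(21), 42);
        assert_eq!(double_packet_number(0), 0);
    }
}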
In this part, the different implementation modules will be treated as test cases, and the description, steps taken to perform the testing, data tested and expected results will be provided for each test case. Each module will be tested one by one, not all modules at one go.
Afterwards, the index.html file needs to contain at least a word or some text to prove that the client connected to the server.
--root will provide the root directory, and this is how the server will use the index.html file.
One way to test whether this works is with netstat, using the following command:
Another way to test whether the server is working is by connecting the client to the server. So, the following command is run to get the client to connect:
This will provide the result shown in Fig 6.5.
The server is running, since the result given (SERVER WORKS) is the text in the index.html file that is in the server root.
First, the packets are captured on localhost, which is the loopback IP (127.0.0.1), and the filter is set on loopback as shown in Fig 6.6.
Afterwards, the client connected to the server and Wireshark recorded the packets, as shown in Fig 6.7.
Fig 6. 7: QUIC Packets recorded
To get the timestamps of the packets, the UDP drop-down in Wireshark provides the time since the previous packet and the time elapsed since the first packet.
Fig 6. 8: QUIC Packet Timestamps
As shown in Fig 6.8, the time since the previous packet and the time since the first packet are given.
Test Case 1
2) Run the server locally.
Test Case 2
Description: This module will make the handshake faster by updating the write keys.
Steps taken to perform testing: 1) Uncomment the Key Update code and comment out the code of all the other modules.
6.2.5 Speed up Handshake
Table 6.3 will provide the first test case with a description, steps taken to
perform testing, data tested and the expected results.
Test Case 3
Test Case 4
Description: This module will help to provide a fix for loss recovery during the handshake.
Steps taken to perform testing: 1) Uncomment the loss recovery code and comment out the code of all the other modules.
3) Start Wireshark with the loopback filter.
Expected Results: Bugs causing loss recovery failures during the handshake are fixed.
7 EVALUATION
In this chapter, what was implemented and tested in the previous chapters will be assessed. Moreover, the reasons why the implemented modules achieve their goals, or do not, will be provided. Whether the modules meet the requirements will be discussed as well. This chapter is divided into several parts: Quantitative and Qualitative Evaluation, Critical Analysis, Issues faced during evaluation, and Domain recommendation.
Quantitative evaluation is the method of assessing the data collected during the testing chapter by plotting the numerical data on graphs. The information gathered will be examined to see whether the goal of this project has been achieved. Several tables will be provided: one for the original traffic without any implemented module, and one for each implemented module. Since the timestamps change every time the client connects to the server, the test is run 10 times for each test case and the average time is recorded in the tables below.
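A minimal sketch of how the average over the 10 runs could be computed is shown below; the sample values are placeholders only, not the measured data.
fn main() {
    // Ten placeholder totals for "time since first frame" (in seconds).
    let runs = [
        0.0269, 0.0271, 0.0265, 0.0273, 0.0268,
        0.0270, 0.0266, 0.0272, 0.0267, 0.0269,
    ];

    let average: f64 = runs.iter().sum::<f64>() / runs.len() as f64;
    println!("average total time: {:.4} s", average);
}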
7.1.1 Original Traffic
Table 7.1 will provide the values of timestamp capture by Wireshark with no
implementation module.
Original Traffic
Packet No    Time since first frame (in sec)    Time since previous frame (in sec)
1 0 0
2 0.000358941 0.000358941
3 0.001927852 0.001568911
4 0.002177915 0.000250063
5 0.003179496 0.001001581
6 0.016629008 0.013449512
7 0.016740059 0.000111051
8 0.016844906 0.000104847
9 0.018982427 0.002137521
10 0.019103536 0.000121109
11 0.019218958 0.000115422
12 0.023864929 0.004645971
13 0.024177290 0.000312361
14 0.024388881 0.000211591
15 0.024517312 0.000128431
16 0.024642842 0.000125530
17 0.025899755 0.001256913
18 0.029059069 0.003159314
Fig 7. 1: Original Traffic (Time Since First Frame in sec vs. Packet No)
The total time taken for the packets to be sent and/or received is nearly 0.0269 s (3 s.f.) without any extensions added. As shown in Fig 7.1, the packet sending and/or receiving rate is not stable, which is why an average will be taken so that a better result is recorded.
Packet No    Time since first frame (in sec)    Time since previous frame (in sec)
1 0 0
2 0.000131586 0.000131586
3 0.000825739 0.000694153
4 0.001870600 0.001044861
5 0.002596239 0.000725639
6 0.005447835 0.002851596
7 0.005548960 0.000101125
8 0.012585708 0.007036748
9 0.012738285 0.000152577
10 0.016690160 0.003951875
11 0.016769275 0.000079115
12 0.016844889 0.000075614
13 0.016932668 0.000087779
14 0.017044237 0.000111569
15 0.017166784 0.000122547
16 0.017261940 0.000095156
17 0.017379052 0.000117112
18 0.018335687 0.000956635
Fig 7. 2: Certificate Compression Traffic (Time Since First Frame in sec vs. Packet No)
The first difference observed from the original traffic data is that the total time taken for the packets is significantly less. The total time taken here is around 0.0187 s (3 s.f.); certificate compression reduced the size of the handshake by compressing redundant and predictable data. TLS certificate compression therefore has a good impact on the performance of QUIC handshakes.
Packet No    Time since first frame (in sec)    Time since previous frame (in sec)
1 0 0
2 0.000173688 0.000173688
3 0.000717013 0.000543325
4 0.000885924 0.000168911
5 0.001431289 0.000545365
6 0.004246820 0.002815531
7 0.004336835 0.000090015
8 0.012491734 0.008154899
9 0.012642863 0.000151129
10 0.012746035 0.000103172
11 0.012820050 0.000074015
12 0.012903138 0.000083088
13 0.017219050 0.004315912
14 0.017973841 0.000754791
15 0.01817640 0.000202559
16 0.018284296 0.000107896
17 0.018348979 0.000064683
18 0.018414148 0.000065169
Fig 7. 3: Key Update Traffic
The first difference observed from the original traffic data is that the total time taken for the packets is significantly less. The total time taken here is around 0.0184 s (3 s.f.); the key update mechanism reduced the time taken by the handshakes by updating the packet protection write secret, which is used to protect newer packets.
Packet No    Time since first frame (in sec)    Time since previous frame (in sec)
1 0 0
2 0.000233691 0.000233691
3 0.001078920 0.000845229
4 0.009162173 0.008083253
5 0.009994742 0.000832569
6 0.010164257 0.000169515
7 0.010250026 0.000085769
8 0.013486421 0.003236395
9 0.015264947 0.001778526
10 0.015354640 0.000089693
11 0.015442179 0.000087539
12 0.015524546 0.000082367
13 0.015788157 0.000263611
14 0.015909416 0.000121259
15 0.016006208 0.000096792
16 0.016091139 0.000084931
17 0.017936078 0.001844939
18 0.019338372 0.001402294
Fig 7. 4: Speed up Handshake Traffic
The first difference observed from the original traffic data is that the total time taken for the packets is significantly less. The total time taken here is around 0.0195 s (3 s.f.); the speed-up handshake mechanism reduced the time taken by the handshakes by sending a packet containing unacknowledged CRYPTO data before the PTO expiry.
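The idea can be sketched as below; the types and fields are hypothetical and are not quiche's internals, but they capture the check that triggers an early retransmission of unacknowledged CRYPTO data shortly before the PTO deadline.
use std::time::{Duration, Instant};

struct HandshakeTracker {
    pto_deadline: Instant,
    unacked_crypto: Vec<u8>,
}

impl HandshakeTracker {
    // Returns the CRYPTO data to retransmit if the PTO is about to expire.
    fn crypto_to_resend(&self, now: Instant, margin: Duration) -> Option<&[u8]> {
        if !self.unacked_crypto.is_empty() && now + margin >= self.pto_deadline {
            Some(&self.unacked_crypto[..])
        } else {
            None
        }
    }
}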
Fig 7. 5: Chart representing the difference between each test case and the original traffic
As shown in Fig 7.5, it can be deduced that the three implemented modules
are working and have a better performance than the original traffic. Even if the
timestamps are decreased by merely milliseconds, it actually makes a big
difference compared to the original traffic.
Qualitative evaluation is the method of assessing the data collected after the
testing chapter. The functional and non-functional requirements provided in
chapter 3 will be used to observe if the implemented modules are really
working by evaluating them against the data collected. The criteria that were
identified in chapter 2 will be used to assess the modules implemented.
QUIC Protocol
FR    Description    Implemented
FR 1    The system shall make use of the certificate compression.    Yes
FR 5    The system shall allow the data to be collected and put in a log file for any information after a successful or unsuccessful connection.    Yes
FR 6    The system shall send and receive packets faster with respect to the 4 extensions added.    Yes
7.2.2 Evaluating Non-Functional Requirements
Table 7.6 will assess the Non-Functional Requirements that were previously
written in the Analysis Chapter.
QUIC Protocol
7.2.3 Discussion About Each Non-Functional Requirement
In this section, each non-functional requirement will be discussed according to the modules implemented in quiche. Through actual observation of the test cases and data sets, the non-functional requirements will be assessed.
7.2.3.1 Security
Based on the IETF draft for TLS Certificate Compression, the Certificate message keeps the TLS security properties it normally has, even when compressed. An attacker is not able to see the information in the certificate or take control of the Certificate message. Therefore, the security aspect is preserved and is somewhat better than before when encrypting or decrypting data.
7.2.3.2 Scalability
Based on the data set for each module, it can be said that the packet timestamps scale well: they are mostly in the range of milliseconds and microseconds. For QUIC, the maximum packet size is around 1500 bytes (the typical Ethernet MTU), which keeps the protocol scalable.
7.2.3.3 Testability
Based on chapter 6, sub-heading 6.2.1, it is shown that testing with quiche can be done for each and every module implemented. Several options to test each module are available as well.
7.2.3.4 Availability
Based on the system, the quiche server can run 24/7, and a client can be run against the server at any time.
7.2.3.5 Maintainability
Since the system is stored as a repository on GitHub, any issues, updates or changes can quickly be pulled on the client side.
7.2.3.6 Performance Requirements
Based on the data set for each module, it can be said that they outperformed the original traffic by delivering the packets in less time.
7.2.3.7 Reliability
Based on the system, each function created provides a proper error message, and error handling is implemented throughout for robustness.
7.2.3.8 Portability
The system can be built on Linux, Windows, Android and iOS.
7.2.4.4 Page Load Time
Since the modules have been implemented, it can be deduced that the time taken by the packets is less, which results in a better page load time.
7.2.4.5 Bandwidth
It can be deduced that bandwidth usage gives a better result, as the same maximum number of packets is sent and received in less time with the modules implemented.
7.2.4.6 Throughput
It can be deduced that the throughput is the same, as the project did not modify any functions that affect throughput.
7.2.4.7 Fairness
It can be deduced that fairness was not changed, as the tests were recorded with only one client against a local server.
7.2.4.8 Multiplexing
Multiplexing was not worked on in this project, so it was not changed.
7.2.4.9 Latency
Latency gives a better result now, since packets are sent and received faster with the implemented modules.
7.2.5 Loss Recovery Failure
A loss recovery failure generally means that packets are lost during the handshake; there are also cases of handshake corruption, where packets may be corrupted. An example of packet loss is when, over Wi-Fi, some packets are lost in transit while sending or receiving. Loss recovery in QUIC is designed to be robust against high loss rates and to keep latency low even with high RTTs. The problem with quiche (although this is not certain) is that there is handshake loss and handshake corruption with different QUIC clients and servers. A quiche client against a quiche server gives a successful result, while most of the other results, against other servers or clients, are failures, as can be seen on the website 'https://interop.seemann.io/'. The work here is to investigate the loss recovery failures during handshakes of different clients against the quiche server and of the quiche client against different servers. As shown on the website, the "L1" and "C1" tests fail most frequently. Fig 7.6 and Fig 7.7 show the L1 and C1 definitions respectively.
Fig 7. 6: L1
Fig 7. 7: C1
In this part, all the problems that were encountered during the evaluation chapter will be discussed, as well as how they were solved:
● The packet timestamps changed on every run, so to fix this problem, 10 tests were made for each test case and an average was taken for each.
● During evaluation, when running the client against the server, there was a problem that prevented the client from running. To fix this, it took a while to find out that the server root was in "tools/apps/src/bin/" and that all data was served from the server root.
● While testing and running the client against the server, the total number of packets varied: it was usually 18 and occasionally 19. To fix this problem, only runs with a total of 18 packets were taken.
● When recording packets in Wireshark initially, the QUIC filter was not available; to fix this, Wireshark had to be updated.
● Initially, the server was not running and the error "Not Found!" was displayed. After some searching, a fix was found at "https://github.com/cloudflare/quiche/issues/444". The fix was to put some data in the server root and to provide the "--root" option in the server launch command.
7.4.1 Strengths
● This system uses Certificate Compression that decreases the
timestamp of the packet.
● This system uses the Key Update Mechanism that decreases the
timestamp of the packet.
● This system uses the Speed up Handshake Mechanism that decreases
the timestamp of the packet.
● Generally, one of the best compression algorithms was used for
certificate compression.
● The system can use several options when running a server and when
running a client.
● Since quiche uses TLS, a lot of the security properties of TLS will be
available to quiche.
7.4.2 Weaknesses
● Since quiche is still new, the client (when downloaded with Git) should regularly pull updates from GitHub.
● Cargo should be updated as well, since an outdated Cargo may report errors that are no longer accurate and therefore misleading.
● The code for the speed-up handshake is not well refactored, which can lead to a longer processing time.
● Some tasks may take a long time to code, since QUIC is new and there is little documentation.
There are a lot of implementations of the QUIC protocol that use the IETF version of QUIC. Based on the data obtained in this chapter, the modules implemented here could be exported to other implementations of QUIC.
performance for quiche. The Certificate Compression will compress the message by removing redundant/predictive data and keeping the cryptographic material. In quiche, the brotli compression algorithm is used.
8 CONCLUSION
In this last section, all the achievements, difficulties and future works will be discussed. The modules created in this project were originally chosen because TLS 1.3 and BoringSSL supported them but they were not implemented in quiche, and these modules would greatly help quiche achieve better results. During the development of this project, many tools, different technologies and ways of thinking were learned. My Rust skills have improved, but much more practice is needed to be a good Rust programmer and quiche contributor. QUIC is still a new protocol and there are a lot of tasks left to make it a better protocol than TCP. There are many interesting tasks for quiche that can still be implemented; these will be discussed in the future works sub-section.
8.1 Achievements
Now that the project is completed, even though the loss recovery failure work was not successful, it can be said that a lot has been done in quiche. The modules implemented in quiche have a big impact, mostly on the packet timestamps, and this is a great achievement for quiche. Even though the security part of quiche was mostly left untouched, a clear improvement in the speed of the packets has been achieved. Another achievement is the ability to understand RFCs, to figure out what they mean and how they help in implementing the technology. Building quiche was fun, and running a client against the local server was a good experience for testing all the different things that are in quiche. Working on quiche also helped in learning a lot more about the structure of QUIC, how it differs from the older TCP protocol, and in trying new things with QUIC. Completing the main objective of this project also helped in learning more about Rust and QUIC, which seem to be the future; more of this is discussed in the Future works subsection.
8.2 Difficulties
The implementation part took the longest, since quiche and Cargo/Rust were new technologies, and learning Rust and applying it to the code took some time. For each module implemented, a lot of time was dedicated to reading and understanding the relevant RFCs so that the code would work, which was time-consuming. Moreover, even after building quiche, it took time to understand how to run the client against the local server. The local server itself was challenging to get running, as an error was constantly displayed, and this delayed the implementation. This is the reason why the last module to be implemented was delayed and could not be completed in time. Since Cargo was not updated, some of the errors displayed were outdated or simply wrong, and searching for these outdated errors produced nothing, which was unproductive. Updating Cargo eventually solved this, but it was still time-consuming.
● Code a quiche congestion control based on ECN, which will detect and control congestion.
● Implement the BBR congestion control algorithm.
Another piece of work that can be done is to test quiche against spinquic (a QUIC stress tester built by Microsoft). This can also help in getting familiar with quiche.
LIST OF REFERENCES
Agilites (2017). Pros and cons of using c# as your backend programming
language. [online] Agilites.com. Available at: https://agilites.com/pros-and-
cons-of-using-c-as-your-backend-programming-language.html.
Alvise De Biasio, Chiariotti, F., Polese, M., Zanella, A. and Zorzi, M. (2019). A QUIC implementation for ns-3. ACM, New York, NY, USA.
Anderson, W. (2015). What are the drawbacks of using visual studio code? -
quora. [online] www.quora.com. Available at: https://www.quora.com/What-
are-the-drawbacks-of-using-Visual-Studio-Code [Accessed Mar. 2021].
Arisu, S., Yildiz, E. and Begen, A.C. (2020). Game of protocols: Is QUIC ready
for prime time streaming? International Journal of Network Management,
[online] 30. Available at:
https://ideas.repec.org/a/wly/intnem/v30y2020i3ne2063.html [Accessed Dec.
2020].
Cao, X., Zhao, S. and Zhang, Y. (2019). 0-RTT attack and defense of QUIC
protocol. IEEE Xplore, [online] pp.1–6. Available at:
https://ieeexplore.ieee.org/document/9024637 [Accessed Nov. 2020].
Cary, I. (2020). Python: Pros and cons. [online] scholarlyoa.com. Available at:
https://scholarlyoa.com/python-pros-and-cons/ [Accessed Mar. 2021].
Cook, S., Mathieu, B., Truong, P. and Hamchaoui, I. (2017). QUIC: Better for
what and for whom? IEEE Xplore, [online] pp.1–6. Available at:
https://ieeexplore.ieee.org/abstract/document/7997281 [Accessed Dec. 2020].
Corbel, R., Tuffin, S., Gravey, A., Marjou, X. and Braud, A. (2019). Assessing
the impact of QUIC on network fairness. Journal of Communications, 14,
pp.908–914.
Educba (2019). What is visual studio code? | features and advantages | scope
& career. [online] EDUCBA. Available at: https://www.educba.com/what-is-
visual-studio-code/.
Holý, J. (2011). Comparison of eclipse 3.6 and IntelliJ IDEA 10.5: Pros and
cons - DZone java. [online] dzone.com. Available at:
https://dzone.com/articles/comparison-eclipse-36-and [Accessed Mar. 2021].
K, K.P., Rege, A., Goel, A. and Kulkarni, M. (2018). QUIC protocol
performance in wireless networks. IEEE Xplore, [online] pp.0472–0476.
Available at: https://ieeexplore.ieee.org/document/8524247 [Accessed Nov.
2020].
Langley, A., Riddoch, A., Wilk, A., Vicente, A., Krasic, C., Zhang, D., Yang, F.,
Kouranov, F., Swett, I., Iyengar, J., Bailey, J., Dorfman, J., Roskind, J., Kulik,
J., Westin, P., Tenneti, R., Shade, R., Hamilton, R., Vasiliev, V. and Chang,
W.-T. (2017). The QUIC transport protocol: Design and internet-scale
deployment. Proceedings of the Conference of the ACM Special Interest
Group on Data Communication.
Lazar, A. (2018). The battle of the IDEs. [online] JAXenter. Available at:
https://jaxenter.com/battle-ides-150122.html.
Megyesi, P., Krämer, Z. and Molnár, S. (2016). How quick is QUIC? IEEE
Xplore, [online] pp.1–6. Available at:
https://ieeexplore.ieee.org/document/7510788 [Accessed Oct. 2020].
Nepomuceno, K., Oliveira, Igor Nogueira de, Aschoff, R.R., Bezerra, D., Ito,
M.S., Melo, W., Sadok, D. and Szabó, G. (2018). QUIC and TCP: A
performance evaluation. IEEE Xplore, [online] pp.00045–00051. Available at:
https://ieeexplore.ieee.org/document/8538687 [Accessed Oct. 2020].
Piraux, M., De Coninck, Quentin and Bonaventure, O. (2018). Observing the
evolution of QUIC implementations. Proceedings of the Workshop on the
Evolution, Performance, and Interoperability of QUIC.
Rescorla, E. (2018). The transport layer security (TLS) protocol version 1.3.
[online] tools.ietf.org. Available at: https://tools.ietf.org/id/draft-ietf-tls-tls13-
23.html [Accessed May 2021].
Rüth, J., Poese, I., Dietzel, C. and Hohlfeld, O. (2018). A first look at QUIC in
the wild. Passive and Active Measurement, [online] pp.255–268. Available at:
https://link.springer.com/chapter/10.1007%2F978-3-319-76481-8_19.
Rüth, J., Wolsing, K., Wehrle, K. and Hohlfeld, O. (2019). Perceiving QUIC.
Proceedings of the 15th International Conference on Emerging Networking
Experiments And Technologies. [online] Available at:
https://www.researchgate.net/publication/337779239 [Accessed Oct. 2020].
Wang, P., Bianco, C., Riihijärvi, J. and Petrova, M. (2018). Implementation and
performance evaluation of the QUIC protocol in linux kernel. Proceedings of
the 21st ACM International Conference on Modeling, Analysis and Simulation
of Wireless and Mobile Systems.
UNIVERSITY OF MAURITIUS
FACULTY/ CENTRE OIDT
PROGRESS LOG
Student Name : Emmanuel Jeremie Daniel
Student ID : 1812415
Department : ICT
● Its purpose is to help you to plan your own dissertation and to record the
outcomes.
● As well as gaining valuable skills, you will find that the information
accumulated in this Log will prove helpful during the write up of the
dissertation.
You should sign the appropriate statement below when you submit your
Progress Log:
I confirm that the information I have given in this Log is a true and accurate
record:
PROGRESS LOG
-do 14 research papers then observe if the table of criteria is applied.
-start downloading the pre-requisite for quiche/ns3 and start playing with the code to understand how it works.
-write each definition for each criterion.
4    18/11/20    E.J.D
-Format of a research paper is good.
-add a transport layer in the table of criteria.
-implementation (start adding features to quiche).
5    09/12/20    E.J.D
-Do all the research papers left.
-the scope changed so that quiche will be faster.
-Add more elements to the table of criteria
6    13/01/21    E.J.D
-All research papers are done.
-Do implementation.
-Start doing Design and Analysis.
7    18/03/21    E.J.D
-Write the definitions for each chapter.
-write more for the problem statement.
-propose 4 more criteria and write the definition for each.
-re-write FR and NFR.
-Expand methodology.
-write definitions for tools.
8    21/04/21    E.J.D
-Talking about working on Design.
-Reviewed the architectural design provided.
-Start working on all implementation module.
9    15/06/21    E.J.D
-Talking about the guideline and how to submit.
-Remove all “we” and rephrase the sentences.
-Changed an NF.
-Provide a graph for the results in the evaluation.
-All tables and figures have to be referenced.
-Do an average of 10 for every module implemented.
-Talk about the NFR in evaluation.
-Explain the criteria provided in the evaluation.
-Talking about the domain recommendation.
N.B: Both the supervisor(s) and the student should retain a copy of this Project Progress
Log.
A copy of the duly filled and signed Progress Log should be included and
submitted in the section ‘Appendices’ of the Dissertation.