Web Protocol Future
1 INTRODUCTION
The objective of this project is to explore evolutions of the TCP/IP protocol suite toward better
support of data byte-streams. This paper is organized as follows. Section 2 describes the
background of SPDY, HTTP/2, and QUIC, compares them, and explains how they reduce page
load latency by making more efficient use of TCP. Section 3 describes two of the major
proposals to extend TCP to support multiple paths, SCTP and MPTCP, with a full description of
each proposal, a point-by-point comparison covering congestion control, mobility, and
multihoming, and a discussion of how HTTP/2 can benefit from multipath TCP.
HTTP pipelining. It helps reduce SSL (Secure Sockets Layer) overhead, avoids
network congestion, and improves server efficiency. Streams can be created on either the
server or the client side, can concurrently send data interleaved with other streams, and
are identified by a stream ID, a 31-bit integer value that is odd if the stream is initiated
by the client and even if initiated by the server [1].
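The stream ID rule above can be illustrated with a minimal Python sketch (not part of any SPDY implementation; the function name is our own) that derives the initiator from the ID's parity:

```python
# Minimal sketch: infer which endpoint initiated a SPDY stream from the
# parity of its 31-bit stream ID, as described in the text.

MAX_STREAM_ID = 2**31 - 1  # stream IDs are 31-bit integers

def stream_initiator(stream_id: int) -> str:
    """Return 'client' for odd IDs and 'server' for even IDs."""
    if not 1 <= stream_id <= MAX_STREAM_ID:
        raise ValueError("stream ID must be a 31-bit positive integer")
    return "client" if stream_id % 2 == 1 else "server"

print(stream_initiator(1))  # client
print(stream_initiator(2))  # server
```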
● Request prioritization The client is allowed to specify a priority level for each object,
and the server then schedules the transfer of the objects accordingly. This helps avoid
the situation where the network channel is congested with non-critical resources while
high-priority requests, such as JavaScript code or style sheets, are delayed.
● Server push mechanism is also included in SPDY, so servers can send data before an
explicit request from the client. Without this feature, the client must first download the
primary document and only then request the secondary resources. Server push is
designed to improve latency when loading embedded objects, but it can also reduce the
efficiency of caching when the objects are already cached on the client's side;
the optimization of this mechanism is therefore still in progress.
● HTTP header compression SPDY compresses request and response HTTP headers,
resulting in fewer packets and fewer bytes transmitted.
● Furthermore, SPDY provides an advanced feature, server-initiated streams.
Server-initiated streams can be used to deliver content to the client without the client
needing to ask for it. This option is configurable by the web developer in two ways:
2.2 HTTP/2
HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented
in a formal, openly available specification, and it maintains compatibility with SPDY and the
current version of HTTP. The figures below give a brief overview of the protocol.
Figure 2.2: Binary Framing Layer
Figure 2.3: Streams, Messages, and Frames
Stream prioritization
Because an HTTP message can be split into many individual frames, and frames from
multiple streams can be multiplexed, the order in which the frames are interleaved and delivered
by both the client and the server becomes a critical performance consideration. To facilitate this,
the HTTP/2 standard allows each stream to have an associated weight and dependency:
● Each stream may be assigned an integer weight between 1 and 256.
● Each stream may be given an explicit dependency on another stream.
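The weight rule above can be illustrated with a small sketch. Assuming only sibling streams at one level of the dependency tree (the full tree logic is omitted), available capacity is split proportionally to the 1-256 weights; the function below is a hypothetical helper, not part of any HTTP/2 library:

```python
# Proportional bandwidth split among sibling HTTP/2 streams by weight.
# Dependencies would form a tree; this only shows the per-level split.

def split_bandwidth(weights: dict[str, int], capacity: float) -> dict[str, float]:
    """Give each stream capacity * (its weight / sum of sibling weights)."""
    total = sum(weights.values())
    return {stream: capacity * w / total for stream, w in weights.items()}

# Streams A (weight 12) and B (weight 4) share one connection's capacity:
shares = split_bandwidth({"A": 12, "B": 4}, capacity=16.0)
print(shares)  # {'A': 12.0, 'B': 4.0}
```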
Server push
Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses
for a single client request. That is, in addition to the response to the original request, the server
can push additional resources to the client (Figure 2.5), without the client having to request each
one explicitly.
Header Compression
Each HTTP transfer carries a set of headers that describe the transferred resource and its
properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from
500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are
being used. To reduce this overhead and improve performance, HTTP/2 compresses request and
response header metadata (see Figure 2.6) using the HPACK compression format that uses two
simple but powerful techniques:
● It allows the transmitted header fields to be encoded via a static Huffman code, which
reduces their individual transfer size.
● It requires that both the client and server maintain and update an indexed list of
previously seen header fields, which is then used as a reference to efficiently encode
previously transmitted values.
Huffman coding allows the individual values to be compressed when transferred, and the
indexed list of previously transferred values allows us to encode duplicate values by transferring
index values that can be used to efficiently look up and reconstruct the full header keys and
values.
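The second HPACK technique, the shared indexed list of previously seen header fields, can be sketched as follows. This is a deliberately simplified model, not real HPACK: the static table, eviction, and Huffman coding are omitted, and the class name is our own:

```python
# Simplified model of HPACK's dynamic indexing: both endpoints keep a table
# of previously seen (name, value) header fields, so a repeated field can be
# sent as a small index instead of the full text.

class HeaderTable:
    def __init__(self):
        self.table = []   # list of (name, value) entries, in insertion order
        self.index = {}   # (name, value) -> position in the table

    def encode(self, field):
        if field in self.index:
            return ("indexed", self.index[field])  # cheap back-reference
        self.index[field] = len(self.table)
        self.table.append(field)
        return ("literal", field)                  # full text, added to table

enc = HeaderTable()
print(enc.encode((":method", "GET")))  # ('literal', (':method', 'GET'))
print(enc.encode((":method", "GET")))  # ('indexed', 0)
```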
Although HTTP/2 is built on SPDY, it introduces some important new changes [3].
Table 1: Comparison of SPDY with HTTP/2
SPDY:
● SSL Required. In order to use the protocol and get the speed benefits, connections must
be encrypted.
● Fast Encrypted Connections. Does not use the ALPN (Application-Layer Protocol
Negotiation) extension that HTTP/2 uses.
● Compression. SPDY leaves a small space for vulnerabilities in its current compression
method (DEFLATE).

HTTP/2:
● SSL Not Required. However, even though the IETF does not require SSL for HTTP/2 to
work, many popular browsers do require it.
● Faster Encrypted Connections. The new ALPN extension lets browsers and servers
determine which application protocol to use during the initial connection instead of after.
● Faster, More Secure Compression. HTTP/2 introduces HPACK, a compression format
designed specifically for shortening headers and preventing vulnerabilities.
2.3 QUIC
QUIC stands for Quick UDP Internet Connections. It is an experimental web protocol from
Google that extends the research evident in SPDY and HTTP/2. QUIC is premised on
the belief that SPDY’s performance problems are mainly TCP problems and that it is infeasible
to update TCP due to its pervasive nature. QUIC sidesteps those problems by operating over
UDP instead. Although QUIC works on UDP ports 80 and 443, it has not encountered firewall
problems. QUIC is a multiplexing protocol for exchanging requests and responses over the
Internet with lower latency and faster recovery from errors than HTTP/2 over TLS/TCP. QUIC
contains some features not present in SPDY, such as roaming between different types of
networks.
QUIC provides connection establishment with zero round-trip-time overhead. It also promises to
remove head-of-line blocking on multiplexed streams. In SPDY/HTTP/2, if a packet is lost in
one stream, the whole set of streams is delayed due to the underlying TCP behavior; no stream
on the TCP connection can progress until the lost packet is retransmitted. In QUIC, if a single
packet is lost, only one stream is affected [4].
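The difference described above can be shown with a toy model. Assuming four packets carrying two streams and a single loss, TCP-like in-order delivery stalls every stream, while QUIC-like per-stream delivery holds up only the affected stream:

```python
# Toy illustration of head-of-line blocking: (stream, sequence) packets
# multiplexed on one connection, with one packet lost in transit.

packets = [("s1", 1), ("s2", 1), ("s1", 2), ("s2", 2)]
lost = {("s1", 2)}  # one packet of stream s1 is lost

# TCP-like: delivery stops at the first missing packet in connection order.
tcp_delivered = []
for pkt in packets:
    if pkt in lost:
        break
    tcp_delivered.append(pkt)

# QUIC-like: only the stream that lost a packet is held up.
quic_delivered = [p for p in packets if p not in lost]

print(tcp_delivered)   # [('s1', 1), ('s2', 1)] -- s2 stalls as well
print(quic_delivered)  # [('s1', 1), ('s2', 1), ('s2', 2)] -- s2 unaffected
```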
QUIC vs. HTTP/2:

QUIC:
● Promises to solve head-of-line blocking at the transport layer (caused by TCP
behaviour).
● Best case: on repeat connections, the client can send data immediately (zero round
trips).
● HTTP/2 or SPDY can layer on top of QUIC; all features of SPDY are supported in
QUIC.
● Packet-level Forward Error Correction.
● Security is TLS-like but with a more efficient handshake.
● TCP Cubic-based congestion control.

HTTP/2:
● Promises to solve head-of-line blocking at the application layer (caused by HTTP 1.1
pipelining).
● Best case: 1 to 3 round trips for TCP connection establishment and/or TLS connection.
● HTTP/2 or SPDY can layer on top of QUIC or TCP.
● TCP selective reject ARQ used for error correction.
● Security provided by underlying TLS.
● Congestion control provided by underlying TCP.
● Reducing latency with multiplexing: In SPDY/HTTP/2, multiple asset requests can reuse
a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the
request and response binary frames in SPDY/HTTP/2 are interleaved, and head-of-line
blocking does not happen [6]. The cost of establishing a connection (the three-way
handshake) is paid only once per host, and each connection establishment takes 1 RTT.
Besides that, multiplexing is especially beneficial for secure connections because of the
performance cost involved in multiple TLS negotiations.
● With a single TCP connection, the congestion window is reduced more aggressively on
loss than with parallel connections [6].
● Header compression reduces the bandwidth used and eliminates unnecessary headers.
● It allows servers to push responses proactively into client caches instead of waiting for a
new request for each resource. Server push potentially allows the server to avoid a
round trip of delay by pushing the responses it thinks the client will need into its cache
[7].
● QUIC uses UDP as a transport protocol, which removes the round-trip time of TCP’s
three-way handshake and of the TLS authentication and key exchange. Figure 2.7
shows the connection-establishment flow of each protocol, and Table 3 shows a
comparison of connection RTTs (Round Trip Times) in the TCP, TLS, and QUIC
protocols; QUIC reduces the RTT overhead to 0.
Figure 2.7: Connection Round Trip Times in TCP, TLS and QUIC protocols
Table 3: Connection Round Trip Times in TCP, TLS and QUIC protocols
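Since the table itself is an image, the RTT counts it compares can be sketched as rough arithmetic. The numbers below are assumptions consistent with the text (TLS 1.2-era handshakes; exact counts depend on TLS version and session state):

```python
# Back-of-the-envelope count of round trips before application data can flow.
# Assumed values, matching the comparison in Table 3.

def setup_rtts(protocol: str, repeat: bool = False) -> int:
    if protocol == "tcp":
        return 1                          # three-way handshake
    if protocol == "tcp+tls":
        return 1 + (1 if repeat else 2)   # handshake + TLS (abbreviated on repeat)
    if protocol == "quic":
        return 0 if repeat else 1         # 0-RTT on repeat connections
    raise ValueError(protocol)

for p in ("tcp", "tcp+tls", "quic"):
    print(p, setup_rtts(p, repeat=True))  # tcp 1 / tcp+tls 2 / quic 0
```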
● Additionally, UDP decreases the bandwidth used by having a shorter header than
TCP. Another benefit of using UDP is that multiplexing streams avoids head-of-line
blocking: each stream frame can be immediately dispatched to its stream on arrival, so
streams without loss can continue to be reassembled and make forward progress in the
application.
● QUIC introduces Forward Error Correction, which is used to reconstruct lost packets
instead of requesting them again. This requires sending redundant data (see Figure 2.8).
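The XOR scheme QUIC experimented with for FEC can be sketched in a few lines: one redundant packet carries the XOR of a group of equal-length payloads, so any single lost packet in the group can be rebuilt without retransmission. This is an illustration of the idea, not QUIC's wire format:

```python
# XOR-based forward error correction over a group of equal-length payloads.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

group = [b"pkt1", b"pkt2", b"pkt3"]   # equal-length payloads
fec = reduce(xor_bytes, group)        # redundant FEC packet for the group

# Suppose the packet at index 1 is lost; XOR the survivors with the FEC
# packet to rebuild it, with no retransmission round trip.
recovered = reduce(xor_bytes, [group[0], group[2], fec])
print(recovered)  # b'pkt2'
```

Note the trade-off the text mentions: the FEC packet is pure redundancy on the wire, and it can repair at most one loss per group.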
3 MPTCP, SCTP
3.1 Multipath TCP (MPTCP)
MPTCP is currently an experimental protocol defined in RFC 6824. Its stated goal is to exist
alongside TCP and to “do no harm” to existing TCP connections, while providing the extensions
necessary so that additional paths can be discovered and utilized. Multipath TCP starts and
maintains additional TCP connections and runs them as subflows underneath the main TCP
connection; see Figure 3.1 for a visualization.
Figure 3.1: Comparison of Standard TCP and MPTCP Protocol Stacks
The IP addresses for these additional subflows are discovered in one of two ways: implicitly,
when a host with a free port connects to a known port on the other host, or explicitly, using an
in-band message. Each subflow is treated as an individual TCP connection with its own set of
congestion control variables. Subflows can also be designated as backup subflows, which do not
immediately transfer data but activate when primary flows fail [9].
Research has shown that applying standard TCP congestion control (RFC 5681) to each subflow
does not result in fairness with standard TCP connections if two flows from an MPTCP
connection go through the same bottlenecked link. As such, there is a great deal of ongoing
research into alternative congestion control schemes designed specifically for multipath
protocols [10].
This is crucial in protecting systems against SYN Flooding attacks which are a persistent
problem in TCP based communications.
B. Congestion Control
On a subflow-by-subflow basis, MPTCP and SCTP both act identically or similarly to TCP,
using slow-start algorithms and congestion windows for end-to-end flow control on a path.
Additionally, MPTCP and CMT-SCTP both couple all subflow congestion windows together
under a global congestion window. Load-balancing decisions about which subflow to use based
on these parameters are a constant subject of research and are not trivial.
However, MPTCP can have significantly more flows to manage, as MPTCP allows for fully
meshed connections, unlike even CMT-SCTP. See Figure 3.3 for an example of a fully
meshed connection in MPTCP as opposed to the parallel connections in SCTP.
In this picture, each host has two ports, but the protocols set up connections between the
ports in different ways. In SCTP, these connection pairs may be explicitly defined, while in
MPTCP it is up to the protocol to detect and use the correct one. As such, choosing efficient port
pairs ahead of time is crucial to the operation of SCTP, and unfortunately this is neither trivial
nor done automatically in most implementations. On the plus side, SCTP’s connection scheme
means that it does not suffer from the unfairness problem mentioned in the background section
on MPTCP. As currently defined, SCTP is not designed for concurrent multipath transfer the
same way that MPTCP is. Instead, SCTP uses only one path at a time and switches to another
path only after the current path fails. There has been a fair amount of academic work on an
SCTP extension to provide concurrent multipath transmission (CMT-SCTP).
Finding a suitable congestion control mechanism able to handle multiple paths is nontrivial [9].
Simply adopting the mechanisms used for the single-path protocols in a straightforward manner
neither guarantees an appropriate throughput [9] nor achieves a fair resource allocation when
dealing with multipath transfer [12]. To solve the fairness issue, Resource Pooling has been
adopted for both MPTCP and CMT-SCTP. In the context of Resource Pooling, multiple
resources (in this case paths) are considered to be a single, pooled resource, and the congestion
control focuses on the complete network instead of only a single path. As a result, the complete
multipath connection (i.e., all paths) is throttled even though congestion occurs only on one
path. This avoids the bottleneck problem described earlier and shifts traffic from more congested
to less congested paths. Releasing resources on a congested path decreases the loss rate and
improves the stability of the whole network. Three design goals have been set for Resource
Pooling based multipath congestion control for a TCP-friendly Internet deployment. These rules
are:
● Improve throughput: A multipath flow should perform at least as well as a singlepath
flow on the best path.
● Do not harm: A multipath flow should not take more capacity on any one of its paths
than a singlepath flow using only that path.
● Balance congestion: A multipath flow should move as much traffic as possible off its
most congested paths.
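These goals can be illustrated with a simplified, hypothetical window-increase rule in the spirit of MPTCP's coupled congestion control: the per-ACK increase on a subflow is capped by what a single TCP flow would add ("do not harm"), while the pooled term keeps the whole connection no more aggressive than one flow and steers traffic away from congested paths. The real algorithm (RFC 6356) also specifies how the aggressiveness factor alpha is computed; here it is taken as given:

```python
# Simplified coupled (Resource Pooling) congestion-window increase.
# cwnds are per-path congestion windows in segments; alpha is the per-flow
# aggressiveness factor, assumed given (RFC 6356 defines its derivation).

def coupled_increase(cwnds: list[float], path: int, alpha: float) -> float:
    total = sum(cwnds)
    coupled = alpha / total          # shared, pooled increase across all paths
    uncoupled = 1.0 / cwnds[path]    # what plain TCP would add on this path
    return min(coupled, uncoupled)   # "do not harm" cap per path

# A small window on a congested path still grows no faster than the pool allows:
cwnds = [10.0, 40.0]
print(coupled_increase(cwnds, path=0, alpha=1.0))  # 0.02 -- pooled term wins
```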
The congestion control proposed for MPTCP was designed with these goals in mind. The
congestion control of the original CMT-SCTP proposal did not use Resource Pooling, but an
algorithm has already been proposed for CMT-SCTP that uses Resource Pooling and fulfills the
requirements. This algorithm behaves slightly differently from the MPTCP congestion control,
and therefore the MPTCP congestion control was also adapted to SCTP; it will be called
“MPTCP-like” in the following. While both mechanisms are still candidates for CMT-SCTP in
the IETF discussion, we will only use the MPTCP-like algorithm in this paper to get an unbiased
comparison with MPTCP. The MPTCP and MPTCP-like congestion controls treat each path as a
self-contained congestion area and reduce just the congestion window of the path experiencing
congestion. In order to avoid an unfair overall bandwidth allocation, the congestion window
growth behavior is adapted: a per-flow aggressiveness factor is used to bring the window’s
increase and decrease into equilibrium.
The MPTCP congestion control is based on counting bytes, as TCP and MPTCP are
byte-oriented protocols. SCTP, however, is a message-oriented protocol, and its congestion
control is based on counting bytes that are limited in size by the Maximum Transmission Unit
(MTU). The limit for the calculation is defined as the Maximum Segment Size (MSS) for TCP
and SCTP. It is, e.g., 1,460 bytes for TCP or 1,452 bytes for SCTP using IPv4 over an Ethernet
interface with a typical MTU of 1,500 bytes.
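The MSS figures above follow from simple header arithmetic (assuming IPv4 over Ethernet with no TCP options; SCTP's per-packet overhead is its 12-byte common header plus a 16-byte DATA chunk header):

```python
# Reconstructing the MSS values quoted in the text from a 1,500-byte MTU.

MTU = 1500
mss_tcp = MTU - 20 - 20        # minus IPv4 header (20) and TCP header (20)
mss_sctp = MTU - 20 - 12 - 16  # minus IPv4 (20), SCTP common header (12),
                               # and DATA chunk header (16)
print(mss_tcp, mss_sctp)  # 1460 1452
```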
C. Path Management
using direct as well as cross paths: 𝑃𝐴1−𝐵1, 𝑃𝐴1−𝐵2, 𝑃𝐴2−𝐵1 and 𝑃𝐴2−𝐵2.
An example use case: suppose you are downloading a file with HTTP/2 multistreaming
over your WiFi connection. Even if you walk out of WiFi range, the file transfer should not
be affected, because the connection should automatically stop sending data over WiFi
and continue using only the cellular network.
4 CONCLUSIONS AND RELATED WORK
This report described QUIC, SPDY, and HTTP/2 and compared these protocols. HTTP/2 is the
next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented in a formal,
openly available specification, and it maintains compatibility with SPDY and the current version
of HTTP. Although HTTP/2 is built on SPDY, it introduces some important changes; the main
difference between HTTP/2 and SPDY comes from their header compression algorithms:
HTTP/2 uses the HPACK algorithm for header compression, whereas SPDY uses DEFLATE.
QUIC is a very recent protocol developed by Google in 2013 for efficient transfer of web pages.
QUIC aims to improve performance compared to SPDY and HTTP by multiplexing web objects
in one stream over UDP instead of traditional TCP.
Additionally, this report presented two of the major proposals to change TCP to support
multipath, SCTP and MPTCP, and compared them on path management, connection
establishment, and congestion control, as well as how HTTP/2 benefits from these proposals.
Multipath TCP allows existing TCP applications to achieve better performance and robustness
over today’s networks, and it has been standardized at the IETF. Multipath is now very
important: mobile devices have multiple wireless interfaces, data centers have many redundant
paths between servers, and multihoming has become the norm for big server farms. TCP is
essentially a single-path protocol: when a TCP connection is established, it is bound to the IP
addresses of the two endpoints, and if one of these addresses changes the connection fails. In
fact, a TCP connection cannot even be load-balanced across more than one path within the
network, because this results in packet reordering, and TCP misinterprets this reordering as
congestion and slows down. For example, if a smartphone’s WiFi loses signal, the TCP
connections associated with it stall; there is no way to migrate them to other working interfaces,
such as 3G. This makes mobility a frustrating experience for users. Modern data centers are
another example: many paths are available between two endpoints, and multipath routing
randomly picks one for a particular TCP connection.
We survey related work on two topics: (i) Multipath QUIC and (ii) Optimized Cooperation of
HTTP/2 and Multipath TCP.
i) Multipath QUIC is an extension to the QUIC protocol that enables hosts to exchange data
over multiple networks within a single connection. End hosts are now equipped with several
network interfaces, and users expect to be able to seamlessly switch from one to another, or to
use them simultaneously to aggregate bandwidth. Multipath QUIC also enables QUIC flows to
cope with events such as NAT rebinding or IP address changes.
ii) Optimized Cooperation of HTTP/2 and Multipath TCP: HTTP/2 is the next evolution of
HTTP, and Multipath TCP allows existing TCP applications to achieve better performance and
robustness. Optimizing HTTP/2 to run over MPTCP has the potential to make applications
faster, simpler, and more robust.
5 REFERENCES
1. SPDY Protocol - Draft 3. Accessed May 16, 2018.
http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3
2. Ilya Grigorik, Surma, Introduction to HTTP/2. Accessed May 16, 2018.
https://developers.google.com/web/fundamentals/performance/http2/
3. Justin Dorfman, Shifting from SPDY to HTTP/2. Accessed May 16, 2018.
https://blog.stackpath.com/spdy-to-http2
4. QUIC Protocol Official Website. https://www.chromium.org/quic
5. QUIC Crypto. Accessed May 16, 2018.
https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit
6. Xiao Sophia Wang, Aruna Balasubramanian, et al., How Speedy is SPDY?, USENIX, 2014.
7. HTTP/2 Frequently Asked Questions. Accessed May 16, 2018. https://http2.github.io/faq/
8. Ford et al., TCP Extensions for Multipath Operation with Multiple Addresses, RFC 6824,
January 2013. Accessed May 16, 2018. http://tools.ietf.org/html/rfc6824
9. Ford et al., Architectural Guidelines for Multipath TCP Development, RFC 6182,
March 2011. Accessed May 16, 2018. http://tools.ietf.org/html/rfc6182
10. Singh et al., Enhancing Fairness and Congestion Control in Multipath TCP, 6th Joint
IFIP Wireless and Mobile Networking Conference, 2013.
11. Iyengar, J. R., et al., Concurrent Multipath Transfer Using SCTP Multihoming, SPECTS,
2004.
12. Stewart et al., Stream Control Transmission Protocol, RFC 4960, September 2007.
Accessed May 16, 2018. http://tools.ietf.org/html/rfc4960
13. A. Ford, C. Raiciu, M. Handley, S. Barré, and J. R. Iyengar, Architectural Guidelines for
Multipath TCP Development, IETF, Informational RFC 6182, March 2011, ISSN 2070-1721.
14. R. R. Stewart, Stream Control Transmission Protocol, IETF, Standards Track RFC 4960,
September 2007, ISSN 2070-1721.
15. Martin Becke, Fu Fa, et al., Comparison of Multipath TCP and CMT-SCTP based on
Intercontinental Measurements, IEEE, 2014, ISSN 1930-529X.
16. Maximilian Weller, Optimized Cooperation of HTTP/2 and Multipath TCP, May 1, 2017.
17. Slashroot, How does MULTIPATH in TCP work. Accessed May 17, 2018.
https://www.slashroot.in/what-tcp-multipath-and-how-does-multipath-tcp-work