
Web Protocol Future

1 INTRODUCTION
The objective of this project is to explore various evolutions of the TCP/IP protocol suite
toward better support of data byte-streams. This paper is organized as follows. Section 2
describes the background of SPDY, HTTP/2 and QUIC, compares them, and explains how
SPDY, HTTP/2 and QUIC reduce page load latency by making more efficient use of
TCP. Section 3 describes two of the major proposals to extend TCP to support multiple paths,
SCTP and MPTCP, with a full description of each proposal, a point-by-point comparison covering
congestion control, mobility and multihoming, and a discussion of how HTTP/2 can benefit from multipath TCP.

Keywords: TCP/IP, HTTP/2, SPDY, QUIC, SCTP, MPTCP, Web, Browser

2 SPDY, HTTP/2, QUIC


2.1 SPDY
The SPDY protocol is designed to fix well-known performance issues of HTTP/1.x [1]. The protocol operates
in the application layer on top of TCP. The framing layer of SPDY is optimized for HTTP-like
request/response streams, enabling web applications that run on HTTP to run on SPDY with little
or no modification. The key improvements offered by SPDY are described below.

Figure 2.1: Streams in HTTP, SPDY


●​ Multiplexed streams over a single TCP connection to a domain, as shown in Figure 2.1.
There is no limit to the number of requests that can be handled concurrently within the same SPDY
connection (called a SPDY session). These requests create streams in the session, which are
bidirectional flows of data. This multiplexing is a much more fine-grained solution than
HTTP pipelining. It reduces SSL (Secure Sockets Layer) overhead, avoids network
congestion, and improves server efficiency. Streams can be created on either the
server or the client side, can concurrently send data interleaved with other streams, and
are identified by a stream ID, a 31-bit integer value: odd if the stream is initiated
by the client, even if initiated by the server [1].
●​ Request prioritization: the client is allowed to specify a priority level for each object,
and the server then schedules the transfer of the objects accordingly. This helps avoid
the situation where the network channel is congested with non-critical resources while
high-priority requests, such as JavaScript code or style sheets, are delayed.
●​ Server push mechanism is also included in SPDY, so servers can send data before an
explicit request from the client. Without this feature, the client must first download the
primary document, and only afterwards can it request the secondary resources. Server push is
designed to improve latency when loading embedded objects, but it can also reduce the
efficiency of caching when the objects are already cached on the client's side;
the optimization of this mechanism is therefore still in progress.
●​ HTTP header compression: SPDY compresses request and response HTTP headers,
resulting in fewer packets and fewer bytes transmitted.
●​ Furthermore, SPDY provides an advanced feature, server-initiated streams.
Server-initiated streams can be used to deliver content to the client without the client
needing to ask for it. This option is configurable by the web developer.

2.2 HTTP/2
HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented
in a formal, openly available specification, and it maintains compatibility with SPDY
and the current version of HTTP. A brief overview of the protocol follows.

Binary framing layer


At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which
dictates how the HTTP messages are encapsulated and transferred between the client and server.
The HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are
encoded while in transit is different. All HTTP/2 communication is split into smaller messages
and frames, each of which is encoded in binary format.

Figure 2.2: Binary Framing Layer

Streams, Messages, and Frames


Let us now look at how data is exchanged between the client and server with the new binary
framing mechanism. First, some HTTP/2 terminology:
●​ Stream: A bidirectional flow of bytes within an established connection, which may carry
one or more messages.
●​ Message: A complete sequence of frames that map to a logical request or response
message.
●​ Frame: The smallest unit of communication in HTTP/2, each containing a frame header,
which at a minimum identifies the stream to which the frame belongs.

Here is how these pieces fit together:


●​ All communication is performed over a single TCP connection that can carry any number
of bidirectional streams.
●​ Each stream has a unique identifier and optional priority information, and is used to carry
bidirectional messages.
●​ Each message is a logical HTTP message, such as a request, or response, which consists
of one or more frames.
●​ The frame is the smallest unit of communication that carries a specific type of data - e.g.,
HTTP headers, message payload, and so on. Frames from different streams may be
interleaved and then reassembled via the embedded stream identifier in the header of
each frame.
HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded
frames, which are then mapped to messages that belong to a particular stream, all of which are
multiplexed within a single TCP connection. This is the foundation that enables all other features
and performance optimizations provided by the HTTP/2 protocol.

Figure 2.3: Streams, Messages, and Frames

Request and response multiplexing


In HTTP/1.x, a client that wants to improve performance makes multiple parallel requests over
several TCP connections; however, this is the root cause of head-of-line blocking and inefficient
use of the underlying TCP connections. The new binary framing layer in HTTP/2 resolves this
problem by breaking an HTTP message into independent frames, interleaving them, and
reassembling them on the other end, eliminating the need for multiple connections to enable
parallel processing. As a result, applications become faster, simpler, and cheaper to deploy.

Figure 2.4: Request and response multiplexing

Stream prioritization
Once an HTTP message can be split into many individual frames, and we allow for frames from
multiple streams to be multiplexed, the order in which the frames are interleaved and delivered
both by the client and server becomes a critical performance consideration. To facilitate this, the
HTTP/2 standard allows each stream to have an associated weight and dependency:
●​ Each stream may be assigned an integer weight between 1 and 256.
●​ Each stream may be given an explicit dependency on another stream.

Server push
Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses
for a single client request. That is, in addition to the response to the original request, the server
can push additional resources to the client (Figure 2.5), without the client having to request each
one explicitly.

Figure 2.5: HTTP/2 Server Push

Header Compression
Each HTTP transfer carries a set of headers that describe the transferred resource and its
properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from
500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are
being used. To reduce this overhead and improve performance, HTTP/2 compresses request and
response header metadata (see Figure 2.6) using the HPACK compression format that uses two
simple but powerful techniques:
●​ It allows the transmitted header fields to be encoded via a static Huffman code, which
reduces their individual transfer size.
●​ It requires that both the client and server maintain and update an indexed list of
previously seen header fields, which is then used as a reference to efficiently encode
previously transmitted values.

Huffman coding allows the individual values to be compressed when transferred, and the
indexed list of previously transferred values allows us to encode duplicate values by transferring
index values that can be used to efficiently look up and reconstruct the full header keys and
values.
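The indexed-list idea can be illustrated with a toy Python sketch. This is a simplification of HPACK: real HPACK uses a static table, a bounded dynamic table, and Huffman coding, none of which are modeled here:

```python
# Toy illustration (not real HPACK): an indexed list of previously seen
# header fields lets repeated headers be sent as small integer references
# instead of full key/value strings.

class HeaderTable:
    def __init__(self):
        self.table = []

    def encode(self, headers):
        out = []
        for field in headers:
            if field in self.table:
                out.append(("indexed", self.table.index(field)))
            else:
                self.table.append(field)
                out.append(("literal", field))
        return out

enc = HeaderTable()
first = enc.encode([(":method", "GET"), ("user-agent", "demo/1.0")])
second = enc.encode([(":method", "GET"), ("user-agent", "demo/1.0")])
assert all(kind == "literal" for kind, _ in first)
assert all(kind == "indexed" for kind, _ in second)  # repeats cost only an index
```

Since both endpoints maintain the same table, the second request's headers shrink from hundreds of bytes to a few index values.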

Figure 2.6: HTTP/2 Header Compression

Although HTTP/2 is built on SPDY, it introduces some important new changes [3].

Table 1: Comparison of SPDY with HTTP/2

SPDY: SSL Required. In order to use the protocol and get the speed benefits, connections must be encrypted.
HTTP/2: SSL Not Required. However, even though the IETF doesn’t require SSL for HTTP/2 to work, many popular browsers do require it.

SPDY: Fast Encrypted Connections. Does not use the ALPN (Application-Layer Protocol Negotiation) extension that HTTP/2 uses.
HTTP/2: Faster Encrypted Connections. The new ALPN extension lets browsers and servers determine which application protocol to use during the initial connection instead of after.

SPDY: Single-Host Multiplexing. Multiplexing happens on one host at a time.
HTTP/2: Multi-Host Multiplexing. Multiplexing happens on different hosts at the same time.

SPDY: Compression. SPDY leaves a small space for vulnerabilities in its current compression method (DEFLATE).
HTTP/2: Faster, More Secure Compression. HTTP/2 introduces HPACK, a compression format designed specifically for shortening headers and preventing vulnerabilities.

SPDY: Prioritization. Prioritization is available with SPDY.
HTTP/2: Improved Prioritization. HTTP/2’s implementation is more flexible and friendlier to proxies, letting web browsers determine how and when to download a web page’s content more efficiently.

2.3 QUIC
QUIC stands for Quick UDP Internet Connections. It is an experimental web protocol from
Google that is an extension of the research evident in SPDY and HTTP/2. QUIC is premised on
the belief that SPDY performance problems are mainly TCP problems and that it is infeasible to
update TCP due to its pervasive nature. QUIC sidesteps those problems by operating over UDP
instead. Although QUIC works on UDP ports 80 and 443, it has not encountered any firewall
problems. QUIC is a multiplexing protocol for exchanging requests and responses over the
Internet with lower latency and faster recovery from errors than HTTP/2 over TLS/TCP. QUIC
contains some features not present in SPDY such as roaming between different types of
networks.

QUIC provides connection establishment with zero round-trip time overhead. It also promises to
remove head-of-line blocking on multiplexed streams. In SPDY/HTTP/2, if a packet is lost in
one stream, the whole set of streams is delayed due to the underlying TCP behavior; no stream
on the TCP connection can progress until the lost packet is retransmitted. In QUIC, if a single
packet is lost, only one stream is affected [4].

●​ Multiplexing, Prioritization and Dependency of Streams: QUIC multiplexes multiple
streams over a single pair of UDP endpoints. In practice a page is rarely served over a
single connection, since its resources are usually spread over several domains. QUIC uses
the same prioritization and dependency mechanisms as SPDY.
●​ Congestion control: UDP lacks congestion control, so in order to be TCP-fair QUIC has
a pluggable congestion control algorithm; the default is currently TCP CUBIC.
●​ Security: QUIC provides an ad-hoc encryption protocol named “QUIC Crypto” which is
compatible with TLS/SSL. The handshake process is more efficient than TLS:
handshakes in QUIC require zero round trips before sending payloads, whereas TLS on top
of TCP needs between one and three RTTs. QUIC aligns cryptographic block boundaries
with packet boundaries. The protocol has protection from IP spoofing, packet reordering
and replay attacks [5].
●​ Forward Error Correction (FEC): a Forward Error Correction mechanism inspired by
RAID-4 is available. If one packet in a group is lost, it can be recovered
from the FEC packet for the group.
●​ Connection Migration Feature: QUIC connections are identified by a randomly
generated 64-bit CID (Connection Identifier) rather than the traditional 5-tuple of
protocol, source address, source port, destination address and destination port. In TCP,
whenever a client changes any of these attributes, the connection is no longer valid. In
contrast, QUIC allows users to roam between different types of connections (for
example, changing from WiFi to 3G).

The following table shows the differences between QUIC and HTTP/2.


Table 2: Comparison of QUIC with HTTP/2

QUIC: Runs over UDP.
HTTP/2: Runs over TCP (ports 80, 443).

QUIC: Multiplexes multiple requests/responses over one UDP pseudo-connection per domain.
HTTP/2: Multiplexes multiple requests/responses over one TCP connection per domain.

QUIC: Promises to solve Head-of-Line Blocking at the Transport layer (caused by TCP behaviour).
HTTP/2: Promises to solve Head-of-Line Blocking at the Application layer (caused by HTTP 1.1 pipelining).

QUIC: Best case scenario: in repeat connections, the client can send data immediately (zero round trips).
HTTP/2: Best case scenario: 1 to 3 round trips for TCP connection establishment and/or TLS connection.

QUIC: Reduction in RTTs gained by features of the protocol such as multiplexing over one connection.
HTTP/2: Reduction in RTTs in comparison to HTTP 1.x gained by features such as multiplexing over one connection, and Server Push.

QUIC: HTTP/2 or SPDY can layer on top of QUIC; all features of SPDY are supported in QUIC.
HTTP/2: HTTP/2 or SPDY can layer on top of QUIC or TCP.

QUIC: Packet-level Forward Error Correction.
HTTP/2: TCP selective-reject ARQ used for error correction.

QUIC: Connection migration feature.
HTTP/2: N/A.

QUIC: Security is TLS-like but with a more efficient handshake.
HTTP/2: Security provided by underlying TLS.

QUIC: TCP Cubic-based congestion control.
HTTP/2: Congestion control provided by underlying TCP.

2.4 How SPDY/HTTP/2 Reduce Page Load Latency

●​ Reducing latency with multiplexing: in SPDY/HTTP/2, multiple asset requests can reuse
a single TCP connection. Unlike HTTP 1.1 requests that use the Keep-Alive header, the
request and response binary frames in SPDY/HTTP/2 are interleaved, so head-of-line
blocking does not happen at the application layer [6]. The cost of the three-way
connection-establishment handshake, which takes 1 RTT, is paid only once per host.
Besides that, multiplexing is especially beneficial for secure connections because of the
performance cost involved in multiple TLS negotiations.

●​ A single TCP connection reduces its congestion window on packet loss more
aggressively than a set of parallel connections would, which is friendlier to the network [6].
●​ Header compression reduces the bandwidth used and eliminates unnecessary headers.
●​ It allows servers to push responses proactively into client caches instead of waiting for a
new request for each resource. Server push potentially allows the server to avoid a
round trip of delay by pushing the responses it thinks the client will need into the client's
cache [7].

2.5 How QUIC Reduces Page Load Latency

●​ QUIC uses UDP as its transport protocol, which removes the round-trip time of TCP's
three-way connection-establishment handshake and of the TLS authentication and key
exchange. Figure 2.7 shows the connection-establishment flow of each protocol, and
Table 3 compares connection RTTs (Round Trip Times) in the TCP, TLS and QUIC
protocols; QUIC reduces the RTT count to 0.

Figure 2.7: Connection Round Trip Times in TCP, TLS and QUIC protocols
Table 3: Connection Round Trip Times in TCP, TLS and QUIC protocols

TCP TCP/TLS QUIC

First Connection 1 RTT 3 RTT 1 RTT

Repeat Connection 1 RTT 2 RTT 0 RTT
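A quick back-of-the-envelope check of Table 3, assuming a hypothetical 100 ms round-trip time:

```python
# Time before the client can send application data, given the handshake
# round-trip counts from Table 3 and an assumed 100 ms RTT.

RTT_MS = 100
handshake_rtts = {
    ("TCP", "first"): 1, ("TCP", "repeat"): 1,
    ("TCP/TLS", "first"): 3, ("TCP/TLS", "repeat"): 2,
    ("QUIC", "first"): 1, ("QUIC", "repeat"): 0,
}

delay = {k: rtts * RTT_MS for k, rtts in handshake_rtts.items()}
assert delay[("QUIC", "repeat")] == 0      # zero-RTT repeat connection
assert delay[("TCP/TLS", "first")] == 300  # 3 round trips before any payload
```

On a repeat connection, QUIC thus saves a full 200 ms relative to TCP/TLS before the first byte of the request is even sent.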

●​ Additionally, UDP decreases bandwidth usage by reducing the header length compared to
the TCP header. Another benefit of using UDP is that multiplexed streams avoid head-of-line
blocking: each stream frame can be immediately dispatched to its stream on arrival, so
streams without loss can continue to be reassembled and make forward progress in the
application.

Figure 2.8: Streams in QUIC protocols

●​ QUIC introduces Forward Error Correction, which is used to reconstruct lost packets
instead of requesting them again. In exchange, redundant data has to be sent (see Figure 2.8).
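The XOR-parity idea behind this RAID-4-style FEC can be sketched in Python; packet contents and group size are illustrative:

```python
# Sketch of XOR-based packet-level FEC: one parity packet per group of
# equal-length packets recovers any single lost packet in that group.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def parity(packets):
    p = bytes(len(packets[0]))  # start from all zeros
    for pkt in packets:
        p = xor_bytes(p, pkt)
    return p

group = [b"pkt0", b"pkt1", b"pkt2"]
fec = parity(group)  # transmitted alongside the group

# Packet 1 is lost in transit; XOR of the survivors with the FEC packet
# reconstructs it without a retransmission round trip.
recovered = parity([group[0], group[2], fec])
assert recovered == b"pkt1"
```

The cost is visible in the sketch too: the FEC packet is pure redundancy, so the scheme trades bandwidth for latency.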

3 MPTCP, SCTP
3.1 Multipath TCP (MPTCP)
MPTCP is currently an experimental protocol defined in RFC 6824. Its stated goal is to exist
alongside TCP and to “do no harm” to existing TCP connections, while providing the extensions
necessary so that additional paths can be discovered and utilized. Multipath TCP starts and
maintains additional TCP connections and runs them as subflows underneath the main TCP
connection. See Figure 3.1 for a visualization.

Figure 3.1: Comparison of Standard TCP and MPTCP Protocol Stacks

The IP addresses for these additional subflows are discovered in one of two ways: implicitly, when
a host with a free port connects to a known port on the other host, or explicitly, using an in-band
message. Each subflow is treated as an individual TCP connection with its own set of
congestion control variables. Subflows can also be designated as backup subflows, which do not
immediately transfer data but activate when primary flows fail [9].
Research has shown that applying standard TCP congestion control (RFC 5681) independently to
each subflow does not result in fairness with standard TCP connections if two flows from an
MPTCP connection go through the same bottleneck link. As such, there is a great deal of ongoing
research into alternative congestion control schemes specifically for multipath protocols [10].

3.2 Stream Control Transmission Protocol (SCTP)


SCTP is a transport-layer protocol in the TCP/IP stack (similar to TCP and UDP). It is
message-oriented like UDP, but also ensures reliable, in-sequence transport of messages with
congestion control like TCP. It uses multihoming to establish multiple redundant paths
between two hosts. In its current specification, SCTP is designed to transfer data on one pair of
IP addresses at a time, while the redundant pairs are used for failover and path-health or control
messages [12]. However, significant research is being done to allow SCTP to use multiple
concurrent paths at once as needed [11].​
SCTP requires that endpoint IP addresses are provided to the protocol at initialization. It does not
include any way for endpoints to communicate possible other paths to each other. Ports must
also be chosen such that no port on either host is used more than once for the connection.​
SCTP is currently not in widespread use, and as such routers and firewalls may not route SCTP
packets properly. In the absence of native SCTP support in operating systems, it is possible to
tunnel SCTP over UDP, as well as to map TCP API calls to SCTP ones.

3.3 MPTCP and SCTP Comparison


A. Handshakes
Multipath TCP uses a 3-way handshake to initialize a new flow, the same way as basic TCP.
SCTP, however, follows a 4-way handshake for its connection setup. This is shown in Figure 3.2.
As such, SCTP places more importance on authentication, with explicit verification tags.
This is crucial in protecting systems against SYN flooding attacks, which are a persistent
problem in TCP-based communications.

Figure 3.2: TCP Handshake, MPTCP Handshake and SCTP Handshake.

B. Congestion Control​
On a subflow-to-subflow basis, MPTCP and SCTP both act identically or similarly to TCP,
utilizing slow-start algorithms and congestion windows for end-to-end flow control on a path.
Additionally, MPTCP and CMT-SCTP both couple all subflow congestion windows together
under a global congestion window. Load-balancing decisions on which subflow to use, based on
these parameters, are a constant subject of research and are not trivial.​
However, MPTCP can have significantly more flows to manage, as MPTCP allows for fully
meshed connections, in contrast even to CMT-SCTP. See Figure 3.3 for an example of a fully
meshed connection in MPTCP as opposed to the parallel connections in SCTP.

Figure 3.3: Connections established in SCTP vs MPTCP

In this picture, each host has two ports, but the protocols set up connections between the two
ports in different ways. In SCTP, these connection pairs may be explicitly defined, while in
MPTCP it is up to the protocol to detect and use the correct one. As such, choosing efficient port
pairs ahead of time is crucial to the operation of SCTP, and unfortunately this is neither trivial nor
done automatically in most implementations. On the plus side, SCTP's connection scheme means
that it does not suffer from the unfairness problem mentioned in the section on
MPTCP. As currently defined, SCTP is not designed for concurrent multipath transfer the same
way that MPTCP is. Instead, SCTP uses only one path at a time, and it switches to another path
only after the current path fails. There has been a fair amount of academic work on an SCTP
extension to provide concurrent multipath transmission (CMT-SCTP).

Finding a suitable congestion control mechanism able to handle multiple paths is nontrivial [9].
Simply adopting the mechanisms used for singlepath protocols in a straightforward manner
neither guarantees appropriate throughput [9] nor achieves a fair resource allocation when
dealing with multipath transfer [12]. To solve the fairness issue, Resource Pooling has been
adopted for both MPTCP and CMT-SCTP. In the context of Resource Pooling, multiple
resources (in this case paths) are considered to be a single, pooled resource, and the congestion
control focuses on the complete network instead of only a single path. As a result, the complete
multipath connection (i.e. all paths) is throttled even though congestion occurs only on one path.
This avoids the bottleneck problem described earlier and shifts traffic from more congested to
less congested paths. Releasing resources on a congested path decreases the loss rate and
improves the stability of the whole network. Three design goals have been set for Resource
Pooling-based multipath congestion control for a TCP-friendly Internet deployment. These rules are:
●​ Improve throughput: A multipath flow should perform at least as well as a singlepath
flow on the best path.
●​ Do not harm: A multipath flow should not take more capacity on any one of its paths
than a singlepath flow using only that path.
●​ Balance congestion: A multipath flow should move as much traffic as possible off its
most congested paths.
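A sketch of how these goals are realized by MPTCP's coupled congestion control, the Linked Increases Algorithm of RFC 6356: per acknowledgment, subflow i's window grows by min(α·acked·MSS/cwnd_total, acked·MSS/cwnd_i), where α is chosen so the pooled flow is no more aggressive than a single-path TCP on its best path. The variable names and the two-subflow numbers below are illustrative:

```python
# Sketch of the RFC 6356 "Linked Increases" coupled congestion control.
# Window sizes are in bytes, RTTs in seconds.

def lia_alpha(cwnds, rtts):
    """Aggressiveness factor: total window scaled by the best path's
    cwnd/rtt^2, normalized by the squared aggregate rate."""
    total = sum(cwnds)
    best = max(c / r ** 2 for c, r in zip(cwnds, rtts))
    return total * best / sum(c / r for c, r in zip(cwnds, rtts)) ** 2

def increase(i, cwnds, rtts, mss=1460, bytes_acked=1460):
    """Per-ACK window increase on subflow i: the coupled increase, capped
    by what an uncoupled TCP flow would add on that path."""
    alpha = lia_alpha(cwnds, rtts)
    return min(alpha * bytes_acked * mss / sum(cwnds),
               bytes_acked * mss / cwnds[i])

# Two subflows, the second with a longer RTT: the coupled increase never
# exceeds the uncoupled single-path TCP increase on either path.
cwnds, rtts = [14600.0, 29200.0], [0.05, 0.20]
for i in range(2):
    assert increase(i, cwnds, rtts) <= 1460 * 1460 / cwnds[i]
```

The cap in `increase` enforces the "do no harm" goal, while α couples the growth so traffic drifts toward less congested paths.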
The congestion control proposed for MPTCP was designed with these goals in mind.
The congestion control of the original CMT-SCTP proposal did not use Resource Pooling, but
an algorithm has since been proposed for CMT-SCTP which uses Resource Pooling and fulfills
the requirements. This algorithm behaves slightly differently from the MPTCP congestion
control, and therefore the MPTCP congestion control has also been adapted to SCTP; this variant
is called “MPTCP-like” in the following. While both mechanisms are still candidates for
CMT-SCTP in the IETF discussion, we will only use the MPTCP-like algorithm in this paper to
get an unbiased comparison with MPTCP. The MPTCP and MPTCP-like congestion controls
treat each path as a self-contained congestion area and reduce just the congestion window of the
path experiencing congestion. In order to avoid an unfair overall bandwidth allocation, the
congestion window growth behavior is adapted: a per-flow aggressiveness factor is used to bring
the increase and decrease of the congestion window into equilibrium.​
MPTCP congestion control is based on counting bytes, as TCP and MPTCP are
byte-oriented protocols. SCTP, however, is a message-oriented protocol, and its congestion
control counts bytes in units limited in size by the Maximum Transmission Unit
(MTU). The limit for the calculation is defined by the Maximum Segment Size (MSS) for TCP
and SCTP: e.g., 1,460 bytes for TCP or 1,452 bytes for SCTP using IPv4 over an Ethernet
interface with a typical MTU of 1,500 bytes.
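These figures follow directly from the header sizes (a minimal arithmetic check; the SCTP figure assumes one 16-byte DATA chunk header per packet):

```python
# MSS arithmetic for a 1,500-byte Ethernet MTU with IPv4:
# TCP loses a 20-byte header; SCTP loses its 12-byte common header plus a
# 16-byte DATA chunk header.

MTU, IPV4_HEADER = 1500, 20
TCP_HEADER = 20
SCTP_COMMON_HEADER, SCTP_DATA_CHUNK_HEADER = 12, 16

mss_tcp = MTU - IPV4_HEADER - TCP_HEADER
mss_sctp = MTU - IPV4_HEADER - SCTP_COMMON_HEADER - SCTP_DATA_CHUNK_HEADER

assert mss_tcp == 1460
assert mss_sctp == 1452
```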

C. Path Management

Figure 3.4: Paths combinations

Path Management in MPTCP: An MPTCP connection consists, in principle, of several TCP-like
connections (called subflows) using the different network paths available. An MPTCP connection
between Peer A (𝑃𝐴) and Peer B (𝑃𝐵) (see Figure 3.4(a)) is initiated by setting up a regular TCP
connection between the two endpoints via one of the available paths, e.g., 𝐼𝑃𝐴1 to 𝐼𝑃𝐵1. During
the connection setup, the new TCP option MP_CAPABLE is used to signal the intention to use
multiple paths to the remote peer [13]. Once the initial connection is established, additional
sub-connections are added. This is done similarly to regular TCP connection establishment, by
performing a three-way handshake with the new TCP option MP_JOIN present in the segment
headers.
headers. By default MPTCP uses all available address combinations to set up subflows resulting
in a full mesh using all available paths between the endpoints. The option ADD_ADDR is used
in the Linux implementation to announce an additional IP address to the remote host. In the case
of Figure 3.4(a), the MPTCP connection is first set up between 𝐼𝑃𝐴1 and 𝐼𝑃𝐵1. Both hosts then
include all additional IP addresses in an ADD_ADDR option, since they are both multi-homed.
After that, an additional subflow is started between 𝐼𝑃𝐴2 and 𝐼𝑃𝐵1 by sending a SYN packet
including the MP_JOIN option. The same is done with two additional sub-connections between
𝐼𝑃𝐴2 and 𝐼𝑃𝐵2 as well as 𝐼𝑃𝐴1 and 𝐼𝑃𝐵2. The result of these operations is the use of 4 subflows
using direct as well as cross paths: 𝑃𝐴1−𝐵1, 𝑃𝐴1−𝐵2, 𝑃𝐴2−𝐵1 and 𝑃𝐴2−𝐵2.

Path Management in CMT-SCTP: CMT-SCTP is based on SCTP as defined in [14]. Standard


SCTP already provides multi-homing capabilities which are directly usable for CMT-SCTP. An
SCTP packet is composed of an SCTP header and multiple information elements called Chunks
which can carry control information (Control Chunks) or user data (DATA Chunks). A
connection, denoted as an Association in SCTP, is initiated by a 4-way handshake, which starts
with the sending of an INITIATION (INIT) chunk. With this first message, the initiating host 𝑃𝐴 informs the
remote host 𝑃𝐵 about all IP addresses available on 𝑃𝐴. Once 𝑃𝐵 has received the INIT chunk it
answers with an INITIATION-ACKNOWLEDGMENT (INIT-ACK) chunk. The INIT-ACK also
includes a list of all the IP addresses available on 𝑃𝐵.​
When 𝑃𝐴 initiates an SCTP connection to 𝑃𝐵, it uses the primary IP addresses of both hosts 𝐼𝑃𝐴1
and 𝐼𝑃𝐵1 as source and destination address, respectively. This creates a first path between these
two addresses, denoted as 𝑃𝐴1−𝐵1 in Figure 3.4(b) which is designated as “Primary Path”. In
standard SCTP this is the only path used for exchange of user data, the others are only used to
provide robustness in case of network failures. SCTP, and consequently also CMT-SCTP, uses all
additional IP addresses to create additional paths. In contrast to MPTCP, each secondary IP
address is only used for a single additional path in an attempt to make the established paths
disjoint. In the example, the secondary path 𝑃𝐴2−𝐵2 is established.
As a result, while MPTCP creates a full mesh of possible network paths among the available
addresses, CMT-SCTP only uses pairs of addresses to set up communication paths. CMT-SCTP
only determines the specific source address to specify which path is to be used (source address
selection) and leaves it to the IP layer to select the route to the next hop. MPTCP, however,
maintains a table in the transport layer identifying all possible combinations of local and remote
addresses and uses this table to predefine the network path to be used.
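The contrast between the two strategies can be sketched with the address sets of Figure 3.4 (the address names are illustrative):

```python
from itertools import product

# Sketch of the two path-management strategies: MPTCP builds a full mesh of
# subflows over all address combinations, while (CMT-)SCTP pairs each
# additional local address with a single remote one.

peer_a = ["IP_A1", "IP_A2"]
peer_b = ["IP_B1", "IP_B2"]

mptcp_paths = list(product(peer_a, peer_b))  # full mesh: 4 subflows
sctp_paths = list(zip(peer_a, peer_b))       # disjoint pairs: 2 paths

assert len(mptcp_paths) == 4
assert sctp_paths == [("IP_A1", "IP_B1"), ("IP_A2", "IP_B2")]
```

With n addresses per host, MPTCP's mesh grows as n², which is why it has many more subflows for its congestion control to coordinate.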

3.4 HTTP/2 Benefits from Multipath TCP


●​ Multipath TCP is backward compatible. This means HTTP/2 should be able to run over
MPTCP; if for any reason a multipath TCP connection cannot be set up, it must always
fall back to a normal TCP connection.
●​ MPTCP increases bandwidth, because two links over two separate paths are used in a
single connection. If, due to congestion, one path is only providing a small percentage of
its bandwidth, the other path can be utilized as well. Hence the total bandwidth for an
MPTCP connection is the combined bandwidth of both paths. HTTP/2 over MPTCP has
clear benefits compared to HTTP/1.0 over MPTCP, since there are fewer transport
connections and these carry more data, giving the MPTCP subflows time to correctly
utilise the available paths.
●​ MPTCP provides better redundancy: the connection is not affected even if one link goes
down. An example use case: suppose you are downloading a file with HTTP/2
multistreaming over your WiFi connection. Even if you walk out of WiFi range, the file
streaming should not be affected, because MPTCP automatically stops sending data
through the WiFi connection and uses only the cellular network.

Figure 3.5: Optimization across layers


In the detailed workflow, HTTP/2 uses its multiplexing mechanism to establish a long-lived
connection between hosts that can send and receive many requests/responses, carrying multiple
messages at the application layer. Multipath TCP then operates at the transport layer: the data is
divided into many segments, which are delivered over the multiple connections generated by the
inverse multiplexer. The connections are merged by the MPTCP demultiplexer at the destination
host. Finally, HTTP/2 handles the data for the requests and responses of applications.​

4 CONCLUSIONS AND RELATED WORK
This report presented a description of QUIC, SPDY and HTTP/2 and a comparison of these
protocols. HTTP/2 is the next evolution of HTTP. Based on Google’s SPDY, the new protocol is
presented in a formal, openly available specification, and it maintains compatibility with SPDY
and the current version of HTTP. Although HTTP/2 is built on SPDY, it introduces some
important changes; the main difference between HTTP/2 and SPDY comes from their header
compression algorithms. HTTP/2 uses the HPACK algorithm for header compression, compared
to SPDY, which uses DEFLATE. QUIC is a very recent protocol developed by Google in 2013
for efficient transfer of web pages. QUIC aims to improve performance compared to SPDY and
HTTP by multiplexing web objects in one stream over the UDP protocol instead of traditional TCP.

Additionally, the report presented two of the major proposals to extend TCP to support multipath, SCTP and MPTCP, with a comparison of them on path management, connection establishment and congestion control, and a discussion of how HTTP/2 benefits from these proposals. Multipath TCP allows existing TCP applications to achieve better performance and robustness over today's networks, and it has been standardized at the IETF. Multipath support is now very important: mobile devices have multiple wireless interfaces, data centers have many redundant paths between servers, and multihoming has become the norm for big server farms. TCP, however, is essentially a single-path protocol: a TCP connection is bound to the IP addresses of its two endpoints, and if one of these addresses changes, the connection fails. In fact, a TCP connection cannot even be load-balanced across more than one path within the network, because doing so causes packet reordering, and TCP misinterprets this reordering as congestion and slows down. For example, if a smartphone's WiFi loses signal, the TCP connections associated with it stall; there is no way to migrate them to another working interface, such as 3G. This makes mobility a frustrating experience for users. Modern data centers are another example: many paths are available between two endpoints, yet multipath routing randomly picks one for each TCP connection.
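The reordering-as-congestion problem can be seen in TCP's fast-retransmit heuristic, under which repeated duplicate ACKs are taken as a loss (and hence congestion) signal. A simplified sketch (real TCP also uses timeouts, SACK, and other signals; the sequences below are invented):

```python
def dupacks_trigger_retransmit(acks, threshold=3):
    """Classic TCP heuristic: three duplicate ACKs are read as loss,
    so the sender retransmits and reduces its congestion window."""
    dup, last = 0, None
    for ack in acks:
        if ack == last:
            dup += 1
            if dup >= threshold:
                return True
        else:
            dup, last = 0, ack
    return False

# In-order delivery over one path: cumulative ACKs keep advancing.
assert dupacks_trigger_retransmit([1, 2, 3, 4, 5]) is False

# Delivery split across two paths: segment 2 arrives late, the receiver
# repeats ACK 1, and the sender wrongly concludes the network is congested.
assert dupacks_trigger_retransmit([1, 1, 1, 1, 5]) is True
```

This is why naively load-balancing one TCP connection over several paths hurts throughput, and why MPTCP instead runs a separate subflow, with its own sequencing, on each path.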

We survey related work on two topics: (i) Multipath QUIC and (ii) Optimized Cooperation of HTTP/2 and Multipath TCP.

i) Multipath QUIC is an extension to the QUIC protocol that enables hosts to exchange data belonging to a single connection over multiple networks. End hosts are increasingly equipped with several network interfaces, and users expect to be able to seamlessly switch from one to another, or to use them simultaneously to aggregate bandwidth. Multipath QUIC also enables QUIC flows to cope with events affecting the network path, such as NAT rebinding or IP address changes.
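One reason QUIC (and thus Multipath QUIC) tolerates address changes is that it demultiplexes packets by connection ID rather than by the TCP 4-tuple. A toy illustration (plain dictionaries stand in for kernel connection tables; the addresses and IDs are made up for this sketch):

```python
# TCP-style demultiplexing: the connection *is* the 4-tuple, so a new
# client address (NAT rebinding, WiFi -> cellular) looks like a new connection.
tcp_conns = {("10.0.0.5", 40000, "93.184.216.34", 443): "conn-state"}
assert ("172.16.0.9", 51000, "93.184.216.34", 443) not in tcp_conns

# QUIC-style demultiplexing: packets carry a connection ID, so the same
# connection state is found regardless of the packet's source address.
quic_conns = {0xC0FFEE: "conn-state"}
packet_from_wifi = {"src": ("10.0.0.5", 40000), "cid": 0xC0FFEE}
packet_from_lte = {"src": ("172.16.0.9", 51000), "cid": 0xC0FFEE}
assert quic_conns[packet_from_wifi["cid"]] == quic_conns[packet_from_lte["cid"]]
```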

ii) Optimized Cooperation of HTTP/2 and Multipath TCP: HTTP/2 is the next evolution of HTTP, and Multipath TCP allows existing TCP applications to achieve better performance and robustness. Optimizing how HTTP/2 runs over MPTCP offers a chance to make applications faster, simpler, and more robust.

5 REFERENCES
1. SPDY Protocol - Draft 3. Accessed May 16, 2018.
https://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3
2. Ilya Grigorik and Surma, Introduction to HTTP/2. Accessed May 16, 2018.
https://developers.google.com/web/fundamentals/performance/http2/
3. Justin Dorfman, Shifting from SPDY to HTTP/2. Accessed May 16, 2018.
https://blog.stackpath.com/spdy-to-http2
4. QUIC Protocol Official Website. Available at: https://www.chromium.org/quic
5. QUIC Crypto. Accessed May 16, 2018.
https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDwvZ5L6g/edit
6. Xiao Sophia Wang, Aruna Balasubramanian, et al., How Speedy is SPDY?, USENIX, 2014.
7. HTTP/2 Frequently Asked Questions. Accessed May 16, 2018. https://http2.github.io/faq/
8. Ford et al., TCP Extensions for Multipath Operation with Multiple Addresses, RFC 6824, January 2013. Accessed May 16, 2018.
http://tools.ietf.org/html/rfc6824
9. Ford et al., Architectural Guidelines for Multipath TCP Development, RFC 6182, March 2011. Accessed May 16, 2018.
http://tools.ietf.org/html/rfc6182
10. Singh et al., Enhancing Fairness and Congestion Control in Multipath TCP, 6th Joint IFIP Wireless and Mobile Networking Conference, 2013.
11. Iyengar, J. R., et al., Concurrent Multipath Transfer Using SCTP Multihoming, SPECTS, 2004.
12. Stewart et al., Stream Control Transmission Protocol, RFC 4960, September 2007. Accessed May 16, 2018.
http://tools.ietf.org/html/rfc4960
13. A. Ford, C. Raiciu, M. Handley, S. Barré, and J. R. Iyengar, Architectural Guidelines for Multipath TCP Development, IETF, Informational RFC 6182, March 2011, ISSN 2070-1721.
14. R. R. Stewart, Stream Control Transmission Protocol, IETF, Standards Track RFC 4960, September 2007, ISSN 2070-1721.
15. Martin Becke, Fu Fa, et al., Comparison of Multipath TCP and CMT-SCTP Based on Intercontinental Measurements, IEEE, 2014, ISSN 1930-529X.
16. Maximilian Weller, Optimized Cooperation of HTTP/2 and Multipath TCP, May 1, 2017.
17. Slashroot, How does MULTIPATH in TCP work. Accessed May 17, 2018.
https://www.slashroot.in/what-tcp-multipath-and-how-does-multipath-tcp-work
