
Conference Paper · January 2018
DOI: 10.1109/CCNC.2018.8319234
Available at: https://www.researchgate.net/publication/323861272



Experimental Study of Low-Latency HD VoD
Streaming using Flexible Dual TCP-UDP Streaming
Protocol
Kevin Gatimu, Arul Dhamodaran, Taylor Johnson and Ben Lee

School of Electrical Engineering and Computer Science


Oregon State University
Corvallis, Oregon 97331
Email: {gatimuk, dhamodar, johnstay, benl}@eecs.oregonstate.edu

Abstract—The Flexible Dual TCP-UDP Streaming Protocol (FDSP) combines the reliability of TCP with the low latency characteristics of UDP. FDSP delivers the more critical parts of the video data via TCP and the rest via UDP. Bitstream Prioritization (BP) is a sliding scale that is used to determine the amount of TCP data that is to be sent. BP can be adjusted according to the level of network congestion. FDSP-based streaming achieves lower rebuffering time and fewer rebuffering instances than TCP-based streaming, as well as lower packet loss than UDP-based streaming. Our implementation and experiments on a real testbed show that FDSP with BP delivers high-quality, low-latency video, which is especially suitable for live video and subscription-based video.

Index Terms—Low latency; HD Video Streaming; Hybrid Protocol; FDSP.

I. INTRODUCTION

Global Internet traffic is projected to increase nearly threefold by 2021, with video accounting for 82% of the total traffic [1]. Currently, consumer video is dominated by High Definition (HD), but higher resolutions such as 4K are gaining mainstream popularity [2]. Furthermore, there is an increasing number of video-capable devices and platforms being added globally every day. For instance, the current 2 billion LTE subscribers are expected to double by 2021 [3]. Together, these factors will continue to increase global network congestion and pose even greater challenges to seamlessly delivering video at HD resolution and beyond.

This situation is further exacerbated by the unicast delivery model in major Video on Demand (VoD) services such as Netflix, Hulu, and Amazon Video, where each client requests video directly from a server. Therefore, as more clients connect to the server, the bandwidth requirements grow rapidly. VoD content providers have mitigated increased bandwidth demands by decentralizing their infrastructure through Content Delivery Networks (CDNs), which bring proxy servers closer to the end-user.

Another major development in managing VoD network resources is HTTP Adaptive Streaming (HAS). In HAS, the client requests video from a selection of multiple quality versions based on its perceived network conditions. Several HAS implementations exist, including proprietary ones such as Microsoft Smooth Streaming (MSS) [4], Adobe HTTP Dynamic Streaming [5], Apple's HTTP Live Streaming (HLS) [6], and the open-source standard, Dynamic Adaptive Streaming over HTTP (DASH) [7].

However, even the combination of HAS and CDNs is challenged by extremely large audiences, resulting in high bandwidth requirements for Internet video content providers. This is especially the case for live video streaming for events such as sports (e.g., the Olympics and the World Cup) and presidential debates. Furthermore, HAS suffers from high latency – often 20 seconds or more [8]. This is because two or more substreams, typically 10 seconds each, need to be buffered prior to playout. Such initial startup delay is acceptable for pre-recorded content (e.g., movies) as this maximizes the client's video quality with reduced rebuffering. However, the latency for live events needs to be minimized. Low latency is also required for subscription-based live video services such as Internet Protocol television (IPTV). When a client switches between different channels of streaming video, the transition needs to be as close as possible to traditional broadcast television, with hardly any noticeable delay.

The Transmission Control Protocol (TCP) is the transport layer protocol used in HAS. When outstanding packets are acknowledged by the receiver, TCP additively increases the transmission rate of the sender by a constant amount. On the other hand, when acknowledgments are lost due to congestion, the sender retransmits the lost packets and halves the transmission rate. This is detrimental towards meeting playout deadlines for achieving low-latency video streaming. The User Datagram Protocol (UDP) is better suited for low-latency applications compared to TCP. As a result, there have been hybridization efforts at the transport layer in order to combine the reliability of TCP with the low latency of UDP, pioneered by Reliable UDP [9] and culminating in the more advanced Quick UDP Internet Connections (QUIC) [10].
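The additive-increase/multiplicative-decrease (AIMD) behavior of TCP described above can be seen in a toy model. The sketch below is an illustration only, not the actual TCP congestion-control state machine; the function name and event encoding are invented for this example. It shows why a single congestion loss undoes many round trips of rate growth, which is what makes TCP hard to reconcile with playout deadlines:

```python
def aimd_rate(events, rate=1.0, increase=1.0):
    """Toy AIMD model of TCP's sending rate.

    events: iterable of 'ack' (outstanding packets acknowledged) or
    'loss' (acknowledgments lost due to congestion).  Returns the
    rate after applying TCP-style AIMD to each event in order.
    """
    for e in events:
        if e == "ack":
            rate += increase   # additive increase by a constant amount
        elif e == "loss":
            rate /= 2.0        # multiplicative decrease: halve the rate
    return rate

# Eight acknowledged rounds build the rate up; one loss halves it,
# discarding the equivalent of several rounds of growth.
print(aimd_rate(["ack"] * 8))             # 9.0
print(aimd_rate(["ack"] * 8 + ["loss"]))  # 4.5
```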
However, QUIC has been shown to have higher protocol overhead than TCP at low bitrates [11]. UDP has also been useful from an infrastructural point-of-view by supplementing CDNs with UDP-based peer-to-peer (P2P) networks [12], [13].

Based on the aforementioned discussion, the objective of this paper is to show that low-latency VoD streaming can be achieved using a hybrid streaming protocol called Flexible Dual Streaming Protocol (FDSP). Our previous work showed that FDSP is suitable for improving direct device-to-device streaming using simulation studies [14]–[16]. In this paper, FDSP is tailored for a physical testbed with network emulation for a VoD streaming environment. Our findings show that FDSP-based streaming achieves lower latency than pure-TCP-based streaming while having less packet loss than pure-UDP-based streaming.

Fig. 1: Flexible Dual TCP-UDP Streaming Protocol (FDSP) Architecture [14]. (Server: H.264 Syntax Parser and MPEG-TS Packetizer within the DEMUX; SPS, PPS, SH, and prioritized data are sent over TCP, the rest of the data over UDP, under a BP Selection module. Client: the MUX reorders packets for the H.264 Decoder and Display.)
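As a concrete illustration of the TCP/UDP split shown in Figure 1, the sketch below scans an H.264 Annex B byte stream for NAL unit boundaries and routes SPS and PPS units to a "TCP" list and everything else to a "UDP" list. This is a simplification for illustration: the function name is ours, and the full protocol additionally extracts slice headers and a BP-controlled share of I-frame data for TCP delivery, which requires deeper bitstream parsing than shown here. The start-code scan and NAL type values (7 = SPS, 8 = PPS) follow the H.264 specification:

```python
def split_annexb(stream: bytes):
    """Split an H.264 Annex B byte stream into NAL units and route them.

    SPS (type 7) and PPS (type 8) NAL units go to the TCP list; all
    other units go to UDP.  (A simplified sketch of the FDSP DEMUX:
    real FDSP also sends slice headers and prioritized I-frame data
    over TCP.)
    """
    tcp, udp = [], []
    # Locate 3-byte start codes (a 4-byte start code ends in the same
    # 3 bytes, so it is found as well).
    starts, i = [], 0
    while True:
        i = stream.find(b"\x00\x00\x01", i)
        if i == -1:
            break
        starts.append(i + 3)  # index of the NAL header byte
        i += 3
    for n, s in enumerate(starts):
        end = starts[n + 1] - 3 if n + 1 < len(starts) else len(stream)
        nal = stream[s:end]
        nal_type = nal[0] & 0x1F  # low 5 bits of the NAL header byte
        (tcp if nal_type in (7, 8) else udp).append(nal)
    return tcp, udp

# Two dummy NAL units: an SPS (header 0x67, type 7) and a
# non-IDR slice (header 0x41, type 1).
tcp, udp = split_annexb(b"\x00\x00\x00\x01\x67\xAA" + b"\x00\x00\x01\x41\xBB")
print(len(tcp), len(udp))  # 1 1
```

In the actual system, the two lists would feed the MPEG-TS Packetizer and then the TCP and UDP sockets, respectively.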
II. RELATED WORK

HAS is the most popular streaming mechanism for delivering Internet video today. For this reason, there has been research and development in trying to reduce the latency that is caused by video segmentation. A client maintains a video buffer of two or more segments of typically 10 seconds each [6], [17], which results in latency of 20 seconds or more. Reducing the segment size to just a few seconds can reduce the size of a client's playout buffer, which in turn reduces latency. However, this increases the total number of segments and, therefore, the number of HTTP requests that the client sends to the server in order to retrieve the video segments. These requests use precious bandwidth at a rate of one round-trip time (RTT) per video segment. For instance, a client that requests 2-second video segments on a network path with an RTT delay of 300 ms will experience 300 ms of additional delay every 2 seconds. In [18], Swaminathan et al. use HTTP chunked encoding to disrupt this correlation between live latency and segment duration by using partial HTTP responses. However, the persistent connections that are needed for chunked encoding transfer are prone to timeout issues and security concerns such as injection attacks and denial-of-service attacks [19]. Alternatively, HTTP/2 provides server push mechanisms such that the client receives multiple video segments per request [20]–[22]. However, HTTP/2 is not as widely available as legacy HTTP. HTTP/2 only has 15% worldwide deployment and, at a current growth rate of 5% additional coverage every year, it has a long way to go before becoming a widely recognized standard [23].

Other improvements in reducing video latency include modifications to the transport layer. For instance, Chakareski et al. used multiple TCP connections in conjunction with Scalable Video Coding (SVC) [24]. More important packets were transmitted via better quality TCP connections and were, therefore, less prone to retransmissions. While this method addresses delay within the transport layer, there is still significant delay in the application layer due to the typical video segment sizes in HAS. On the other hand, Houze et al. proposed a multi-path TCP streaming scheme based on the application layer, where larger video frames were subdivided based on media container formats [25]. They were then transmitted across two concurrent TCP connections and reassembled by the client. However, this method uses HTTP chunked encoding.

Peer-to-peer (P2P) networks have been used to supplement CDNs and help content providers save on deployment and maintenance costs [26]. This also reduces HTTP requests made to CDN servers, thus lowering the latency for live streaming [27], [28]. In fact, CDN caching increases delay by 15-30 seconds [29]. CDN-P2P architectures have been commercialized for some time now by global CDN companies such as ChinaCache [12] and Akamai [13]. These hybrid architectures primarily rely on CDNs for HTTP-based retrieval of initial or critical video segments while using P2P networks for bandwidth relief or for retrieving future segments. Even though the P2P networks are UDP-based, standardized NAT/firewall traversal for UDP-based transmission is gaining traction primarily through WebRTC [30], which is a collection of protocols and browser APIs.

This paper shows that FDSP-based streaming achieves much lower latency compared to HTTP-based streaming at comparable video quality levels. Our study also shows that FDSP transmission results in lower packet loss compared to UDP-based streaming, even in congested networks. Furthermore, FDSP is orthogonal to adaptive streaming and can thus be used as a transport protocol for today's segment-based video delivery systems.

III. FDSP OVERVIEW

This section provides an overview of FDSP, including its architectural features and video streaming using substreams. For more details, see [14], [15] and [16]. FDSP is a hybrid streaming protocol that combines the reliability of TCP with the low latency characteristics of UDP. Figure 1 shows the FDSP architecture consisting of a server and a client.

At the server, the H.264 Syntax Parser processes video data in order to detect critical H.264 video syntax elements (i.e., Sequence Parameter Set (SPS), Picture Parameter Set (PPS), and slice headers). The MPEG-TS Packetizer within the Demultiplexer (DEMUX) then encapsulates all the data according to the RTP MPEG-TS specification. The DEMUX then directs the packets containing critical data to a TCP
socket and the rest to a UDP socket, as Dual Tunneling keeps both TCP and UDP sessions simultaneously active during video streaming. The BP Selection module sets the Bitstream Prioritization (BP) parameter, which is a percentage of I-frame data that is to be sent via TCP in addition to the original critical data. At the client, the Multiplexer (MUX) sorts TCP and UDP packets based on their RTP timestamps. This reordering is essential for the H.264 Decoder to decode incoming data correctly.

When a stream is initiated, the FDSP server transmits the packets for the first 10-second substream. All the TCP packets for this substream must be received (i.e., buffered) before playback begins. This startup delay (Tinit) is low since only the TCP portion of the data is sent rather than the whole 10 seconds of video. In order to minimize rebuffering, the TCP packets for the next substream are sent at the same time as the UDP packets for the current substream through a process called substream overlapping, as illustrated in Figure 2. Substream overlapping is repeated throughout the duration of the stream. However, when playback for a particular substream is complete and the TCP packets for the upcoming substream are not yet all available, the client has to wait, thus causing a rebuffering instance. The playout deadline for all subsequent packets is then incremented by the rebuffering time.

Fig. 2: Substream overlapping. Each packet is either UDP (U) or TCP (T), where the subscript represents the packet number within a substream. (Transmission and receiver streams for substreams 1 and 2, showing Tinit, Tjitter, and the playout deadline for substream 1.)

IV. EXPERIMENT SETUP

Our experimental testbed is shown in Figure 3, which consists of a client-server pair and a traffic controller. The client-server pair is running VLC Media Player [31] on Mac OS X. The following modifications were made to integrate FDSP with BP into VLC:
1) Simultaneous streaming via UDP and TCP protocols.
2) Parsing H.264 video data at the server and subdividing it into TCP-bound and UDP-bound elements.
3) Reordering TCP and UDP packets and reconstructing the H.264 bitstream at the client prior to decoding.

Fig. 3: Experiment testbed.

The traffic controller, running on CentOS, connects the server to the client via a network bridge across interfaces eth2 and eth3, respectively. The Linux traffic control (tc) utility was then used to perform traffic control on the network bridge. The tc utility configures the Linux kernel primarily through queueing disciplines (qdiscs). A qdisc is an interface between the kernel and a network interface, where packets are queued and released according to tc settings. For example, a loss setting drops packets from the qdisc according to a specified percentage, while a delay setting keeps the packets in the qdisc longer. Multiple settings can be used together. A summary of tc settings used in the experiments is shown in Table I.

Parameters         Value(s)
Bridge interface   eth2, eth3
Delay (ms)         0, 25, 50, 75, 100, 125
Jitter (ms)        0, 5, 10, 15, 25
Loss               0.2%
Duplicate          0.2%
Corrupt            0.2%
Reorder            0.2%

TABLE I: Network emulation settings for traffic control (tc).

The tc parameters chosen represent an array of Wide Area Network (WAN) scenarios, which would typically plague Internet video streaming performance. The Delay setting was primarily used to simulate different levels of real-world Internet congestion [32]. The core network RTT latency is about 30 ms within Europe, 45 ms within North America, and 90 ms for Trans-Atlantic routes [33]. However, the edge network introduces additional latency. Therefore, Delay ranging from 0 to 125 ms in increments of 25 ms was used for each of the two bridged interfaces (eth2 and eth3), resulting in a total RTT delay range of 0 to 250 ms. The corresponding random Jitter value was set at 20% of the delay. The Duplicate setting simulates duplicate packets, e.g., due to TCP retransmissions. The Loss setting simulates packets randomly dropped by the network. The Corrupt setting introduces a random bit error in a specified percentage of the packets. Finally, the Reorder setting simulates multi-hop routing by further delaying a specified percentage of packets according to the delay and jitter settings.

The test videos used for streaming are two full HD (1920×1080 @30fps) 30-second clips from an animation video, Bunny, and a documentary video, Nature. These videos are encoded using x264 with an average bit rate of 4 Mbps and four slices per frame. They are then streamed from the server to the client using FDSP, TCP, and UDP. For each streaming protocol, the five different levels of network congestion are created via the network delay settings (i.e., 50 ms, 100 ms, 150 ms, 200 ms, and 250 ms). Furthermore, FDSP-based streaming is done for five different BP values (i.e., 0%, 25%, 50%, 75%, and 100%) per congestion level.

V. RESULTS

This section discusses the results of our experiments. FDSP-based streaming generally outperforms TCP-based streaming in terms of both rebuffering time and number of rebuffering instances. FDSP also incurs a lower packet loss rate (PLR) than UDP. Figure 4 shows a sample of the video streaming improvements of FDSP over either TCP or UDP at 100 ms delay. The other levels of network congestion show similar results. Overall, FDSP re-
Fig. 4: Rebuffering time and PLR for FDSP, TCP and UDP at 100 ms delay. (a) Nature video; (b) Bunny video. (Axes: Rebuffering Time (ms) and PLR (%) versus BP (%); the UDP PLR for Bunny is 32.71%, and a recommended BP range is marked in each plot.)

Fig. 5: Rebuffering for different levels of network congestion for FDSP-based streaming at different values of BP and TCP-based streaming. (a) Nature video; (b) Bunny video. (Axes: Rebuffering Time (ms) versus Delay (ms); bars for BP 0%–100% and TCP, stacked by rebuffering instance Rebuff 1–7.)

buffering time is significantly lower than TCP rebuffering time. In addition, as BP increases within a recommended range, PLR decreases. The BP range recommendations are 0% to 75% for Nature and 0% to 25% for Bunny. Since the overall rebuffering of FDSP-based streaming is significantly lower than that of TCP-based streaming, the BP range recommendation was based on minimizing PLR. The rest of this section provides more details in the context of the two major improvements, i.e., lower rebuffering and lower PLR.

A. FDSP Improvement over TCP in Rebuffering

Reduction in both rebuffering time and instances is important towards improving the user's Quality of Experience (QoE). Figure 5 shows the total amount of rebuffering time and the number of rebuffering instances for the different levels of network congestion. For each congestion level, rebuffering is shown for FDSP with different values of BP as well as for TCP. For instance, in Nature at 150 ms delay, FDSP rebuffering time ranges from 108 ms to 1,616 ms, compared to 9,410 ms in TCP. In addition, the number of rebuffering instances ranges from 2 to 3 for FDSP compared to 7 for TCP. Meanwhile, in Bunny at 150 ms delay, FDSP rebuffering time ranges from 92 ms to 1,441 ms with 1 to 6 instances, compared to 8,764 ms with 5 instances for TCP. Note that the first rebuffering instance (Rebuff 1 in Figure 5) is the startup delay. As can be seen, FDSP exhibits lower startup delay than TCP at almost all BP levels.

While FDSP is significantly better than TCP in terms of rebuffering, it is important to note that rebuffering does increase with BP.

B. FDSP Improvement over UDP in PLR

FDSP-based streaming not only results in less rebuffering, but also produces better video quality by reducing PLR. Figure 6 shows the effect of BP on PLR across different levels of network congestion for both Nature and Bunny. For each congestion level, PLR is shown for FDSP with different values of BP as well as for UDP. As BP increases, there is less PLR and thus better video quality. For Nature, the best BP value is 75%, while for Bunny it is 25%. This implies that there is an optimal range of BP values based on the type of video.

As BP increases within the optimal range, more packets are sent via TCP rather than UDP. This protects them from network-induced losses. Since the bulk of PLR is due to lost UDP packets, the overall PLR decreases as BP increases. For example, in Nature, the PLR at 50 ms delay decreases from
Fig. 6: PLR for different levels of network congestion for FDSP-based streaming at different values of BP and UDP-based streaming. (a) Nature video; (b) Bunny video. (Axes: PLR (%) versus Delay (ms); bars for BP 0%–100% and UDP.)

Fig. 7: Visual comparison between UDP-based streaming and FDSP-based streaming for Bunny. (a) UDP; (b) Basic FDSP (0% BP).

9% to 0.32% as BP increases from 0% to 75%. Similarly, in Bunny, the PLR decreases from 1.19% to 0.51% as BP increases from 0% to 25%. Figure 7 shows a sample of the visual improvement of FDSP-based streaming with 0% BP over pure-UDP streaming in Bunny. The video frame in Figure 7b is intact, while the frame in Figure 7a shows the effects of packet loss under UDP-based streaming. In such situations, the loss of just a slice header or the first few bytes of a slice renders the rest of the slice data useless to the decoder, thus resulting in error concealment as shown in slice 4 of Figure 7a. On the other hand, FDSP-based streaming, even with no BP, protects slice headers through TCP transmission, thus producing better quality video frames as shown in Figure 7b.

If BP surpasses the optimal range and becomes too high, the network can become saturated with TCP packets. This is because when there is network congestion, more packets are delayed, reordered, or lost. The TCP packets are then more prone to retransmissions so as to guarantee in-order, reliable delivery. Meanwhile, the IP queue is filled with staged TCP and UDP packets. As the IP queue fills up with TCP packets, additional UDP packets are dropped. This is the cause of most of the PLR when BP becomes too high. In addition, some packets (both UDP and TCP) arrive at the client too late, past the decoder's playout deadline, and are thus also considered lost.

The frequency of I-frames can be used to categorize the type of video and determine the optimal range of BP. For videos such as Bunny, where there are many scene changes, there is usually a corresponding higher number of I-frames. In fact, there are 37 I-frames in Bunny compared to just 5 in Nature. Since I-frames contain significantly more data than other frames, the probability of network saturation increases with the frequency of I-frames, which leads to high PLR. For instance, Figure 6 shows much higher PLR for UDP-based streaming in Bunny (26.4%∼33.3%) compared to Nature (2.2%∼4.3%). In such scenarios (Bunny), small BP values (0%∼25%) are effective towards reducing PLR, while higher values (>25%) will saturate the network with TCP packets from I-frame data.

In comparison, videos exemplified by Nature have lower PLR to begin with for UDP-based streaming. This is because of less network saturation as a result of lower I-frame frequency. When such videos are streamed through FDSP, the introduction of TCP packets increases the likelihood of network saturation and UDP PLR. However, higher BP values (up to 75% in the case of Nature) can be applied to the point
of lowering UDP PLR below that of UDP-based streaming.

VI. CONCLUSION AND FUTURE WORK

This paper shows that FDSP with BP is suitable for low-latency HD video streaming over the Internet while maintaining high video quality by combining the reliability of TCP with the low-latency characteristics of UDP. Our implementation and experiments on a real testbed, consisting of a server, a client, and an intermediate node for network emulation through the Linux traffic control utility, showed that FDSP with BP results in significantly less rebuffering than TCP-based streaming and much lower PLR than UDP-based streaming.

As future work, BP will be dynamically adjusted with varying network conditions. A separate QoE study based on FDSP streaming is currently in progress. Its results will be used to determine when BP should be changed based on variation in PLR and rebuffering.

REFERENCES

[1] "VNI Global Fixed and Mobile Internet Traffic Forecasts." [Online]. Available: http://www.cisco.com/c/en/us/solutions/service-provider/visual-networking-index-vni/index.html
[2] "4K Internet TV & Video to be Viewed by 1 in 10 US Residents," Aug. 2016. [Online]. Available: https://www.juniperresearch.com/press/press-releases/4k-internet-tv-video-content-to-be-viewed-by-1-i
[3] "Ericsson Mobility Report," Nov. 2016. [Online]. Available: https://www.ericsson.com/en/mobility-report
[4] A. Zambelli. Smooth Streaming Technical Overview. [Online]. Available: http://www.iis.net/learn/media/on-demand-smooth-streaming/smooth-streaming-technical-overview
[5] Adobe Systems. HTTP Dynamic Streaming. [Online]. Available: http://www.adobe.com/products/hds-dynamic-streaming.html
[6] Apple Inc. HTTP Live Streaming Internet-Draft. [Online]. Available: https://tools.ietf.org/html/draft-pantos-http-live-streaming-19
[7] "ISO/IEC 23009-1:2012 - Information technology – Dynamic adaptive streaming over HTTP (DASH) – Part 1: Media presentation description and segment formats." [Online]. Available: http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=57623
[8] "What You Need to Know About HLS: Pros and Cons," Jan. 2016. [Online]. Available: http://blog.red5pro.com/what-you-need-to-know-about-hls-pros-and-cons/
[9] T. Bova and T. Krivoruchka, "Reliable UDP Protocol." [Online]. Available: https://tools.ietf.org/html/draft-ietf-sigtran-reliable-udp-00
[10] A. Wilk, J. Iyengar, I. Swett, and R. Hamilton, "QUIC: A UDP-Based Secure and Reliable Transport for HTTP/2." [Online]. Available: https://tools.ietf.org/html/draft-hamilton-early-deployment-quic-00
[11] C. Timmerer and A. Bertoni, "Advanced Transport Options for the Dynamic Adaptive Streaming over HTTP," arXiv preprint arXiv:1606.00264, 2016. [Online]. Available: https://arxiv.org/abs/1606.00264
[12] X. Liu, H. Yin, and C. Lin, "A Novel and High-Quality Measurement Study of Commercial CDN-P2P Live Streaming," in 2009 WRI International Conference on Communications and Mobile Computing, vol. 3, Jan. 2009, pp. 325–329.
[13] Z. Lu, Y. Wang, and Y. R. Yang, "An Analysis and Comparison of CDN-P2P-hybrid Content Delivery System and Model," Journal of Communications, vol. 7, no. 3, Mar. 2012. [Online]. Available: http://www.jocm.us/index.php?m=content&c=index&a=show&catid=39&id=90
[14] J. Zhao, B. Lee, T.-W. Lee, C.-G. Kim, J.-K. Shin, and J. Cho, "Flexible Dual TCP/UDP Streaming for H.264 HD Video over WLANs," in Proc. of the 7th International Conference on Ubiquitous Information Management and Communication (ICUIMC 2013), Kota Kinabalu, Malaysia, 2013, pp. 34:1–34:9.
[15] M. Sinky, A. Dhamodaran, B. Lee, and J. Zhao, "Analysis of H.264 Bitstream Prioritization for Dual TCP/UDP Streaming of HD Video over WLANs," in IEEE 12th Consumer Communications and Networking Conference (CCNC 2015), Las Vegas, USA, Jan. 2015, pp. 576–581.
[16] A. Dhamodaran, M. Sinky, and B. Lee, "Adaptive Bitstream Prioritization for Dual TCP/UDP Streaming of HD Video," in The Tenth International Conference on Systems and Networks Communications (ICSNC 2015), Barcelona, Spain, Nov. 2015, pp. 35–40.
[17] C. Liu, I. Bouazizi, and M. Gabbouj, "Rate Adaptation for Adaptive HTTP Streaming," in Proceedings of the Second Annual ACM Conference on Multimedia Systems, ser. MMSys '11. New York, NY, USA: ACM, 2011, pp. 169–174. [Online]. Available: http://doi.acm.org/10.1145/1943552.1943575
[18] V. Swaminathan and S. Wei, "Low latency live video streaming using HTTP chunked encoding," in 2011 IEEE 13th International Workshop on Multimedia Signal Processing, Oct. 2011, pp. 1–6.
[19] G. Wilkins, S. Salsano, S. Loreto, and P. Saint-Andre, "Known Issues and Best Practices for the Use of Long Polling and Streaming in Bidirectional HTTP." [Online]. Available: https://tools.ietf.org/html/rfc6202#page-16
[20] S. Wei and V. Swaminathan, "Low Latency Live Video Streaming over HTTP 2.0," in Proceedings of Network and Operating System Support on Digital Audio and Video Workshop, ser. NOSSDAV '14. New York, NY, USA: ACM, 2014, pp. 37:37–37:42. [Online]. Available: http://doi.acm.org/10.1145/2578260.2578277
[21] W. Cherif, Y. Fablet, E. Nassor, J. Taquet, and Y. Fujimori, "DASH Fast Start Using HTTP/2," in Proceedings of the 25th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, ser. NOSSDAV '15. New York, NY, USA: ACM, 2015, pp. 25–30. [Online]. Available: http://doi.acm.org/10.1145/2736084.2736088
[22] R. Huysegems, J. van der Hooft, T. Bostoen, P. Rondao Alface, S. Petrangeli, T. Wauters, and F. De Turck, "HTTP/2-Based Methods to Improve the Live Experience of Adaptive Streaming," in Proceedings of the 23rd ACM International Conference on Multimedia, ser. MM '15. New York, NY, USA: ACM, 2015, pp. 541–550. [Online]. Available: http://doi.acm.org/10.1145/2733373.2806264
[23] A. Theedom. (2016) Tracking HTTP/2 Adoption: Stagnation - DZone Web Dev. [Online]. Available: https://dzone.com/articles/tracking-http2-adoption-stagnation
[24] J. Chakareski, R. Sasson, A. Eleftheriadis, and O. Shapiro, "System and method for low delay, interactive communication using multiple TCP connections and scalable coding," U.S. Patent US8699522 B2, Apr. 2014. [Online]. Available: http://www.google.com/patents/US8699522
[25] P. Houzé, E. Mory, G. Texier, and G. Simon, "Applicative-layer multipath for low-latency adaptive live streaming," in 2016 IEEE International Conference on Communications (ICC), May 2016, pp. 1–7.
[26] D. Xu, S. S. Kulkarni, C. Rosenberg, and H.-K. Chai, "Analysis of a CDN–P2P hybrid architecture for cost-effective streaming media distribution," Multimedia Systems, vol. 11, no. 4, pp. 383–399, Apr. 2006. [Online]. Available: https://link.springer.com/article/10.1007/s00530-006-0015-3
[27] S. M. Y. Seyyedi and B. Akbari, "Hybrid CDN-P2P architectures for live video streaming: Comparative study of connected and unconnected meshes," in 2011 International Symposium on Computer Networks and Distributed Systems (CNDS), Feb. 2011, pp. 175–180.
[28] T. T. T. Ha, J. Kim, and J. Nam, "Design and Deployment of Low-Delay Hybrid CDN–P2P Architecture for Live Video Streaming Over the Web," Wireless Personal Communications, vol. 94, no. 3, pp. 513–525, Jun. 2017. [Online]. Available: https://link.springer.com/article/10.1007/s11277-015-3144-1
[29] C. Michaels. (2017, February) HLS Latency Sucks, But Here's How to Fix It | Wowza. [Online]. Available: https://www.wowza.com/blog/hls-latency-sucks-but-heres-how-to-fix-it
[30] "WebRTC 1.0: Real-time Communication Between Browsers," 2017. [Online]. Available: https://www.w3.org/TR/webrtc/
[31] VideoLAN. (2011) VLC media player. [Online]. Available: http://www.videolan.org/
[32] "Network Latency and Packet Loss Emulation @ Calomel.org." [Online]. Available: https://calomel.org/network_loss_emulation.html
[33] (2017, May) IP Latency Statistics. [Online]. Available: http://www.verizonenterprise.com/about/network/latency/
