
Tenth IEEE International Symposium on Multimedia

Pseudo-DHT: Distributed Search Algorithm for P2P Video Streaming

Jeonghun Noh (Stanford University, Electrical Engineering, 350 Serra Mall, Stanford, CA, USA; [email protected])
Sachin Deshpande (Sharp Laboratories of America, 5750 NW Pacific Rim Blvd, Camas, WA, USA; [email protected])

Abstract

In this paper, we propose pseudo-DHT, an efficient resource location algorithm for peer-to-peer (P2P) streaming systems. A lookup overlay formed by participating peers provides a foundation for pseudo-DHT's register(key, value) and retrieve(key) services. Using pseudo-DHT, peers register their video content information (key) with their network identity (value). To reduce retrieval latency, register(·) alters a given key on a key collision. For retrieve(·), a query for a key returns a value associated with that key or with the closest key.

We apply pseudo-DHT to P2TSS, a P2P system that provides both live and time-shifted streams. In P2TSS, a live video is divided and spread out over peers' buffers. Peers construct a Chord overlay that serves as the base of pseudo-DHT. A theoretical analysis is presented to predict the search performance of P2TSS with pseudo-DHT. Extensive simulations show that pseudo-DHT provides good scalability and low overhead, matching our analysis.

1 Introduction

Traditionally, servers have been used for streaming video to a group of clients. However, these servers easily become bottlenecks due to their limited bandwidth resources. Recently, peer-to-peer (P2P) streaming systems have emerged because they are self-scalable (more users bring more resources to the system), lowering server load. Depending on how systems are organized and maintained, P2P streaming systems can be classified into the following types: (1) server-dependent [10], (2) hybrid [4], and (3) fully distributed [2][3][12]. In Type (1) systems, peers provide their resources to the system, with the construction and maintenance of data delivery overlays controlled by central servers. In Type (2) systems, peers actively participate in constructing and maintaining data delivery overlays; however, there may be a central server that assists peers in some aspects, such as locating peers with contents of interest. In Type (3) systems, peers are fully in charge of the distribution of media data among themselves as well as of seeking supplier peers.

In this paper, we propose pseudo-DHT, an efficient distributed search algorithm to locate contents in a P2P time-shifted streaming system. This search algorithm provides server-free content discovery, which allows Type (1) or (2) systems to become more distributed, thus avoiding a single point of failure. In our approach, a distributed lookup overlay is constructed on top of the peers. This lookup overlay provides a foundation for the pseudo-DHT services: registration, retrieval, and deletion. With registration, peers register their cache information (key) and their network address (value). Registered information is retrieved later by other peers seeking a random video block (retrieval). When peers leave the system, their registration information is discarded (deletion). Pseudo-DHT is a variant of a DHT, a distributed version of a hash table, and differs from classical DHTs in the following respects:

• Register(key, value) may alter the given key when a key collision occurs.
• Retrieve(key) may return a value associated with the given key or with a key closest to the given key, referred to as best-effort search.

We apply pseudo-DHT to P2TSS [5], a novel P2P streaming system that can provide live and time-shifted streams. Time-shifting allows viewers to watch a live stream with an arbitrary offset at a later time; it also allows viewers to pause and resume video playback. In P2TSS, a live video stream is segmented into blocks, and peers store blocks that correspond to a continuous portion of the video in their local buffer. Peers register only the first video block in the buffer instead of all video blocks in the buffer. This reduces the overhead of holding keys and makes peers available to other peers earlier.

However, a decrease in the number of keys may increase retrieval latency. To improve retrieval performance, pseudo-DHT attempts to spread out keys by modifying them. This is feasible because video streams are continuous and slight temporal shifts are acceptable.

The structure of the paper is as follows. In Section 2, we present the pseudo-DHT algorithm. In Section 3, we introduce P2TSS, to which we apply pseudo-DHT. Section 4 provides a theoretical analysis of the performance of P2TSS with pseudo-DHT. Section 5 provides simulation results.

2 Pseudo-DHT Algorithm

This section describes how peers form a lookup overlay and how information about video blocks is stored and retrieved. As in Skype [1], not all peers need to participate in forming the lookup overlay: peers with high-speed network access can form the lookup overlay while the remaining peers only need to consult them. In this paper, however, for simplicity, we assume all peers participate in building the lookup overlay.

2.1 Lookup overlay

Pseudo-DHT is based on a distributed lookup protocol such as Chord [11] or Pastry [7]. These lookup protocols provide a robust, distributed service for locating a node in the network. To form a lookup overlay, a newly arriving node selects a random number between 0 and 2^m − 1 as its node ID, where m is a positive integer. (We chose m = 160 in order to keep the probability of node ID collisions very low.) Node IDs are mapped into the interval [0, 1) by normalizing them by 2^m. Nodes follow the necessary steps to collect and maintain information about neighbor nodes in the overlay; details can be found in [11][7].
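To make the ID arithmetic concrete, here is a minimal Python sketch of the scheme just described, together with the SHA-1 key hashing used in Section 2.2 below. SHA-1 and m = 160 follow the text; the function names and the byte encoding of the block index are our own illustrative choices.

    import hashlib
    import secrets

    M = 160  # bits in the ID space; the paper chooses m = 160

    def random_node_id() -> int:
        # A newly arriving node picks a random ID in [0, 2^m - 1].
        return secrets.randbelow(2 ** M)

    def hash_key(block_index: int) -> int:
        # A video block index is hashed with SHA-1 (160-bit digest)
        # before being used as a key (Section 2.2).
        digest = hashlib.sha1(str(block_index).encode()).digest()
        return int.from_bytes(digest, "big")

    def normalize(identifier: int) -> float:
        # Map a node ID or key into [0, 1) by dividing by 2^m.
        return identifier / 2 ** M

    if __name__ == "__main__":
        print("node ID in [0,1):", normalize(random_node_id()))
        print("key for block 42:", normalize(hash_key(42)))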
2.2 Registration

Peers register the index of the first video block they store in the buffer. Let block index i denote a key; i is hashed using SHA-1, a base hash function. Like a node ID, the hashed value of a key ranges from 0 to 2^m − 1 (and is mapped into [0, 1) by normalization). The space of (normalized) keys and node IDs is referred to as the key/node space.

We choose hashed block indexes over the block indexes themselves as keys for the following reasons. First, it is difficult to spread out keys over the key/node space without knowing the length of the media stream a priori; for instance, when a program is broadcast live, its length is not known beforehand. Second, hashing spreads consecutive video blocks over peers that are not adjacent to each other in the key/node space, which prevents a portion of the video blocks from being permanently lost due to node failure and thus enhances resource availability in the system. In this paper, a key can refer to either a video block ID or a hashed value; as a rule, an original key (video block ID) must be hashed before being used in register(·), retrieval(·), and deletion(·).

Once a peer computes a hash value, it executes register(key, value), where value is its network address. When multiple peers attempt to register with the same key, the key may become associated with multiple values; this is called a key collision. To prevent a key from being registered with multiple values, successive key modifications are performed by the registering peer until no key collision occurs. Registration with successive key modification is illustrated in Algorithm 1: key is the hashed index of a video block, value is the network address of the peer associated with the key, and offset is added to the key when a key collision occurs. If offset is positive, the subsequent attempt is made forward in the key/node space; if it is negative, the attempt is made backward.

[Figure 1. Two approaches of key modification for registration: forward (successive attempts move toward larger block IDs) and backward (successive attempts move toward smaller block IDs).]

Algorithm 1 Registration with successive key modification
    nAttempt ← 0
    if Register(key, value) fails then
        key = key + offset
        nAttempt = nAttempt + 1
        if key < 0 then
            RegisterWithForce(0, value)
        else if nAttempt = Thres then
            RegisterWithForce(key, value)
        end if
    end if
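To make Algorithm 1 concrete, the following Python sketch realizes its successive behavior as a loop over a plain dictionary standing in for the Chord overlay. The dictionary, the helper names, and the use of raw block indexes as keys are illustrative assumptions; a full implementation would hash each candidate key with SHA-1 as described above.

    THRES = 4          # threshold on the number of key modifications (Thres)
    overlay = {}       # key -> list of registered peer addresses

    def register(key, value):
        # A plain registration fails on a key collision.
        if overlay.get(key):
            return False
        overlay[key] = [value]
        return True

    def register_with_force(key, value):
        # Forced insertion: the key may hold several values afterwards.
        overlay.setdefault(key, []).append(value)

    def register_with_key_modification(key, value, offset=-1):
        # offset < 0 walks backward in the key space, offset > 0 forward.
        n_attempt = 0
        while not register(key, value):
            key += offset
            n_attempt += 1
            if key < 0:
                register_with_force(0, value)    # fell off the lower end
                return 0
            if n_attempt == THRES:
                register_with_force(key, value)  # give up modifying the key
                return key
        return key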

Although the algorithm attempts to associate each key with one value, a compromise is made in the following cases. When the modified key falls below 0, RegisterWithForce(·) is used to insert the key forcefully at key 0. RegisterWithForce(·) is also used when the number of attempts reaches the threshold value Thres.

When a registration is successful, a peer can start to fill its buffer. If a forward key modification was made, the peer starts to buffer video after the video block with the modified key becomes available. If a backward key modification or no key modification was made, the peer can start to buffer video immediately after it finds a supplier peer. Fig. 1 depicts the two approaches of successive key modification for registration: forward and backward.
2.3 Retrieval

When a peer seeks a video block with ID i, it sends a retrieve(i) message to the overlay. Peers cooperate to return the value associated with the key, and the requesting peer then contacts the potential supplier identified by that value. This process of content discovery is called retrieval. Similar to registration, retrieval may involve a series of successive key modifications; see Algorithm 2. If the potential supplier has no available bandwidth, the peer performs a subsequent retrieval attempt: a negative offset, kO, is added to the original key for the k-th retrieval attempt. We discuss the effect of the offset base O in Section 4. Once a supplier with available bandwidth is found, the supplier starts to deliver video from video block i. If no appropriate supplier is found within Thres attempts, the peer connects to the server to receive the video. Forward successive lookup is not appropriate here because the cache contents of a supplier need to span the requested video block i. Figure 2 depicts retrieval with backward successive key modification.

[Figure 2. Retrieval with backward key modification. The offset base is 1.]

Algorithm 2 Retrieval with successive key modification
    nAttempt ← 0
    if Retrieval(key) fails then
        key = key + offset
        nAttempt = nAttempt + 1
        if key < 0 or nAttempt = Thres then
            return NULL
        end if
    end if

Since the value in a (key, value) pair is viewed as a pointer to a peer, the lookup overlay is independent of the data delivery overlay. In other words, pseudo-DHT provides only the basic functionality for storing and seeking random video blocks.

There are a number of approaches to enhance retrieval performance. One approach is to send simultaneous retrieval requests: n retrieval attempts are made in parallel with keys i, i − O, ..., and i − (n − 1)O. When more than one reply is received, the earliest reply may be used, reducing retrieval latency; alternatively, the reply with the key closest to the original key may be used, decreasing the frequency of supplier switching. The retrieval latency reduction achieved by simultaneous retrieval is evaluated in Section 5.
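A matching sketch of Algorithm 2 and of simultaneous retrieval, under the same assumptions as the registration sketch above (a plain dictionary as the overlay, unhashed keys). The parallel lookups are emulated sequentially here; a real implementation would issue them concurrently.

    THRES = 4  # maximum number of retrieval attempts (Thres)

    def retrieve(overlay, key):
        # Returns the list of values registered under key, or None on a miss.
        return overlay.get(key) or None

    def retrieve_with_key_modification(overlay, key, offset_base=1):
        # Algorithm 2: on a miss, step backward by the offset base O.
        n_attempt = 0
        while True:
            value = retrieve(overlay, key)
            if value is not None:
                return key, value
            key -= offset_base
            n_attempt += 1
            if key < 0 or n_attempt == THRES:
                return key, None      # caller falls back to the server

    def simultaneous_retrieve(overlay, key, offset_base=1, n=4):
        # Sequential stand-in for n parallel lookups with keys
        # i, i - O, ..., i - (n - 1)O; returns the hit closest to i.
        for k in range(key, key - n * offset_base, -offset_base):
            value = retrieve(overlay, k)
            if value is not None:
                return k, value
        return None, None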
2.4 Handling peer churn

Every time peers join or leave the system, the overlay experiences a transient state, during which the information returned by the overlay may not correctly reflect the system status. Suppose that peer P2 has left the system and peer P1 receives a retrieval response that contains P2. P1 gets no response when it contacts P2, so P1 sends Delete(key, Addr(P2)) to the overlay. This message is received by the peer P3 that holds the (key, Addr(P2)) registration information. P3 confirms that P2 is no longer present by sending a direct message to P2; if there is no response, P3 discards (key, Addr(P2)). This operation is referred to as deletion.

Data inconsistency due to transient system states can also be tackled by other techniques. One technique is replication, introduced in [8]: the overlay stores k copies of each (key, value) pair among k peers. Unless all k peers leave the system simultaneously, no information is permanently lost. This robustness is obtained at the cost of replication traffic and increased overhead for key storage.
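Continuing the same toy overlay, a minimal sketch of the deletion step described above, from P3's point of view. The is_alive callback is a hypothetical stand-in for the direct liveness probe; the paper does not prescribe an API for it.

    def handle_dead_supplier(overlay, key, addr, is_alive):
        # P3 confirms the registered peer (P2) is gone, then discards
        # the (key, Addr(P2)) entry it holds.
        values = overlay.get(key, [])
        if addr in values and not is_alive(addr):
            values.remove(addr)
            if not values:
                del overlay[key]
            return True
        return False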

3 P2TSS: P2P Time-Shifted Streaming

3.1 System description

P2TSS is a novel P2P streaming system that can provide live and time-shifted streams. To reduce the video source's bandwidth requirement, peers store part of the past video stream in their buffers and serve it to other peers. V[0, T] denotes the video stream available at time T, the system's current time. Peers may request either the live stream or past video at any given time: V[T] at time T is called the live stream, and V[T − o], 0 < o ≤ T, at time T is called a time-shifted stream.

In P2TSS, each peer has two separate buffers: a playback buffer and a distributed stream buffer (DSB). The playback buffer stores video packets to be played back locally; it is used for handling network jitter and short-term bandwidth variations. The DSB stores video blocks that can be shared with other peers. For video packets to be stored in the DSB, V[0, T] is divided into the blocks \bigcup_{i=0}^{T/Q - 1} M_i, where M_i is the i-th block in the stream, covering the interval [iQ, (i+1)Q). Each video block is of size Q in seconds and is comprised of video frames. Peers fill their local DSB with the live stream after joining the system at time T, regardless of their initial playback position. Although peers' DSB sizes can be arbitrary, for simplicity's sake we assume that all peers provide DSBs of a common size D in seconds. When cache filling ends, the video blocks \bigcup_{i=T/Q}^{(T+D)/Q - 1} M_i are stored in the DSB of a peer that joined at time T.

3.2 Application of the pseudo-DHT algorithm

We apply pseudo-DHT to maintain and retrieve peers' DSB registration information; Chord was adopted as the lookup protocol. Peers register with the overlay before they start caching their first video block. The information used in registration includes the index of the first video block in the cache, the peer's IP address, a TCP or UDP port number, the cache size (if peers have different DSB sizes), and the time the registration was made. In order to find suppliers, peers send a query to the overlay; the query includes the index of the first video block.

Since DSBs are of finite size, peers need to switch to another supplier to continue retrieving video blocks beyond those stored in the current supplier's DSB. Peers start to search for the next supplier ahead of time, as a peer can estimate when the last video block will arrive from its supplier. This is called proactive switching. The time at which proactive switching starts can be determined from the average duration of the search operation, the probability of supplier acceptance, and the supplier connection operation. Once proactive switching completes, the next supplier is ready to deliver later video blocks as the current supplier nears the end of its video delivery [5][9].
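A small sketch of the block bookkeeping just described, assuming floor rounding at block boundaries (the paper does not spell out the rounding convention):

    Q = 10          # video block size in seconds (the paper studies 5-30 s)
    D = 240         # common DSB size in seconds

    def dsb_blocks(T, Q=Q, D=D):
        # Indices of the blocks M_i cached by a peer joining at time T:
        # i = T/Q, ..., (T+D)/Q - 1.
        first = int(T // Q)
        last = int((T + D) // Q)      # exclusive
        return list(range(first, last))

    def registration_key(T, Q=Q):
        # Only the first cached block's index is registered (Section 2.2).
        return int(T // Q)

    # A peer joining 1000 s into the stream caches 24 ten-second blocks
    # and registers block index 100.
    print(registration_key(1000), len(dsb_blocks(1000)))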
4 Analysis

In this section, we analyze the performance of P2TSS [5] with pseudo-DHT. A backward direction is used in successive key modification.

4.1 Registration

The average number of attempts for registration reflects the latency of the registration process and the control overhead imposed on the lookup overlay. Suppose that peers arrive at the system according to a Poisson process of rate \lambda = N_p / L, so that N_p peers arrive before time L, which denotes the total length of the video stream. Let i = \lfloor T/Q \rfloor. Peers that arrive during the time interval [iQ, (i+1)Q) perform registration with key i, because B_i is the first video block stored in their buffer. I_i denotes this interval [iQ, (i+1)Q), and A_i denotes the number of peer arrivals in I_i.

The first peer that arrives in I_i experiences no key collision. If another peer arrives during I_i, it collides with its predecessor's registration and attempts another registration with key i − 1. This attempt is successful only when no peer arrived in I_{i−1} (A_{i−1} = 0); if A_{i−1} > 0, the second attempt also fails. These successive attempts repeat until an interval with no registration is found or RegisterWithForce(·) is required.

Since peers arrive according to a Poisson process, the count A_j of each non-overlapping interval I_j, j = 1, ..., L/Q, is a Poisson random variable with parameter \lambda Q, where the peer arrival rate is \lambda = N_p / L. Suppose A_i = k, A_{i−1} = m, and A_j = 0 for 0 ≤ j ≤ i − 2. The first peer of interval I_i sees no collision. The l-th peer (1 < l ≤ k) that arrives during this interval sees m + l − 1 collisions, because it collides with the registrations of the m peers from I_{i−1} and of the l − 1 peers that arrived earlier in I_i. Table 1 shows the number of collisions depending on the value of A_i.

Table 1. Total number of collisions when A_{i−1} = m and A_j = 0, 0 ≤ j < i − 1.

    A_i | Number of collisions
    ----+---------------------------
     0  | 0
     1  | 0
     2  | (m+1)
     3  | (m+1)+(m+2)
     4  | (m+1)+(m+2)+(m+3)
     5  | (m+1)+(m+2)+(m+3)+(m+4)
    ... | ...

Let C denote the number of key modifications a registration experiences. E(C) is computed as

    E(C) = \sum_{k=2}^{\infty} E(C \mid A_i = k) \Pr(A_i = k)
         = \sum_{k=2}^{\infty} E(C \mid A_i = k) \, e^{-N_p Q / L} \frac{(N_p Q / L)^k}{k!},

where

    E(C \mid A_i = k) = E\big( E(C \mid A_i = k, A_{i-1} = m) \big)
                      \approx E(C \mid A_i = k, A_{i-1} = m, A_{i-2} = 0, \ldots, A_0 = 0)
                      = \sum_{j=1}^{k-1} (m + j)
                      = \frac{(k-1)(2m+k)}{2}.    (1)

An approximation is introduced because we do not take into account cases other than A_{i−1} = m, A_j = 0, 0 ≤ j < i − 1. When N_p / (L/Q + 1) is small, \Pr(A_i > 1) is also small and our analysis matches simulation results reasonably well. The quantity N_p / (L/Q + 1) is called the load factor and is denoted α. We can increase the accuracy of the analysis by considering further cases; for instance, we can refine the solution by considering non-zero values of A_{i−2}. We refer to this refinement as the second-order case and to the previous solution as the first-order case. The zeroth-order case is when A_j = 0 for j = 1, ..., i − 1.

[Figure 3. Average number of key modifications with Q = 5.]
[Figure 4. Average number of key modifications with Q = 10.]
[Figure 5. Average number of key modifications with Q = 15.]

Figures 3-5 plot the zeroth-, first-, and second-order solutions along with simulation results for Q = 5, 10, and 15 s, respectively. We ran 1000 simulations to obtain each point on the curves. To see the effect of direction in successive attempts, both forward and backward directions were tested. All the models match the simulation results when the number of registrations is small; as the number of registrations increases, higher-order models match the simulation results better. We observe that the average number of key modifications increases with the number of registrations. Since a registration is a one-time event and retrievals occur more often, some registration latency may be tolerated in exchange for lower retrieval latency. The figures also show that the direction of key modification does not affect registration latency. With a larger Q, our model deviates from the simulation results due to the increased number of key collisions, which requires higher-order approximations for better predictions.

[Figure 6. Conditional probability Pr(a hit in I_{i−1} | a miss in I_i) for successive retrieval attempts with different offset bases. As the offset increases, the hit probability converges to the load factor α = 0.3472.]
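As a numerical companion to this analysis, the following hedged Monte Carlo sketch simulates backward registration over L/Q intervals and reports the average number of key modifications per registration. The per-registration averaging is our convention, and the Thres cutoff and the forced insertion at key 0 are simplified away.

    import random

    def avg_key_modifications(n_regs, n_intervals, trials=200):
        # Peers register at the interval containing their arrival time,
        # walking backward on each key collision.
        total = 0
        for _ in range(trials):
            occupied = set()
            # Conditioned on their count, Poisson arrival times are i.i.d.
            # uniform; sorting restores the arrival (time) order.
            for key in sorted(random.randrange(n_intervals)
                              for _ in range(n_regs)):
                while key in occupied and key > 0:
                    key -= 1            # one backward key modification
                    total += 1
                occupied.add(key)
        return total / (trials * n_regs)

    # Example mirroring the paper's setting: L = 7200 s, Q = 10 s -> 720 intervals.
    for n in (50, 100, 200, 300):
        print(n, round(avg_key_modifications(n, 720), 3))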


4.2 Retrieval

In this section, we analyze the expected latency of retrieval in terms of the number of successive key modifications. For simplicity, peers are assumed not to join or leave during a retrieval operation. If a peer seeks a random video block chosen uniformly in [0, T), where T is the current clock of the system, the probability of a successful retrieval with one attempt is identical to α. A successful retrieval is referred to as a hit. Registration with successive key modification causes registered keys to group together when collisions occur, and such key grouping makes consecutive failures of retrieval attempts more likely. Yet as the offset increases, the correlation between two consecutive retrieval attempts weakens (see Fig. 6), so the negative impact of key grouping diminishes. Assuming that the offset for key modification is sufficiently large to make consecutive retrieval attempts independent, the probability that a hit occurs on the k-th attempt, \Pr(N = k), is
    \Pr(N = k) = (1 - \alpha)^{k-1} \alpha,    (2)

where α is the load factor.

[Figure 7. Hit probability approaches the independence model with larger offset bases: CMFs of the number of retrieval attempts for offset bases Q, 2Q, and 3Q.]

In Fig. 7, the CMFs of the number of retrieval attempts are presented. The independence model is based on Eqn. (2). For offset bases Q, 2Q, and 3Q, random access patterns were generated uniformly over all available video blocks. Note that the actual latency of retrieval depends on the physical distance between peers, in addition to the number of retrieval attempts and the logical hops needed for each retrieval attempt.
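The geometric model of Eqn. (2) is easy to evaluate directly. The small sketch below prints the model CMF for the first few attempts and its mean, 1/α (ignoring the Thres cutoff); the value of α is the one quoted with Fig. 6.

    def hit_cmf(alpha, k_max):
        # Cumulative form of Eqn. (2): Pr(N <= k) = 1 - (1 - alpha)^k.
        return [1 - (1 - alpha) ** k for k in range(1, k_max + 1)]

    alpha = 0.3472                 # load factor quoted with Fig. 6
    print(hit_cmf(alpha, 4))       # CMF over the first four attempts
    print(1 / alpha)               # geometric mean: about 2.9 attempts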
4.3 Distributed overhead of key storage

To evaluate the scalability of pseudo-DHT, we examine the overhead of peers by computing the probability of a small number of nodes holding an excessive number of keys. The key/node space is divided into N intervals by N nodes. In this space, a node holds the keys that fall into the interval between its own node ID and its predecessor's node ID. Suppose that Node 0 is the node whose node ID is closest to 0; the i-th interval then has Node (i − 1) and Node i as its two ends. M_i denotes the number of keys that fall into the i-th interval, and L_i denotes the length of the i-th interval, ranging from 0 to 1. I(M_i = k) is an indicator function that takes the value 1 when the i-th interval holds k keys and 0 otherwise. The expected fraction of intervals that hold k keys is expressed as

    \frac{1}{N} E(\text{intervals with } k \text{ keys})
        = \frac{1}{N} E\Big( \sum_{i=1}^{N} I(M_i = k) \Big)
        = \frac{1}{N} \sum_{i=1}^{N} \Pr(M_i = k).    (3)

In Eqn. (3), \Pr(M_i = k) can be formulated as

    \Pr(M_i = k) = \int_0^1 \Pr(M_i = k \mid L_i = l) \, f(L_i = l) \, dl
                 = \int_0^1 e^{-Kl} \frac{(Kl)^k}{k!} \, f(L_i = l) \, dl    (4)
                 = \int_0^1 e^{-Kl} \frac{(Kl)^k}{k!} \, N e^{-Nl} \, dl,    (5)

where K denotes the total number of keys.

In Eqn. (4), the likelihood of a key falling into an interval is proportional to the interval length l, because a key is uniformly likely to fall at any point in the space. f(L_i = l) in Eqn. (5) represents the distribution of the length of the i-th interval. Since there is no distinction among intervals, the distribution of the interval length is identical for every interval. Since each node randomly chooses one point in [0, 1) after normalization, the N points are independently and uniformly distributed. If we view the points as events of a Poisson process with rate N observed up to time 1, the inter-arrival time between adjacent points is exponentially distributed with parameter N; thus we conclude f(L_i = l) = N e^{-Nl}, i = 1, ..., N. We can drop the subscript i in Eqn. (3), since \Pr(M_i = k) is independent of i as shown in Eqn. (5). Finally, the expected fraction of intervals that hold k keys is given as

    \frac{1}{N} \sum_{i=1}^{N} \Pr(M_i = k) = \Pr(M = k)
        = \frac{K^k}{k!} N \int_0^1 l^k e^{-(K+N)l} \, dl,    (6)

where \Pr(M = k) = \Pr(M_i = k), i = 1, ..., N.

Fig. 8 plots the expected fraction of intervals that hold k keys based on Eqn. (6). In Fig. 13, this model is compared against simulation results.
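Eqn. (6) can be evaluated numerically without special functions. The sketch below uses a simple midpoint rule; the key count K = 300 is an illustrative assumption chosen to match the experimental scale (N = 300 nodes, roughly one registered key per peer).

    import math

    def frac_intervals_with_k_keys(K, N, k, steps=20000):
        # Pr(M = k) = (K^k / k!) * N * \int_0^1 l^k e^{-(K+N) l} dl
        h = 1.0 / steps
        integral = sum(
            ((i + 0.5) * h) ** k * math.exp(-(K + N) * (i + 0.5) * h) * h
            for i in range(steps)
        )
        return K ** k / math.factorial(k) * N * integral

    for k in range(5):
        print(k, round(frac_intervals_with_k_keys(300, 300, k), 4))

With K = N, this model gives Pr(M = 0) of about 0.5, consistent with the observation in Section 5 that about half of the nodes hold keys.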
5 Experimental Results

To verify our theoretical analysis in Section 4, we implemented pseudo-DHT and performed simulations with the PlanetSim simulator [6]. In our experiments, the number of nodes (peers) Np was set to 300. Peers arrived in the system at arbitrary times, independently of each other, and remained in the system until the video session ended. The uplink/downlink bandwidth of each peer was 3R, where R is the video bitrate. The video length L was 7200 s (2 hours), and the video block size Q ranged from 5 s to 30 s. As peers joined the system, they cached the live video stream in their local video buffer (DSB). The DSB size was set to 240 s for all peers. The threshold for successive key modification was set to 4.

[Figure 8. Expected fraction of nodes holding k keys.]
[Figure 9. Number of hop counts per Chord operation.]
[Figure 10. Effects of simultaneous retrieval.]
[Figure 11. Effects of video block size on retrieval latency.]
[Figure 12. Effects of offset bases on retrieval latency.]
[Figure 13. Fraction of nodes holding k keys.]

In Fig. 9, the distribution of the number of logical hops is depicted for the basic operations of the Chord lookup overlay, insert and lookup. The maximum probability occurs at 5 hops, which is approximately half of log2 300 ≈ 8.23, where 300 is the number of nodes and 2 is the factor of binary search. (Binary search is a property of the Chord protocol.)

Fig. 10 shows the effects of simultaneous retrieval. With 4 simultaneous retrieval attempts, more than 98% of the retrieval operations were completed with the earliest reply arriving in fewer than 5 hops. Simultaneous retrieval is useful when a peer must find a new supplier after being disconnected from its current supplier, or when a peer wants to reduce its initial playback latency.

How different Q values affect the algorithm's performance is depicted in Fig. 11. With a larger Q, the load factor α increases and the retrieval latency decreases. Assuming video blocks are uniformly requested, the hit probability of the first retrieval attempt is linearly proportional to α. This result implies that larger values of Q are preferable for reducing retrieval latency. However, a supplier found after several key modifications supplies a smaller number of video blocks.

Fig. 12 illustrates the effects of offset bases on retrieval. The independence model of Fig. 7 is shown as a reference. The gap between the model and the simulation results arises primarily because the retrieval requests were not uniformly distributed: peers requested the most recent video blocks more often than other blocks while caching the live stream. As observed in Fig. 12, larger offset bases yield a slight increase in the hit probability, which translates into a smaller retrieval latency. The downside of a larger offset for key modification is that it may decrease the number of video blocks actually served to a requester, similar to the tradeoff for a larger Q in Fig. 11.

Fig. 13 shows the fraction of nodes locally holding a particular number of keys. This result closely matches the model (a PMF version of the model was shown in Fig. 8). About half of the nodes take responsibility for holding keys, yet very few nodes hold more than 6 keys, and 6 keys account for only 2% of the entire key population. Both the experiments and the model clearly indicate that the lookup overlay distributes the overhead among nodes, preventing any particular node from being excessively stressed.
6 Related Work

Recently, researchers have been working on reducing the overhead of content location in P2P systems. DHTs have been adopted by several systems [2][13]. SplitStream [2] is a P2P content distribution system based on Pastry [7]. There are two fundamental differences between SplitStream and pseudo-DHT. First, SplitStream combines content discovery with content delivery, whereas pseudo-DHT serves only as a content discovery network. Second, SplitStream assumes live video streaming, and the search for spare uplink capacity among peers is key to building its multiple overlay trees. In pseudo-DHT, video blocks are the targets of the search, making pseudo-DHT suitable for both live and video-on-demand streaming.

VMesh [13] is perhaps the closest work to pseudo-DHT. In VMesh, a video is divided into segments, each of which is stored in the local buffer of one or more peers. Peers register the video segments in their buffer with a DHT overlay. However, video segments are placed in segment-ID order in the key/node space, and the video blocks stored in a local buffer are not necessarily contiguous in their IDs, requiring an individual registration for each segment. Therefore, VMesh requires more information to be registered and retrieved than pseudo-DHT. To reduce control traffic, nodes in a VMesh system store additional pointers to the peer locations of the next required segments.

Distributed structures other than DHTs, such as binary trees and linked lists, have also been investigated. In [3], a hierarchical index overlay of minimal size is formed based on the overlap of peers' local video buffers. The separation of an index overlay from a delivery overlay places BAS (the scheme of [3]) and pseudo-DHT in a common category. While BAS attempts to reduce search latency by exploiting the overlap of peers' buffers, pseudo-DHT achieves a lower search latency by avoiding collisions of the starting points of peers' buffers.

DSL (Dynamic Skip List) [12] is a distributed skip-list overlay that provides both content discovery and content delivery. A skip list is an ordered linked list of keys with randomly generated additional links for faster search. Similar to pseudo-DHT, DSL provides a search latency of O(log N).

7 Conclusions

In this paper, we proposed pseudo-DHT, an efficient distributed search algorithm for P2P video streaming systems. By avoiding a central query server, pseudo-DHT allows users to locate video segments in a scalable fashion. It also exploits the temporal dependencies of video segments, resulting in a flexible and robust search algorithm.

Throughout our extensive performance analysis, we showed that pseudo-DHT achieves a well-balanced key storage overhead among peers without employing a sophisticated load balancing mechanism. We also analyzed various aspects of pseudo-DHT, including the relationship between search latencies and system parameters. Our simulations confirm the findings of the analysis.

8 Acknowledgments

The authors would like to thank Professor Bernd Girod at Stanford University for many useful discussions.

References

[1] Skype. www.skype.com.
[2] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh. SplitStream: High-bandwidth content distribution in a cooperative environment. Proc. IPTPS '03, Berkeley, CA, Feb. 2003.
[3] H. Chi, Q. Zhang, and S. Shen. Efficient search and scheduling in P2P-based media-on-demand streaming service. IEEE Journal on Selected Areas in Communications, 25(1):119, 2007.
[4] Y. Cui, B. Li, and K. Nahrstedt. oStream: Asynchronous streaming multicast in application-layer overlay networks. IEEE Journal on Selected Areas in Communications, 22(1), 2004.
[5] S. Deshpande and J. Noh. P2TSS: Time-shifted and live streaming of video in peer-to-peer systems. Proc. IEEE International Conference on Multimedia and Expo, Hanover, Germany, June 2008.
[6] N. A. Hamid. A lightweight framework for peer-to-peer programming. Journal of Computing Sciences in Colleges, 22(5):98-104, May 2007.
[7] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron. Proximity neighbor selection in tree-based structured peer-to-peer overlays. Technical Report MSR-TR-2003-52, 2003.
[8] P. Maymounkov and D. Mazieres. Kademlia: A peer-to-peer information system based on the XOR metric. Proc. IPTPS '02, Cambridge, MA, Mar. 2002.
[9] J. Noh and S. Deshpande. Time-shifted and live streaming of video in peer-to-peer systems. Technical Report, Sharp Labs of America, Sep. 2007.
[10] V. N. Padmanabhan, H. J. Wang, P. A. Chou, and K. Sripanidkulchai. Distributing streaming media content using cooperative networking. Proc. ACM NOSSDAV, Miami Beach, FL, May 2002.
[11] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup protocol for Internet applications. Proc. ACM SIGCOMM, 2001.
[12] D. Wang and J. Liu. Peer-to-peer asynchronous video streaming using skip list. Proc. IEEE International Conference on Multimedia and Expo, pages 1397-1400, 2006.
[13] W. Yiu, X. Jin, and S. Chan. VMesh: Distributed segment storage for peer-to-peer interactive video streaming. IEEE Journal on Selected Areas in Communications, 25(9):1717-1731, 2007.

