Context: Reliable Communication Over Packet Erasure Networks
Figure: a network coding example. Packets A and B enter the network; an intermediate node forwards the coded packet A + B over the shared link, and each sink, which wants both A and B, decodes using what it already has – a single coded transmission serves both sinks.
Introduction
Outline: congestion control; network coding + feedback; queue management; decoding delay.
Motivation
• Goal: to incorporate network coding into the TCP/IP protocol stack in order to improve throughput
Figure: a source sends at rate R packets/s to a sink across a bottleneck link of bandwidth B packets/s; packets are lost to buffer overflow at the bottleneck, and the sink returns ACKs (A1, A2) for the packets it receives.
TCP and network coding – compatible?
• With network coding, the source transmits linear combinations of packets (e.g. P1+P2, P2+P3, P1+P2+P3) instead of the original packets. The combination is a symbol-wise finite field operation (e.g. field size = 256, so one symbol ≡ 1 byte).
• A network coding header is added to each coded packet to identify the linear combination it carries; the sink acknowledges what it has received (e.g. an ACK for "P1+P2").
• Problem: with batch-based coding, ACKs arrive only after a whole batch is decoded. Due to this batch processing, the ACKs arrive too late, TCP makes the wrong inference about the round-trip time, and throughput is low.
• Question: can we ACK every "innovative" linear combination, so that the TCP window continues to advance without waiting for decoding?
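A rough illustration (my own sketch, not the format used in the talk) of what such a coded packet and its network coding header might look like; the field names are assumptions chosen for exposition:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CodedPacket:
    """A random linear combination of buffered TCP segments over GF(256).

    The network coding header must let the receiver identify the linear
    combination: which original packets are involved, and the coefficient
    applied to each (one byte per coefficient, since the field size is 256
    and one symbol is one byte).
    """
    base_seq: int            # sequence number of the first packet in the combination
    coefficients: List[int]  # one GF(256) coefficient per buffered packet
    payload: bytes           # symbol-wise combination of the packet payloads
```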
‘Seeing’ a packet
Figure: coded packets as linear combinations of the original packets. Each coded packet C1, ..., C5 is a linear combination of the original packets P1, ..., P8; for example, the coded packet P2 + P3 corresponds to the coefficient vector (0 1 1 0 0 0 0 0). Stacking the coefficient vectors of the coded packets gives a matrix equation relating the coded packets to the original packets.
‘Seeing’ a packet
Figure: the coefficient vectors of the received linear combinations (over P1, ..., P8) after Gaussian elimination, with packets marked as decoded, seen, or unseen. After Gaussian elimination each row has a leading coefficient of 1; a packet is ‘seen’ if some row has its leading coefficient in that packet's column, ‘decoded’ if the packet itself has been recovered, and ‘unseen’ otherwise.
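A minimal sketch (my own, over GF(2) for brevity rather than the GF(256) used above) of how a receiver can run incremental Gaussian elimination on incoming coefficient vectors and report which packet each innovative combination makes ‘seen’:

```python
def update_seen(rows, new_coeffs):
    """Incremental Gaussian elimination over GF(2).

    rows       : reduced coefficient vectors received so far (one per pivot)
    new_coeffs : coefficient vector of the newly received linear combination

    Returns the index of the newly 'seen' packet, or None if the
    combination was not innovative.
    """
    v = list(new_coeffs)
    pivots = {next(i for i, x in enumerate(r) if x): r for r in rows}
    for i in range(len(v)):
        if v[i] == 0:
            continue
        if i in pivots:
            # eliminate using the existing row with this pivot (XOR over GF(2))
            v = [a ^ b for a, b in zip(v, pivots[i])]
        else:
            rows.append(v)
            return i          # the packet with index i is now 'seen'
    return None               # not innovative: no new pivot


# Example: receiving P2+P3 first makes P2 'seen'; receiving P2 next makes P3 'seen'.
rows = []
print(update_seen(rows, [0, 1, 1, 0]))  # -> 1  (P2 becomes seen)
print(update_seen(rows, [0, 1, 0, 0]))  # -> 2  (after elimination, P3 becomes seen)
```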
Figure: the protocol stack at sender and receiver (TCP over IP over the lower layers), with the network coding operations inserted between TCP and IP; data flows from the sender and ACKs flow back from the receiver.
The sender module
• Redundancy factor R: buffer the packets handed down by TCP; for every packet coming from TCP, transmit R random linear combinations of the buffered packets into the network (see the sketch below)
• Ideally, we want R = 1/(1 − pe), where pe is the loss rate
• R too low:
  – Losses are not masked from TCP effectively
  – Hence, TCP eventually times out and backs off drastically
• R too high:
  – Losses are recovered well and the TCP window advances smoothly
  – However, throughput is reduced owing to the low code rate
• How to choose R in practice?
  – Estimate pe using a learning mechanism
  – Alternatively, obtain pe from the lower layers
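A sketch of the sender module's main loop, under assumptions of my own: `buffer` holds the TCP packets not yet acknowledged, coefficients are drawn over GF(2) to keep the arithmetic trivial (the talk uses GF(256)), and `send` hands the coded packet to the network:

```python
import math
import random

def redundancy_factor(loss_rate: float) -> float:
    """Ideal redundancy factor R = 1 / (1 - pe), where pe is the loss rate."""
    return 1.0 / (1.0 - loss_rate)

def combine(packets, coeffs):
    """Symbol-wise linear combination of equal-length payloads.
    For this sketch the field is GF(2), so 'multiply' is a mask and
    'add' is XOR; the talk uses GF(256) with one byte per symbol."""
    out = bytearray(len(packets[0]))
    for pkt, c in zip(packets, coeffs):
        if c:
            out = bytearray(a ^ b for a, b in zip(out, pkt))
    return bytes(out)

def on_packet_from_tcp(buffer, packet, loss_rate, send):
    """Buffer the new TCP packet, then send R random linear combinations
    of everything buffered (R is rounded up here; a real sender would
    spread the fractional part over successive packets)."""
    buffer.append(packet)
    for _ in range(math.ceil(redundancy_factor(loss_rate))):
        coeffs = [random.randint(0, 1) for _ in buffer]
        send(coeffs, combine(buffer, coeffs))
```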
Re-encoding at intermediate nodes
• An intermediate node buffers the incoming linear combinations
• For every incoming linear combination, the node transmits R random linear combinations of the buffer contents – just like the sender (a sketch follows below)
• Benefits:
  – Add only as much redundancy as is needed to reach the next re-encoding node → increases capacity
  – Add redundancy only where necessary – just before the lossy link → reduces congestion
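A sketch of the re-encoding step, reusing the simplified GF(2) arithmetic from the sender sketch above; the point it illustrates is that the node combines already-coded packets, so the coefficient vectors (expressed over the original packets) are combined exactly as the payloads are:

```python
import math
import random

def reencode(buffered, R):
    """buffered: list of (coeff_vector, payload) pairs received so far.
    Emits ceil(R) fresh random combinations of the buffer contents; each
    output carries a coefficient vector over the original packets, so
    downstream nodes can still identify the linear combination."""
    out = []
    for _ in range(math.ceil(R)):
        picks = [random.randint(0, 1) for _ in buffered]
        vec = [0] * len(buffered[0][0])
        payload = bytes(len(buffered[0][1]))
        for (cv, pl), c in zip(buffered, picks):
            if c:
                vec = [a ^ b for a, b in zip(vec, cv)]
                payload = bytes(a ^ b for a, b in zip(payload, pl))
        out.append((vec, payload))
    return out
```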
Simulation results
Figure: topology with two source–sink pairs (SRC 1 → SINK 1 and SRC 2 → SINK 2) sharing a chain of nodes 1–2–3–4–5, with 1 Mbps links and 100 ms delay. The results show an improvement in throughput due to the effective masking of losses from TCP.
Setup
Figure: a packet erasure broadcast channel with perfect feedback. Packets arrive at the sender at a given arrival rate, and each receiver successfully receives a transmission with a given probability.
Representing knowledge
• Knowledge space: with linear coding, a node's state of knowledge can be represented as a vector space, called its knowledge space
• Innovation: a linear combination that increases the dimension (rank) of a node's knowledge space is said to be innovative for that node
• Virtual queues (similar to [HV '05, EL '07]): one virtual queue per receiver, maintaining the backlog of information yet to be delivered from the sender to that receiver
  – Arrival: whenever there is a real packet arrival at the sender
  – Service: whenever an innovative packet is conveyed to the receiver
  – Queue size: the difference between the dimensions of the sender's and the receiver's knowledge spaces
  – The virtual queues behave like Geom/Geom/1 queues (a simulation sketch follows below)
• Goal: the physical queue size at the sender should track the virtual queue sizes
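A minimal simulation sketch (my own, not from the talk) of one receiver's virtual queue as a slotted Geom/Geom/1 queue: in each slot a packet arrives at the sender with probability lam, and, whenever the backlog is nonzero, an innovative packet reaches the receiver with probability mu:

```python
import random

def simulate_virtual_queue(lam, mu, slots=100_000, seed=0):
    """Slotted Geom/Geom/1 model of one receiver's virtual queue.
    lam: per-slot probability of a packet arrival at the sender
    mu : per-slot probability that an innovative packet is received
    Returns the time-average backlog (in degrees of freedom)."""
    rng = random.Random(seed)
    q = 0
    total = 0
    for _ in range(slots):
        if q > 0 and rng.random() < mu:   # service: innovative reception
            q -= 1
        if rng.random() < lam:            # arrival of a new packet at the sender
            q += 1
        total += q
    return total / slots

# Example: as the load rho = lam/mu approaches 1, the average backlog
# grows roughly like 1/(1 - rho), as expected for a Geom/Geom/1 queue.
print(simulate_virtual_queue(lam=0.45, mu=0.5))
```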
Coding and queue management
With the drop-when-seen rule, the sender's physical queue S satisfies |S| = |S_1 ∪ ... ∪ S_n| ≤ |S_1| + ... + |S_n|, where S_j is the backlog of packets not yet seen by receiver j. Each virtual queue behaves like a Geom/Geom/1 queue with service rate μ.
Definition of delay
• Let:
  – si : the time of arrival of packet i at the sender
  – ti : the time when packet i is decoded at a particular receiver
  – yi : the time when packet i is delivered in order to that receiver
• Define (ti − si) as the decoding delay and (yi − si) as the delivery delay of packet i
• We study the expected decoding and delivery delays, E[ti − si] and E[yi − si]
• Essentially: what is the long-term average delay per packet at one particular receiver?
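As a simple illustration of these definitions (the function and argument names are mine), the long-term average decoding and delivery delays can be estimated directly from recorded timestamps:

```python
def average_delays(s, t, y):
    """s[i], t[i], y[i] are the arrival, decoding, and in-order delivery
    times of packet i at one particular receiver. Returns the empirical
    averages of the decoding delay (t_i - s_i) and delivery delay (y_i - s_i)."""
    n = len(s)
    decoding = sum(ti - si for si, ti in zip(s, t)) / n
    delivery = sum(yi - si for si, yi in zip(s, y)) / n
    return decoding, delivery
```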
Our coding module
• Central idea: transmit a linear combination that involves only the oldest undecoded packet (the ‘request’) of each receiver
• An outline of the algorithm (a sketch follows below):
  – Group the receivers by their next request
  – Process the groups in decreasing order of request
  – For each group, pick a coefficient for the request that is simultaneously innovative for every receiver in the group (always possible if the field size is at least the number of receivers)
• Key observation: coefficients chosen for subsequent groups do not affect the innovation of earlier groups, since the earlier groups have already decoded the requests of the subsequent groups
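A sketch of this coding module under assumptions of my own: each receiver's knowledge is represented as a list of coefficient vectors over a small prime field GF(Q) (with Q at least the number of receivers), and innovativeness is tested by a rank check via Gaussian elimination:

```python
Q = 5  # prime field size; must be at least the number of receivers

def rank_mod_q(vectors, q=Q):
    """Rank of a list of coefficient vectors over GF(q), by Gaussian elimination."""
    rows = [list(v) for v in vectors]
    rank, col = 0, 0
    ncols = len(rows[0]) if rows else 0
    while rank < len(rows) and col < ncols:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] % q), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, q)            # modular inverse (Python 3.8+)
        rows[rank] = [(x * inv) % q for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % q:
                f = rows[r][col]
                rows[r] = [(a - f * b) % q for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

def is_innovative(knowledge, candidate):
    """True if the candidate vector increases the rank of the receiver's knowledge."""
    return rank_mod_q(knowledge + [candidate]) > rank_mod_q(knowledge)

def choose_combination(requests, knowledge, num_packets):
    """requests : receiver id -> index of its oldest undecoded packet ('request')
    knowledge: receiver id -> list of coefficient vectors it already has
    Returns the coefficient vector of the combination to transmit."""
    combo = [0] * num_packets
    groups = {}                                  # group receivers by their request
    for rcv, req in requests.items():
        groups.setdefault(req, []).append(rcv)
    for req in sorted(groups, reverse=True):     # decreasing order of request
        for c in range(1, Q):                    # try coefficients for this request
            trial = combo[:]
            trial[req] = c
            if all(is_innovative(knowledge[r], trial) for r in groups[req]):
                combo = trial
                break
    return combo
```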
Example of the algorithm
Figure: the receivers are grouped by their request (oldest undecoded packet); Group 1's request is P2 and Group 2's request is P1, and each receiver's current knowledge (P1, P2, P1+P2) is shown. The transmitted combination has the form x·P2 + y·P1, working over GF(3). The coefficient of P2 is chosen first (from {0, 1, 2}) so that the combination is innovative for every receiver in Group 1, then the coefficient of P1 is chosen for Group 2; in this example the sender transmits 2·P1 + P2.
Simulation results
Log-scale plot 0.5
2.6
1.8
log 10(Delay)
1.6
1.4
1.2
0.8
Unit slope of the lines in the log-scale plot confirms the linear asymptotic growth 29
Summary
• Goal 1: 100% throughput
  – Innovation guarantee: every transmitted linear combination increases the rank of every node that receives it, unless that node already knows everything the sender knows
• Goal 2: Queue size at sender
  – The physical queue size must track the virtual queue sizes
• 2 receivers:
  – Queue size: tracks the virtual queue sizes, as required; optimal scaling in heavy traffic
  – Decoding delay: instantaneous decoding possible [DFT '07]
  – Delivery delay: instantaneous delivery not possible; optimal scaling in heavy traffic (conjectured)
• More than 2 receivers:
  – Queue size: tracks the virtual queue sizes, as required; optimal scaling in heavy traffic
  – Decoding delay: instantaneous decoding not possible [DFT '07]; optimal scaling in heavy traffic (conjectured)
  – Delivery delay: instantaneous delivery not possible; optimal scaling in heavy traffic (conjectured)
Conclusions and future work
• Proposed a new notion of ‘seeing’ a packet, based on Gaussian elimination, and a corresponding ACK mechanism for coded networks
• The new ACK acknowledges every ‘degree of freedom’, even if it cannot be decoded immediately
• Interfacing congestion control with network coding
  – The new ACK allows TCP-compatible sliding-window network coding
  – With coding, TCP can be used over multipath or opportunistic routing scenarios without reordering issues
• Queue management
  – Drop a packet once it has been seen by the receiver(s)
• Decoding delay
  – New coding module proposed – mix the oldest undecoded packets
  – A proof of asymptotic optimality of the proposed scheme is still open
  – The notion of seen packets connects the problem to traditional queueing theory (resequencing buffers)