Context: Reliable Communication Over Packet Erasure Networks

The document discusses using network coding within the TCP/IP stack to improve throughput over packet erasure networks. It proposes acknowledging packets upon "seeing" them via linear combinations instead of waiting for decoding. This allows TCP's window to advance smoothly with network coding by acknowledging innovative combinations, avoiding timeouts even with losses. Intermediate nodes can also re-encode by generating new random linear combinations from their buffers.


Introduction

Network

Context: Reliable communication over packet erasure networks

• Packet erasure networks are a good model for today’s


networks (especially wireless networks)
• Erasures can happen due to:
– Bursty channel errors (fading)
– Losses due to collision (interference)
– Buffer overflow (congestion)
• Goal: Efficient and fair use of network resources
• Two crucial ideas: feedback and coding
1
Introduction

[Figure: two classic network coding examples. (a) Two nodes exchange packets p1 and p2 through a relay; the relay broadcasts the single coded packet p1 + p2, from which each node recovers the other's packet. (b) A butterfly network: both sinks want packets A and B, and the bottleneck link carries the combination A + B.]

2
Introduction

How to fit network coding into the existing feedback-based framework?

5
[Outline: Network Coding + Feedback, examined from three angles: congestion control, queue management, and decoding delay.]
6
Motivation
• Goal:
– To incorporate network coding into the TCP/IP protocol stack in order to
improve throughput

• Some related earlier work:
– Theory of rate control for network coding (Chen et al., INFOCOM '07)
– Coding vs. queueing in large networks (Bhadra and Shakkottai, '06)
– Feedback for network coding (Fragouli et al., CISS '07)
– Practical implementation of network coding, MORE (Chachulski et al., SIGCOMM '07)
– FEC-fueled TCP-like congestion control (Brockners, Kommunikation in Verteilten Systemen, '99)

In practice, need to consider the deployment problem.


7
TCP basics
[Figure: a source sends packets P1, P2, P3 at rate R pkts/s toward a sink over a bottleneck link of bandwidth B pkts/s; the sink returns ACKs A1, A2. When the bottleneck buffer overflows, a packet is lost; the missing ACK A3 is interpreted as congestion, so the source must slow down.]
8
TCP and network coding – compatible?

[Figure: the source sends P1, P2, P3 at rate R pkts/s into a network with network coding; the sink receives linear combinations such as P1+P2, P2+P3, P1+P2+P3 and returns ACKs A1, A2, A3 (e.g. acknowledging "P1+P2"). Coding is a symbol-wise finite field operation, e.g. field size = 256, so one symbol ≡ 1 byte. A network coding header identifies the linear combination carried in each packet.]

Due to batch processing, ACKs arrive too late: a late ACK looks like a long RTT, leading to a wrong inference of the round-trip time and hence low throughput.

Can we ACK every "innovative" linear combination, so that the TCP window continues to advance without waiting for decoding?
9
‘Seeing’ a packet

[Figure: coded packets C1, …, C5 are linear combinations of the original packets P1, …, P8. For example, C1 = P2 + P3, whose coefficient vector is (0 1 1 0 0 0 0 0).]

10
‘Seeing’ a packet
[Figure: coefficient vectors of the received linear combinations after Gaussian elimination, shown as a row echelon matrix with columns P1, …, P8. Pivot columns mark the seen packets; the leading packets whose rows have no other nonzero entries (here P1, P2) are decoded; the remaining columns are unseen.]

Definition: A node has seen a packet Pk if it can compute a linear combination (Pk + Q), where Q is a linear combination of packets with index larger than k.

Number of seen packets = rank of the matrix.
11
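The definition above can be made concrete with a small sketch (an illustration, not the authors' implementation): run Gaussian elimination on the coefficient vectors and read the seen packets off the pivot columns. GF(2) is used for brevity; the slides assume a larger field such as GF(256).

```python
# Sketch: which packets are "seen", given the coefficient vectors of the
# received linear combinations. Works over GF(2), where addition is XOR.

def seen_packets(rows, n):
    """rows: list of length-n coefficient vectors over GF(2).
    Returns indices k such that packet Pk is seen, i.e. some combination of
    the rows equals Pk + Q with Q involving only packets of index > k."""
    rows = [r[:] for r in rows]
    seen, pivot_row = [], 0
    for col in range(n):
        # look for a row with a leading 1 in this column
        for r in range(pivot_row, len(rows)):
            if rows[r][col] == 1:
                rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
                # eliminate this column from every other row
                for q in range(len(rows)):
                    if q != pivot_row and rows[q][col] == 1:
                        rows[q] = [a ^ b for a, b in zip(rows[q], rows[pivot_row])]
                seen.append(col)   # a pivot column means packet `col` is seen
                pivot_row += 1
                break
    return seen

# Receiving P1+P2 and P2 lets the node see (indeed decode) both P1 and P2:
# seen_packets([[1, 1, 0], [0, 1, 0]], 3) == [0, 1]
```

Note that the number of seen packets equals the number of pivots, i.e. the rank of the matrix, matching the slide.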


A new kind of ACK
[Figure: the same coefficient matrix after Gaussian elimination as on the previous slide, with packets marked decoded, seen, and unseen.]
• Acknowledge a packet upon “seeing” it


– Same ACK format as before → ease of implementation
– TCP window advances smoothly → good throughput
– Premature acknowledgment of packets does not affect reliability
12
A new kind of ACK
[Figure: the same coefficient matrix after Gaussian elimination as on the previous slide.]

• Acknowledge a packet upon seeing it
– Allows an ACK for every "degree of freedom", even if it does not reveal a packet immediately
– Every random linear combination will cause the next unseen packet to be seen with high probability (if the field size is large)

Good for multipath!
13
TCP using network coding
The network coding layer sits between TCP and IP, on both the source side and the receiver side:

Application
TCP
Network coding layer
IP
Lower layers

Data flows down this stack at the source and back up at the receiver; ACKs travel the reverse path.
14
The sender module
• Buffer the packets handed down by TCP. For every packet coming from TCP, transmit R random linear combinations of the buffered packets into the network (R is the redundancy factor)
• Ideally, we want R = 1/(1 − pe), where pe is the loss rate
– Too low an R does not mask losses from TCP effectively; TCP eventually times out and backs off drastically
– Too high an R recovers losses well, so the TCP window advances smoothly, but throughput is reduced owing to the low code rate
• How to choose R in practice?
– Estimate pe using a learning mechanism
– Or obtain pe from the lower layers
15
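The sender-side behaviour above can be sketched as follows (a simplification under stated assumptions, not the authors' implementation): each TCP packet is buffered, and on average R random linear combinations of the buffer are emitted per arrival. GF(2) is used for brevity; the slides assume a large field such as GF(256).

```python
# Hypothetical sender-side coding layer: buffer TCP packets and emit
# R random linear combinations per arrival (fractional R via a credit).
import random

class CodingSender:
    def __init__(self, redundancy):
        self.R = redundancy        # ideally R = 1 / (1 - pe)
        self.buffer = []           # packets handed down by TCP
        self.credit = 0.0          # fractional transmissions carried over

    def random_combination(self):
        # coefficients drawn uniformly from GF(2) = {0, 1}
        coeffs = [random.randint(0, 1) for _ in self.buffer]
        payload = 0
        for c, pkt in zip(coeffs, self.buffer):
            if c:
                payload ^= pkt     # GF(2) addition of payloads is XOR
        return coeffs, payload     # coeffs go into the network coding header

    def on_tcp_packet(self, pkt):
        self.buffer.append(pkt)
        self.credit += self.R      # send floor(credit) combinations now
        sent = []
        while self.credit >= 1:
            sent.append(self.random_combination())
            self.credit -= 1
        return sent
```

For example, with pe = 0.2 the slide's rule gives R = 1/(1 − 0.2) = 1.25, so the sender emits five coded packets for every four TCP packets on average.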
Re-encoding at intermediate node
• An intermediate node buffers incoming linear combinations
• For every incoming linear combination, the node transmits R random linear combinations of the buffer contents – just like the sender
• Benefits:
– Adds only as much redundancy as is needed to reach the next re-encoding node → increases capacity
– Adds redundancy only where necessary, just before the lossy link → reduces congestion

16
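A key point in re-encoding is that a random combination of buffered coded packets is itself a linear combination of the original packets, so the node only needs to combine the coefficient vectors in the network coding headers the same way it combines the payloads. A minimal sketch over GF(2) (a hypothetical helper, not the authors' code):

```python
# Re-encoding sketch: combine buffered (coeff_vector, payload) coded packets.
import random

def re_encode(buffered):
    """buffered: non-empty list of (coeff_vector, payload) coded packets."""
    picks = [random.randint(0, 1) for _ in buffered]
    if not any(picks):             # avoid sending the all-zero combination
        picks[random.randrange(len(picks))] = 1
    n = len(buffered[0][0])
    coeffs, payload = [0] * n, 0
    for pick, (c, p) in zip(picks, buffered):
        if pick:
            coeffs = [a ^ b for a, b in zip(coeffs, c)]  # combine the headers
            payload ^= p                                  # combine the payloads
    return coeffs, payload
```

The returned coefficient vector still describes the payload in terms of the original packets, so downstream nodes and the sink can treat re-encoded packets exactly like freshly encoded ones.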
Simulation results

[Figure: two source–sink pairs (SRC 1 → SINK 1, SRC 2 → SINK 2) share a chain of five nodes; each link is 1 Mbps with 100 ms delay. The throughput plot shows an improvement due to effective masking of losses from TCP.]

Assumptions:
- No link layer retransmission
- Overhead ignored
- Redundancy (R) fixed manually
- Finite field is very large
- TCP Vegas is used (can be changed to Reno)
17
[Outline recap: Network Coding + Feedback. Next: queue management.]
18
Setup
[Figure: packet erasure broadcast channel with perfect feedback. Packets arrive at the sender at rate λ; receiver i receives each transmission successfully with probability μi.]

• Stochastic arrivals into an infinite-capacity buffer
• Traffic pattern: send every packet to every receiver
• Erasures independent across time and across receivers

19
Representing knowledge
• Knowledge space: with linear coding, a node's state of knowledge can be represented as a vector space, called the knowledge space
• Innovation: a linear combination that increases the dimension (rank) of a node's knowledge space is said to be innovative for that node
• Virtual queues (similar to [HV '05, EL '07]): one virtual queue per receiver, maintaining the backlog in information yet to be delivered from sender to receiver
– Arrival: whenever there is a real arrival at the sender
– Service: when an innovative packet is conveyed to the receiver
– Queue size: difference between the dimensions of the knowledge spaces of the sender and the receiver
– Each virtual queue behaves like a Geom/Geom/1 queue
• Goal: the physical queue size at the sender should track the virtual queue sizes

20
Coding and queue management

• Queue management: drop when seen
• For FIFO queuing, use a deterministic coding scheme:
– Mixes only the next unseen packets of all receivers (similar to Larsson '08)
– Computes a linear combination that causes each receiver to see its next unseen packet
– Requires field size ≥ number of receivers
21
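The "drop when seen" rule can be sketched directly in terms of sets (a hypothetical helper, assuming S is the set of packets arrived so far and S_j the set seen by receiver j): the sender's physical queue keeps a packet only until every receiver has seen it, i.e. it holds S \ (S_1 ∩ … ∩ S_n).

```python
# Drop-when-seen sketch: physical queue vs. per-receiver virtual queues.

def physical_queue(arrived, seen_by):
    """arrived: set of packet ids; seen_by: list of per-receiver seen sets.
    A packet can be dropped once every receiver has seen it."""
    seen_by_all = set.intersection(*seen_by) if seen_by else set()
    return arrived - seen_by_all

def virtual_queue_sizes(arrived, seen_by):
    # virtual queue j holds the degrees of freedom not yet seen by receiver j
    return [len(arrived - s) for s in seen_by]

arrived = {1, 2, 3, 4}
seen_by = [{1, 2}, {1, 3}]                  # two receivers
pq = physical_queue(arrived, seen_by)        # {2, 3, 4}: only P1 can be dropped
vq = virtual_queue_sizes(arrived, seen_by)   # [2, 2]
assert len(pq) <= sum(vq)                    # physical size <= sum of virtual sizes
```

The final assertion is just the union bound: a packet still in the physical queue is unseen by at least one receiver, so it is counted in at least one virtual queue.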
Queuing analysis
[Figure: the physical queue (arrival rate λ) feeds one virtual queue per receiver, each a Geom/Geom/1 queue with service rate μ.]

|S \ ∩_{j=1}^{n} S_j|  ≤  Σ_{j=1}^{n} ( |S| − |S_j| )

where S: set of all packets arrived so far;
S_j: set of packets seen by receiver j;
n: number of receivers
• LHS is the physical queue size (under drop-when-seen)
• RHS is the sum of the virtual queue sizes
22
22
Queuing analysis

[Figure: the same physical and virtual queue system as on the previous slide.]

Drop when decoded:
– Decoding is possible when a virtual queue becomes empty
– The time a packet spends in the physical queue is related to the length of the current busy period of the virtual queues
– Expected physical queue size grows as Θ(1/(1 − ρ)²)

Drop when seen:
– A packet is seen when it departs from its virtual queue
– The time a packet spends in the physical queue is related to its time until departure from the virtual queues
– Expected physical queue size grows as Θ(1/(1 − ρ)), an asymptotic factor 1/(1 − ρ) smaller

ρ = load factor = arrival rate / service rate; asymptotics: ρ → 1
23–24


[Outline recap: Network Coding + Feedback. Next: decoding delay.]
25
Definition of delay
• Let:
– si: time of arrival of packet i at the sender
– ti: time when packet i is decoded at a particular receiver
– yi: time when packet i is delivered in order to a particular receiver
• Define (ti − si) as the decoding delay and (yi − si) as the delivery delay of packet i
• We study the expected decoding and delivery delays, E[ti − si] and E[yi − si]
• Essentially: what is the long-term average delay per packet at one particular receiver?
26
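A tiny sketch (with hypothetical arrival and decoding times) makes the distinction between the two delays concrete: delivery can lag decoding when an earlier packet is still missing.

```python
# Decoding delay vs. in-order delivery delay, per the definitions above.

def delays(s, t):
    """s[i]: arrival time of packet i at the sender;
    t[i]: time packet i is decoded at the receiver.
    Packet i is delivered in order once packets 0..i are all decoded."""
    decoding = [t[i] - s[i] for i in range(len(s))]
    delivery = []
    for i in range(len(s)):
        y_i = max(t[: i + 1])          # must wait for every earlier packet
        delivery.append(y_i - s[i])
    return decoding, delivery

# Packet 1 decodes before packet 0, so its in-order delivery is held back:
dec, dlv = delays(s=[0, 1, 2], t=[5, 3, 6])
assert dec == [5, 2, 4]
assert dlv == [5, 4, 4]                # delivery times y = [5, 5, 6]
```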
Our coding module
• Central idea: transmit a linear combination that involves only the oldest undecoded packet (called the 'request') of each receiver
• An outline of the algorithm:
– Group the receivers by their next request
– Process the groups in decreasing order of request
– For each group, pick a coefficient for the request that is simultaneously innovative for every receiver in the group (always possible if the field size is at least the number of receivers)
• Key observation: coefficients chosen for subsequent groups will not affect the innovation of earlier groups, since those receivers have already decoded the requests of the subsequent groups
27
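The coefficient choice in the outline above can be sketched as follows (an illustration, not the authors' code): each receiver in a group rules out at most one "bad" value of the request's coefficient, so a field with more elements than bad values always contains a good choice; the slides state that field size ≥ number of receivers suffices.

```python
# Coefficient selection sketch for one group, over GF(q) = {0, 1, ..., q-1}.

def pick_coefficient(bad_values, q):
    """bad_values: the non-innovative coefficient value contributed by each
    receiver in the group (at most one per receiver); q: field size."""
    assert q >= len(bad_values) + 1, "field too small for this many receivers"
    for v in range(q):                 # scan the field for a good value
        if v not in bad_values:
            return v

# Two receivers over GF(3): if 0 and 1 are ruled out, 2 is chosen.
assert pick_coefficient({0, 1}, q=3) == 2
```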
Example of algorithm

[Figure: two receivers, working over GF(3). Group 1's request (oldest undecoded packet) is P2: it has decoded P1 and has also received P1+P2. Group 2's request is P1: it has decoded P2. The sender transmits x·P2 + y·P1.]

– Group 1: the coefficient of P2 is chosen from {0, 1, 2}, crossing out the values that are not innovative for group 1
– Group 2: the coefficient of P1 is chosen from {0, 1, 2} likewise; by the key observation, this choice cannot affect group 1
– Result: transmit 2·P1 + P2
28
Simulation results

[Figure: log-scale plot of delay vs. 1/(1 − ρ) at μ = 0.5. Curves: log(decoding delay) and log(delivery delay) for 3 and 5 receivers; axes are log10(Delay) vs. log10[1/(1 − ρ)].]

Unit slope of the lines in the log-scale plot confirms the linear asymptotic growth.
29
Summary
• Goal 1: 100% throughput
– Innovation guarantee: every transmitted linear combination increases the rank of every node that receives it, except if the node already knows everything the sender knows
• Goal 2: queue size at sender
– The physical queue size must track the virtual queue sizes
• Goal 3: decoding delay
– Instantaneous decodability: every innovative reception causes a new packet to be decoded
• Goal 4: delivery delay
– In-order decoding: every innovative reception causes the very next packet in order to be decoded

For one receiver, ARQ achieves all four goals. How much of this extends to the multiple-receiver case?
30
Summary (contd.)
• 1 receiver: innovation guarantee required; sender queue size optimal; decoding delay optimal; delivery delay optimal
• 2 receivers: innovation guarantee required; sender queue size tracks the virtual queue sizes, with optimal scaling in heavy traffic (conjectured); decoding is instantaneous [DFT '07]; instantaneous delivery is not possible, but heavy-traffic scaling is optimal
• More than 2 receivers: innovation guarantee required; sender queue size tracks the virtual queue sizes, with optimal scaling in heavy traffic (conjectured); instantaneous decoding is not possible [DFT '07], but heavy-traffic scaling is optimal (conjectured); instantaneous delivery is not possible, but heavy-traffic scaling is optimal (conjectured)
31
Conclusions and future work
• Proposed new notion of ‘seeing’ a packet and corresponding ACK
mechanism for coded networks, based on Gaussian elimination
• New ACK acknowledges every ‘degree of freedom’, even if it cannot be
decoded immediately
• Interfacing congestion control with network coding
– The new ACK allows TCP-compatible sliding-window network coding
– With coding, TCP can be used over multipath or opportunistic routing scenarios without reordering issues
• Queue management
– Drop packet when seen by receiver(s)
• Decoding delay
– New coding module proposed – ‘mix oldest undecoded’
– Proof of asymptotic optimality of proposed scheme is still open
– Can use notion of seen packets to show connection to traditional queuing
theory problems (resequencing buffers)

32
[Closing recap: the outline diagram (congestion control, queue management, decoding delay around Network Coding + Feedback), shown alongside the virtual queue model and the delay plot from the simulation results.]

Acknowledging degrees of freedom –
a step towards realizing the potential of network coding in practice.

Thank you
