
Hybrid Control and Switched Systems Lecture #17 Hybrid Systems Modeling of Communication Networks

João P. Hespanha, University of California at Santa Barbara

Motivation

Why model network traffic?
- to validate designs through simulation (scalability, performance)
- to analyze and design protocols (throughput, fairness, security, etc.)
- to tune network parameters (queue sizes, bandwidths, etc.)

Types of models
Packet-level modeling
- tracks individual data packets as they travel across the network
- ignores the data content of individual packets
- sub-millisecond time accuracy
- computationally very intensive

Fluid-based modeling
- tracks time/ensemble-average packet rates across the network
- does not explicitly model individual events (acknowledgments, drops, queues becoming empty, etc.)
- time accuracy of a few seconds for time-averages; only suitable to model many similar flows for ensemble-averages
- computationally very efficient (at least for first-order statistics)

Types of models

Hybrid modeling
- keeps track of packet rates for each flow, averaged over small time scales
- explicitly models some discrete events (drops, queues becoming empty, etc.)
- time accuracy of a few milliseconds (one round-trip time)
- computationally efficient
- captures fast dynamics even for a small number of flows
- provides information about average, peak, and instantaneous resource utilization (queues, bandwidth, etc.)

Summary
- Modeling, 1st pass: dumbbell topology & simplified TCP
- Modeling, 2nd pass: general topology, TCP and UDP models
- Validation
- Simulation complexity

1st pass: Dumbbell topology


[Figure: dumbbell topology. Flows f1, f2, f3 enter a single queue of size q(t) at rates r1, r2, r3 bps; the queue is drained by a bottleneck link of bandwidth B bps.]

Several flows follow the same path and compete for bandwidth in a single bottleneck link. This is the prototypical network for studying congestion control: routing is trivial and there is a single queue.

B is unknown to the data sources and possibly time-varying

Queue dynamics
[Figure: the dumbbell topology as above; flows f1, f2, f3 at rates r1, r2, r3 bps feed the queue q(t), drained at B bps.]
When $\sum_f r_f$ exceeds B, the queue fills and data is lost. A drop is a discrete event relevant for congestion control.
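The equations on this slide were images; the following is a plausible reconstruction of the fluid queue dynamics they describe, with $q_{max}$ denoting the queue capacity:

$$ \dot q = \begin{cases} \sum_f r_f - B, & 0 < q < q_{max}, \text{ or } q = 0 \text{ and } \sum_f r_f > B,\\ 0, & \text{otherwise,} \end{cases} $$

with a drop event exported whenever the queue is full and $\sum_f r_f > B$.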

Queue dynamics
[Figure: the dumbbell topology as above; flows f1, f2, f3 at rates r1, r2, r3 bps feed the queue q(t), drained at B bps.]

Hybrid automaton representation:

[Diagram: the queue automaton, where the queue-full condition serves as a transition enabling condition and each drop is an exported discrete event.]

Window-based rate adjustment


wf (window size): the number of packets that can remain unacknowledged by the destination

[Figure: timing diagram for wf = 3. The source sends packets 1, 2, 3 at times t0, t1, t2; the destination acknowledges each packet as it is received; the 4th packet can be sent only after the 1st acknowledgment arrives.]

Over one round-trip time, wf effectively determines the sending rate rf.

Window-based rate adjustment


wf (window size): the number of packets that can remain unacknowledged by the destination. The sending rate is determined by wf and the total round-trip time, which consists of the propagation delay, the time spent in the queue until transmission, and the per-packet transmission time.
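The formulas on this slide were images; a standard reconstruction consistent with the labels above, with s the packet size:

$$ r_f = \frac{w_f}{RTT_f}, \qquad RTT_f = T_{prop} + \frac{q}{B} + \frac{s}{B}, $$

where $T_{prop}$ is the propagation delay, $q/B$ the time in the queue until transmission, and $s/B$ the per-packet transmission time.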


As the queue fills, the RTT gets longer and the sending rate rf = wf/RTT decreases: negative feedback. When the queue empties, the RTT shrinks and the rate increases again.

This mechanism alone is still not sufficient to prevent a catastrophic collapse of the network if the sources set wf too large.

TCP congestion avoidance


1. While there are no drops, increase wf by 1 on each RTT (additive increase)
2. When a drop occurs, divide wf by 2 (multiplicative decrease)

(the congestion controller constantly probes the network for more bandwidth)

disclaimer: this is a very simplified version of TCP Reno; better models later

TCP congestion avoidance


1. While there are no drops, increase wf by 1 on each RTT (additive increase)
2. When a drop occurs, divide wf by 2 (multiplicative decrease)

[Diagram: the queuing model feeds the RTT and drop events back to the TCP congestion-avoidance block, which sets the rate rf.]

disclaimer: this is a very simplified version of TCP Reno; better models later

TCP congestion avoidance


1. While there are no drops, increase wf by 1 on each RTT (additive increase)
2. When a drop occurs, divide wf by 2 (multiplicative decrease)

[Diagram: combined TCP + queuing model, with additive-increase and multiplicative-decrease modes.]

disclaimer: this is a very simplified version of TCP Reno; better models later
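To make rules 1 and 2 concrete, here is a minimal runnable sketch coupling AIMD to the fluid queue above; all names and parameter values are illustrative assumptions, not from the slides:

```python
# Minimal AIMD + fluid-queue loop for the dumbbell topology (illustrative).
B, qmax, Tprop, dt = 1000.0, 100.0, 0.05, 1e-3  # pkts/s, pkts, s, s
w, q = 1.0, 0.0                                  # window (pkts), queue (pkts)

for step in range(100000):
    rtt = Tprop + q / B          # round-trip time grows as the queue fills
    r = w / rtt                  # sending rate implied by the window
    q = max(q + (r - B) * dt, 0.0)  # fluid queue dynamics
    if q > qmax:                 # queue overflows: drop event
        q = qmax
        w /= 2                   # multiplicative decrease
    else:
        w += dt / rtt            # additive increase: +1 per RTT
```

After a short transient, w(t) traces the familiar TCP sawtooth.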

Linearization of the TCP model


Time normalization: define a new time variable in which 1 unit corresponds to 1 round-trip time. In normalized time, the continuous dynamics of the TCP + queuing model (additive increase, multiplicative decrease) become linear.
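The normalization itself was an image on the slide; a standard reconstruction: define the normalized time $\tau$ by

$$ \frac{d\tau}{dt} = \frac{1}{RTT(t)}, $$

so that one unit of $\tau$ corresponds to one round-trip time, and the additive-increase law $\dot w_f = 1/RTT$ becomes $dw_f/d\tau = 1$, i.e., linear.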

Switching-by-switching analysis
[Figure: a trajectory alternating additive-increase intervals [t0, t1], [t1, t2], [t2, t3], ... separated by multiplicative-decrease jumps; xk denotes the continuous state just before the kth multiplicative decrease.]

The impact map T takes the state on the transition surface just before one multiplicative decrease to the state just before the next one, i.e., x_{k+1} = T(x_k) in the state space.

Switching-by-switching analysis
[Figure: the same alternation of additive-increase intervals and multiplicative-decrease jumps, with xk the continuous state just before the kth multiplicative decrease.]

Theorem. The impact map T is a contraction. In particular, xk converges to a fixed point x* as k goes to infinity, and x(t) converges to x*(t) as t goes to infinity, where x* is constant and x*(t) is a periodic limit cycle.
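A quick numerical illustration of the contraction, as a sketch under assumed parameters with synchronized drops (every flow halves its window at each drop):

```python
import numpy as np

# Impact map for two synchronized AIMD flows on a dumbbell link
# (illustrative). Between drops, both windows grow at the same rate;
# a drop occurs when the total window reaches W = B*RTT + qmax.
W = 200.0                      # assumed bandwidth-delay product + queue capacity
w = np.array([10.0, 150.0])    # arbitrary initial windows

for k in range(6):
    w = w + (W - w.sum()) / 2  # additive increase until the drop threshold
    print(f"before drop {k}: w = {w}, spread = {w[0] - w[1]:+.2f}")
    w = w / 2                  # synchronized multiplicative decrease
```

The spread between the two windows halves at every drop, matching the 0.5^k convergence rate quoted on the next slides.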

NS-2 simulation results


[Figure: dumbbell test topology. TCP sources at nodes n1, ..., n8 (flows 1 through 8) connect through routers R1 and R2 over a 20Mbps/20ms bottleneck link to TCP sinks s1, ..., s8.]

[Plot: window sizes w1, ..., w8 and queue size q (packets) versus time (seconds); the window trajectories synchronize into a common sawtooth.]

Results
[Figure: the additive-increase / multiplicative-decrease timeline from the switching-by-switching analysis.]

Window synchronization: convergence is exponential, as fast as 0.5^k.

Steady-state formulas follow for the average drop rate, the average RTT, and the average throughput (the well-known TCP-friendly formula).
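The throughput formula itself was an image on the slide; its commonly cited form, with p the average per-packet drop rate, is

$$ \text{average throughput} \approx \frac{1}{RTT}\sqrt{\frac{3}{2p}} \quad \text{packets per second}. $$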

2nd pass: General topology


A communication network can be viewed as the interconnection of several blocks with specific dynamics:

a) Routing: maps in-node rates to out-node rates
b) Queuing: maps in-queue rates to out-queue rates and queue sizes
c) End-to-end congestion control: maps acks & drops to sending rates

[Diagram: servers and clients exchange data and acks through the network dynamics block (queuing & routing), which closes a feedback loop with the congestion-control block.]

Routing
determines the sequence of links followed by each flow f from node n to node n'

Conservation of flows: the in-queue rate of flow f at a link equals either the end-to-end sending rate of flow f (at the flow's first link) or the upstream out-queue rate of flow f; the link indexes l and l' are determined by the routing tables.
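In symbols (a reconstruction, with assumed notation $r^{in}_{\ell f}$, $r^{out}_{\ell f}$ for the in- and out-queue rates of flow f at link $\ell$):

$$ r^{in}_{\ell f} = \begin{cases} r_f, & \ell \text{ is the first link of flow } f,\\ r^{out}_{\ell' f}, & \ell' \text{ the upstream link of } \ell \text{ for flow } f, \end{cases} $$

where $r_f$ is the end-to-end sending rate and the index pairs $(\ell, \ell')$ are determined by the routing tables.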

Routing
determines the sequence of links followed by each flow

[Diagrams: multicast and multi-path routing; at a node n', a flow can be replicated (multicast) or split (multi-path) across several outgoing links l1 and l2.]

Queue dynamics
[Diagram: a link l serving M flows; in-queue rates enter the queue, drop rates leave at the input, and out-queue rates leave through the link of given bandwidth.]

The total queue size is the sum of the per-flow queue sizes; the queue size due to flow f is tracked separately, and the packets of each flow are assumed uniformly distributed in the queue.

Queue dynamics: (the equation on the slide was an image; see the reconstruction below)
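A reconstruction of the queue dynamics consistent with the labels above (notation assumed): with $q_{\ell f}$ the queue size due to flow f and $q_\ell = \sum_f q_{\ell f}$ the total queue size at link $\ell$ with bandwidth $B_\ell$,

$$ \dot q_{\ell f} = r^{in}_{\ell f} - d_{\ell f} - r^{out}_{\ell f}, \qquad r^{out}_{\ell f} = \frac{q_{\ell f}}{q_\ell}\, B_\ell, $$

where $d_{\ell f}$ is the drop rate and the out-queue rate of each flow is proportional to its fraction of the packets in the queue (the uniform-distribution assumption).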

Queue dynamics
[Diagram: the same queue at link l, with in-queue rates, drop rates, and out-queue rates.]

Three discrete modes:
- queue empty: out-queue rates equal in-queue rates, no drops
- queue not empty/full: no drops; out-queue rates proportional to each flow's fraction of the packets in the queue
- queue full: drops proportional to each flow's fraction of the in-queue rates
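A minimal per-mode sketch of this three-mode fluid queue, for a single aggregate flow for brevity; the function name and parameters are our own, not from the slides:

```python
def queue_step(q, r_in, B, qmax, dt):
    """One Euler step of the three-mode fluid queue.

    Returns (new queue size, out-queue rate, drop rate)."""
    if q <= 0.0 and r_in <= B:
        # queue-empty mode: output tracks input, no drops
        return 0.0, r_in, 0.0
    if q >= qmax and r_in > B:
        # queue-full mode: serve at link bandwidth, drop the excess
        return qmax, B, r_in - B
    # queue neither empty nor full: serve at B, no drops
    q_next = min(max(q + (r_in - B) * dt, 0.0), qmax)
    return q_next, B, 0.0
```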

Drop events

[Diagram: queue at link l with in-queue rates, drop rates, and out-queue rates.]

When do drops occur? While the queue is full, drops occur at discrete times t0, t1, t2, ...; the spacing is determined by the packet size, the total in-queue rate, and the total out-queue rate (the link bandwidth).

Drop events

[Diagram: the same queue at link l.]

When? At the discrete times t0, t1, t2, ... described above. Which flows? Drop-tail dropping determines which flow suffers the drop at each time tk.
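A plausible reconstruction of the drop-timing rule, assuming s denotes the packet size in bits and rates are in bps: while the queue at link $\ell$ is full,

$$ t_{k+1} - t_k = \frac{s}{\sum_f r^{in}_{\ell f} - B_\ell}, $$

i.e., one packet is dropped for every s bits that arrive in excess of what the link can transmit.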

Hybrid queue model


[Diagram: two discrete modes, l-queue-not-full and l-queue-full; the queue-full condition is a transition enabling condition, and each drop is an exported discrete event.]

Hybrid queue model


[Diagram: the same two discrete modes, l-queue-not-full and l-queue-full, with the deterministic drop rule replaced by a stochastic counter: a model of Random Early Drop (RED) active queuing.]

Network dynamics & congestion control

[Diagram: routing maps sending rates to in-queue rates; the queue dynamics map in-queue rates to out-queue rates and drops; end-to-end congestion control (TCP/UDP) maps drops back into sending rates, closing the loop.]

Additive Increase/Multiplicative Decrease


1. While there are no drops, increase wf by 1 on each RTT (additive increase)
2. When a drop occurs, divide wf by 2 (multiplicative decrease)

(the congestion controller constantly probes the network for more bandwidth)

In the hybrid model, the drop is an imported discrete event, and the propagation delays are those of the set of links traversed by flow f; this defines the congestion-avoidance mode. TCP-Reno is based on AIMD but uses other discrete modes to improve performance.

Slow start
In the beginning, pure AIMD takes a long time to reach an adequate window size:

3. Until a drop occurs (or a threshold ssthf is reached), double wf on each RTT
4. When a drop occurs, divide wf and the threshold ssthf by 2

[Diagram: slow-start mode transitioning to the cong.-avoid. mode.]

Slow start is especially important for short-lived flows.

Fast recovery
After a drop is detected, new data should be sent while the dropped data is retransmitted:

5. During retransmission, data is sent at a rate consistent with a window size of wf/2

[Diagram: slow-start, cong.-avoid., and fast-recovery modes.]

(consistent with TCP-SACK for multiple consecutive drops)

Timeouts
Typically, drops are detected because one acknowledgment in the sequence is missing.

[Figure: timing diagram. The source sends packets 1 through 4; the 1st packet is dropped; packets 2, 3, and 4 are received and acknowledged out of order; after three out-of-order acks the drop is detected and the 1st packet is re-sent.]

When the window size becomes smaller than 4, this mechanism fails and drops must be detected through acknowledgment timeouts:

6. When a drop is detected through timeout:
a. the slow-start threshold ssthf is set equal to half the window size,
b. the window size is reduced to one,
c. the controller transitions to slow-start
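The mode logic of rules 1 through 6 can be summarized in a small event-driven sketch; the class and method names and the per-ack update granularity are our own illustrative choices, not the slides' model:

```python
class TcpModes:
    """Sketch of the TCP mode machine from rules 1-6 (illustrative)."""

    def __init__(self, ssthresh=64.0):
        self.w = 1.0              # window size w_f (packets)
        self.ssth = ssthresh      # slow-start threshold ssth_f
        self.mode = "slow-start"

    def on_ack(self):
        if self.mode == "slow-start":
            self.w += 1.0           # +1 per ack: doubles w_f on each RTT (rule 3)
            if self.w >= self.ssth:
                self.mode = "cong-avoid"
        elif self.mode == "cong-avoid":
            self.w += 1.0 / self.w  # +1 per RTT (rule 1, additive increase)
        elif self.mode == "fast-recovery":
            self.mode = "cong-avoid"  # retransmission acknowledged

    def on_drop(self):
        # drop detected via out-of-order acks (rules 2, 4, 5)
        self.ssth /= 2
        self.w /= 2               # multiplicative decrease
        self.mode = "fast-recovery"

    def on_timeout(self):
        # drop detected via acknowledgment timeout (rule 6)
        self.ssth = self.w / 2    # a. threshold set to half the window
        self.w = 1.0              # b. window reduced to one
        self.mode = "slow-start"  # c. back to slow-start
```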

Fast recovery, timeouts, drop-detection delay

[Diagram: the complete mode machine, combining fast recovery, timeouts, and drop-detection delay (TCP-SACK version).]

Network dynamics & congestion control

[Diagram: routing maps sending rates to in-queue rates; the queue dynamics map in-queue rates to out-queue rates, RTTs, and drops; end-to-end congestion control maps RTTs and drops back into sending rates.]

see the SIGMETRICS paper for the on/off TCP & UDP model

Validation methodology
Compared simulation results from:
- the ns-2 packet-level simulator
- hybrid models implemented in Modelica

Plots in the following slides refer to two test topologies:

dumbbell: 10ms propagation delay, drop-tail queuing, 5-500Mbps bottleneck throughput, 0-10% UDP on/off background traffic

Y-topology: 45, 90, 135, 180ms propagation delays, drop-tail queuing, 5-500Mbps bottleneck throughput, 0-10% UDP on/off background traffic

Simulation traces
single TCP flow, 5Mbps bottleneck throughput, no background traffic

[Plots: cwnd of TCP 1 and queue size (packets) versus time (seconds), hybrid model vs. ns-2.]

slow-start, fast recovery, and congestion avoidance are accurately captured

Simulation traces
four competing TCP flows (starting at different times), 5Mbps bottleneck throughput, no background traffic

[Plots: cwnd sizes of TCP 1 through 4 and queue sizes of Q1 and Q2 (packets) versus time (seconds), hybrid model vs. ns-2.]

the hybrid model accurately captures flow synchronization

Simulation traces
four competing TCP flows (different propagation delays), 5Mbps bottleneck throughput, 10% UDP background traffic (exponentially distributed on-off times)

[Plots: cwnd sizes of TCP 1 through 4 (propagation delays 45, 90, 135, 180ms) and queue sizes of Q1 and Q3 (packets) versus time (seconds), hybrid model vs. ns-2.]

Average throughput and RTTs


four competing TCP flows (different propagation delays): 45, 90, 135, 180ms propagation delays, drop-tail queuing, 5Mbps bottleneck throughput, 10% UDP on/off background traffic (exponentially distributed on-off times)

            ns-2      hybrid model   relative error
Thru. 1     1.873     1.824          2.6%
Thru. 2     1.184     1.091          7.9%
Thru. 3     0.836     0.823          1.5%
Thru. 4     0.673     0.669          0.7%
RTT 1       0.0969    0.0879         9.3%
RTT 2       0.141     0.132          5.9%
RTT 3       0.184     0.180          3.6%
RTT 4       0.227     0.223          2.1%

(throughputs presumably in Mbps, RTTs in seconds)

the hybrid model accurately captures TCP unfairness for different propagation delays

Empirical distributions

[Plots: empirical probability distributions of the CWND of TCP 1 through 4 and the size of queue 3 (probability versus cwnd & queue size, in packets), hybrid model vs. ns-2.]

the hybrid model captures the whole distribution of congestion windows and queue sizes

Execution time
[Plot: execution time for 10 minutes of simulation time (seconds, log scale) versus bottleneck bandwidth (Mbps, log scale), for 1 and 3 flows at 5, 50, and 500Mbps; ns-2's execution time grows with the bandwidth, while the hybrid model's remains roughly constant.]

ns-2 complexity approximately scales with the total number of packets, i.e., with the number of flows times the per-flow throughput; the hybrid simulator's complexity is essentially independent of the bottleneck bandwidth.

hybrid models are particularly suitable for large, high-bandwidth simulations (satellite, fiber optics, backbone)
