08 QoS
Overview
Why QoS? When QoS?
One model: Integrated Services
Contrast to Differentiated Services (more modern; more practical; not covered)
What is QoS?
Providing guarantees (or rough bounds) on various network properties:
Available bandwidth for flows
Delay bounds
Low jitter (variation in delay)
Packet loss
SLAs that specify rate guarantees, max rates, priorities, etc.
Control over who gets to use the network (admission control) (maybe, maybe not)
Inelastic Applications
Continuous media applications
Lower and upper limit on acceptable performance:
BW below which video and audio are not intelligible
Internet telephony and teleconferencing: high delay (200-300 ms) impairs human interaction
Claim: these apps are not as elastic or adaptive; they don't typically react to congestion. This is a bit questionable, but telephony has some of these attributes.
Note about jitter: more jitter == more buffering == more delay + more memory.
Admission Control
If U(bandwidth) is concave -> elastic applications
[Figure: concave utility curve U vs. BW for elastic applications]
Incremental utility is decreasing with increasing bandwidth
It is always advantageous to have more flows with lower bandwidth
No need for admission control
Admission Control
If U is convex -> inelastic applications
U(number of flows) is no longer monotonically increasing
Need admission control to maximize total utility
[Figure: convex utility curve U vs. BW for a delay-adaptive application]
Admission control: deciding when the addition of new flows would result in a reduction of total utility
Basically avoids overload
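A small numeric sketch of this argument (the utility shapes and the capacity value are assumed for illustration, not taken from the slides): with a concave elastic utility, total utility keeps rising as more flows share the link, while with a threshold-like inelastic utility, total utility collapses once per-flow bandwidth drops below the threshold, which is exactly the overload admission control avoids.

    import math

    C = 10.0  # illustrative link capacity (Mbps), shared equally by n flows

    def elastic_utility(bw):
        # Concave: diminishing incremental utility as bandwidth grows
        return math.log(1.0 + bw)

    def inelastic_utility(bw, threshold=1.0):
        # Threshold-like (steep sigmoid): nearly useless below the threshold,
        # close to full utility above it
        return 1.0 / (1.0 + math.exp(-10.0 * (bw - threshold)))

    def total_utility(utility, n):
        return n * utility(C / n)

    for n in (5, 10, 20, 40):
        print(n,
              round(total_utility(elastic_utility, n), 2),    # keeps increasing
              round(total_utility(inelastic_utility, n), 2))  # flat, then collapses

With these assumed numbers, elastic total utility rises from about 5.5 (5 flows) to about 8.9 (40 flows), while inelastic total utility stays near 5 up to 10 flows and then falls below 0.2, so a controller maximizing total utility would stop admitting inelastic flows once per-flow bandwidth nears the threshold.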
So?
The right answer depends on many factors:
Cost of complexity vs. cost of bandwidth
Can applications become adaptive?
Important features:
Maximizing V (the total utility, the sum of the Ui) doesn't necessarily maximize each Ui
In fact, it almost can't! It takes away from the elastic U's to add to the inelastic U's
1. Type of commitment
What kind of promises/services should the network offer? Depends on the characteristics of the applications that will use the network.
2. Packet scheduling
How does the network meet promises?
3. Service interface
How does the application describe what it wants?
Playback Applications
Sample signal -> packetize -> transmit -> buffer -> playback
Fits most multimedia applications
Performance concern:
Jitter: variation in end-to-end delay
Delay = fixed + variable = (propagation + packetization) + queuing
Solution:
Playback point: delay introduced by the buffer to hide network jitter
Application Variation
Rigid & adaptive applications
Rigid: set a fixed playback point
Adaptive: adapt the playback point (a sketch follows below)
Gamble that network conditions will be the same as in the recent past
Are prepared to deal with errors in their estimate
Will have an earlier playback point than rigid applications
Together with tolerance vs. intolerance of loss: 4 combinations
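A minimal sketch of how an adaptive application might pick its playback point (the smoothing constants, the factor k, and all names are assumptions, not from the slides): keep running estimates of the average delay and of the delay variation, in the style of TCP's RTT estimator, and buffer a few deviations beyond the average.

    class PlaybackPointEstimator:
        def __init__(self, alpha=0.125, beta=0.25, k=4.0):
            self.avg_delay = None  # smoothed one-way delay estimate
            self.dev = 0.0         # smoothed delay variation (jitter estimate)
            self.alpha, self.beta, self.k = alpha, beta, k

        def observe(self, delay):
            # Update the estimates from one packet's measured delay
            if self.avg_delay is None:
                self.avg_delay = delay
                return
            self.dev += self.beta * (abs(delay - self.avg_delay) - self.dev)
            self.avg_delay += self.alpha * (delay - self.avg_delay)

        def playback_point(self):
            # Hold packets long enough to hide k deviations of jitter
            return self.avg_delay + self.k * self.dev

Because the estimate tracks recent conditions, the playback point can sit closer to the average delay than a rigid application's fixed, worst-case setting; the gamble is that a sudden delay increase outruns the estimator, in which case some packets miss their playback point.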
Application Variations
Really only two classes of applications
1) Intolerant and rigid
2) Tolerant and adaptive
Other combinations make little sense
3) Intolerant and adaptive
- Cannot adapt without interruption
4) Tolerant and rigid
Types of Commitments
Guaranteed service
For intolerant and rigid applications
Fixed guarantee: the network meets its commitment as long as clients send traffic that matches the traffic agreement
Predicted service
For tolerant and adaptive applications
Two components:
If conditions do not change, commit to current service
If conditions change, take steps to deliver consistent performance (help apps minimize playback delay)
Implicit assumption: the network does not change much over time
2. Packet scheduling
How does the network meet promises?
3. Service interface
How does the application describe what it wants?
Guaranteed service:
Use WFQ at the routers
Parekh's bound for worst-case queuing delay = b/r (b = token bucket depth, r = reserved rate)
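As an illustration with assumed numbers: a flow policed by a token bucket of depth b = 100 KB and reserved rate r = 1 MB/s, served by WFQ at rate r at each router, sees a worst-case queuing delay of about 100 KB / (1 MB/s) = 0.1 s, ignoring per-hop packet transmission terms.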
Operation:
Tokens accumulate in the bucket at rate r
Bucket depth b: capacity of the bucket
If the bucket fills, tokens are discarded
Sending a packet of size P uses P tokens
If the bucket has at least P tokens, the packet is sent at max rate; otherwise it must wait for tokens to accumulate
[Figure: token bucket operation. Overflow tokens are discarded. With enough tokens, a packet goes through and tokens are removed. Without enough tokens, the packet waits for tokens to accumulate.]
[Figure: token bucket example. Bandwidth over time for Flow A (r = 1 MBps, b = 1 byte) and Flow B.]
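A minimal token-bucket sketch in Python (class and method names are my own; the bucket depth in the usage below is one 1500-byte packet rather than Flow A's 1-byte depth, so that a packet can actually conform):

    class TokenBucket:
        def __init__(self, r, b):
            self.r = float(r)        # token fill rate, bytes per second
            self.b = float(b)        # bucket depth, bytes
            self.tokens = float(b)   # start with a full bucket
            self.last = 0.0          # time of the last refill

        def _refill(self, now):
            # Tokens accumulate at rate r; extra tokens overflow and are discarded
            self.tokens = min(self.b, self.tokens + self.r * (now - self.last))
            self.last = now

        def conforms(self, size, now):
            # A packet of 'size' bytes is sent only if 'size' tokens are available;
            # otherwise it must wait for tokens to accumulate
            self._refill(now)
            if self.tokens >= size:
                self.tokens -= size
                return True
            return False

    tb = TokenBucket(r=1_000_000, b=1500)   # r = 1 MBps, as in Flow A
    print(tb.conforms(1500, now=0.0))       # True: the bucket starts full
    print(tb.conforms(1500, now=0.0001))    # False: only ~100 tokens so far
    print(tb.conforms(1500, now=0.0016))    # True: ~1600 tokens have accumulated

The pair (r, b) is also what the Parekh bound above uses: the deeper the bucket, the larger the burst a conforming source may send, and the larger the worst-case queuing delay b/r.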
Predicted Service
Goals: Isolation
Isolates well-behaved from misbehaving sources
Sharing
Mixing of different sources in a way beneficial to all
Mechanisms: WFQ
Great isolation but no sharing
FIFO
Great sharing but no isolation
Principle: mixing with FIFO shares jitter better than WFQ
Reality: complexity
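To make the isolation point concrete, here is a simplified weighted-fair-queueing sketch (closer to a self-clocked approximation than true GPS-tracking WFQ; names and tie-breaking details are assumptions): each packet gets a virtual finish time proportional to size/weight, and packets are served in finish-time order, so a flooding flow cannot push a well-behaved flow's packets back.

    import heapq

    class SimpleWFQ:
        def __init__(self):
            self.vtime = 0.0        # virtual time, advanced as packets are served
            self.last_finish = {}   # per-flow last assigned finish time
            self.heap = []          # (finish time, seq, flow, size)
            self.seq = 0            # tie-breaker for stable ordering

        def enqueue(self, flow, size, weight):
            # A flow's next packet starts where its previous one finished,
            # or at the current virtual time if the flow was idle
            start = max(self.vtime, self.last_finish.get(flow, 0.0))
            finish = start + size / weight
            self.last_finish[flow] = finish
            heapq.heappush(self.heap, (finish, self.seq, flow, size))
            self.seq += 1

        def dequeue(self):
            finish, _, flow, size = heapq.heappop(self.heap)
            self.vtime = finish     # serve in order of virtual finish times
            return flow, size

    q = SimpleWFQ()
    for _ in range(5):
        q.enqueue("hog", size=1000, weight=1)   # misbehaving flow floods the queue
    q.enqueue("voip", size=200, weight=1)       # well-behaved flow arrives after the burst
    print([q.dequeue()[0] for _ in range(6)])   # 'voip' is served before the backlog

This is the isolation side of the trade-off: the well-behaved flow's service depends only on its own weight, but there is no sharing of slack between flows, which is what the FIFO-based mixing above tries to recover.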
Predicted Service
FIFO jitter increases with the number of hops
Use opportunity for sharing across hops
FIFO+
At each hop: measure the average delay for the class at that router
For each packet: compute the difference between the average delay and that packet's delay in the queue
Add/subtract the difference in the packet header
Packets are inserted into downstream queues by expected arrival time instead of actual arrival time
More complex queue management!
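A sketch of what the per-hop FIFO+ logic might look like (header field and parameter names are assumptions; the slides do not say how the average is maintained, so an EWMA is used here): each hop orders packets by expected arrival time, i.e. actual arrival time corrected by the offset accumulated at earlier hops, and on departure folds this hop's deviation from the class average into that offset.

    import heapq

    class FifoPlusHop:
        def __init__(self, alpha=0.1):
            self.avg_delay = 0.0   # running average queuing delay for the class
            self.alpha = alpha     # EWMA gain for the average (assumed)
            self.heap = []
            self.seq = 0

        def enqueue(self, packet, now):
            # Packets delayed more than average upstream carry a positive offset,
            # so they are treated as if they had arrived earlier
            expected_arrival = now - packet["offset"]
            heapq.heappush(self.heap, (expected_arrival, self.seq, now, packet))
            self.seq += 1

        def dequeue(self, now):
            _, _, arrived, packet = heapq.heappop(self.heap)
            delay = now - arrived
            # Update the class average, then add this hop's deviation from the
            # average into the packet header for downstream hops
            self.avg_delay += self.alpha * (delay - self.avg_delay)
            packet["offset"] += delay - self.avg_delay
            return packet

The effect is the jitter sharing described next: an unlucky packet is advanced at later hops at the cost of slightly delaying packets that have so far been lucky, plus the extra queue management.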
Isolation
Fair queueing + token buckets => bounded e2e delays
Jitter sharing
Benefits of stat mux. Helps reduce max jitter of one flow by slightly increasing jitter of all flows
Admission control
Utility functions