CAS CS 835: QoS Networking Seminar
What to expect?
Discuss latest developments and research issues,
current and possible future QoS/CoS technologies
Queue management, traffic monitoring/shaping,
admission control, routing,
Focus on Internet extensions and QoS-sensitive
protocols/applications
Integrated Services and RSVP
Differentiated Services
Multiprotocol Label Switching (MPLS)
Traffic Engineering (or QoS Routing)
What to expect?
Proposal:
pre-proposal, full proposal, presentation
Paper presentation
2 quizzes
Active participation
What is QoS?
A broad definition: methods for differentiating traffic
and services
To some, introducing an element of predictability and
consistency into a highly variable best-effort network
To others, obtaining higher network throughput while
maintaining consistent behavior
Or, offering resources to high-priority service classes at
the expense of lower-priority classes (conservation law)
Or, matching network resources to application demands
Contract or SLA
Service Level Agreement between client (subscriber) and
network (provider): the network keeps its promise as long as
the flow conforms to the traffic specification
The network must monitor/police/shape incoming traffic
The shape matters: e.g., for a gigabit network contracting
with a 100 Mbps flow, there is a big difference between sending
one 100 Mb packet every second and sending a 1 Kb packet every
10 microseconds.
Traffic Shaping
Leaky bucket:
- Data packets leak from a bucket of depth sigma at a rate
rho.
- Network knows that the flow will never inject traffic at a
rate higher than rho --- incoming traffic is bounded
- Easy to implement
- Easy for the network to police
- Accommodates fixed-rate flows well (rho set to the rate),
but does not accommodate variable-rate (bursty) flows
unless rho is set to the peak rate, which is wasteful
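A minimal sketch of the leaky-bucket idea above (class and method names are illustrative, not from the slides): a buffer of depth sigma that drains at rate rho, dropping packets that would overflow it.

```python
class LeakyBucket:
    """Leaky-bucket shaper sketch: buffer of depth sigma drains at rate rho."""

    def __init__(self, sigma, rho):
        self.sigma = sigma   # bucket depth (bytes of buffered data)
        self.rho = rho       # drain rate (bytes per second)
        self.level = 0.0     # bytes currently buffered
        self.last = 0.0      # time of last update

    def offer(self, t, size):
        """Return True if a packet of `size` bytes arriving at time t fits."""
        # Drain the bucket at rate rho since the last event
        self.level = max(0.0, self.level - (t - self.last) * self.rho)
        self.last = t
        if self.level + size <= self.sigma:
            self.level += size   # packet is buffered; it leaks out at rho
            return True
        return False             # bucket overflow: drop (or mark) the packet
```

Traffic leaving the bucket can never exceed rate rho, which is what makes policing easy.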
Token Bucket
Allows bounded burstiness
Tokens generated at rate rho are placed in bucket with
depth sigma
An arriving packet has to remove token(s) to be allowed
into the network
A flow can never send more than [sigma + rho * t] over an
interval of time t
Thus, the long-term average rate will not exceed rho
More accurate characterization of bursty flows means better
management of network resources
Easy to implement, but harder to police
Can add a peak-rate leaky bucket after a token bucket
Token Bucket
Example:
Flow A always sends at 1 MBps
==> rho = 1 MBps, sigma = 1 byte
Flow B sends at 0.5 MBps for 2 sec, then at 2
MBps for 1sec, then repeats
==> rho = 1 MBps, sigma = 1 MB
We could also use this Tspec for Flow A, but that would
be an inaccurate characterization
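The token-bucket rule above can be sketched as a conformance checker (a hypothetical helper, not a reference implementation). Note that with discrete packets, sigma must be at least one packet size, so the slide's sigma = 1 byte for Flow A is a fluid idealization.

```python
def conforms(arrivals, rho, sigma):
    """Check (time, bytes) arrivals against a (rho, sigma) token bucket."""
    tokens, last = sigma, 0.0                # bucket starts full
    for t, size in arrivals:
        # Tokens accumulate at rate rho, capped at bucket depth sigma
        tokens = min(sigma, tokens + (t - last) * rho)
        last = t
        if size > tokens:
            return False                     # not enough tokens: non-conforming
        tokens -= size                       # one token per byte admitted
    return True

# A 1 MB/s flow in 0.1 MB packets conforms to (rho = 1 MB/s, sigma = 1 MB);
# a 2 MB instantaneous burst does not, since it exceeds sigma.
steady = [(0.1 * i, 1e5) for i in range(30)]
burst = [(0.0, 2e6)]
```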
Link Scheduling
Challenge: Tspec may change at exit of scheduler (e.g., FCFS)
because of interactions among flows
FCFS increases worst-case delay and jitter, so we admit fewer flows
Solution: non-FCFS scheduler that isolates different flows or classes
of flows (hard, soft, best-effort)
Emulate TDM without wasting bandwidth
Virtual Clock provides flows with throughput guarantees and
isolation
Idea: use logical (rather than real) time
AR: average data rate required by a flow
When a packet of size P arrives, VC = VC + P/AR
Stamp packet with VC
Transmit packets in increasing order of time stamps
If a flow doubles its AR, it will get double the rate
Virtual Clock
If buffer is full, the packet with largest timestamp is
dropped
Problem: A flow can save up credits and use them to
bump other flows in the future
Fix: when a packet arrives, catch up with real time first
VC = MAX (real time, VC)
VC = VC + P/AR
Also, if AI is averaging interval, upon receiving AR*AI
bytes on this flow, if VC > real time + Threshold, then
send advisory to source to reduce its rate
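The stamping rule with the catch-up fix can be sketched for a single flow as follows (function and variable names are illustrative):

```python
def vc_stamp(packets, AR):
    """Virtual Clock stamps for one flow's packets.

    packets: list of (arrival_real_time, size_bytes); AR in bytes/sec.
    Applies the fix: VC = max(real time, VC) before advancing, so an
    idle flow cannot save up credit to bump other flows later.
    """
    vc = 0.0
    stamps = []
    for t, size in packets:
        vc = max(t, vc)      # catch up with real time first
        vc += size / AR      # advance by this packet's transmission share
        stamps.append(vc)
    return stamps
```

The scheduler then transmits packets across all flows in increasing order of these stamps.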
WFQ
WFQ provides isolation and delay guarantees
FQ simulates fair bit-by-bit RR by assigning packets priority
based on finishing times under bit-by-bit RR
- E.g. Flow 1 packets of size 5 and 8, Flow 2 packet of size
10: size 5 first, then size 10, then size 8
Round number increases at a rate inversely proportional to
number of currently active flows
On packet arrival: recompute round number, compute finish
number, insert in priority queue, if no space drop packet
with largest finish number (max-min fairness)
Approximation error bounded by max_pkt_size / capacity
WFQ can assign different weights to different flows
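The FQ ordering in the example above can be sketched for the simplified case where all packets arrive at time 0 (round number 0) with unit weights; the helper name is hypothetical.

```python
def fq_order(flows):
    """Order packets by bit-by-bit round-robin finish numbers.

    flows: dict flow_id -> list of packet sizes, all arriving at time 0.
    In this simplified case each flow's finish numbers are just the
    running sums of its packet sizes; the scheduler sends in increasing
    finish-number order.
    """
    tagged = []
    for fid, sizes in flows.items():
        finish = 0.0
        for size in sizes:
            finish += size                 # one bit per round per flow
            tagged.append((finish, fid, size))
    return [(fid, size) for _, fid, size in sorted(tagged)]

# Slide example: Flow 1 has packets of size 5 and 8, Flow 2 has size 10
# -> size 5 (finish 5), size 10 (finish 10), size 8 (finish 13)
```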
Bounding jitter
Assume we want to eliminate all jitter
We can achieve this by making the network look like a
constant delay line
Idea: At the entry of each switch/router, have a regulator that
absorbs delay variations by delaying a packet that arrived
ahead of its local deadline at previous switch
Traffic characteristics are preserved as traffic passes through
the network
Schedulers with regulators are called rate-controlled
schedulers
Reduce burstiness within network, thus less buffers needed
Average packet delay is higher than with work-conserving
schedulers, but that's fine for hard real-time traffic
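The regulator's release rule can be sketched as follows (a hypothetical helper; names are illustrative): a packet that beat its delay bound at the previous switch is held for the difference, restoring the flow's traffic shape at every hop.

```python
def eligibility_time(arrival, prev_hop_delay, prev_hop_bound):
    """When a rate-controlled regulator releases a packet to the scheduler.

    arrival: real arrival time at this switch.
    prev_hop_delay: delay the packet actually experienced at the previous hop.
    prev_hop_bound: the per-hop delay bound at the previous hop.
    """
    slack = prev_hop_bound - prev_hop_delay   # how far ahead of its deadline
    return arrival + max(0.0, slack)          # hold early packets; pass late ones
```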
Statistical/soft/predictive QoS
Goal: bound the tail of the performance-measure distribution
Not a good idea to use worst-case delay bounds since
very few packets (or none!) will actually experience this
worst-case delay
Computing statistical bounds (e.g., using effective
bandwidths) is usually approximate and often
conservative
FIFO+ attempts to reduce worst-case delay and jitter
using minimal isolation (and maximal statistical gain)
At each router, a packet is assigned lower priority if it
left previous routers ahead of measured average delay,
and higher priority if behind average delay
Measurement-based Admission
Key assumption: past behavior is a good indicator of
the future
User tells peak rate
If peak rate + measured average rate < capacity, admit
Over time, new call becomes part of average
Can afford to make some errors for predictive (or
controlled load) service
Obviously, this admits more calls than admission at peak
rate would
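The admission test above is a one-liner; this sketch adds a hypothetical utilization knob (not from the slides) to show where conservatism would enter.

```python
def admit(peak_rate, measured_avg, capacity, utilization_target=1.0):
    """Measurement-based admission test.

    Admit a new flow if its declared peak rate plus the measured average
    load of existing flows fits within (a fraction of) link capacity.
    utilization_target is an illustrative knob for being conservative.
    """
    return peak_rate + measured_avg < utilization_target * capacity
```

Once admitted, the new call's traffic folds into the measured average over time, as the slide notes.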
Summary
Performance guarantees can be achieved by combining
traffic shaping and scheduling schemes
How good are the bounds? Loose or tight?
How easy are these schemes to implement?
What kinds of guarantees do they provide?
How good is the utilization of the network?
How do clients signal their QoS requirements?
What is the best path to route a flow?
How to achieve QoS for multicast flows and with
mobility?