
Slide supporting material

Lesson 14: QoS in IP Networks: IntServ and DiffServ
Giovanni Giambene
Queuing Theory and Telecommunications:
Networks and Applications
2nd edition, Springer

All rights reserved

© 2013 Queuing Theory and Telecommunications: Networks and Applications – All rights reserved
Introduction

QoS Support in IP Networks

 The introduction of real-time traffic in the Internet (e.g., Voice over IP, VoIP) calls for new approaches to provide Quality of Service (QoS).
 The Internet, which operates on a Best Effort (BE) basis, does not provide QoS support (no bandwidth guarantees, no delay guarantees, no admission control, and no assurances about delivery).
 Real-time traffic (as well as other applications) may require priority treatment to achieve good performance.

An Example for the Need of
QoS Support in IP Networks
 Let us consider a phone application at 1 Mbit/s and an
FTP application sharing a bottleneck link at 1.5 Mbit/s.
 Bursts of FTP can congest the router and cause voice
packets to be dropped.
 In this example we need to give priority to voice
over FTP.
 Marking of packets is needed for the router to distinguish between different classes, and a new router policy is needed to treat packets accordingly.

QoS Metrics

 Main performance attributes:


 Bit error rate [%] at PHY layer
 Outage probability [% of time] at PHY layer
 Blocking probability [%] at PHY or MAC layer
 Throughput [bit/s] at MAC or transport layer
 Packet loss rate [%] at MAC and IP layers (e.g., buffer overflow)
 Fairness at PHY, MAC or transport layers
 (Mean) delay [s] at different layers
 Delay variation or jitter [s] at different layers (especially, application)

 The Service Level Agreement (SLA) is a contract between the user


and the service provider/operator, which defines suitable bounds for some
of the QoS performance attributes above provided that the user traffic
fulfills certain characteristics.
I. Stoica, “Stateless Core: A Scalable Approach for Quality of Service in the Internet”, in
Lecture Notes in Computer Science, Vol. 2979, 2001.
IntServ

IntServ & DiffServ

 The key QoS approaches described in this lesson for IP-


based networks are:

 Integrated Services (IntServ) in RFC 1633 and RFC 2207.

 Differentiated Services (DiffServ) in RFC 2474 and RFC


2475.

 Note that in both cases Call Admission Control (CAC) schemes are adopted:

 Traffic flow-based deterministic CAC with IntServ,

 Traffic class-based statistical CAC with DiffServ.


IntServ

 The IntServ main concept is to reserve resources for each flow


through the network. There are per-flow queues at the
routers.
 IntServ adopts an explicit call set-up mechanism for the routers
in a source-to-destination path. These mechanisms enable each
flow to request a specific QoS level.
 RSVP (Resource reSerVation Protocol) is the most widely used set-up mechanism enabling resource reservation over a specific source-to-destination path (RFC 2205 and RFC 2210). RSVP operates end-to-end.
 RSVP allows fine-grained bandwidth control. The main drawback of RSVP is the adoption of per-flow state and per-flow processing, which cause scalability issues in large networks (heavy processing and signaling loads at routers).
IntServ (cont’d)

 RSVP uses two types of FlowSpecs, which are used by routers to set up a path:


 Traffic specification (T-Spec) that describes the traffic
characteristics of the source (token bucket parameters such as bucket
depth b, token generation rate r, peak data rate p, etc.).

 Request specification (R-Spec) that describes the required service


level and is defined by the receiver.

 T-Spec is sent from source to destination. R-Spec is


sent back from destination to source.
 CAC and resource reservation along the source-destination path are performed on a per-flow basis by RSVP, using both T-Spec and R-Spec.
 Routers will admit new flows based on their R-Spec and T-Spec and on the resources currently allocated at the routers to other flows.
T-Spec and R-Spec in Detail

 T-Spec specifies the traffic characteristics of the sender


 Bucket rate and sustainable rate (r) (bits/s)
 Peak rate (p) (bits/s)
 Bucket depth (b) (bits)
 Minimum policed unit (m) (bits) – any packet with size smaller than m
will be counted as m bits
 Maximum packet size (M) (bits) – the maximum packet size that can be
accepted.

 R-Spec defines the resources needed for the flow, as requested by the receiver (bandwidth requirement)
 Service rate (R) (bits/s): bandwidth that is needed for the traffic flow.
 Slack term (S) (µs): extra amount of delay that a node may tolerate while still meeting the end-to-end delay requirement of the traffic flow.
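Purely as an illustration (not part of the original slides), the T-Spec and R-Spec parameters listed above can be grouped into two small records. A minimal Python sketch follows; the class names and the example values for a VoIP-like flow are hypothetical.

from dataclasses import dataclass

@dataclass
class TSpec:
    """Sender traffic specification (token bucket parameters from T-Spec)."""
    r: float  # token (sustainable) rate, bit/s
    p: float  # peak rate, bit/s
    b: float  # bucket depth, bits
    m: int    # minimum policed unit, bits
    M: int    # maximum packet size, bits

@dataclass
class RSpec:
    """Receiver request specification (R-Spec)."""
    R: float  # requested service rate, bit/s
    S: float  # slack term, microseconds

# Hypothetical example values for a VoIP-like flow (not from the slides)
voip_tspec = TSpec(r=64_000, p=128_000, b=16_000, m=800, M=12_000)
voip_rspec = RSpec(R=80_000, S=0.0)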
IntServ: Internal Node Structure

The flow state contains a flow identifier and instructions on how to manage it (priority, buffer management rules, and R-Spec, as explained later).

[Figure: per-traffic-flow state is managed at the node; a classifier maps incoming packets to per-flow queues (Flow #1, Flow #2, …, Flow #n), each with its own buffer management, and a scheduler serves the queues; per-traffic-flow buffering and scheduling at the node.]
IntServ Example

[Figure: an IntServ network of routers between sources and destinations.]
IntServ Example

 Propagating the PATH message with T-Spec from source to


destination to establish a path.

[Figure: the PATH message travels from Source to Destination across the IntServ network.]
IntServ Example

 RESV message providing back R-Spec to be used by each node


along the path for per-flow admission control and resource
allocation; installing per-flow state in the nodes along the path.
[Figure: the RESV message travels back from Destination to Source, hop by hop, across the IntServ network.]
IntServ Example

 Traffic delivery: use of per-flow classification, per-flow buffer management, and per-flow traffic scheduling at the nodes.

[Figure: traffic flows from Source to Destination across the IntServ network.]
IntServ: Buffer Management

 Instead of using a simple drop-tail mechanism, buffer


management is adopted by IntServ. Let us consider the
following definitions related to the management of traffic
at a generic buffer in the IntServ router.

[Figure: input queue for a traffic flow feeding the packet scheduler of the node, with a minimum queue length threshold (MinThresh) and a maximum queue length threshold (MaxThresh).]
IntServ: Buffer Management
(cont’d)
 Random Early Detection (RED)
IP packets are dropped randomly with a given probability when the
average queue length exceeds a minimum threshold (MinThresh). If
a maximum threshold (MaxThresh) is exceeded, all new IP packets
are dropped.

 Weighted RED (WRED)


This technique drops IP packets selectively on the basis of the IP
precedence.

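To make the RED behaviour concrete, the following minimal Python sketch (not from the slides) computes the averaged queue length and the drop decision; it omits refinements of the full RED algorithm (e.g., counting packets since the last drop), and the parameter values are purely illustrative. WRED would apply different thresholds and maximum drop probabilities per IP precedence.

import random

def update_avg(avg_qlen, inst_qlen, w=0.002):
    """EWMA update of the average queue length (weight w chosen only for illustration)."""
    return (1 - w) * avg_qlen + w * inst_qlen

def red_drop(avg_qlen, min_thresh, max_thresh, max_p=0.1):
    """Return True if the arriving packet should be dropped (simplified RED).

    Below MinThresh nothing is dropped; between the thresholds the drop
    probability grows linearly up to max_p; above MaxThresh every new
    packet is dropped.
    """
    if avg_qlen < min_thresh:
        return False
    if avg_qlen >= max_thresh:
        return True
    p = max_p * (avg_qlen - min_thresh) / (max_thresh - min_thresh)
    return random.random() < p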
IntServ: Class of Service

 IntServ allows two service types:


 Guaranteed Service
 For hard real-time applications.
 The user specifies traffic characteristics.
 Requires admission control at each router.
 Can mathematically guarantee bandwidth, delay, and jitter
(deterministic guarantees).

 Controlled-Load Service
 For applications that can adapt to network conditions within a
certain performance window.
 The user specifies traffic characteristics.
 Requires admission control at each router.
 Guarantees are not as strong as with the guaranteed service
(statistical guarantees based on average values).
IntServ: Guaranteed Service

 Guaranteed Service (GS) provides quantitative QoS guarantee


(i.e., guaranteed bandwidth and strict bounds on end-to-
end delay) on a flow basis.
 GS can manage applications with stringent real-time delivery
requirements, such as audio and video applications.
 With GS, each router guarantees for a specific flow a
minimum bandwidth R and a certain buffer space B.
 The sender sends an RSVP PATH message to the receiver specifying the traffic characteristics (T-Spec) and setting up the path. The receiver computes R and responds with a RESV message to request resources for the flow (R-Spec).

IntServ: Guaranteed Service
(cont’d)
 A source is characterized according to a fluid traffic model: bit-
rate as a function of time (no packet arrivals).
 GS uses a token bucket filter (r, b, p), specified by T-Spec, to shape the traffic, where r is the mean (sustainable) bit-rate, p is the peak bit-rate, and b is the bucket depth.
 In a perfect fluid model, a flow conformant to a token bucket with rate r and depth b will have its delay bounded by b/R, provided that R ≥ r [Parekh 1992, Cruz 1988].
 GS uses a Weighted Fair Queuing (WFQ) scheduling scheme
at the routers to service the queues (one queue per flow).

Token Bucket Model
and Deterministic
Queuing

IntServ: Guaranteed Service -
Token Bucket Shaper Model
[Figure: a source traffic flow enters the regulator; tokens enter a token bucket of depth b at rate r; the unregulated flow leaves the source buffer as a regulated flow into the transmission buffer allocated to the flow at the network/node; that buffer is served at the maximum allowed transmission bit-rate R (the capacity made available to the flow, a portion of the link bandwidth), while the regulator permits the traffic source a maximum output bit-rate p.]
IntServ: Guaranteed Service -
Token Bucket Shaper (cont’d)
 If the bucket is full, new tokens are discarded.
 Sending a packet of size L requires L tokens (1 token for 1 bit).
 If the bucket contains L tokens, the packet is sent at the maximum rate p; otherwise, the packet is sent at a rate controlled by the token rate r.
 In this study we consider a fluid-flow traffic model: no packets (M = 0 and m = 0).

[Figure: the same token bucket regulator as in the previous slide, with tokens entering the bucket of depth b at rate r.]
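A minimal sketch of the token bucket regulator described above, assuming the 1-token-per-bit model of the slides; this is illustrative only (class and method names are hypothetical) and peak-rate enforcement on the output line is omitted.

class TokenBucket:
    """Minimal token bucket regulator (1 token = 1 bit), illustrative only."""

    def __init__(self, r, b):
        self.r = r           # token generation rate, bit/s
        self.b = b           # bucket depth, bits
        self.tokens = b      # assume a full bucket at start, as in the slides
        self.t_last = 0.0    # time of the last update, s

    def admit(self, size_bits, t_now):
        """True if a packet of size_bits arriving at t_now conforms and is sent now."""
        # add the tokens accumulated since the last update, capped at depth b
        self.tokens = min(self.b, self.tokens + self.r * (t_now - self.t_last))
        self.t_last = t_now
        if self.tokens >= size_bits:
            self.tokens -= size_bits
            return True      # sent immediately (output still limited by the peak rate p)
        return False         # must wait for tokens (shaping) or is non-conformant (policing)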
IntServ: Guaranteed Service -
Token Bucket Shaper (cont’d)
 We start with an empty buffer and a bucket full with b tokens.
 The interval Tb during which the token bucket allows sending a burst at the maximum rate p satisfies:
B = Tb·p = b + r·Tb (maximum burst size, MBS)
 Hence, given the token bucket parameters r and b, we obtain Tb as:
Tb = b/(p − r), assuming r < p
 The number of bits sent in Tb is:
B = Tb·p = b·p/(p − r)
 After Tb, the output rate becomes equal to r.

[Figure: token bucket with depth (capacity) of b tokens; tokens enter the bucket at rate r; maximum allowed transmission rate p.]
IntServ: Guaranteed Service -
Token Bucket Shaper (cont’d)
a(t) represents the arrival curve at the output of the shaper; it is the cumulative number of bits generated up to time t (fluid-flow traffic model):
a(t) = min{p·t, r·t + b}
a(t) is the maximum number of bits sent by the source with the shaper; it is an upper bound.

[Figure, left: instantaneous bit-rate versus time, equal to p up to Tb and then equal to r. Figure, right: cumulative (incremental) curve of bits versus time, with slope p up to Tb and slope r afterwards; at t = Tb the curve reaches B = b·p/(p − r).]
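The quantities Tb, B (MBS), and a(t) defined above can be computed directly; the following Python sketch uses assumed parameter values (r = 1 Mbit/s, b = 100 kbit, p = 5 Mbit/s) chosen only for illustration.

def burst_interval(r, b, p):
    """Tb = b / (p - r): time during which the shaper can emit at the peak rate p (r < p)."""
    return b / (p - r)

def max_burst_size(r, b, p):
    """MBS = p * Tb = b * p / (p - r): bits emitted at peak rate before the rate drops to r."""
    return b * p / (p - r)

def arrival_curve(t, r, b, p):
    """a(t) = min(p*t, r*t + b): upper bound on the bits emitted by the shaper in [0, t]."""
    return min(p * t, r * t + b)

# Example with assumed (hypothetical) parameters: r = 1 Mbit/s, b = 100 kbit, p = 5 Mbit/s
r, b, p = 1e6, 1e5, 5e6
Tb = burst_interval(r, b, p)      # 0.025 s
MBS = max_burst_size(r, b, p)     # 125,000 bits = a(Tb)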
IntServ: The Departure
(output) Curve, b(t)
The input bit-rate and the related cumulative curve a(t) follow the token bucket model (r, b, p). The node serves the flow at the agreed rate R, corresponding to the service curve s(t) = R·t. The departure (output) curve b(t) denotes the cumulative number of bits departing the node up to time t:
b(t) = min{s(t), a(t)}, t > 0
In the case r < R < p, the point X of the arrival curve at t = Tb (where a(Tb) = b·p/(p − r)) corresponds to the largest buffer occupancy Bmax and the maximum delay Dmax; the output curve reaches the arrival curve at time t*. B(t) is the number of bits (backlog) in the buffer at time t, and D is the delay experienced by bits at the output.

[Figure: arrival curve a(t) with slope p up to Tb and slope r afterwards, output curve b(t) with slope R, and the vertical (backlog) and horizontal (delay) distances between the two curves.]
IntServ: QoS Guarantees and
Per-hop Reservation
 This system is characterized by a bounded delay (Dmax) and a bounded buffer size (maximum buffer occupancy Bmax), determined as follows:

Dmax = t* − Tb = (b/R)·(p − R)/(p − r) ≤ b/R, if R ≥ r

Bmax = p·Tb − R·Tb = b·(p − R)/(p − r) ≤ b, if R ≥ r
 Given a traffic flow characterized by the token bucket
model (r, b, p), each router along the path from source to
destination has to allocate bandwidth R and a certain buffer
B to fulfill the condition that the e2e delay is lower than a
certain maximum value, D.

This graphical approach to the study of delay bounds belongs to the discipline called 'network calculus' or 'deterministic queuing systems'.
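As a numerical illustration of the bounds above (not in the original slides), the following sketch evaluates Dmax and Bmax for an assumed token bucket and reserved rate; the values are hypothetical.

def d_max(r, b, p, R):
    """Per-node delay bound (b/R) * (p - R)/(p - r), valid for r <= R <= p."""
    return (b / R) * (p - R) / (p - r)

def b_max(r, b, p, R):
    """Maximum backlog b * (p - R)/(p - r), valid for r <= R <= p."""
    return b * (p - R) / (p - r)

# Same assumed token bucket as before (r = 1 Mbit/s, b = 100 kbit, p = 5 Mbit/s)
# with a reserved rate R = 2 Mbit/s:
r, b, p, R = 1e6, 1e5, 5e6, 2e6
print(d_max(r, b, p, R))   # 0.0375 s     (indeed <= b/R = 0.05 s)
print(b_max(r, b, p, R))   # 75000.0 bits (indeed <= b)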
IntServ: General Arrival-
Departure Model
 The generalized model considers both M (maximum packet size)
and T0 (latency due to propagation delay):
If M < b, at the beginning a packet of size M is soon delivered by the token bucket regulator.
T0 translates the service curve and accordingly increases the e2e delay.

[Figure: arrival curve a(t) with slope p and then slope r (starting from M), service curve s(t) at the agreed rate R shifted by T0, and output curve b(t); Dmax and Bmax are indicated for the case r < R < p.]
RSVP: Soft-state Receiver-
Initiated, e2e Reservation
 Sender A periodically sends (downstream) PATH messages with T-Spec
(r,p,b) to receiver B. Each router updates the PATH message by
increasing the hop count and adding its propagation delay.
 When receiver B gets the PATH message, it knows T-Spec (r,p,b), the
number of hops and the total propagation delay.
 Receiver B computes the R value and sends back (upstream) T-Spec, R-Spec, and the propagation delay by means of the RESV message.
 Each router allocates bandwidth R and a certain buffer B to the flow (per-hop delay guarantee) and propagates the RESV message (with updated delay) back to the next upstream router, which repeats the reservation process.
[Figure: Sender A, three routers, and Receiver B. The PATH message travels downstream carrying (r, p, b, hop count, accumulated propagation delay): (r,p,b,0,0) at Sender A, then (r,p,b,1,D1), (r,p,b,2,D1+D2), and (r,p,b,3,D1+D2+D3) at Receiver B. The RESV message travels upstream carrying (r, p, b, hop count, residual delay, R): (r,p,b,3,Dtot,R) at Receiver B, then (r,p,b,2,Dtot−D3,R), (r,p,b,1,Dtot−D3−D2,R), down to Sender A.]
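As a toy illustration of the receiver's role (not the actual Guaranteed Service computation, which also involves per-node error terms and the slack term), the sketch below searches for a rate R meeting an end-to-end delay budget under the fluid per-hop bound of the previous slides; all names and values are hypothetical.

def required_rate(r, b, p, hops, prop_delay, target_e2e_delay):
    """Toy search for the smallest R meeting an end-to-end delay budget.

    Uses the fluid per-hop bound (b/R)*(p - R)/(p - r) at each of `hops` routers
    plus the accumulated propagation delay reported by the PATH message.
    """
    budget = target_e2e_delay - prop_delay
    R = r                                   # R must be at least the sustainable rate r
    while R < p and hops * (b / R) * (p - R) / (p - r) > budget:
        R *= 1.05                           # coarse increment; a real receiver solves analytically
    return min(R, p)                        # at R = p the fluid bound is zero

# Hypothetical numbers: T-Spec (1 Mbit/s, 100 kbit, 5 Mbit/s), 3 hops,
# 10 ms of propagation delay, 100 ms end-to-end delay target
R = required_rate(1e6, 1e5, 5e6, hops=3, prop_delay=0.01, target_e2e_delay=0.1)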
IntServ: Controlled Load

 CLS (RFC 2211) does not provide any quantitative


guarantee on delay bounds.
 With CLS, the packets of a given flow will experience delays and loss
comparable to a network with no load, always assuming compliance
with the traffic contract (SLA).

 The CLS service model provides only statistical


guarantees:
 A very high percentage of transmitted packets is successfully delivered.

 Data packets experience small average queuing delays.

 The important difference from the traditional Internet best-effort service is that the CLS flow does not noticeably deteriorate as the network load increases.
IntServ: Controlled Load
(cont’d)
 CLS uses T-Spec and an estimation of the mean bandwidth
requested (R-Spec is not used) that are submitted to the routers
along the source-destination path.
 The router has a CAC module to estimate whether the requested mean bandwidth is available for the traffic flow. If so, the new flow is accepted and the related resources are implicitly reserved. There is no actual bandwidth reservation with CLS.
 With the CLS service, there could be packet losses for the flows
admitted and no delay bound guarantees.
 CLS is intended for those applications (e.g., adaptive real-
time applications) that can tolerate a certain amount of loss
and delay. CLS is not suited to those applications requiring very low
latency.
Improving IntServ: Differentiated Services (DiffServ)
 IntServ and RSVP suffer from the problems listed below; this is why a new QoS approach, called DiffServ, has been proposed for IP networks.

 Scalability: maintaining per-flow states at the routers in high-


speed networks is difficult due to the very large number of
flows.

 Flexible service models: IntServ has only two classes; we


should provide more qualitative service classes with ‘relative’
service differentiation (Platinum, Gold, Silver, …)

 Simpler signaling (than RSVP): many applications and users


may only want to specify a more qualitative notion of QoS.
DiffServ

DiffServ

 To achieve scalability, the DiffServ architecture envisages treatment for aggregated traffic flows rather than for single flows (as in IntServ). Much of the complexity is kept out of the core network, at edge routers, which process lower traffic volumes and smaller numbers of flows.
 DiffServ performs classification of the packets entering the DiffServ domain at edge routers. Core routers, instead, only perform packet forwarding on the basis of the classification decided at the entrance of the network.
 Edge routers classify each packet in a small number of aggregated flows or
classes, based on the DiffServ Code Point (DSCP) field in the IP packet header.

 Core routers apply Per-Hop Behavior (PHB) forwarding procedure depending on


DSCP.
 No per-flow state has to be maintained at core routers, thus
improving scalability.
DiffServ (cont’d)

 The main PHBs (with their associated DSCPs) of DiffServ are:

 Expedited Forwarding (EF), RFC 3246, offering some


quantitative QoS guarantees for aggregate flows.

 Assured Forwarding (AF), RFC 2597 and RFC 3260, providing


some priority policies for aggregate flows.

 DiffServ traffic management mechanisms include:

 At edge routers of the DiffServ domain: single flows are


analyzed operating classification (on the basis of the DSCP),
marking, policing, and shaping functions.

 At core routers within a DiffServ domain: forwarding is based on differentiated PHBs; buffering and scheduling are also performed on a per-class basis.
DiffServ Architecture

[Figure: sources (CBR traffic, interactive traffic, best-effort traffic) enter a DiffServ domain through an Edge Router, which performs DSCP flow classification, shaping, and marking; Core Routers apply traffic class-based queues and traffic management (PHB); traffic leaves the domain through an Edge Router towards the destinations.]
DiffServ: Edge Router/Host
Functions
 Classifier: It classifies the packets on the basis of different elements (DSCP).
 Meter: It checks whether the traffic falls within the negotiated profile (policer).
 Marker: It writes/rewrites the DSCP value in the packet header.
 Shaper/dropper: It delays some packets and then forwards or discards exceeding packets.

[Figure: Traffic Conditioner Block (TCB) at edge routers: IP packets enter the classifier, which feeds the meter; packets are then marked and passed to the shaper/dropper, which forwards or drops them.]
DiffServ: Classification

 An IP packet is marked in the Type of Service (ToS) byte


in the IPv4 header or in the Traffic Class (TC) field in the
IPv6 header.
 6 bits are used for DSCP and determine the PHB that the
packet will receive.
 2 bits are Currently Unused (CU).

DSCP CU
ToS byte in IPv4 header or TC byte in IPv6 header

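Since the DSCP occupies the upper 6 bits of the ToS/TC byte, extracting or building it is a simple bit operation; the small Python sketch below is an illustration, not part of the slides.

def dscp_from_tos(tos_byte: int) -> int:
    """The DSCP occupies the upper 6 bits of the IPv4 ToS / IPv6 TC byte."""
    return (tos_byte >> 2) & 0x3F

def tos_from_dscp(dscp: int) -> int:
    """Build the ToS/TC byte from a DSCP value, leaving the 2 CU bits at 0."""
    return (dscp & 0x3F) << 2

assert dscp_from_tos(0xB8) == 0b101110   # 0xB8 carries DSCP 101110, the EF code point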
Expedited Forwarding PHB

 Expedited Forwarding (EF) - RFC 3246:

 The EF traffic class is for guaranteed bandwidth, low jitter, low


delay, and low packet losses for aggregate flows.

 The EF traffic is supported by a specific queue at the


routers. The EF traffic is not influenced by the other traffic
classes (AF and BE).

 Non-conformant EF traffic is dropped or shaped.

 EF traffic is often strictly controlled by CAC (admission


based on peak rate), policing, and other mechanisms.

 The recommended DSCP for EF is 101110.


Assured Forwarding PHB

 Assured Forwarding (AF) - RFC 2597 and RFC 3260:

 AF is not a single traffic class, but 4 sub-classes: AF1, AF2,


AF3, and AF4. Hence, we can expect to have 4 AF queues at
the routers. The service priority for these queues at the
routers is: AF1 > AF2 > AF3 > AF4.

 Within each sub-class (i.e., within each queue), there are


three drop precedence values from a low drop level 1 up to a
high drop level 3 (with related DSCP coding) to determine which
packets will be dropped first in each AF queue if congested: the
drop precedence order for the generic queue AFx, x  {1, 2, 3,
4}, is AFx3 before AFx2 before AFx1.

 The packets of a generic AFx class queue are sent in FIFO order.
Assured Forwarding PHB
(cont’d)
 AF is used to implement services that differ relative to one another (e.g., gold, silver, etc.).

 Non-conformant traffic is remarked, but not dropped.

 AF is suitable for services that require a minimum guaranteed


bandwidth (additional bandwidth can only be used if available), with possible packet dropping above the agreed data rate in case of congestion.

In the table below, priority reduces from top to bottom and from left to right.

Class 1 Class 2 Class 3 Class 4

Low Drop AF11 (DSCP 10) AF21 (DSCP 18) AF31 (DSCP 26) AF41 (DSCP 34)

Medium Drop AF12 (DSCP 12) AF22 (DSCP 20) AF32 (DSCP 28) AF42 (DSCP 36)

High Drop AF13 (DSCP 14) AF23 (DSCP 22) AF33 (DSCP 30) AF43 (DSCP 38)

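For reference, the DSCP values of the table above can be collected in a small lookup structure; the helper below is an illustrative sketch, not part of the slides.

# DSCP values for the AF classes, taken from the table above
AF_DSCP = {
    "AF11": 10, "AF12": 12, "AF13": 14,
    "AF21": 18, "AF22": 20, "AF23": 22,
    "AF31": 26, "AF32": 28, "AF33": 30,
    "AF41": 34, "AF42": 36, "AF43": 38,
}

def af_label(dscp: int):
    """Return the AFxy label for a DSCP value, or None if it is not an AF code point."""
    for label, value in AF_DSCP.items():
        if value == dscp:
            return label
    return None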
Traffic Management and
Scheduling at Nodes (DiffServ)
 Scheduling: Rather than using strict priority queuing,
more balanced scheduling algorithms such as fair
queuing or weighted fair queuing are used.

 Buffer Management: To prevent problems


associated with tail drop events (i.e., arriving
packets are dropped when queue is congested,
regardless of flow type or importance), RED or WRED
algorithms can be used to drop packets.
 If congestion occurs, the traffic in the higher class (e.g., class 1) has
priority and the packets with the higher drop precedence are discarded
first.

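As an illustration of finish-tag-based fair scheduling (a simplified, SCFQ-like sketch rather than an exact WFQ implementation), the code below serves per-class queues in order of their weighted finish tags; class names and weights are hypothetical.

import heapq

class FairQueueScheduler:
    """Simplified finish-tag scheduler in the spirit of WFQ/SCFQ (illustrative sketch only)."""

    def __init__(self, weights):
        self.weights = weights                     # e.g. {"EF": 4.0, "AF1": 2.0, "BE": 1.0}
        self.last_finish = {q: 0.0 for q in weights}
        self.virtual_time = 0.0
        self.heap = []                             # entries: (finish_tag, seq, class, packet)
        self.seq = 0

    def enqueue(self, cls, packet, size_bits):
        # finish tag = start tag + packet size weighted by the class weight
        start = max(self.last_finish[cls], self.virtual_time)
        finish = start + size_bits / self.weights[cls]
        self.last_finish[cls] = finish
        heapq.heappush(self.heap, (finish, self.seq, cls, packet))
        self.seq += 1

    def dequeue(self):
        if not self.heap:
            return None
        finish, _, cls, packet = heapq.heappop(self.heap)
        self.virtual_time = finish                 # self-clocked virtual time
        return cls, packet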
Comparison

               Best-Effort              DiffServ                     IntServ
Service        Connectivity;            Per-aggregation isolation;   Per-flow isolation;
               no isolation;            per-aggregation guarantee    per-flow guarantee
               no guarantees
Service Scope  End-to-end               Domain                       End-to-end
Complexity     No set-up                Long-term setup              Per-flow setup
Scalability    Highly scalable          Scalable (edge routers       Not scalable (each
               (nodes maintain only     maintain per-aggregate       router maintains
               routing state)           state; core routers          per-flow state)
                                        maintain per-class state)

Thank you!

[email protected]

