
Active Queue Management Techniques

2.1 Introduction

Active Queue Management (AQM) aims to detect congestion in the network

before it becomes severe by overfilling the router queue. It means that the router tries

to reduce the sending rate of the traffic sources by dropping or marking packets.

There exist two approaches to indicate congestion: packets can be dropped or
packets can be marked. The first strategy generates additional overhead through re-sending,
while the latter requires the cooperation of the endpoints. Endpoints have to react to marked
packets as if they had been dropped and decrease their throughput. With this the same

improvement of bandwidth utilization can be achieved, but without additional

overhead costs. Additionally some AQM mechanisms aim to reduce the bandwidth of

greedy flows by dropping their packets at higher rates. In this chapter we give a

survey on Active Queue Management algorithms that are suitable for Peer-to-Peer

networks.

Figure 2.1: Active Queue Management

Active Queue Management (AQM) refers to traffic management techniques at

a router that detect and notify traffic sources of imminent network congestion to

prevent outbound buffer overflow and control queuing delay. When notified of

network congestion, cooperative traffic sources like TCP reduce their transmission

rates to participate in congestion control. In case network congestion cannot be

managed voluntarily by the traffic sources, AQMs may use buffer management

techniques to suppress traffic to the targeted traffic level and achieve the QoS goal. In

this section, we first propose an AQM taxonomy that provides a systematic way to

classify and analyze AQM mechanisms. In addition to the survey we present a

detailed flat taxonomy.

2.2 AQM Taxonomy

In general, the tasks of AQM can be divided into that of a Congestion Monitor

which detects and estimates congestion, a Bandwidth Controller that manages use of

the output bandwidth, a Congestion Controller which computes and applies the

congestion notification probability (CNP) to incoming traffic and a Queue Controller

which manages buffer usage and packet scheduling. We have developed an AQM

taxonomy based on the four AQM tasks.

2.2.1 Congestion Monitor

The first task of an AQM is to monitor, detect and estimate congestion. This

estimation is used for bandwidth management decisions by the Bandwidth Controller,

or for congestion notification probability (CNP) computations in the Congestion

Controller.

In general, AQM congestion detection and estimation mechanisms can be

classified by the monitoring policy that uses either queue length as a measure of

congestion or the incoming traffic load as a measure of congestion. In each case,

either the instant or average measure of the quantity can be used. Traditionally, AQM

mechanisms used queue statistics such as instant or average queue length against

queue thresholds as a measure of impending congestion. Yet, AQMs may measure

incoming traffic load and declare congestion when the measured load is greater than

the target load. The target traffic load is typically set to near 1, which defines

congestion as a state that the estimated incoming traffic rate (the offered load) is

greater than the service rate or link capacity. As in the case of the queue-based

congestion estimation policies, either an instant or an average measure of the traffic

load can be used in the load-based policies. An instant load refers to the load

measured in the last measurement interval, whereas an average load can be defined as

the average of instant loads over a specified period.

The traffic load based congestion estimation policies can be further classified

by the monitoring method (traffic rate or queue length) to estimate the traffic load. It

is more intuitive to measure the traffic load in terms of incoming traffic rate over

service rate. Yet, the traffic load can also be estimated in terms of queue length

differences over a measurement interval. Both rate-based and queue-based load

estimation methods have advantages and disadvantages. Rate-based methods usually

have a little more overhead than queue-based methods since rate-based methods need

to collect every incoming packet size while queue-based methods can sample the

queue size every measurement interval. However, an important advantage is that the

rate-based congestion estimation methods can reduce the estimation noise and detect

impending congestion before the queue starts to grow, allowing the Bandwidth

Controller or Congestion Controller to more accurately and promptly respond to the

imminent congestion.
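As a concrete illustration of the two monitoring methods, the sketch below computes an instant load per measurement interval either from the bytes that arrived (rate-based) or from queue-length differences (queue-based), and smooths it with an EWMA. The interval length, capacity, and averaging weight are illustrative assumptions, not values prescribed by any particular AQM.

class LoadMonitor:
    """Estimates offered load each measurement interval, either from the bytes
    that arrived (rate-based) or from queue-length differences (queue-based),
    and keeps an EWMA of the instant loads."""

    def __init__(self, capacity_bps, interval_s=0.1, ewma_weight=0.125):
        self.capacity = capacity_bps        # link service rate in bits/s (assumed known)
        self.interval = interval_s          # measurement interval (assumed value)
        self.w = ewma_weight                # averaging factor (assumed value)
        self.arrived_bits = 0
        self.avg_load = 0.0

    def on_packet_arrival(self, size_bytes):
        # rate-based monitoring collects every incoming packet size
        self.arrived_bits += 8 * size_bytes

    def rate_based_load(self):
        # instant load = arrival rate / service rate over the last interval
        load = self.arrived_bits / (self.capacity * self.interval)
        self.arrived_bits = 0
        return load

    def queue_based_load(self, q_now_bytes, q_prev_bytes, departed_bits):
        # load inferred from queue growth: arrivals = departures + queue increase
        arrivals = departed_bits + 8 * (q_now_bytes - q_prev_bytes)
        return arrivals / (self.capacity * self.interval)

    def update_average(self, instant_load):
        # EWMA of instant loads over a specified period
        self.avg_load = (1 - self.w) * self.avg_load + self.w * instant_load
        return self.avg_load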

An important issue in load estimation is determining an effective measurement

interval, which may affect the stability of the feedback control system, link utilization

and queuing delay, as well as buffer overflows. For example, choosing an insufficiently

small interval can lead to the Congestion Controller making an over reactive decision

on network congestion while choosing an exceedingly large interval can make the

Congestion Controller less responsive. Similarly, when an averaging technique is used

for congestion estimation, the averaging factor will affect the responsiveness and

performance of the controller.

2.2.2 Bandwidth Controller

Following the Congestion Monitor, an AQM may have a Bandwidth

Controller that manages the use of the outbound bandwidth. Bandwidth controllers

can be categorized based on the nature of services they provide and the QoS goals. A

bandwidth controller may provide priority forwarding or loss differentiation service.

Priority forwarding is a priority class-based protection mechanism in which, upon

congestion, packets from a lower priority class are dropped before dropping packets

belonging to higher priority classes. A loss differentiation service is also a class-based

protection mechanism in which, upon congestion, a predefined proportion of traffic is

dropped from each class.
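A rough sketch of the two class-based services just described, assuming hypothetical class names and drop fractions (they are not taken from any specific AQM): priority forwarding sheds lower-priority packets first, while loss differentiation drops a predefined proportion from each class.

import random

# Hypothetical per-class configuration (assumed values for illustration only).
PRIORITY_ORDER = ["best_effort", "assured", "premium"]   # lowest priority first
LOSS_FRACTIONS = {"best_effort": 0.6, "assured": 0.3, "premium": 0.1}

def priority_forwarding_drop(queue, bytes_to_shed):
    """Priority forwarding: upon congestion, shed packets from the lowest
    priority class before touching higher priority classes."""
    shed = 0
    for cls in PRIORITY_ORDER:
        for pkt in [p for p in queue if p["cls"] == cls]:
            if shed >= bytes_to_shed:
                return shed
            queue.remove(pkt)
            shed += pkt["size"]
    return shed

def loss_differentiation_drop(pkt):
    """Loss differentiation: upon congestion, drop a predefined proportion of
    the traffic belonging to each class."""
    return random.random() < LOSS_FRACTIONS[pkt["cls"]]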

The most common type of bandwidth controller is one that provides fairness

protection in which individual flows or groups of flows are protected from one

another. We refer to such a Bandwidth Controller as a Bandwidth Guardian.

Bandwidth Guardians can be sub-categorized into using class-based, per flow or

pseudo per-flow bandwidth management. Class-based bandwidth fairness protection

mechanisms usually have the least overhead among the three, whereas per-flow

fairness protection mechanisms that maintain per-flow state have the most overhead.

Pseudo per-flow bandwidth management mechanisms detect outstanding high-

bandwidth flows without keeping per-flow information or by using a minimum per-


flow information and rate limit the flows. Pseudo per-flow management protects

flows from misbehaving high-bandwidth flows rather than enforcing per-flow

fairness, but at a lower cost than per-flow management.

Even within the same subcategory of the Bandwidth Guardians, the accuracy

and performance of the mechanisms can significantly differ in the complexity and

traffic information used for bandwidth management. For example, a simple class

based Bandwidth Guardian can pre-assign a fixed congestion bandwidth to each class,

while a more advanced class-based mechanism can dynamically assign a fair

bandwidth to each class for the price of estimating the number of active flows in each

class.

2.2.3 Congestion Controller

Incoming traffic that passes the Bandwidth Controller is forwarded to a

Congestion Controller. The job of the Congestion Controller is to prevent or control

network congestion by notifying traffic sources of the impending congestion earlier so

that congestion responsive traffic sources such as TCP can reduce their transmission

rate. Although an explicit binary congestion notification method called Explicit
Congestion Notification (ECN) exists, the implicit mechanism of dropping incoming
packets has historically been used. For this reason, it is sometimes hard to distinguish

Congestion Controllers from Bandwidth Controllers as packet drops resulting from

the bandwidth management also act as implicit congestion notification. Yet

Bandwidth Controllers attempt to repressively manage outbound bandwidth usage at

congestion, while Congestion Controllers attempt to prevent congestion with the help

of responsive traffic sources. An easy way to distinguish between a Bandwidth

Controller and a Congestion Controller is, if it does not make sense for a mechanism


to use ECN instead of packet drop, then it is a Bandwidth Controller, otherwise, it is a

Congestion Controller. Note that an AQM may have either a Bandwidth Controller or

a Congestion Controller, or both controllers.

More precisely, a Congestion Controller determines a congestion notification

probability (CNP) based on the estimated congestion level, its control history and

possibly other traffic information, and notifies traffic sources by randomly marking

(or dropping) the incoming packets with the estimated CNP. Every Congestion

Controller has its own QoS goal and thus has a specific CNP computation policy of

which flows should reduce or increase their transmission rate and by what amount.

The QoS goals can be simply to prevent congestion with minimized queuing delay, to

yield fair bandwidth allocation among responsive sources while preventing

congestion, or to provide a diverse QoS while preventing congestion. To achieve

these QoS and performance goals, the AQM may perform a uniform, class-based, per-

flow or per packet CNP computation. For example, to achieve only the basic goal of

preventing congestion, a Congestion Controller may compute and apply a uniform

CNP to all incoming traffic. In order to additionally yield fair bandwidth allocations

among different classes of responsive traffic, a Congestion Controller may compute

and apply per-class CNPs. Furthermore, a congestion controller may choose to

perform per-flow CNP computation to yield per-flow fairness among responsive

flows, or to provide a customized QoS to each flow.

The CNP computation methods can be further classified into two categories

based on how the CNP is computed. The first category is a Proportional (memory

less) controller that does not consult the recent control history but computes the CNP

based only on the current estimated congestion level and traffic information.

Proportional congestion controllers typically require knowledge on the traffic sources



to successfully control congestion. The fact that the stable state average transmission

rate of window-based traffic sources (like TCP) given a CNP can differ based on the

average RTTs they experience implies that a router should have some knowledge of

the average RTT and the number of flows (N) to control aggregated average incoming

traffic rate and to control per-flow average throughput. For example, an AQM that has

knowledge of the average RTT of the incoming flow aggregate and N can compute a

proper CNP for the flow aggregate using the queue law assuming ideal TCP traffic

sources. Or, knowing the fair bandwidth share (link capacity divided by N) and RTTs

of individual flows, a router can compute a proper CNP for each flow that will bring

each average flow rate to the fair share. To be practical, since not all TCP traffic is

greedy and long-lived, a proportional congestion controller is required to know per-
flow transmission rates (cwnd/RTT) in addition to the congestion estimate.
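As a deliberately idealized illustration of proportional, per-flow CNP computation, the sketch below inverts the well-known steady-state TCP response function, rate ≈ (MSS/RTT)·sqrt(3/(2p)), to obtain the marking probability that would steer an ideal long-lived TCP flow toward its fair share. The MSS value and the clamping are assumptions for illustration; the proportional controllers surveyed here differ in their details.

def proportional_cnp(fair_share_bps, rtt_s, mss_bytes=1500):
    """Invert the idealized TCP response function
        rate ~= (MSS / RTT) * sqrt(3 / (2p))
    to find the CNP p that brings an ideal TCP flow to fair_share_bps.
    Assumes greedy, long-lived TCP sources with a known RTT."""
    mss_bits = 8 * mss_bytes
    window_pkts = fair_share_bps * rtt_s / mss_bits   # target window W = rate * RTT / MSS
    if window_pkts <= 0:
        return 1.0
    p = 1.5 / (window_pkts ** 2)                      # from the rate formula: p = 3 / (2 W^2)
    return min(max(p, 0.0), 1.0)                      # clamp to a valid probability

# Example: a 10 Mbps link shared by 20 flows with a 100 ms RTT.
fair_share = 10e6 / 20
print(proportional_cnp(fair_share, 0.1))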

That is, the router should know exactly how much bandwidth to take away

from each flow to compensate for the overloaded amount of the service bandwidth to

increase congestion control precision. Without knowing the transmission rate of each

flow in the first place, it is not possible to determine how much bandwidth will be

reduced per notification. However, knowing the per-flow transmission rate (or just

cwnd since RTT is already known) helps little under the current congestion

notification structure of the Internet due to the inefficient and inaccurate router-to-

host binary congestion control communication method of marking with the CNP. In

addition, it is not possible for a router to instantly make coarsely-responding TCP

transmit at a specific rate. Thus, precision congestion control using CNP alone cannot

readily be done. This is the basic argument behind the design of XCP, one of the most

recent mechanisms, which proposes to use window-based traffic sources that inform

routers of RTT and cwnd and transmit at the rate that the most congested router

explicitly specifies in terms of allowable cwnd. One critical problem facing AQMs in

this category is that per-flow traffic information such as RTT and cwnd may not be

practically and securely obtainable under the current Internet structure. On the other

hand, AQMs that compute CNP based only on the congestion estimate or use

incomplete traffic information may not be able to support a wide range of traffic

without confronting configuration problems or failure to meet QoS and/or

performance goals for some traffic mixes. The second class of CNP computation

methods is Integral control that heuristically searches for a stable state CNP that will

bring the aggregated traffic to a desired level based on the recent control history and

congestion estimates measured by the Congestion Monitor. More precisely, integral

congestion controllers continuously update the CNP of the previous interval based on

an estimated congestion control error. A significant advantage of integral CNP

computation methods over proportional methods is that integral methods require no

additional traffic information to converge to a CNP that accomplishes the aggregate

QoS goals such as bounded queuing delay. While the integral CNP computation

techniques are usually used for incoming traffic aggregates, they can be applied for

groups of flows or even possibly for individual flows assuming per-flow throughput

and fair bandwidth share (or N) can be measured. An important integral congestion

controller issue is to find an appropriate CNP update interval and

increment/decrement steps in order to ensure the congestion control stability and

responsiveness under both steady and changing network traffic conditions.
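A minimal sketch of such an integral CNP update, assuming the congestion estimate is an aggregate load measured by the Congestion Monitor (the target utilization and gain are illustrative assumptions):

class IntegralCongestionController:
    """Heuristically searches for a stable-state CNP by repeatedly adjusting
    the CNP of the previous interval with the estimated control error."""

    def __init__(self, target_load=0.98, gain=0.05):
        self.target = target_load     # desired utilization (assumed value)
        self.gain = gain              # increment/decrement step scaling (assumed value)
        self.cnp = 0.0                # congestion notification probability

    def update(self, measured_load):
        # control error: how far the aggregate is from the desired level
        error = measured_load - self.target
        # integral action: accumulate the error into the previous CNP
        self.cnp = min(max(self.cnp + self.gain * error, 0.0), 1.0)
        return self.cnp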

So far, two types of CNP determination methods have been discussed and

illustrated. The integral method is usually used to implement uniform or per-class

based congestion notification services. On the other hand, the proportional CNP

computation method utilizing the complete traffic information is a typical per-flow



CNP computation method that can be used to implement per-flow fair congestion

notification services or customized QoS congestion notification services.

Alternatively, a per-flow QoS service may be implemented using the integral method

with per-flow QoS requirement information from traffic sources. That is, a router may

heuristically adjust the updated CNP of the traffic aggregate for each flow considering

the QoS requirement of the flow.

As briefly mentioned in a previous paragraph, Congestion Controllers can use

either the implicit congestion notification method of packet dropping or the explicit

congestion notification method of marking. Currently, the Internet Protocol only

supports binary ECN bit marking. ECN marking can offer a significant performance

gain in terms of packet loss rate compared to the implicit packet drop congestion

notification. However, it is possible that multiple bits can be used to enhance

congestion control precision in the future.

2.2.4 Queue Controller

The last component of an AQM is a Queue Controller. A Queue Controller

manages the transmission of packets forwarded by the Congestion Controller or

Bandwidth Controller. Typically, AQM mechanisms keep only a single packet queue.

However, a mechanism may assign a packet queue for each incoming flow and

perform link scheduling (although it is arguable that such a mechanism is not an AQM).

Alternatively, an AQM may assign a packet queue for each class of traffic. To

encompass these possibilities, the AQM taxonomy includes the number of packet

queues for the Queue Controller categorization.

Every packet queue has a management discipline, the simplest being FIFO

queue management. Other management disciplines include support for a uniform QoS

such as bounded average queuing delay or per-flow or per-class delay. The Queue

Controller can support diverse per-flow QoS requirements using a packet scheduling

rather than FIFO in cooperation with the Congestion Controller, although little work

has been done in this direction. The per-flow QoS parameters that an Internet router

can support are CNP and queuing delay. As mentioned earlier, a Congestion

Controller may consider the QoS requirements in determining the CNP for a flow

given QoS information from traffic sources. Similarly, the Queue Controller can

consider the delay requirement of each flow using QoS packet scheduling. As long as

the flow uses bandwidth less than or equal to the fair share, trying to meet the QoS

requirements of individual flows may be desirable. However, QoS packet scheduling

may need to address issues such as starvation that can affect the throughput of

window based traffic sources like TCP.

2.3 Classification of AQM Schemes

Figure 2.2: Classification of AQM Schemes


2.3.1 Congestion metric without Flow Information

This first category of the classification considers only the congestion
metric and not the flow information. Based on the congestion metric, the AQMs can be
further classified. AQMs use a variety of congestion metrics like queue

length, load and link utilization to sense the congestion in routers.

2.3.1.1 Queue-based AQM

a) RED: The first well-known AQM scheme proposed is RED [1]. It is one of
the most popular algorithms. It tries to avoid problems like global synchronization,
lock-out, bursty drops and queuing delay that exist in the traditional passive queue
management scheme, i.e., Drop-Tail.

For every packet arrival {
    Calculate the average queue size Qave
    if (Qave > maxth) {
        Drop the packet
    }
    else if (Qave > minth) {
        Calculate the dropping probability pa
        Drop the packet with probability pa,
        otherwise forward it
    }
    else {
        Forward the packet
    }
}

Variables:
    Qave : average queue size
    pb   : current packet-marking probability
    q    : current queue size
    pa   : temporary marking or dropping probability

Fixed parameters:
    wq    : queue weight (0.1 – 0.0001)
    maxth : maximum threshold for the queue
    minth : minimum threshold for the queue
    maxp  : maximum dropping probability

Figure 2.3: Pseudo code for RED


The algorithm detects congestion by computing the average queue size Qave.

To calculate average queue size, low pass filter is used which is an exponential

weighted moving average (EWMA). The average queue is then compared with two

thresholds: a minimum threshold minth and a maximum threshold maxth. If the

average queue size is between minimum and maximum threshold, the packet is

dropped with a probability. If it exceeds maximum threshold, then the incoming

packets are dropped. The packet drop probability is a linear function of the average queue length. So the

dropping probability depends on various parameters like minth, maxth, Qave and wq.

These parameters must be tuned well for the RED to perform better. However, it faces

weaknesses such as accurate parameter configuration and tuning. This becomes a

major disadvantage for the RED algorithm. Though RED avoids global

synchronization but fails when load changes dramatically. Queue length gives

minimum information regarding the severity of congestion. RED does not consider

the packet arrivals from the various sources for the congestion indication.


Since RED considers only the queue length, detecting the true level of
congestion remains an inherent problem. When the number of users increases, the

performance of the RED queue degrades. According to queuing theory, it is only

when packet inter-arrivals have a Poisson distribution that queue length directly relates
to the number of active sources, thus indicating the true level of congestion.
However, in network gateways packet inter-arrival times are decidedly non-Poisson,

which clearly does not indicate the severity of congestion. Packet loss and utilization

at the link varies with regard to the network load variation as RED is sensitive to

parameter configuration. In case of accurate tuning of parameter wq, high utilization


and low packet drop at the link can be achieved. In case of poor minth, poor

utilization at the link exists and a poor maxth value results in large packet drops.
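A minimal executable sketch of the RED logic described above (the parameter values are common textbook defaults chosen for illustration, not values prescribed in this chapter):

import random

class RED:
    def __init__(self, wq=0.002, minth=5, maxth=15, maxp=0.1):
        self.wq, self.minth, self.maxth, self.maxp = wq, minth, maxth, maxp
        self.qave = 0.0        # EWMA (low-pass filtered) average queue size
        self.count = -1        # packets since the last drop (spaces out drops)

    def on_arrival(self, qlen):
        """Return True if the arriving packet should be dropped or marked."""
        # exponential weighted moving average of the queue length
        self.qave = (1 - self.wq) * self.qave + self.wq * qlen
        if self.qave < self.minth:
            self.count = -1
            return False                       # forward the packet
        if self.qave >= self.maxth:
            self.count = 0
            return True                        # drop every arriving packet
        # between the thresholds: drop probability grows linearly with qave
        self.count += 1
        pb = self.maxp * (self.qave - self.minth) / (self.maxth - self.minth)
        pa = pb / max(1 - self.count * pb, 1e-9)
        if random.random() < pa:
            self.count = 0
            return True
        return False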

b) DSRED: RED [26] uses a single linear drop function to calculate the drop
probability of a packet and uses four parameters and the average queue to regulate its
performance. RED suffers from unfairness and low throughput. DSRED uses a two-segment
drop function which provides a much more flexible drop operation than RED.

However, DSRED is similar to RED in some aspects. Both of them use linear drop

functions to give smoothly increasing drop action based on average queue length.

Next they calculate the average queue length using the same definition. The two

segment drop function of DSRED uses the average queue length which is related to

long term congestion level. As the congestion increases, drop will increase with

higher rate instead of constant rate. As a result, congestion will be relieved and throughput

will increase. This results in a low packet drop probability at a low congestion level

and gives early warning for long term congestion. DSRED showed a better packet

drop performance resulting in higher normalized throughput than RED in both the

heavy load and low load. It results in lower average queuing delay and queue size

than RED.
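The two-segment idea can be sketched as follows; the segment boundary, slopes and thresholds are illustrative assumptions and may differ from the actual DSRED parameters.

def dsred_drop_prob(qave, minth=5.0, midth=10.0, maxth=15.0,
                    p_mid=0.05, p_max=0.5):
    """Two-segment linear drop function: a gentle slope at a low congestion
    level (early warning) and a steeper slope as congestion grows."""
    if qave < minth:
        return 0.0
    if qave < midth:
        # first segment: low drop probability at a low congestion level
        return p_mid * (qave - minth) / (midth - minth)
    if qave < maxth:
        # second segment: drop increases at a higher rate as congestion increases
        return p_mid + (p_max - p_mid) * (qave - midth) / (maxth - midth)
    return 1.0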

c) MRED: To overcome problems faced in RED, MRED computes the packet

drop probability based on a heuristic method rather than the simple method used in

RED. In this scheme the average queue size is estimated using a simple EWMA in the

forward or backward path. The packet drop probability is calculated to determine how

frequently the router drops packets at the current level of congestion. In MRED the

packet drop probability is computed in step form by using packet loss and link

utilization history. MRED is able to improve fairness, throughput and delay compared

to RED.


d) Adaptive Random Early Detection (ARED)

Floyd [5] argued that a weakness of RED is that it does not take into

consideration the number of flows sharing a bottleneck link. As discussed in Chapter

1, a TCP flow in congestion avoidance reduces its transmission rate by half when it

experiences a packet mark or drop. If the bandwidth of a bottleneck link is shared

equally by n flows (each flow receives 1/n bandwidth of the link), a single packet

mark or drop causes one flow to reduce its transmission rate to 0.5/n and reduces the
offered load by a factor of (1 − 0.5/n). It is obvious that as n increases, the effect of

a packet mark or drop on reducing the aggregate transmission rates of n TCP flows

decreases. Thus, when n is large, RED either has to incur a high packet loss rate or is

not effective in reducing load on a congested link and in controlling the queue length.

On the other hand, when n is small, RED can be too aggressive, i.e., it drops too many

packets, and can cause underutilization of an Internet link.

Figure 2.4: ARED’s fixed target queue size

Floyd [5] concluded that RED needs to be dynamically tuned as the nature of the
traffic on a link changes, since the number of flows on an Internet link is not known a

priori. They proposed a self-configuring algorithm for RED by adjusting maxp. In

their algorithm, maxp is adjusted every time the average queue length falls out of the

target range between minth and maxth. When the average queue length is smaller than

minth, maxp is decreased multiplicatively to reduce RED’s aggressiveness in marking

or dropping packets; when the queue length is larger than maxth, maxp is increased

multiplicatively. This algorithm is expressed in pseudo code in figure 2.5. α and β in

the pseudo code are constants that have to be chosen by network operators.

Floyd et al. improved upon Feng’s original adaptive RED algorithm by

replacing the MIMD (multiplicative increase multiplicative decrease) approach with

an AIMD (additive increase multiplicative decrease) approach for adapting maxp

slowly in 2001. The pseudo code for updating maxp proposed by Floyd et al. is shown

in figure 2.6. They also provided guidelines for choosing minth, maxth, and the

coefficient of the low-pass filter for computing the weighted average queue size. The

Adaptive RED version proposed by Floyd et al. (referred to herein as “ARED”) also

includes the “gentle mode” that was discussed in 2.3.1.1. The parameters for the

ARED algorithm proposed by Floyd et al. are summarized in table 2.1.

On every update of the queue average q:

    if minth < q < maxth then
        status ← Between
    end if
    if q < minth && status != Below then
        status ← Below
        maxp ← maxp / α
    end if
    if q > maxth && status != Above then
        status ← Above
        maxp ← maxp × β
    end if

Figure 2.5: Pseudo code for updating maxp


On every update interval (0.5 seconds) for maxp:

    if q > minth + 0.6 (maxth − minth) && maxp < 0.5 then
        maxp ← maxp + min(0.01, maxp / 4)
    else if q < minth + 0.4 (maxth − minth) && maxp > 0.01 then
        maxp ← maxp × β
    end if

Figure 2.6: Pseudo code for updating maxp

Parameters    Description
wq            Coefficient of a low-pass filter for computing the average queue size
β             Decrease factor for adapting maxp
minth         Low queue threshold for computing drop or mark probability for arriving packets
maxth         High queue threshold for computing drop or mark probability for arriving packets
Table 2.1: ARED parameters

According to Floyd et al., one of the original RED algorithm’s main

weaknesses is that it cannot control the router’s average queue size effectively and

predictably. When maxp is high or congestion on the link is light, RED keeps the

average queue size near minth. On the other hand, when maxp is low or the link is

heavily congested, RED’s average queue size grows to maxth. Floyd et al. claimed

that ARED does not have this problem since it dynamically adjusts maxp. They also

demonstrated via simulations that ARED can achieve good and predictable

performance without requiring hand-tuning its parameter settings. Further, they

claimed that unlike RED, ARED is relatively insensitive to parameter settings.
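A runnable sketch of the AIMD adaptation of maxp shown in figure 2.6 (the decrease factor β = 0.9 is the value commonly associated with ARED and should be treated as an assumption here):

def adapt_maxp(qavg, maxp, minth, maxth, beta=0.9):
    """AIMD adaptation of maxp as in figure 2.6: additive increase when the
    average queue drifts above the target range, multiplicative decrease when
    it drifts below. Called once per update interval (e.g., every 0.5 s)."""
    target_low = minth + 0.4 * (maxth - minth)
    target_high = minth + 0.6 * (maxth - minth)
    if qavg > target_high and maxp < 0.5:
        maxp = maxp + min(0.01, maxp / 4)     # additive increase
    elif qavg < target_low and maxp > 0.01:
        maxp = maxp * beta                    # multiplicative decrease
    return maxp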


e) PD-RED: PD-RED [10] was introduced to improve the performance over

the Adaptive RED scheme. This scheme is based on the proportional derivative (PD)

control principle. It applies control theory and adapts RED's maximum drop probability
parameter maxp to stabilise the queue length. In this scheme, AQM is

considered as a typical control system. The PD-RED algorithm is composed of two parts: a

new PD controller and the original RED AQM. The variation of queue length and the

drop probability is smaller in PD-RED compared to Adaptive RED. PD-RED showed

better performance in terms of mean queue length and standard deviation of the queue

length.
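A minimal sketch of a proportional-derivative adjustment of maxp around a reference queue length; the gains, reference value and normalization are illustrative assumptions, not the published PD-RED settings.

class PDController:
    """PD adjustment of RED's maxp: a term proportional to the queue error plus
    a derivative term on the change of the error, sampled periodically."""

    def __init__(self, q_ref=50.0, kp=0.001, kd=0.002, buffer_size=200.0):
        self.q_ref, self.kp, self.kd, self.B = q_ref, kp, kd, buffer_size
        self.prev_error = 0.0

    def update(self, qavg, maxp):
        error = (qavg - self.q_ref) / self.B            # normalized queue error
        delta = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        return min(max(maxp + delta, 0.0), 1.0)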

f) LRED: The AQM scheme Loss Ratio based RED (LRED) measures the latest

packet loss ratio, and uses it as a complement to queue length in order to dynamically

adjust packet drop probability. So in this scheme packet loss ratio is a clear indication

of severe congestion occurrence. Queue-length is also used in small time-scale to

make the scheme more responsive in regulating the length to an expected value. LRED
tries to decouple the response time and packet drop probability, thereby making its
response time almost independent of network status.

g) HRED: In RED, the drop probability curve is linear to the change of the

average queue size. In this scheme, by contrast, the drop probability curve is a hyperbola.

As a result this algorithm regulates the queue size close to the reference queue value.

This makes the algorithm no longer sensitive to the level of network load, with a low
dependency on the parameter settings. It also achieves higher network utilization.

Since HRED is insensitive to the network load and queue size does not vary much

with the level of congestion, the queuing delay is more predictable. It rapidly reaches

and keeps around its reference queue length, irrespective of the increase or decrease in


queue length. Hyperbola RED tries to provide the highest network utilization because

it strives to maintain a larger queue size.

h) AutoRED: The AutoRED feature takes into account the traffic properties,
congestion characteristics and the buffer size. In AutoRED, the calculation of the average
queue size using the EWMA model is modified and redefined. Therefore the weight wq,t is a

combination of the three main network characteristics such as traffic properties,

congestion characteristics and the queue normalization. In the above technique, the

wq,t is written as a product of the three network characteristics. AutoRED combined with
RED performs better than the plain RED scheme. This model appropriately reduces the queue
oscillations in RED-based algorithms. AutoRED uses the strength and effect
of both the burstiness and the transient congestion.

2.3.1.2 Load-based AQM

a) AVQ: The virtual queue is updated [15] when a packet arrives at the real
queue to reflect the new arrival. As in Fig. 2.7, when the virtual queue

or buffer overflows, the packets are marked / dropped. The virtual capacity of the link

is modified such that total flow entering each link achieves a desired utilization of the

link.

This is done by aggressive marking when the link utilization exceeds the

desired utilization and less aggressive when the link utilization is below the desired

utilization. As a result, this provides earlier feedback than RED.

b) YELLOW: In this scheme the routers periodically monitor their load on

each link and determine a load factor, the available capacity and the queue length.


Parameters    Description
γ             The desired utilization at a link
α             Coefficient for computing the virtual queue capacity

Table 2.2: YELLOW parameters

At each packet arrival epoch do
    VQ = max(VQ − C~ (t − s), 0)                  /* Update virtual queue size */
    if VQ + b > B
        Mark or drop the packet in the real queue
    else
        VQ = VQ + b                                /* Update virtual queue size */
    end if
    C~ = max( min(C~ + α γ C (t − s), C) − α b, 0 )   /* Update virtual capacity */
    s = t

Constants:
    C = capacity of the link
    B = buffer size
    b = number of bytes in the current packet
    α = smoothing parameter
    γ = desired utilization of the link
Other:
    C~ = virtual queue capacity
    t = current time
    s = arrival time of the previous packet
    VQ = number of bytes currently in the virtual queue

Figure 2.7: Pseudo code of AVQ
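The same logic as a runnable sketch (the buffer size and the α and γ values passed in are illustrative assumptions):

class AVQ:
    """Adaptive Virtual Queue: a virtual queue drained at an adapted virtual
    capacity; packets are marked/dropped when the virtual queue would overflow."""

    def __init__(self, capacity_Bps, buffer_bytes, gamma=0.98, alpha=0.15):
        self.C = capacity_Bps         # real link capacity (bytes/s)
        self.B = buffer_bytes         # buffer size
        self.gamma = gamma            # desired utilization (assumed value)
        self.alpha = alpha            # smoothing parameter (assumed value)
        self.Ct = capacity_Bps        # virtual capacity C~
        self.VQ = 0.0                 # bytes currently in the virtual queue
        self.s = 0.0                  # arrival time of the previous packet

    def on_arrival(self, t, b):
        """Return True if the arriving packet of b bytes should be marked/dropped."""
        self.VQ = max(self.VQ - self.Ct * (t - self.s), 0.0)   # drain virtual queue
        mark = self.VQ + b > self.B
        if not mark:
            self.VQ += b
        # adapt the virtual capacity toward the desired utilization
        self.Ct = max(min(self.Ct + self.alpha * self.gamma * self.C * (t - self.s),
                          self.C) - self.alpha * b, 0.0)
        self.s = t
        return mark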

This helps in identifying the incipient congestion in advance and calculates the

packet marking probability. Yellow improves the robust performance with respect to

round-trip propagation delay by introducing the early queue controlling function. So

Yellow uses the load factor (link utilization) as a main merit to manage congestion.

To improve congestion control performance, a queue control function (QCF) is

introduced as a secondary merit. The sufficient condition for globally asymptotic


stability is presented based on Lyapunov theory. Furthermore, the principle for

parameter settings is given based on the bounded stable conditions.

c) SAVQ: It is observed that the desired utilization parameter γ in AVQ [27]

algorithm has an influence on the dynamics of queue and link utilization. It is difficult

to achieve a fast system response and high link utilization simultaneously using a

constant value of γ. An adaptive setting method for γ is proposed according to the

instantaneous queue size and the given reference queue value. This new algorithm,

called Stabilized AVQ (SAVQ) [18], stabilizes the queue dynamics while maintaining

high link utilization.

d) EAVQ: A rate-based, stable, enhanced adaptive virtual queue (EAVQ) [27]
has been proposed. The arrival rate at the network link is maintained as the principal
measure of congestion. The desired link utilization is used as a subordinate measure to
solve problems such as the difficulty of parameter setting, poor anti-disturbance ability
and slightly low utilization of the link capacity. EAVQ improved the transient performance
of the system and assured full utilization of the link capacity. Based on linearization, the local stability

conditions of the TCP/EAVQ system were presented. The simulation results show the

excellent performances of EAVQ such as the higher utilization, the lower link loss

rate, the more stable queue length, and the faster system dynamic response than AVQ.

2.3.1.3 Queue and Load-based AQM

a) REM: As discussed in [31], Random Exponential Marking (REM) achieves
high utilization with negligible loss or queuing delay even as the load increases. This
scheme stabilizes both the input rate around the link capacity and the queue around a

small target independent of the number of users sharing the link. It uses a congestion


measure price to determine the marking probability. The congestion measure price is

updated based on the rate mismatch and the queue mismatch, as shown in figure 2.8.

p_l(k+1) = [ p_l(k) + γ ( α_l ( b_l(k) − b_l* ) + x_l(k) − c_l(k) ) ]⁺

Constants:
    γ > 0   : step size
    α_l > 0 : weight of the queue mismatch
    b_l*    : target queue length
    b_l(k)  : aggregate buffer occupancy
    x_l(k)  : aggregate input rate
    c_l(k)  : available bandwidth

Figure 2.8: Calculation of the congestion measure price

When the number of users in the network increases, the queue mismatch and

rate mismatch increases increasing the price value. Increase in price value results in

increased marking probability. This in turn reduces the source rate of the user input.

When the source rates are too small, the mismatch is negative, decreasing the price

and marking probability value that increases the source rate. The price adjustment rule

tries to regulate user rates with network capacity and controls queue length around a

target value. RED tries to couple the congestion measure and the performance

measure, but REM decouples the congestion measure and the performance measure

showing a better performance than the earlier scheme.
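A small sketch of the REM price update and its exponential marking rule, which marks with probability 1 − φ^(−price); the γ, α and φ values are illustrative assumptions.

class REM:
    """REM: maintains a 'price' per link updated from rate and queue mismatch,
    and marks packets with probability 1 - phi**(-price)."""

    def __init__(self, target_queue, gamma=0.001, alpha=0.1, phi=1.001):
        self.b_star = target_queue    # target queue length (packets)
        self.gamma = gamma            # step size (assumed value)
        self.alpha = alpha            # weight of the queue mismatch (assumed value)
        self.phi = phi                # base of the exponential marking (assumed value)
        self.price = 0.0

    def update_price(self, queue_len, input_rate, capacity):
        # price grows when either the queue or the input rate exceeds its target
        mismatch = self.alpha * (queue_len - self.b_star) + (input_rate - capacity)
        self.price = max(self.price + self.gamma * mismatch, 0.0)

    def mark_probability(self):
        return 1.0 - self.phi ** (-self.price)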

b) SVB: The SVB [31] scheme uses the packet arrival rate and queue length

information to detect congestion in an Internet router. Like AVQ, it maintains a virtual

queue and responds to the traffic dynamically. A new packet arrival is reflected in the

virtual queue considering both the queue length and the arrival rate. The most striking

feature of the proposed scheme is its robustness to workload fluctuations in

maintaining a stable queue for different workload mixes (short and long flows) and

parameter settings. The service rate of the virtual queue is fixed as the link capacity of

the real queue and adapts the limit of the virtual buffer to the packet arrival rate. The

incoming packets are marked with a probability calculated based on both the current

virtual buffer limit and the queue occupancy. The simulation results have shown that

it provides lower loss rate, good stability and throughput in dynamic workloads than

the other AQM schemes like RED, REM and AVQ.

2.3.1.4 Other Congestion Metrics (Loss Event, Link History, Link Utilization)

a) BLUE: The BLUE [9] algorithm resolves some of the problems of RED by

employing two factors: packet loss from queue congestion and link utilization. So

BLUE performs queue management based on packet loss and link utilization as

shown in Fig. 2.9. It maintains a single probability pm to mark or drop packets. If the

buffer overflows, BLUE increases pm to increase the rate of congestion notification, and pm is
decreased to reduce the congestion notification rate when the buffer is empty.

This scheme uses link history to control the congestion. The parameters of BLUE are
δ1, δ2 and freeze_time. The freeze_time determines the minimum time period between

two consecutive updates of pm.

Table 2.3: BLUE parameters

Parameters     Description
δ1             Incremental adjustment for pm
δ2             Decremental adjustment for pm
freeze_time    Minimum interval between two successive updates of pm


BLUE maintains minimum packet loss rates and marking probability over

varying queue sizes and numbers of connections compared to RED. In the case of a large
queue, RED has continuous packet loss followed by a lower load that leads to reduced

link utilization.

Upon packet loss (or Qlen > L) event:
    if ( (now − last_update) > freeze_time )
        pm := pm + δ1
        last_update := now

Upon link idle event:
    if ( (now − last_update) > freeze_time )
        pm := pm − δ2
        last_update := now

Constants:
    δ1, δ2      : increment / decrement for pm
    freeze_time : minimum time period between two consecutive updates of pm

Figure 2.9: Pseudo code of the BLUE algorithm

In BLUE [9], the queue length is stable compared to RED, which has a large

varying queue length. This ensures that the marking probability of BLUE converges

to a value that results in reduced packet loss and high link utilization.
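A runnable sketch of the BLUE update rules (the increment, decrement and freeze_time values are illustrative assumptions):

class BLUE:
    """BLUE keeps a single marking probability pm driven by two events:
    packet loss (or the queue exceeding a limit) and link idleness."""

    def __init__(self, d1=0.0025, d2=0.00025, freeze_time=0.1):
        self.d1, self.d2 = d1, d2          # increment / decrement for pm (assumed)
        self.freeze_time = freeze_time     # minimum seconds between updates (assumed)
        self.pm = 0.0
        self.last_update = -float("inf")

    def on_loss(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = min(self.pm + self.d1, 1.0)
            self.last_update = now

    def on_link_idle(self, now):
        if now - self.last_update > self.freeze_time:
            self.pm = max(self.pm - self.d2, 0.0)
            self.last_update = now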

2.3.2 Congestion metric With Flow Information

AQMs in this category use both a congestion metric and flow
information to detect congestion in routers. AQMs that use only a congestion metric
and not flow information face the problem of unfairness in handling the different
types of traffic. Considering the congestion metric, they can be further classified
as queue-based, load-based and others.


2.3.2.1 Queue-based

a) FRED: This is based on instantaneous queue occupancy of a given flow. It

removes the unfairness effects found in RED. FRED generates selective feedback to a
filtered set of connections having a large number of packets queued, rather than choosing
connections randomly to drop packets proportionally. It provides better protection

than RED for adaptive flows and isolating non-adaptive greedy flows.

b) CHOKe : (CHOose and Keep for responsive flows, and CHOose and Kill

for unresponsive flows) algorithm penalizes misbehaving flows by dropping more of

their packets. So CHOKe tries to bring fairness for the flows that pass through a

congested router. CHOKe in Fig. 2.10 calculates the average occupancy of the buffer

as in RED, using an EWMA. If the average queue is greater than minth, the flow id of
each arriving packet is compared with that of a randomly selected packet from the buffer,
called the drop candidate packet. If the packets belong to the same flow, both packets are
dropped. Otherwise, if the average queue is greater than maxth, the new packet is dropped;
else the new packet is admitted into the buffer with a probability p.

Calculate Qave
if (Qave <= minth) {
    Admit the new packet
}
else if (Qave <= maxth) {
    Draw a drop candidate packet at random from the buffer
    if (flow id of the arriving packet and the drop candidate packet is the same)
        Drop both packets
    else
        Admit the packet with probability p
}
else {
    Drop the new packet
}

Figure 2.10: Pseudo code of CHOKe algorithm
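A runnable sketch of the CHOKe comparison step, with the queue represented as a simple list of packets; the thresholds and maxp are illustrative assumptions.

import random

def choke_on_arrival(queue, pkt, qave, minth=5, maxth=15, maxp=0.1):
    """Return the list of packets to drop for an arriving packet under CHOKe.
    Each packet is a dict with a 'flow' field."""
    if qave <= minth:
        queue.append(pkt)
        return []
    if qave > maxth:
        return [pkt]                       # drop the new packet
    if not queue:
        queue.append(pkt)
        return []
    candidate = random.choice(queue)       # draw a drop candidate from the buffer
    if candidate["flow"] == pkt["flow"]:
        queue.remove(candidate)            # same flow: drop both packets
        return [candidate, pkt]
    # otherwise behave like RED: drop the new packet with probability p
    p = maxp * (qave - minth) / (maxth - minth)
    if random.random() < p:
        return [pkt]
    queue.append(pkt)
    return []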


c) SHRED: Short-lived flow friendly RED (SHRED) [3] is an AQM
mechanism that improves response time for short-lived Web traffic. It uses a cwnd hint

from a TCP source to compute the cwnd ratio of an arriving packet to the cwnd

average and reduces the probability of dropping packets during the sensitive period

when a flow’s cwnd is small. Sources mark each packet with its current window size,

allowing SHRED to drop packets from flows with small TCP windows with a lower

probability. Small TCP window sizes can significantly affect short-lived flows. A

small TCP window results in a lower transmission rate and short-lived flows are more

sensitive to packet drops. SHRED provides an improvement in Web response time, and these
Web traffic performance improvements are achieved without negatively impacting

long-lived FTP traffic.

d) Stochastic RED: To handle the tremendous growth of unresponsive traffic
in the Internet, Stochastic RED was introduced. Basically, Stochastic RED tunes the packet

drop probability of RED for all the flows by taking into consideration the bandwidth

share obtained by the flows. The dropping probability is adjusted such that the packets

of the flow with high transmission rate are more likely to be dropped than flows with

lower rate. This algorithm distinguishes individual flows without requiring per-flow

state information at the routers. It is called stochastic because it does not really

distinguish the flows accurately. The arriving traffic is divided by the router into a

limited number of counting bins using a hashing algorithm. On the arrival of each

packet at the queue, a hash function is used to assign the packet to one of the bins

based on the flow information. It dispatches the packets of the different flows to the

set of bins. With a given hash function, packet of the same flow are mapped to the

same bin. Therefore when the flow is unresponsive, the bin load increases

dramatically.

Stochastic RED estimates the bin loads and uses these loads to penalize flows

that map to each bin according to the load of the associated bin. Thus unresponsive

flows experience a large packet drop probability. The Stochastic RED is effective in

disciplining misbehaving flows, making unresponsive flows TCP friendly and

improving the response time of Web transfer without degrading the link utilization.
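A small sketch of the bin-hashing idea: flows are hashed into a limited number of counting bins, and RED's drop probability is scaled by the load share of the bin a packet maps to. The bin count, hash and scaling rule are illustrative assumptions.

NUM_BINS = 64          # limited number of counting bins (assumed value)

def bin_index(flow_id):
    # hash-based dispatch: packets of the same flow always map to the same bin
    return hash(flow_id) % NUM_BINS

def stochastic_red_prob(flow_id, red_prob, bin_bytes, total_bytes):
    """Scale RED's drop probability by how much of the recent traffic the
    packet's bin is responsible for, so heavy (often unresponsive) flows
    see a larger drop probability. bin_bytes is a per-bin byte counter list
    maintained by the router."""
    if total_bytes == 0:
        return red_prob
    load_share = bin_bytes[bin_index(flow_id)] / total_bytes
    scaled = red_prob * load_share * NUM_BINS    # equals red_prob when load is even
    return min(scaled, 1.0)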

2.3.2.2 Load based

a) SFED: SFED is a rate-control-based AQM discipline which can be coupled with
any scheduling discipline. It maintains a token bucket for every flow or aggregate of
flows. The token filling rates are in proportion to the permitted bandwidths. When a

packet is enqueued, tokens are removed from the corresponding bucket. The decision

to enqueue or drop a packet of any flow depends on the occupancy of its bucket at

that time. A token bucket serves as a control on the bandwidth consumed by a flow.

SFED ensures early detection and congestion notification to the adaptive source. The

token bucket also keeps record of the bandwidth used by its corresponding flow in the

recent past.
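A minimal token-bucket sketch in the spirit of SFED; the fill rate, bucket depth and the simple occupancy-based drop rule are illustrative assumptions.

import time

class FlowBucket:
    """Per-flow token bucket: tokens fill at the flow's permitted rate and are
    removed when packets are enqueued; low occupancy signals an over-sending flow."""

    def __init__(self, rate_Bps, depth_bytes):
        self.rate = rate_Bps           # permitted bandwidth for this flow (assumed)
        self.depth = depth_bytes       # bucket depth (assumed)
        self.tokens = depth_bytes
        self.last = time.monotonic()

    def on_enqueue(self, pkt_bytes):
        """Return True if the packet should be dropped (bucket nearly empty)."""
        now = time.monotonic()
        self.tokens = min(self.tokens + self.rate * (now - self.last), self.depth)
        self.last = now
        if self.tokens < pkt_bytes:    # flow has used more than its share recently
            return True
        self.tokens -= pkt_bytes
        return False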

b) FABA: The Fair Bandwidth Allocation (FABA) AQM scheme provides fairness
amongst competing flows even in the presence of non-adaptive flows. It is a rate-

control based AQM algorithm. It offers congestion avoidance by early detection and

notification with low implementation complexity. It maintains per active-flow state

with scalable implementation. It performs better than RED and CHOKe. In case of

constrained buffer sizes, it performs significantly better than FRED. It gives high

values of fairness for diverse applications such as FTP, Telnet and HTTP.

Performance is superior even for a large number of connections passing through the

routers. It is a scalable algorithm.


c) LUBA: LUBA is a link-utilization-based AQM algorithm. The algorithm
identifies malicious flows which cause congestion at the router, and assigns them
drop rates in proportion to their abuse of the network. A malicious flow continuously
hogs more than its fair share of link bandwidth, so LUBA assigns a drop probability
to a malicious flow such that it does not get more than its fair share of the network. The LUBA

Interval, B, is the byte-count of total packets received by the congested router during

an interval to measure whether a flow is hogging more than its fair share. Overload-

factor (U) is computed from the B bytes arriving at the router. If the overload factor U is
below the target link utilization, the router is non-congested and packets are not marked or
dropped; otherwise, all arriving packets are monitored while assigning a flow id to

each ingress flow at the router. A history table is maintained to monitor flows which

take more than their fair share of bandwidth in a Luba Interval. It disciplines

malicious flows in proportion to their excess inflow. It offers high throughput and

avoids global synchronization of responsive flows. LUBA works well in different

network conditions and the complexity of the algorithm does not increase even when

there is a large number of non-responsive flows.

2.3.2.3 Others

a) SFB: It is a FIFO queuing algorithm that identifies and rate-limits non-

responsive flows based on accounting mechanisms. The accounting bins are used to

keep track of queue occupancy statistics of packets belonging to a particular bin. Each

bin keeps a dropping probability pm which is updated based on bin occupancy. As a

packet arrives at the queue, it is hashed into one of the N bins in each of the levels. If

the number of packets mapped to a bin goes above a certain threshold, pm for the bin

is increased. If the number of packets drops to zero, pm is decreased. SFB is highly



scalable and enforces fairness using an extremely small amount of state and a small amount
of buffer space.

2.3.3. Only flow information

The third category of AQMs uses only the flow information and does not
rely on a congestion metric to control the congestion.

a) Stabilized RED: SRED pre-emptively discards packets with a load-
dependent probability when a buffer in a router is congested. It stabilizes its buffer

occupancy at a level independent of the number of the active connections. SRED does

this by estimating the number of active connections. It obtains the estimate without

collecting or analyzing state information. Whenever a packet arrives at the buffer, the
arriving packet is compared with a randomly chosen packet that recently preceded it into
the buffer. The information about the arriving packets is augmented with a “Zombie

list”. As packets arrive, as long as the list is not full, for every packet the packet flow

identifier is added to the list. Once the zombie list is full, whenever a packet arrives, it is
compared with a randomly chosen zombie in the zombie list. If the arriving packet’s
flow matches the zombie it is declared a “hit”. If the two are not of the same flow, it is
declared a “no hit”. The drop probability depends on whether there was a hit or not.
This identifies the number of active flows and finds candidates for misbehaving flows.

SRED keeps the buffer occupancy close to a specific target and away from overflow

or underflow. In SRED the buffer occupancy is independent of the number of

connections while in RED the buffer occupancy increases with the number of

connections. The hit mechanism is used to identify misbehaving flows without

keeping per-flow state. Stabilized RED overcomes the scalability problem but suffers

from low throughput.
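A compact sketch of the zombie-list hit mechanism; the list size, the probability of overwriting a zombie on a miss, and the EWMA weight are illustrative assumptions.

import random

class ZombieList:
    """SRED's zombie list: compare each arriving packet with a randomly chosen
    recent flow identifier; the hit frequency estimates the number of active
    flows without keeping per-flow state."""

    def __init__(self, size=1000, overwrite_prob=0.25, alpha=None):
        self.size = size                       # list capacity (assumed value)
        self.overwrite_prob = overwrite_prob   # chance to replace a zombie on a miss (assumed)
        self.alpha = alpha or 1.0 / size       # EWMA weight for the hit frequency
        self.zombies = []
        self.hit_freq = 0.0

    def on_arrival(self, flow_id):
        """Update the hit-frequency estimate; returns True on a hit."""
        if len(self.zombies) < self.size:
            self.zombies.append(flow_id)       # fill the list before comparing
            return False
        idx = random.randrange(self.size)
        hit = self.zombies[idx] == flow_id
        if not hit and random.random() < self.overwrite_prob:
            self.zombies[idx] = flow_id        # occasionally refresh the list
        self.hit_freq = (1 - self.alpha) * self.hit_freq + self.alpha * (1.0 if hit else 0.0)
        return hit

    def estimated_active_flows(self):
        # 1 / P(hit) approximates the number of active flows
        return float("inf") if self.hit_freq == 0 else 1.0 / self.hit_freq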



b) GREEN: This algorithm uses flow parameters and the knowledge of TCP

end-host behavior to intelligently mark packets to prevent queue build up, and prevent

congestion from occurring. It offers a high utilization and a low packet loss. An

advantage of this algorithm is that there are no parameters that need to be tuned to

achieve optimal performance in a given scenario. In this algorithm, both the number

of flows and the Round Trip Time of each flow are taken into consideration to

calculate the congestion-notification probabilities. The marking probability in

GREEN is generally different for each flow because it depends on characteristics that

are flow specific.

2.4 Tail-drop

The tail-drop algorithm was not designed to be an efficient AQM. It is simply

a queue which, when filled to its maximum, overflows and drops any subsequently

arriving packets. However, we can interpret it as an AQM, which measures the

backlog to determine the congestion level. No congestion is detected by the tail-drop

algorithm until the queue is full. As shown in Fig. 2.11, when the queue is full, the

maximum congestion signal is generated because all of the arriving packets are

dropped. Once sources detect lost packets they slow down and the arrival rate of

packets to the queue will be less than the capacity of the link and the packet backlog

in the queue decreases. Then, when the buffer is not full, no congestion feedback

signal is generated by tail-drop algorithm and the source rates increase until overflow

happens again. We can see that the tail-drop AQM results in a cycle of decrease and

increase of rates around the point where the buffer is nearly full. The actual mean size

of the buffer depends on the load on the link. The tail-drop algorithm is incapable of


generating any feedback signal (price) unless the buffer is full. This is why the current

Internet suffers long queuing delays and long RTTs.

Figure 2.11: Tail-Drop dropping probability vs. queue size (the drop probability stays at 0 until the queue reaches the buffer size, then jumps to 1)

Most work on Active Queue Management uses the tail-drop queue as a lower

bound for performance comparison. However, in some work a basic mechanism is used to
improve the fairness between flows of the tail-drop queue. We will now introduce

AQM proposals that give performance superior to the tail-drop AQM.

2.5 Explicit Congestion Notification

AQM algorithms traditionally notify end systems of incipient congestion by

dropping arriving packets at a router. Transport protocols at the end systems such as

TCP infer the presence of congestion when they detect packet losses and react to these

losses by reducing their sending rates. With reliable data delivery semantics (such as

provided by TCP), the lost data packets have to be retransmitted. This results in

decreased throughput and increased latency for applications. The IETF proposed a

protocol for an explicit signaling mechanism called Explicit Congestion Notification

(ECN) [16] by using bits in the TCP and IP headers [13]. With ECN, routers can mark a

packet by setting a bit in the header (instead of dropping the packet) to deliver the


congestion signal explicitly to the end systems. This approach avoids packet losses

and the potential impact of packet losses on applications.

Senders indicate their ECN capability by setting the ECN-Capable Transport

(ECT) code point in the IP header. When congestion is detected, routers mark packets

that have the ECT code point set to convey an explicit congestion signal to the end

systems. ECN marking is done by setting the Congestion Experienced (CE) code

point in the IP header. When the receiver receives a data packet with the CE code

point set, it sets the ECN-Echo flag in the TCP header of its next ACK packet to

notify the sender of congestion in the network. Upon receiving an ACK packet with

the ECN-Echo flag set, the sender reduces its congestion window as if it had lost a

packet. The sender also sets the CWR flag in the TCP header of its next packet to

confirm the receipt of the receiver’s ECN-Echo flag.

Since an uncooperative or malicious user can set the ECT code point and
ignore the routers’ congestion signal, the standard specification for ECN

recommends that routers only mark packets when their average queue size is low.

When the average queue size exceeds a certain threshold, the standard specification

for ECN recommends routers to drop packets rather than set the CE code point in the

IP header. The ARED algorithm described in section 2.3.1.1 follows this

recommendation and drops all arriving packets when its average queue size grows

larger than maxth.
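The mark-or-drop policy described above can be sketched as follows. The queue check and threshold name are illustrative; the ECT and CE code point semantics follow the ECN specification.

NOT_ECT = 0b00    # sender is not ECN-capable
CE = 0b11         # Congestion Experienced code point

def notify_congestion(pkt_ecn_bits, qavg, maxth):
    """Return ('mark', CE) to set the CE code point, or ('drop', None).
    Packets are marked only when the sender set an ECT code point and the
    average queue is still below the high threshold; otherwise they are dropped."""
    ecn_capable = pkt_ecn_bits != NOT_ECT
    if ecn_capable and qavg < maxth:
        return ("mark", CE)      # explicit signal instead of a packet loss
    return ("drop", None)        # fall back to the implicit signal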

2.6 Evaluation of AQM and ECN

Most studies on AQM and ECN have been based on simulations. While
simulation is a useful tool for research to gain insights into new network protocols and
mechanisms that have not been implemented or deployed yet, simulation results only


approximate realistic results. When simulators are built, numerous
abstractions and simplifications of real implementations have to be made. Further,
many researchers have agreed that the widely used network simulator ns-2 has
numerous implementation bugs and have questioned the validity of simulation
results obtained from ns-2 [22]. Such simulation results may not lead to accurate conclusions.

For the reasons mentioned above, studies with a real implementation of AQM and
ECN in a network and under controlled and realistic conditions are very important.
This is because results from these evaluation studies are more credible than simulation
results. Hence, conclusions drawn from results obtained under realistic conditions
are more convincing than those obtained from simulation studies. Despite their important role,
there have been only a few evaluation studies of AQM and ECN in real networks. In this
section, we will review existing evaluation studies in real networks and discuss their
limitations.

2.7 Queue Law

Figure 2.12 depicts the queue law as explained in [2], assuming a single

congested router with a uniform dropping probability. As the drop rate at the router
increases, the average queue size, and hence the average queuing delay experienced by
the incoming packets, decreases. However, an increase in the drop rate also means

reduced throughput. Thus, the average queue size at the router decides the throughput

and delay treatment given to flows passing through it, and both throughput and delay

can be controlled by changing drop rate. ARED tries to maintain fixed average queue

size, thus providing predictable average queuing delays by adapting drop rate.

However, a fixed average queue size does not suit all kinds of traffic mixes and

hence, the two AQM mechanisms proposed in this thesis, RED-Worcester and RED-

Boston, adjust average queue size based on average requirements of the incoming

traffic in order to provide better overall QoS at the router.

Figure 2.12: Queue Law (average queue size vs. drop rate)

2.8 Summary

AQM has been proposed to replace drop-tail in order to achieve more effective

congestion control. While all AQM algorithms attempt to achieve this common goal,

many AQM algorithms have been invented for slightly different purposes such as

stabilizing router queues, approximating fairness among flows, controlling

unresponsive high-bandwidth flows, and improving performance for short flows. In

this chapter we reviewed the most prominent AQM algorithms in the

aforementioned categories and discussed how these AQM algorithms were evaluated.

We also reviewed the limitations of existing evaluation for AQM algorithms such as

unrealistic simulations, one-way traffic and lack of synthetic general TCP traffic.

These limitations will be addressed in subsequent Chapters of my dissertation.

