11 Chapter Study Material
2.1 Introduction
Active Queue Management (AQM) tries to counter congestion before it becomes severe by keeping the router queue from overfilling. It means that the router tries to reduce the sending rate of the traffic sources by dropping or marking packets. There exist two approaches to indicate congestion: packets can be dropped, or packets can be marked. Both strategies require cooperation of the endpoints; with the latter, endpoints treat marked packets as if they had been dropped and decrease their throughput, achieving the same effect without the overhead cost of retransmitting dropped packets. Additionally, some AQM mechanisms aim to reduce the bandwidth of greedy flows by dropping their packets at higher rates. In this chapter we give a survey of Active Queue Management algorithms that are suitable for Peer-to-Peer networks.
An Active Queue Management (AQM) mechanism is a set of algorithms running on a router that detect and notify traffic sources of imminent network congestion, to prevent outbound buffer overflow and to control queuing delay. When notified of network congestion, cooperative traffic sources like TCP reduce their transmission
Dept of Computer Science & Technology, S.K. University, Anantapur.
Active Queue Management Techniques
rates to participate in the congestion control. In case network congestion cannot be managed voluntarily by the traffic sources, AQMs may use buffer management techniques to suppress traffic to the targeted traffic level and achieve the QoS goal. In this section, we first propose an AQM taxonomy that provides a systematic way to classify and compare AQM mechanisms.
In general, the tasks of an AQM can be divided into those of a Congestion Monitor, which detects and estimates congestion; a Bandwidth Controller, which manages use of the output bandwidth; a Congestion Controller, which computes and applies the congestion notification; and a Queue Controller, which manages buffer usage and packet scheduling. We have developed an AQM taxonomy around these four tasks. The first task of an AQM is to monitor, detect and estimate congestion; this estimate drives the decisions of the Bandwidth Controller and the Congestion Controller.
Congestion Monitors can be classified by the monitoring policy, which uses either queue length or traffic load as a measure of congestion; either the instant or average measure of the quantity can be used. Traditionally, AQM mechanisms used queue statistics such as the instant or average queue length compared against thresholds. Load-based monitors instead estimate the
incoming traffic load and declare congestion when the measured load is greater than the target load. The target traffic load is typically set to near 1, which defines congestion as a state in which the estimated incoming traffic rate (the offered load) is greater than the service rate or link capacity. As in the case of the queue-based policies, either an instant or an average load can be used in the load-based policies. An instant load refers to the load measured in the last measurement interval, whereas an average load can be defined as an exponentially weighted moving average of the instant loads.
The traffic load based congestion estimation policies can be further classified by the monitoring method used to estimate the traffic load (traffic rate or queue length). It is more intuitive to measure the traffic load in terms of the incoming traffic rate over the service rate. Yet, the traffic load can also be estimated in terms of queue length changes. Rate-based methods have a little more overhead than queue-based methods, since rate-based methods need to account for the size of every incoming packet while queue-based methods can simply sample the queue size once every measurement interval. However, an important advantage is that rate-based congestion estimation methods can reduce estimation noise and detect impending congestion before the queue starts to grow, allowing the Bandwidth Controller and the Congestion Controller to act on imminent congestion.
Another design choice is the measurement interval, which may affect the stability of the feedback control system, the link utilization, the queuing delay and buffer overflows. For example, choosing an excessively small interval can lead to the Congestion Controller making an over-reactive decision on network congestion, while choosing an exceedingly large interval can make the controller sluggish. Similarly, when an average measure is used for congestion estimation, the averaging factor will affect the responsiveness and stability of the estimate.
The second task of an AQM belongs to the Bandwidth Controller, which manages the use of the outbound bandwidth. Bandwidth Controllers can be categorized based on the nature of the services they provide and their QoS goals. A priority-based controller differentiates traffic classes: during congestion, packets from a lower priority class are dropped before packets of a higher priority class.
The most common type of Bandwidth Controller is one that provides fairness protection, in which individual flows or groups of flows are protected from one another. Fairness protection can be class-based, pseudo per-flow or per-flow. Class-based mechanisms usually have the least overhead among the three, whereas per-flow fairness protection mechanisms that maintain per-flow state have the most overhead. Pseudo per-flow mechanisms keep only partial flow information and rate-limit the flows, protecting responsive flows without the cost of full per-flow state.
Even within the same subcategory of Bandwidth Guardians, the accuracy and performance of the mechanisms can differ significantly in the complexity and traffic information used for bandwidth management. For example, a simple class-based Bandwidth Guardian can pre-assign a fixed congestion bandwidth to each class, while a more elaborate one can dynamically assign bandwidth to each class at the price of estimating the number of active flows in each class.
The third task belongs to the Congestion Controller, which notifies imminent congestion so that congestion responsive traffic sources such as TCP can reduce their transmission rates. Historically, dropping packets has been used for congestion notification. For this reason, it is sometimes hard to distinguish the two controllers: Bandwidth Controllers suppress traffic under congestion, while Congestion Controllers attempt to prevent congestion with the help of the traffic sources. A rule of thumb for telling apart a Bandwidth Controller and a Congestion Controller is: if it does not make sense for a mechanism to mark packets instead of dropping them, it acts as a Bandwidth Controller rather than a
Congestion Controller. Note that an AQM may have either a Bandwidth Controller or a Congestion Controller, or both. The Congestion Controller computes a congestion notification probability (CNP) based on the estimated congestion level, its control history and possibly other traffic information, and notifies traffic sources by randomly marking (or dropping) incoming packets with the estimated CNP. Every Congestion Controller has its own QoS goal and thus has a specific CNP computation policy determining which flows should reduce or increase their transmission rate and by what amount.
The QoS goals can be simply to prevent congestion with minimized queuing delay, or additionally to provide fair bandwidth allocations or per-flow QoS. To achieve these QoS and performance goals, the AQM may perform a uniform, class-based, per-flow or per-packet CNP computation. For example, to achieve only the basic goal of congestion prevention, an AQM may apply a uniform CNP to all incoming traffic. In order to additionally yield fair bandwidth allocations, it may compute class-based or per-flow CNPs.
The CNP computation methods can be further classified into two categories
based on how the CNP is computed. The first category is a Proportional (memoryless) controller that does not consult the recent control history but computes the CNP based only on the current estimated congestion level and traffic information.
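The distinction between the two categories can be sketched in a few lines. This is an illustrative sketch only, not the rule of any particular AQM; the gain constants k and gamma are hypothetical.

```python
def proportional_cnp(congestion_level, k=0.5):
    """Memoryless (proportional) controller: the CNP is a direct function
    of the current congestion estimate only, with no control history."""
    return min(max(k * congestion_level, 0.0), 1.0)


def integral_cnp(prev_cnp, q, q_ref, gamma=0.001):
    """Integral controller: nudge the previous interval's CNP up or down
    according to the queue mismatch, searching for a stable-state value."""
    return min(max(prev_cnp + gamma * (q - q_ref), 0.0), 1.0)
```

A proportional controller forgets everything between intervals, while the integral controller accumulates the mismatch over time, which is what lets it converge on a stable-state CNP.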
A proportional controller may nevertheless need detailed traffic information to successfully control congestion. The fact that the stable-state average transmission
rate of window-based traffic sources (like TCP) given a CNP can differ based on the
average RTTs they experience implies that a router should have some knowledge of
the average RTT and the number of flows (N) to control aggregated average incoming
traffic rate and to control per-flow average throughput. For example, an AQM that has
knowledge of the average RTT of the incoming flow aggregate and N can compute a
proper CNP for the flow aggregate using the queue law assuming ideal TCP traffic
sources. Or, knowing the fair bandwidth share (link capacity divided by N) and RTTs
of individual flows, a router can compute a proper CNP for each flow that will bring
each average flow rate to the fair share. To be practical, though, since not all TCP traffic behaves ideally, such computations can only approximate the proper CNP. Moreover, the router should know exactly how much bandwidth to take away from each flow to compensate for the overload of the service bandwidth if it is to increase congestion control precision. Without knowing the transmission rate of each
flow in the first place, it is not possible to determine how much bandwidth will be
reduced per notification. However, knowing the per-flow transmission rate (or just cwnd, since RTT is already known) helps little under the current congestion notification structure of the Internet due to the inefficient and inaccurate router-to-host binary congestion control communication method of marking with the CNP. In this structure, a router cannot instruct a source to transmit at a specific rate. Thus, precision congestion control using the CNP alone cannot readily be done. This is the basic argument behind the design of XCP, one of the most recent mechanisms, which proposes to use window-based traffic sources that inform routers of their RTT and cwnd and transmit at the rate that the most congested router
explicitly specifies in terms of allowable cwnd. One critical problem facing AQMs in
this category is that per-flow traffic information such as RTT and cwnd may not be
practically and securely obtainable under the current Internet structure. On the other
hand, AQMs that compute CNP based only on the congestion estimate or use
incomplete traffic information may not be able to support a wide range of traffic
performance goals for some traffic mixes. The second class of CNP computation
methods is Integral control, which heuristically searches for a stable-state CNP that will bring the aggregated traffic to a desired level based on the recent control history and the current congestion estimate. Integral congestion controllers continuously update the CNP of the previous interval based on the measured congestion and QoS goals such as bounded queuing delay. While the integral CNP computation techniques are usually used for incoming traffic aggregates, they can be applied to groups of flows or even possibly to individual flows, assuming per-flow throughput and the fair bandwidth share (or N) can be measured.
So far, two types of CNP determination methods have been discussed in the context of aggregate-based congestion notification services. On the other hand, the proportional CNP computation is also a CNP computation method that can be used to implement per-flow fair congestion notification. Alternatively, a per-flow QoS service may be implemented using the integral method with per-flow QoS requirement information from the traffic sources. That is, a router may heuristically adjust the updated CNP of the traffic aggregate for each flow considering the flow's QoS requirements. For the notification itself, a Congestion Controller may use either the implicit congestion notification method of packet dropping or the explicit method of marking, where the current Internet supports binary ECN bit marking. ECN marking can offer a significant performance gain in terms of packet loss rate compared to the implicit packet-drop congestion notification.
The fourth task belongs to the Queue Controller, which manages the packet queues. Typically, AQM mechanisms keep only a single packet queue. However, a mechanism may assign a packet queue to each incoming flow and perform link scheduling (although it is arguable whether such a mechanism is still an AQM). Alternatively, an AQM may assign a packet queue to each class of traffic. To encompass these possibilities, the AQM taxonomy includes the number of packet queues as a classification dimension. Every packet queue has a management discipline, the simplest being FIFO queue management. Other management disciplines include support for a uniform QoS goal
such as bounded average queuing delay or per-flow or per-class delay. The Queue Controller can support diverse per-flow QoS requirements by using a packet scheduling discipline other than FIFO in cooperation with the Congestion Controller, although little work has been done in this direction. The per-flow QoS parameters that an Internet router can support are the CNP and the queuing delay. As mentioned earlier, a Congestion Controller may consider the QoS requirements in determining the CNP for a flow, given QoS information from traffic sources. Similarly, the Queue Controller can consider the delay requirement of each flow using QoS packet scheduling. As long as a flow uses bandwidth less than or equal to its fair share, trying to meet its QoS requirements is feasible, although the scheduler may need to address issues such as starvation that can affect the throughput of other flows.
The first category of AQMs detects congestion using only a congestion metric and not the flow information. Based on the congestion metric used, these AQMs can be classified further. AQMs use a variety of congestion metrics, such as queue length, load factor, packet loss and link utilization.
a) RED: The first well-known AQM scheme proposed is RED [1]. It is one of the most popular algorithms. It tries to avoid problems like global synchronization, lock-out, bursty drops and high queuing delay that exist in traditional passive queue management. Its variables and parameters are listed below.
Variables:
Qave : average queue size
Pa : current packet-marking probability
q : current queue size
Pb : temporary marking or dropping probability
Fixed parameters:
wq : queue weight, 0.1 ~ 0.0001
maxth : maximum threshold for queue
minth : minimum threshold for queue
maxp : maximum dropping probability
The algorithm detects congestion by computing the average queue size Qave. To calculate the average queue size, a low-pass filter is used, namely an exponentially weighted moving average (EWMA). The average queue size is then compared with two thresholds. If it is below the minimum threshold, the packet is enqueued; if the average queue size is between the minimum and maximum thresholds, the packet is dropped or marked with a probability that is a linear function of the average queue size; if it exceeds the maximum threshold, all arriving packets are dropped. The dropping probability thus depends on various parameters like minth, maxth, Qave and wq. These parameters must be tuned well for RED to perform well, and this sensitivity is a major disadvantage of the RED algorithm. Though RED avoids global synchronization, it fails when the load changes dramatically. Queue length gives only minimal information regarding the severity of congestion: only when packet inter-arrivals have a Poisson distribution does the queue length directly relate to the number of active sources and thus indicate the true level of congestion. Real packet arrivals are generally bursty, so the queue length does not clearly indicate the severity of congestion. Packet loss and link utilization vary with the network load because RED is sensitive to its parameter settings.
With well-tuned parameters, high link utilization and a low packet drop rate can be achieved; a poorly chosen minth leads to poor link utilization, and a poor maxth value results in large packet drops.
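As an illustration, RED's EWMA filter and linear drop function can be sketched in a few lines of Python. This is an illustrative sketch: the parameter values are arbitrary, and the count mechanism that spaces out drops in the original RED is omitted.

```python
import random

class RED:
    """Minimal sketch of RED's drop decision (illustrative parameters)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.wq = max_p, wq
        self.q_ave = 0.0          # EWMA of the queue length

    def update_average(self, q):
        # Low-pass filter: Qave <- (1 - wq) * Qave + wq * q
        self.q_ave = (1 - self.wq) * self.q_ave + self.wq * q
        return self.q_ave

    def drop_probability(self):
        # No drops below min_th, forced drops above max_th,
        # and a linear ramp up to max_p in between.
        if self.q_ave < self.min_th:
            return 0.0
        if self.q_ave >= self.max_th:
            return 1.0
        return self.max_p * (self.q_ave - self.min_th) / (self.max_th - self.min_th)

    def on_arrival(self, q):
        """Return True if the arriving packet should be dropped or marked."""
        self.update_average(q)
        return random.random() < self.drop_probability()
```

Poorly chosen minth, maxth or wq shift the operating point of this linear ramp, which is exactly the parameter sensitivity criticised above.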
b) DSRED [26]: RED uses a single linear drop function to calculate the drop probability of a packet and uses four parameters and the average queue length to regulate its performance, and it suffers from unfairness and low throughput. DSRED instead uses a two-segment drop function, which provides a much more flexible drop operation than RED. However, DSRED is similar to RED in some aspects. Both of them use linear drop functions to give a smoothly increasing drop action based on the average queue length, and they calculate the average queue length using the same definition. The two-segment drop function of DSRED uses the average queue length, which is related to the long-term congestion level. As congestion increases, the drop probability increases at a higher rate instead of a constant rate; as a result, congestion is relieved and throughput increases. This results in a low packet drop probability at a low congestion level and gives an early warning of long-term congestion. DSRED showed better packet drop performance, resulting in higher normalized throughput than RED under both heavy and low load, and it yields a lower average queuing delay and queue size than RED.
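The two-segment idea can be sketched as follows. This is an illustrative sketch: the thresholds and probabilities are arbitrary example values, and DSRED's actual segment slopes are set by its own parameters.

```python
def dsred_drop_probability(q_ave, min_th, max_th, p_mid, p_max=1.0):
    """Two-segment drop function: a gentle slope up to the midpoint of
    [min_th, max_th], then a steeper slope toward p_max beyond it."""
    mid = (min_th + max_th) / 2.0
    if q_ave < min_th:
        return 0.0
    if q_ave >= max_th:
        return 1.0
    if q_ave < mid:
        # Light congestion: low drop probability, early warning only.
        return p_mid * (q_ave - min_th) / (mid - min_th)
    # Heavier congestion: drop probability rises at a higher rate.
    return p_mid + (p_max - p_mid) * (q_ave - mid) / (max_th - mid)
```

The first segment keeps losses low while congestion is mild; the second segment's steeper slope is what relieves long-term congestion more quickly than RED's single ramp.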
c) MRED: This scheme calculates the packet drop probability based on a heuristic method rather than the simple method used in RED. The average queue size is estimated using a simple EWMA in the forward or backward path. The packet drop probability is calculated to determine how frequently the router drops packets at the current level of congestion. In MRED the packet drop probability is computed in step form by using the packet loss and link utilization history. MRED is able to improve fairness, throughput and delay compared to RED.
d) Adaptive RED: Floyd [5] argued that a weakness of RED is that it does not take into account the number of flows sharing a bottleneck link. A TCP flow in congestion avoidance reduces its transmission rate by half when it detects a packet loss. Given a link shared equally by n flows (each flow receives 1/n of the link bandwidth), a single packet mark or drop causes one flow to reduce its transmission rate to 0.5/n, so the effect of a packet mark or drop on reducing the aggregate transmission rate of n TCP flows decreases as n grows. Thus, when n is large, RED either has to incur a high packet loss rate or is not effective in reducing the load on a congested link and in controlling the queue length. On the other hand, when n is small, RED can be too aggressive, i.e., it drops too many packets. A fixed target queue length is therefore hard to maintain with fixed RED parameters when the traffic on a link changes, i.e., the number of flows on an Internet link is not known a priori. In the Adaptive RED algorithm, maxp is adjusted every time the average queue length falls out of the
target range between minth and maxth. When the average queue length is smaller than minth, maxp is decreased so that packets are marked or dropped less aggressively; when the queue length is larger than maxth, maxp is increased. The thresholds and step sizes in the pseudo code are constants that have to be chosen by network operators. Floyd et al. refined this idea in 2001, proposing to adapt maxp slowly. The pseudo code for updating maxp proposed by Floyd et al. is shown in figure 2.5. They also provided guidelines for choosing minth, maxth, and the coefficient of the low-pass filter for computing the weighted average queue size. The Adaptive RED version proposed by Floyd et al. (referred to herein as "ARED") also includes the "gentle mode" that was discussed in 2.3.1.1. The maxp update rule is:
if Qave > minth + 0.6(maxth - minth) && maxp <= 0.5 then
    maxp <- maxp + alpha
else if Qave < minth + 0.4(maxth - minth) && maxp > 0.01 then
    maxp <- maxp * beta
end if
Another of RED's weaknesses is that it cannot control the router's average queue size effectively and predictably. When maxp is high or congestion on the link is light, RED keeps the average queue size near minth. On the other hand, when maxp is low or the link is heavily congested, RED's average queue size grows toward maxth. Floyd et al. claimed that ARED does not have this problem since it dynamically adjusts maxp, and they demonstrated via simulations that ARED can achieve a good and predictable average queue size.
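One adaptation step of the rule above can be sketched as follows. This is an illustrative sketch: alpha and beta are treated as tunable constants in the spirit of the values suggested by Floyd et al., not a complete ARED implementation.

```python
def ared_adapt_maxp(max_p, q_ave, min_th, max_th, alpha=0.01, beta=0.9):
    """One ARED adaptation step: keep the average queue inside the target
    band [min_th + 0.4*d, min_th + 0.6*d], where d = max_th - min_th."""
    low = min_th + 0.4 * (max_th - min_th)
    high = min_th + 0.6 * (max_th - min_th)
    if q_ave > high and max_p <= 0.5:
        max_p += alpha          # queue above target band: drop more aggressively
    elif q_ave < low and max_p >= 0.01:
        max_p *= beta           # queue below target band: back off multiplicatively
    return max_p
```

The additive increase and multiplicative decrease make maxp drift slowly, which is what lets ARED track load changes without oscillating.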
e) PD-RED: This scheme improves on the Adaptive RED scheme. It is based on the proportional-derivative (PD) control principle: it applies control theory to adapt RED's maximum drop probability parameter maxp in order to stabilise the queue length. Simulations compare the new PD controller with the original RED AQM; the variation of the queue length shows that the PD controller achieves better performance in terms of mean queue length and standard deviation of the queue length.
f) LRED: The Loss Ratio based RED scheme measures the latest packet loss ratio and uses it as a complement to the queue length in order to dynamically adjust the packet drop probability. In this scheme the packet loss ratio gives a clear indication of the severity of congestion and makes the scheme more responsive in regulating the queue length to an expected value. LRED tries to decouple the response time and the packet drop probability, thereby making its regulation more robust.
g) HRED: In RED, the drop probability is a linear function of the average queue size; in HRED, the drop probability curve is a hyperbola. As a result, the algorithm regulates the queue size close to a reference queue value and is no longer sensitive to the level of network load. Since HRED is insensitive to the network load and the queue size does not vary much with the level of congestion, the queuing delay becomes more predictable: the queue rapidly reaches and stays around its reference length, irrespective of increases or decreases in the offered load, and Hyperbola RED thereby tries to provide high network utilization.

h) AutoRED: This scheme adapts the queue weight to the congestion characteristics and the buffer size. In AutoRED, the calculation of the average queue size using the EWMA model is modified and redefined: the weight wq,t is a function of the congestion characteristics and the queue normalization, written as a product of three network characteristics. AutoRED combined with RED performs better than the plain RED scheme, reducing the queue oscillations that appear in RED-based algorithms.
a) AVQ: In the Adaptive Virtual Queue scheme [15], a virtual queue is updated whenever a packet arrives at the real queue, to reflect the new arrival. As in Fig 2.7, when the virtual queue or buffer overflows, packets are marked or dropped. The virtual capacity of the link is modified such that the total flow entering each link achieves a desired utilization of the link.
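The virtual-queue bookkeeping can be sketched as follows. This is a much-simplified illustrative sketch: the real AVQ evolves the virtual capacity with a differential equation, and the gain alpha, the utilization target gamma and the buffer size here are arbitrary.

```python
class AVQ:
    """Sketch of AVQ's virtual-queue update on each packet arrival."""

    def __init__(self, capacity, gamma=0.98, alpha=0.15, buffer_size=100.0):
        self.c = capacity                  # real link capacity (pkts/s)
        self.c_tilde = gamma * capacity    # virtual capacity
        self.gamma, self.alpha = gamma, alpha
        self.b = buffer_size
        self.vq = 0.0                      # virtual queue length
        self.last_t = 0.0

    def on_arrival(self, t, pkt_size=1.0):
        """Return True if the arriving packet should be marked/dropped."""
        # Drain the virtual queue at the virtual capacity since last arrival.
        self.vq = max(0.0, self.vq - self.c_tilde * (t - self.last_t))
        # Adapt the virtual capacity toward the desired utilization gamma*c:
        # it grows while arrivals are sparse and shrinks with each arrival.
        self.c_tilde += self.alpha * (self.gamma * self.c * (t - self.last_t) - pkt_size)
        self.c_tilde = min(self.c_tilde, self.c)
        self.last_t = t
        if self.vq + pkt_size > self.b:
            return True                    # virtual queue overflow
        self.vq += pkt_size
        return False
```

Because the virtual buffer is smaller than the real one, the virtual queue overflows first, producing marks before the real queue builds up.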
AVQ achieves this by marking aggressively when the link utilization exceeds the desired utilization and less aggressively when it is below the desired utilization. The Yellow scheme, in turn, measures the traffic at each link and determines a load factor from the available capacity and the queue length.
This helps in identifying incipient congestion in advance and is used to calculate the packet marking probability. Yellow improves robustness with respect to varying traffic load, using the load factor (link utilization) as its main metric to manage congestion.

The choice of the desired utilization in the AVQ algorithm influences the dynamics of the queue and the link utilization, and it is difficult to achieve a fast system response and high link utilization simultaneously with a fixed desired utilization. The stabilized AVQ (SAVQ) [18] algorithm additionally considers the instantaneous queue size and a given reference queue value, stabilizing the dynamics of the queue while maintaining high link utilization.

The enhanced AVQ (EAVQ) scheme uses a subordinate measure for the desired link utilization so that link capacity is not left unused. EAVQ improves the transient performance of the system and assures full utilization of the link capacity. Based on linearization, the local stability conditions of the TCP/EAVQ system were presented. The simulation results show the excellent performance of EAVQ, with higher utilization, a lower link loss rate, a more stable queue length and a faster system dynamic response than AVQ.
REM (Random Exponential Marking) aims at high utilization with negligible loss or queuing delay even as the load increases. This scheme stabilizes both the input rate around the link capacity and the queue around a small target, independent of the number of users sharing the link. It uses a congestion
measure called price to determine the marking probability. The congestion measure price reflects the mismatch between the aggregate input rate and the link capacity as well as between the queue length and its target.
When the number of users in the network increases, the queue mismatch and rate mismatch increase, increasing the price value. An increase in the price value results in an increased marking probability, which in turn reduces the source rates of the users. When the source rates are too small, the mismatch is negative, decreasing the price and the marking probability, which increases the source rates. The price adjustment rule thus regulates the user rates to match the network capacity and controls the queue length around a target value. RED couples the congestion measure and the performance measure, whereas REM decouples the congestion measure from the performance measure.
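The price-based control loop described above can be sketched as follows. This is an illustrative sketch: the update rule and exponential marking follow REM's published design, but the gains gamma and alpha and the base phi here are arbitrary example values.

```python
def rem_update_price(price, q, q_ref, x, c, gamma=0.001, alpha=0.1):
    """REM price update: the price grows with the queue mismatch (q - q_ref)
    and the rate mismatch (x - c), and never goes negative."""
    return max(0.0, price + gamma * (alpha * (q - q_ref) + (x - c)))


def rem_mark_probability(price, phi=1.001):
    """Marking probability is exponential in the price: 1 - phi**(-price)."""
    return 1.0 - phi ** (-price)
```

Because the marking probability is exponential in the price, the end-to-end marking probability along a path reflects the sum of the link prices, which is what allows sources to react to the total congestion on their path.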
b) SVB: The SVB [31] scheme uses the packet arrival rate and the queue length to maintain a virtual buffer and responds to the traffic dynamically. A new packet arrival is reflected in the virtual queue considering both the queue length and the arrival rate. The most striking feature of SVB is that it maintains a stable queue for different workload mixes (short and long flows) and
parameter settings. The service rate of the virtual queue is fixed to the link capacity of the real queue, while the limit of the virtual buffer adapts to the packet arrival rate. Incoming packets are marked with a probability calculated from both the current virtual buffer limit and the queue occupancy. Simulation results have shown that SVB provides a lower loss rate, better stability and better throughput under dynamic workloads.
2.3.1.3 Packet loss and link utilization based
a) BLUE: The BLUE [9] algorithm resolves some of the problems of RED by employing two factors: packet loss from queue congestion and link utilization. BLUE performs queue management based on packet loss and link utilization, as shown in Fig. 2.9. It maintains a single probability pm to mark or drop packets. If packets are being lost due to queue overflow, pm is increased; if the link goes idle, pm is decreased. This scheme thus uses the link history to control congestion. The parameters of BLUE are δ1, δ2 and freeze_time. The freeze_time determines the minimum time period between two consecutive updates of pm.
BLUE maintains lower packet loss rates and a more stable marking probability over varying queue sizes and numbers of connections than RED. With a large queue, RED exhibits periods of continuous packet loss followed by lower load, which leads to reduced link utilization.
Upon packet loss event:
    if ((now - last_update) > freeze_time)
        pm := pm + δ1
        last_update := now

Upon link idle event:
    if ((now - last_update) > freeze_time)
        pm := pm - δ2
        last_update := now

Constants:
    δ1, δ2 : step sizes for increasing and decreasing pm
    freeze_time : minimum time period between two consecutive updates of pm
In BLUE [9], the queue length is stable compared to RED, which exhibits a widely varying queue length. This ensures that the marking probability of BLUE converges to a value that results in reduced packet loss and high link utilization.
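The event-driven update of pm can be sketched as follows. This is an illustrative sketch; the delta and freeze_time values shown are arbitrary examples, not BLUE's recommended settings.

```python
class Blue:
    """Sketch of BLUE's marking-probability update on loss/idle events."""

    def __init__(self, delta1=0.0025, delta2=0.00025, freeze_time=0.1):
        self.pm = 0.0
        self.d1, self.d2 = delta1, delta2
        self.freeze = freeze_time
        self.last_update = -float("inf")

    def on_packet_loss(self, now):
        # Queue overflow: mark more aggressively (at most once per freeze_time).
        if now - self.last_update > self.freeze:
            self.pm = min(1.0, self.pm + self.d1)
            self.last_update = now

    def on_link_idle(self, now):
        # Link underutilised: back off the marking probability.
        if now - self.last_update > self.freeze:
            self.pm = max(0.0, self.pm - self.d2)
            self.last_update = now
```

The freeze_time guard is what keeps pm from reacting to every single event, so it converges instead of oscillating with bursts.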
The second category of AQMs uses both a congestion metric and flow information to detect congestion in routers. AQMs that use only a congestion metric and no flow information face the problem of unfairness in handling different types of traffic. Considering the congestion metric used, these AQMs can be further classified.
2.3.2.1 Queue-based
a) FRED: Flow Random Early Drop removes the unfairness effects found in RED. FRED generates selective feedback to a filtered set of connections that have a large number of packets queued, rather than choosing packets from all connections at random. It achieves better fairness than RED for adaptive flows while isolating non-adaptive greedy flows.
b) CHOKe: (CHOose and Keep for responsive flows, CHOose and Kill for unresponsive flows.) Unresponsive flows do not reduce their rates when the router drops their packets, so CHOKe tries to bring fairness to the flows that pass through a congested router. As shown in Fig. 2.10, CHOKe calculates the average occupancy of the buffer as in RED, using an EWMA. If the average queue size is greater than minth, the flow id of each arriving packet is compared with that of a randomly selected packet from the queue, called the drop candidate packet. If the packets belong to the same flow, both packets are dropped. Otherwise, if the average queue size is greater than maxth, the new packet is dropped; else the new packet is admitted to the buffer with a probability p computed as in RED.
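The arrival-time decision can be sketched as follows. This is an illustrative sketch operating on a plain list of flow ids; the RED-style probability p is taken as an input rather than computed.

```python
import random

def choke_on_arrival(queue, pkt_flow, q_ave, min_th, max_th, p):
    """One CHOKe decision for an arriving packet of flow `pkt_flow` on a
    FIFO `queue` represented as a list of flow ids."""
    if q_ave <= min_th:
        queue.append(pkt_flow)           # no congestion: always admit
        return "enqueue"
    # Compare against a randomly drawn packet already in the queue.
    if queue:
        victim = random.randrange(len(queue))
        if queue[victim] == pkt_flow:
            del queue[victim]            # same flow: drop both packets
            return "drop_both"
    if q_ave > max_th:
        return "drop"                    # severe congestion: drop the arrival
    if random.random() < p:
        return "drop"                    # RED-style probabilistic drop
    queue.append(pkt_flow)
    return "enqueue"
```

A heavy flow occupies many buffer slots, so its arrivals are far more likely to match the random drop candidate, which is how CHOKe penalises unresponsive flows without keeping per-flow state.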
c) SHRED: This mechanism improves response times for short-lived Web traffic. It uses a cwnd hint from the TCP source to compute the ratio of an arriving packet's cwnd to the average cwnd, and reduces the probability of dropping packets during the sensitive period when a flow's cwnd is small. Sources mark each packet with their current window size, allowing SHRED to drop packets from flows with small TCP windows with a lower probability. Small TCP window sizes significantly affect short-lived flows: a small TCP window results in a lower transmission rate, and short-lived flows are more sensitive to packet drops. SHRED provides an improvement in web response time.
d) Stochastic RED: To handle unfairness among flows in the Internet, Stochastic RED was introduced. Basically, Stochastic RED tunes RED's packet drop probability for each flow by taking into consideration the bandwidth share obtained by the flow. The dropping probability is adjusted such that packets of flows with high transmission rates are more likely to be dropped than those of flows with lower rates. The algorithm distinguishes individual flows without requiring per-flow state information at the routers. It is called stochastic because it does not distinguish the flows exactly: the arriving traffic is divided by the router into a limited number of counting bins using a hashing algorithm. On the arrival of each packet at the queue, a hash function is used to assign the packet to one of the bins based on the flow information, dispatching the packets of the different flows over the set of bins. With a given hash function, packets of the same flow are mapped to the same bin. Therefore, when a flow is unresponsive, the load of its bin increases dramatically.
Stochastic RED estimates the bin loads and uses them to penalize the flows that map to each bin according to the load of the associated bin. Thus unresponsive flows experience a larger packet drop probability. Stochastic RED is effective in improving the response time of Web transfers without degrading the link utilization.
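The bin mechanism can be sketched as follows. This is an illustrative sketch: the hash function (CRC32), the bin count and the load-scaling rule are example choices, not Stochastic RED's exact formulas.

```python
import zlib

def flow_bin(flow_id, num_bins=64):
    """Hash a flow identifier into one of num_bins counting bins;
    packets of the same flow always land in the same bin."""
    return zlib.crc32(flow_id.encode()) % num_bins


def stochastic_red_drop_probability(base_p, bin_load, fair_load):
    """Scale RED's base drop probability by the bin's share of the load,
    so flows hashed into overloaded bins are penalised more."""
    if fair_load <= 0:
        return base_p
    return min(1.0, base_p * bin_load / fair_load)
```

Because an unresponsive flow inflates the load of exactly one bin, every packet it sends sees a proportionally higher drop probability, while flows in lightly loaded bins are largely unaffected.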
a) SFED: SFED is a rate control based AQM discipline which can be coupled with any scheduling discipline. It maintains a token bucket for every flow or aggregate of flows, with token filling rates in proportion to the permitted bandwidths. When a packet is enqueued, tokens are removed from the corresponding bucket. The decision to enqueue or drop a packet of any flow depends on the occupancy of its bucket at that time. A token bucket thus serves as a control on the bandwidth consumed by a flow; SFED ensures early detection and congestion notification to the adaptive sources. The token bucket also keeps a record of the bandwidth used by its corresponding flow in the recent past.
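The per-flow bucket can be sketched as follows. This is an illustrative sketch of the token-bucket bookkeeping only; SFED's actual mapping from bucket occupancy to drop probability is omitted, and the rate and depth values are arbitrary.

```python
class TokenBucket:
    """Per-flow token bucket as used conceptually by SFED."""

    def __init__(self, rate, depth):
        self.rate, self.depth = rate, depth    # fill rate ~ permitted bandwidth
        self.tokens = depth
        self.last_t = 0.0

    def occupancy(self, now):
        """Refill for elapsed time and return the fractional occupancy.
        A persistently low occupancy marks an aggressive flow."""
        self.tokens = min(self.depth, self.tokens + self.rate * (now - self.last_t))
        self.last_t = now
        return self.tokens / self.depth

    def on_enqueue(self, now, pkt_size=1.0):
        """Charge one packet to the bucket; return the occupancy seen,
        on which the enqueue/drop decision would be based."""
        occ = self.occupancy(now)
        self.tokens = max(0.0, self.tokens - pkt_size)
        return occ
```

A flow sending faster than its fill rate steadily drains its bucket, so its occupancy (and hence its chance of being enqueued) falls, while conforming flows keep their buckets near full.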
SFED provides fair bandwidth allocation amongst competing flows even in the presence of non-adaptive flows. It is a rate control based AQM algorithm that offers congestion avoidance by early detection and has a scalable implementation. It performs better than RED and CHOKe, and with constrained buffer sizes it performs significantly better than FRED. It gives high fairness values for diverse applications such as FTP, Telnet and HTTP, and its performance remains superior even for a large number of connections passing through the router.
b) LUBA: In this scheme, the malicious flows that cause congestion at the router are identified and assigned drop rates in proportion to their abuse of the network. A malicious flow continuously hogs more than its fair share of the link bandwidth, so LUBA assigns a drop probability to a malicious flow such that it does not get more than its fair share of the network. The LUBA Interval, B, is the byte count of the total packets received by the congested router during an interval, used to measure whether a flow is hogging more than its fair share. When the load is below the target link utilization, the router is non-congested and packets are not marked or dropped; otherwise, all arriving packets are monitored and a flow id is assigned to each ingress flow at the router. A history table is maintained to monitor flows which take more than their fair share of bandwidth in a LUBA Interval. LUBA disciplines malicious flows in proportion to their excess inflow. It offers high throughput under varying network conditions, and the complexity of the algorithm does not increase even when the number of flows grows.
2.3.2.3 Others

a) SFB: Stochastic Fair Blue detects and rate-limits non-responsive flows based on accounting mechanisms. The accounting bins are used to keep track of queue occupancy statistics of the packets belonging to each bin. When a packet arrives at the queue, it is hashed into one of the N bins in each of the L levels. If the number of packets mapped to a bin goes above a certain threshold, pm for the bin is increased. SFB is scalable and enforces fairness using an extremely small amount of state and a small amount of buffer space.
The third category of AQMs uses only the flow information and does not
occupancy at a level independent of the number of the active connections. SRED does
this by estimating the number of active connections. It obtains the estimate without
collecting or analyzing per-flow state information. Whenever a packet arrives at the
buffer, the arriving packet is compared with a randomly chosen packet that recently
preceded it into the buffer. For this, information about arriving packets is kept in a
“Zombie list”. As packets arrive, as long as the list is not full, the flow identifier of
each packet is added to the list. Once the zombie list is full, whenever a packet
arrives it is compared with a randomly chosen zombie in the zombie list. If the
arriving packet’s flow matches the zombie, it is declared a “hit”; if the two are not of
the same flow, it is declared a “no hit”. The drop probability depends on whether
there was a hit or not. This estimates the number of active flows and finds candidates
for misbehaving flows.
SRED keeps the buffer occupancy close to a specified target and away from overflow,
independent of the number of connections, while in RED the buffer occupancy
increases with the number of connections. SRED achieves this without keeping
per-flow state; Stabilized RED thus overcomes the scalability problem, though it
suffers from limitations of its own.
b) GREEN: This algorithm uses flow parameters and the knowledge of TCP
end-host behavior to intelligently mark packets to prevent queue build up, and prevent
congestion from occurring. It offers high utilization and low packet loss. An
advantage of this algorithm is that no parameters need to be tuned to achieve
optimal performance in a given scenario. In this algorithm, both the number of flows
and the Round Trip Time of each flow are taken into consideration to compute the
marking probability. The marking probability used by GREEN is generally different
for each flow because it depends on characteristics that differ from flow to flow.
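A GREEN-style marking probability can be sketched from the steady-state TCP throughput model x = c·MSS/(RTT·√p): choosing p so that the model throughput equals each flow's fair share yields a per-flow probability that depends on the flow count and the RTT. The constant c, the default MSS, and the function name are assumptions for illustration.

```python
import math

def green_mark_prob(n_flows, rtt, link_capacity, mss=1500.0, c=math.sqrt(1.5)):
    """Marking probability at which the TCP model throughput
    c*mss/(rtt*sqrt(p)) equals the fair share link_capacity/n_flows."""
    fair_share = link_capacity / n_flows
    p = (c * mss / (rtt * fair_share)) ** 2   # solve the model for p
    return min(1.0, p)

# Example: 10 flows with a 100 ms RTT sharing a 10 MB/s link.
p = green_mark_prob(n_flows=10, rtt=0.1, link_capacity=10e6)
```

Note how the probability grows with the number of flows and shrinks with the RTT, which is why GREEN marks each flow differently: flows with short RTTs are more aggressive under the TCP model and therefore need a higher marking rate to hold them at their fair share.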
2.4 Tail-drop
Tail-drop is simply a queue which, when filled to its maximum, overflows and drops
any subsequently arriving packets. No congestion signal is generated by the tail-drop
algorithm until the queue is full. As shown in Fig. 2.11, when the queue is full, the
maximum congestion signal is generated because all of the arriving packets are
dropped. Once sources detect lost packets they slow down and the arrival rate of
packets to the queue will be less than the capacity of the link and the packet backlog
in the queue decreases. Then, when the buffer is not full, no congestion feedback
signal is generated by tail-drop algorithm and the source rates increase until overflow
happens again. We can see that the tail-drop AQM results in a cycle of decrease and
increase of rates around the point where the buffer is nearly full. The actual mean size
of the buffer depends on the load on the link. The tail-drop algorithm is incapable of
generating any feedback signal (price) unless the buffer is full. This is why the current
trend is to replace tail-drop with active queue management schemes.
[Figure 2.11: Drop probability of the tail-drop algorithm as a function of queue occupancy]
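The all-or-nothing behaviour described above can be modelled with a few lines of code. The class and constants below are illustrative: the queue accepts everything until it is full, then drops every arrival, which is the only congestion "signal" the sources ever receive.

```python
from collections import deque

class TailDropQueue:
    """Toy model of a tail-drop queue: no feedback until overflow."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.q = deque()
        self.drops = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.drops += 1          # the lost packet is the only signal
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None

q = TailDropQueue(capacity=3)
accepted = [q.enqueue(i) for i in range(5)]   # [True, True, True, False, False]
```

Because losses occur only at full occupancy, TCP sources repeatedly fill the buffer, back off on loss, and ramp up again, producing the oscillation around a nearly full buffer described in the text.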
Most work on Active Queue Management uses the tail-drop queue as a lower bound
for comparison, and many AQM algorithms aim to improve the fairness between flows
relative to the tail-drop queue. We will now introduce Explicit Congestion
Notification (ECN). Routers traditionally signal congestion implicitly by
dropping arriving packets. Transport protocols at the end systems such as
TCP infer the presence of congestion when they detect packet losses and react to these
losses by reducing their sending rates. With reliable data delivery semantics (such as
provided by TCP), the lost data packets have to be retransmitted. This results in
decreased throughput and increased latency for applications. The IETF proposed an
Explicit Congestion Notification mechanism (ECN) [16] using bits in the TCP and IP
headers [13]. With ECN, routers can mark a
packet by setting a bit in the header (instead of dropping the packet) to deliver the
congestion signal explicitly to the end systems. This approach avoids packet losses.
ECN-capable end systems indicate their capability by setting the ECN-Capable Transport
(ECT) code point in the IP header. When congestion is detected, routers mark packets
that have the ECT code point set to convey an explicit congestion signal to the end
systems. ECN marking is done by setting the Congestion Experienced (CE) code
point in the IP header. When the receiver receives a data packet with the CE code
point set, it sets the ECN-Echo flag in the TCP header of its next ACK packet to
notify the sender of congestion in the network. Upon receiving an ACK packet with
the ECN-Echo flag set, the sender reduces its congestion window as if it had lost a
packet. The sender also sets the CWR flag in the TCP header of its next packet to
inform the receiver that it has reduced its congestion window.
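The round trip of the congestion signal described above can be sketched with plain dictionaries standing in for the IP and TCP header bits. The field names are illustrative, not wire formats; the code only traces the ECT → CE → ECN-Echo → CWR sequence.

```python
def router_forward(pkt, congested):
    """Router: mark instead of drop when the transport is ECN-capable."""
    if congested and pkt["ip_ect"]:
        pkt["ip_ce"] = True          # set Congestion Experienced (CE)
    return pkt

def receiver_ack(pkt):
    """Receiver: echo CE back to the sender in the next ACK."""
    return {"tcp_ecn_echo": pkt["ip_ce"]}

def sender_react(ack, cwnd):
    """Sender: halve cwnd as if a packet were lost, then set CWR."""
    if ack["tcp_ecn_echo"]:
        return max(1, cwnd // 2), {"tcp_cwr": True}
    return cwnd, {"tcp_cwr": False}

pkt = router_forward({"ip_ect": True, "ip_ce": False}, congested=True)
ack = receiver_ack(pkt)
cwnd, flags = sender_react(ack, cwnd=20)     # cwnd halves to 10, CWR is set
```

The essential point is that the sender's reaction is the same as for a packet loss, but no packet has to be retransmitted, which preserves throughput for reliable transports such as TCP.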
Since an uncooperative or malicious user could set the ECT code point and
ignore the routers’ congestion signal, the standard specification for ECN
recommends that routers only mark packets when their average queue size is low.
When the average queue size exceeds a certain threshold, the standard specification
recommends that routers drop packets rather than set the CE code point in the IP
header. A router following this recommendation therefore drops all arriving packets
once its average queue size grows beyond the threshold.
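This mark-below-threshold, drop-above-threshold policy can be written as a small decision rule. The threshold values are illustrative assumptions; only the structure of the rule follows the recommendation described above.

```python
def handle_packet(avg_queue, ect, min_th=5, max_th=15):
    """Return 'forward', 'mark', or 'drop' for an arriving packet."""
    if avg_queue < min_th:
        return "forward"                  # no congestion: pass through
    if avg_queue < max_th:
        return "mark" if ect else "drop"  # early signal: mark ECN-capable flows
    return "drop"                         # severe congestion: drop even ECT packets

assert handle_packet(3, ect=True) == "forward"
assert handle_packet(10, ect=True) == "mark"
assert handle_packet(10, ect=False) == "drop"
assert handle_packet(20, ect=True) == "drop"
```

Dropping above the upper threshold is what protects the router against hosts that set ECT but never reduce their rate: beyond that point, their packets are discarded regardless of the code point.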
Most studies on AQM and ECN have been based on simulations. While
simulation is a useful tool for research to gain insights into new network protocols and
mechanisms that have not been implemented or deployed yet, simulation results
many researchers have also agreed that the widely used network simulator ns-2 has
numerous implementation bugs and have questioned the validity of simulation
results obtained from ns-2 [22]. Such simulation results may not lead to accurate
conclusions.
For the reasons mentioned above, studies with a real implementation of AQM and
ECN in a real network, under controlled and realistic conditions, are very important.
Results from such evaluation studies are more credible than purely simulated results;
hence, conclusions drawn from results obtained under realistic conditions are more
convincing than those obtained from simulation studies. Despite their important role,
there have been only a few evaluation studies of AQM and ECN in real networks. In
this section, we will review the existing evaluation studies in real networks and
discuss their limitations.
Figure 2.12 depicts the queue law as explained in [2], assuming a single
congested router with uniform dropping probability. As the drop rate at the router
increases the average queue size and hence, average queuing delays experienced by
the incoming packets decreases. However, an increase in drop rate also means
reduced throughput. Thus, the average queue size at the router decides the throughput
and delay treatment given to flows passing through it, and both throughput and delay
can be controlled by changing the drop rate. ARED tries to maintain a fixed average
queue size, thus providing predictable average queuing delays by adapting the drop rate.
However, a fixed average queue size does not suit all kinds of traffic mixes and
hence, the two AQM mechanisms proposed in this thesis, RED-Worcester and
RED-Boston, adjust the average queue size based on the average requirements of the
incoming traffic.
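The queue-law trade-off of Figure 2.12 can be illustrated numerically. The functional forms below are assumptions for illustration (a TCP-model offered load and a simple backlog estimate), not the analysis of [2]: the point is only that raising the drop rate p lowers the aggregate offered load and hence the average queue.

```python
import math

def tcp_load(p, n_flows=10, mss=1500, rtt=0.1, c=math.sqrt(1.5)):
    # Aggregate offered rate under the TCP throughput model, bytes/s.
    return n_flows * c * mss / (rtt * math.sqrt(p))

def avg_queue(p, capacity=2e6, delay_budget=1.0):
    # Backlog accumulates only while offered load exceeds link capacity.
    return max(0.0, (tcp_load(p) - capacity) * delay_budget)

low_drop, high_drop = avg_queue(0.001), avg_queue(0.01)
# A ten-fold higher drop rate yields a much smaller average queue
# (low_drop > high_drop), at the cost of reduced per-flow throughput.
```

This is exactly the lever ARED and the proposed RED-Worcester and RED-Boston variants pull: by adapting p they move along the queue-law curve, trading throughput against queuing delay.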
[Figure 2.12: The queue law: average queue size versus drop rate]
2.8 Summary
AQM has been proposed to replace drop-tail in order to achieve more effective
congestion control. While all AQM algorithms attempt to achieve this common goal,
many AQM algorithms have been invented for slightly different purposes, such as
improving fairness between flows or controlling queuing delay. We grouped the
algorithms into the aforementioned categories and discussed how these AQM
algorithms were evaluated. We also reviewed the limitations of existing evaluations
of AQM algorithms, such as unrealistic simulations, one-way traffic, and a lack of
synthetic general TCP traffic.