
Computer Communications 44 (2014) 44–58


Dynamic buffer sizing for wireless devices via maximum entropy


Andrés Vázquez-Rodas, Luis J. de la Cruz Llopis, Mónica Aguilar Igartua, Emilio Sanvicente Gargallo
Department of Telematics Engineering, Universitat Politècnica de Catalunya BarcelonaTech (UPC), Jordi Girona Street, 31, 08034 Barcelona, Spain

Article history:
Received 3 April 2013
Received in revised form 23 December 2013
Accepted 4 March 2014
Available online 14 March 2014

Keywords:
Buffer sizing
Wireless networks
Packet loss
Maximum entropy
Queuing systems

Abstract

Buffer overflow is an important phenomenon in data networks that has much bearing on the overall network performance. Such overflow critically depends on the amount of storage space allotted to the transmission channels. To properly dimension this buffering capacity, a detailed knowledge of some set of probabilities is needed. In real practice, however, that information is seldom available, and only a few average values are at the analyst's disposal. In this paper, the use of a solution to this quandary based on maximum entropy is proposed. On the other hand, when wireless devices are taken into account, the transmission over a shared medium imposes additional restrictions. This paper also presents an extension of the maximum entropy approach for this kind of devices. The main purpose is that wireless nodes become able to dynamically self-configure their buffer sizes to achieve more efficient memory utilization while keeping the packet loss probability bounded. Simulation results using different network settings and traffic load conditions demonstrate meaningful improvement in memory utilization efficiency. This could potentially benefit devices of different wireless network technologies like mesh routers with multiple interfaces, or memory-constrained sensor nodes. Additionally, when the available memory resources are not a problem, the buffer memory reduction also contributes to prevent the high latency and network performance degradation due to overbuffering. And it also facilitates the design and manufacturing of devices with faster memory technologies and future all-optical routers.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Buffers are crucial elements of all kinds of routers. They have a great impact on many performance evaluation parameters like packet loss probability, end-to-end delay, delay jitter, link utilization, and throughput. This impact is especially significant during congestion times. The prevention of packet losses has motivated the spread of excessively large buffering across a wide range of network devices and technologies, from Internet core routers to access devices on the edge networks. This phenomenon of buffer oversizing, known as bufferbloat [1], has been accelerated in recent years by the reduction in memory cost. Bufferbloat brings as a direct consequence that end users experience excessively high latencies in their communications, independently of their access technology and bandwidth [2]. In such an environment, the quality of service provided to real-time applications, which are very sensitive to delays, could be very low. Thus, new buffer sizing schemes can provide some benefits to modern network devices. For instance, a small buffer size is extremely valued in the design and construction of all-optical packet switching routers [3–5]. Nevertheless, too small buffers increase the packet losses and reduce the link utilization when TCP-alike protocols are used [6,7]. Dimensioning router buffer sizes is therefore not an easy task and is an active research topic, mainly for wired routers [6,8,9]. On the other hand, the growth in the use of mobile devices and their increasing computational capacity, together with the users' expectations of ubiquitous access, is causing a constant increase in the demand for wireless networking technologies (like wireless local area networks, mobile ad hoc networks, wireless mesh networks, etc.). However, less attention has been paid to buffer sizing in wireless devices, where new challenging issues arise [10].

In the study of queuing theory, it is customary to begin by assuming knowledge of the distributions of service and inter-arrival times, and from there the theory is constructed using Markov (and embedded Markov) chains, Laplace and Z transforms and other mathematical techniques [11]. In real practice, however, that detailed information is seldom available and, in fact, in most instances, the only information at our disposal is a few average values from which other parameters of interest, related to the system performance, must be provided. Obviously, this process involves a risk since, in so doing, we are giving more information than we have.

To be more specific, suppose that the average value of packets in a transmission system (buffer and server) and the server occupancy have been measured, using, for instance, moving averages. How could we, then, obtain, solely from those two averages, the percentiles of packets in the system? To produce these percentiles, the distribution of packets must be acquired, which is far beyond the available knowledge. The solution to this dilemma can be stated in general terms as follows: when, in the course of a system evaluation, the data required to assess the system performance exceed the available data, the extra information needed should be generated by maximizing its entropy in a way compatible with the obtained measurements. This approach has already been presented by some researchers in a rigorous mathematical way [12]. In this paper, we include a new derivation of the method from an engineering point of view.

Besides, when dealing with the task of buffer sizing for wireless devices, new problems arise. As will be described, this is mainly due to the fact that the node transmission state does not depend only on itself, but also on the state of the other nodes inside the same collision domain. In this work we also extend the maximum entropy method to wireless devices working over shared channels. We provide these devices with the capability of dynamically self-configuring their buffer sizes according to the traffic load variation while keeping the packet loss probability bounded. Extensive simulations have been done to verify the proper performance of the proposal. We evaluate different scenarios varying the network topology and load conditions. The analysis is done for two kinds of load variations. In the first scenario the variations are due to changes in the traffic generated precisely by the node whose buffer is being sized. In the second case the load fluctuation is due to the activation or deactivation of other nodes in the same collision domain.

In summary, the purpose of this work is to provide devices which transmit over shared channels with a straightforward method, based on easily measured parameters, to self-configure and efficiently manage their available memory. This is achieved by dynamically adapting their buffer sizes according to the traffic load variation during the network operation. The method is based on the application of the maximum entropy principle.

The rest of the paper is organized as follows. In Section 2 we report and analyze the related work. In Section 3 we present, in a practical way, our approximation to the solution of the G/G/1 and G/G/1/K queues via maximum entropy. Here, to make this paper more readable and self-contained, we start by presenting some known expressions relating the distribution of packets in the system to the distribution of packets found in the system by an arrival. We continue with the maximum entropy approach for the computation of the state probabilities. This section ends with the application that motivated this study, namely buffer sizing for the G/G/1/K queuing system and its numerical evaluation. In Section 4 we extend the method to wireless devices transmitting over shared channels and present the results obtained from different simulation scenarios. Finally, in Section 5 we conclude our work and provide some lines for further study.

2. Related work

The task of dimensioning buffer sizes can be approached in two ways: once at the design stage, or dynamically in order to adapt that size to the variability of the network and traffic conditions. A recent approach of the first type is presented in [13], where the authors propose a large deviations framework to dimension the buffer size of delay-tolerant network nodes, as, for instance, VANETs (Vehicular Ad-hoc NETworks) or ICMNs (Intermittently Connected Mobile Networks) nodes. Here, each node is modeled as an M/M/1/B queue and large deviations theory is used to study the queue buffer sizes in terms of buffer overflow. The authors consider a buffer loss probability exponent as the configuration parameter and evaluate the performance of their approach in terms of delivery ratio, delivery delay and message loss ratio. They state that their sized buffer model, with the adequate configuration parameter, offers a performance statistically equivalent to the infinite buffer model. Another example of the importance of buffers in such wireless networks is given in [14]. Here, the authors show the impact of the buffer size on packet loss probability, throughput and delay of IEEE 802.16 networks. They also observe that beyond a certain threshold, larger buffers do not improve any of these performance parameters.

On the other hand, several authors propose dynamic buffer sizing mechanisms. For instance, the authors in [10] remark on the necessity of such a dynamic mechanism for wireless local area networks in order to guarantee high throughput efficiency and reasonably low end-to-end delays. They analyze a simple adaptation of the classic bandwidth-delay product (BDP) rule [6]. This well-known rule, based on the dynamics of TCP's congestion control mechanism, states that an Internet router requires a buffer size B given by B = C · RTT to achieve one hundred percent utilization at the bottleneck links. Here, C represents the link data rate and RTT is the average round-trip time of a TCP flow passing through that link. Following this rule, and taking into account the currently high possible values of C, impractically large buffers may be obtained. This was first observed by Appenzeller et al. [8], where the authors showed that a link with n long-lived or short-lived TCP flows requires only B = (C · RTT)/√n buffers, and further analyzed in [15] for congested links with different TCP flow types. Some open issues are presented by the same authors in [16]. The adaptation proposed by Leith and Malone [10] consists of an online measurement and actualization of the mean packet service time values. This is required since, in contrast to wired networks in which this value is constant, for 802.11-based wireless networks the service time depends on the number of active stations that contend for the channel (the stochastic effect of the CSMA/CA mechanism) and on the varying modulation and coding scheme chosen by the physical layer (which in turn depends on the radio channel conditions). For this first approach, they use a maximum queuing delay as the configuration parameter. In second place they propose the Adaptive Limit Tuning (ALT) algorithm, whose main idea is to decrease the buffer size when it has been busy for a long time and to increase the buffer size when it has been idle for a long period. The aim is to take advantage of the statistical multiplexing of TCP congestion window backoffs when multiple flows share the same link.

In the same line, the authors in [17] propose a buffer sizing mechanism in order to reduce the queuing delays of TCP multi-hop flows while maintaining high network utilization inside 802.11-based WMNs. Their main idea is to consider a joint neighborhood buffer distributed over a set of nodes that contend for channel access within a collision domain. The cumulative buffer for the collision domain is also determined using the classical bandwidth-delay product. To distribute the collective buffer amongst neighborhood nodes, they establish a simple cost function that takes into account the fact that a queue drop close to the source node wastes fewer network resources than a queue drop in a node closer to the destination.

Different approaches based on modern control theory are presented in [18,19]. In [18], the authors show the monotonic relationship between the buffer size and the packet loss rate and utilization. This relationship states that if the buffer size increases then the loss rate monotonically decreases, while the utilization monotonically increases. Based on these monotonic relationships the authors propose the so-called Adaptive Buffer Sizing (ABS) mechanism.
ABS consists of two integral sub-controllers that dynamically adapt to variations of the input traffic by regulating the buffer size. This regulation is based on the error between the measured and target values of loss rate and utilization. The controller under the utilization constraint requires an additional treatment to avoid the buffer size going to infinity in non-bottleneck routers, since they are always under-utilized regardless of their buffer size. Finding the optimal values for the integral gains, which control the convergence speed and stability of the system, is not an easy task due to the unknown underlying model for Internet traffic and its high time-variability. To solve this, the authors combine a controller output error method with the gradient descent technique. As a final result they associate each sub-controller with a gradient-based parameter training component able to find optimal integral gain values. As a consequence, ABS routers can adapt their buffer size to meet the required loss rate and utilization under variable traffic conditions. In the same line, in [19] a control-theoretic approach to analyze the network stability is provided. The authors show that desynchronized flows improve network stability and require smaller buffers, which in turn promote desynchronization.

When the research focuses on inelastic real-time offered services, UDP flows should be taken into account. Although TCP dominates the traffic carried by the Internet backbone links, recent works [20,21] demonstrate a significant growth of UDP traffic (up to a 46-fold increase in volume in the past four years). Besides real-time applications, some recent peer-to-peer data transfers contribute to this growth. The authors in [21] actually show that UDP represents the largest fraction of flows on a given link. Further, a self-contained mesh network (MBSS) [22] allows real-time applications to dispense not only with TCP but even with IP. These considerations have motivated us to focus our work on inelastic real-time traffic, which is usually encapsulated over UDP. Note that not all the classical real-time services fall into this category. For instance, in most video streaming systems, the video sequence is previously coded and stored. This fact, together with the increasing available bandwidth and a big amount of memory in the receiver, allows sending the video sequence faster than needed and storing it at the receiver side. In summary, the video streaming service has become a kind of file transfer service. The strictest real-time services are those in which the information is consumed at the same time that it is generated. For these flows, one of the most important aspects to study is the packet loss rate, since it is directly related to the quality of service that the end user will experience. Moreover, there are two main possibilities for the traffic engineering in a queue system. Different flows from different service classes (with different traffic characteristics and requirements) could coexist in the same queue, or be allocated to independent queues. The increasing tendency of service differentiation allows a variety of applications and services to meet their specific QoS needs. Therefore, in this work we have considered the presence of a traffic classifier that separates at least TCP and UDP flows and assigns them to different queues.

Traditionally, the way to analytically study the loss probability is through the transmission system modeling, especially the G/G/1/K queue. The loss probability can be determined by obtaining the state probabilities in these queues. However, to find this set of probabilities, a detailed knowledge of the traffic to be transmitted is required, which is seldom available. This work adopts an approximation in which just a few average values are available to carry out the buffer dimensioning. This approximation is based on the idea of maximum entropy, which was presented in an exhaustive way for the G/G/1 queue in [12]. The same author studies the G/G/1/K queue in [23], and had previously presented the solution for the M/G/1 and G/M/1 queues in [24]. For their part, the authors in [25] use the maximum entropy analysis in their study of a single removable server queueing system operating under the N policy (that is, the server is turned on whenever N or more customers are present, and turned off when there are no customers). They perform a comparative analysis between the approximate results obtained through the maximum entropy principle and the exact results obtained in previous works. The main conclusion is that the error percentages are very small and therefore the use of the maximum entropy principle is accurate enough for practical purposes. The same authors extend in [26] their work to an M/G/1 queueing system with unreliable server and general startup times. The conclusions are very similar to the previous ones, confirming the utility of the maximum entropy principle. This principle has also been used in some related fields such as, for instance, the detection of anomalies in the network traffic [27]. Here, the current network traffic distribution is compared against a baseline traffic distribution which is estimated by means of the maximum entropy technique.

In this paper we include the solution for the G/G/1 and G/G/1/K queues via maximum entropy, obtained in a practice-oriented way. The derivation is rather elementary and intuitive, but we think this approach will appeal to engineers and computer scientists, more familiar with the simulation aspects of a performance evaluation problem than with the intricacies of mathematical arguments.

In summary, the contributions of this paper are the following: the development of an alternative and easy to follow explanation for the solution of the G/G/1 and G/G/1/K queues using a maximum entropy approach and its application to the dynamic buffer sizing problem; the verification of the proper functioning of the resulting analytical queue models through simulation; the adaptation of the proposed dynamic buffer sizing mechanism in order to be applied on devices working over shared channels; the implementation of the buffer sizing algorithm in the ns-2 [28] and ns-3 [29] simulators and the verification of its performance under different scenarios and traffic conditions; and finally the definition and evaluation of a metric that allows the quantification of the system performance in terms of efficiency in memory utilization. The results could be valuable in network devices with constrained resources such as sensor nodes, or for an efficient memory resource management in wireless mesh routers with multiple interfaces. On the other hand, when the available memory resources are not a problem, the dynamic buffer sizing also contributes to prevent the high latency and network performance degradation due to the overbuffering issue (bufferbloat).

3. Approximation to the solution of the G/G/1 and G/G/1/K queues via maximum entropy: a practical approach

This section presents the method for the solution of the G/G/1 and G/G/1/K queues via maximum entropy in an easy and comprehensive way. As said previously, the derivation is rather elementary and intuitive. First of all, a vision of the system's dynamics is presented. We follow with the computation of the state probabilities via maximum entropy in both the infinite and finite buffer size cases. Finally, we present different methods to find the buffer size for a given loss probability target and show some experimental results.

3.1. Packets in the system and packets seen by arrivals and departures

Consider a transmission system in which packets arrive and depart one at a time. In steady state, such a system evolves through periods of activity and idleness, as represented in Fig. 1. When the system is continuously observed over a long enough time span comprising many of those cycles, t_obs, the probability of having i packets in the system, p(i), can be computed as:

p(i) = \frac{t(i)}{t_{obs}} \quad (1)

Fig. 1. Pictorial definition of t(i) and tB. [Number of packets in the system vs. time, showing cycles of activity and idleness, the per-state sojourn times t(0), t(1), t(2), ..., the total busy time tB and the observation window t_obs.]

where t(i) is the total amount of time the system has sojourned in state i (during t_obs). Similarly, the probability that the system is busy, ρ, is given by:

\rho = \frac{t_B}{t_{obs}} \quad (2)

where t_B is the total duration of the activity periods, i.e.:

t_B = \sum_{i=1}^{\infty} t(i) \quad (3)

To evaluate the probability of packets found in the system by an arrival, a(i), one has to proceed differently, since now measurements are only taken at precise time instants. If there are n_a arrivals in t_obs, and n_a(i) of those see the system in state i, a(i) can be estimated as (see Fig. 2):

a(i) = \frac{n_a(i)}{n_a} \quad (4)

Note that now we are dealing with discrete quantities, as opposed to the continuous values used to estimate p(i). Therefore, it should not be surprising that, in general, p(i) and a(i) differ. Analogously, denoting by d(i) the probability of leaving behind i packets at service completion, we can write:

d(i) = \frac{n_d(i)}{n_d} \quad (5)

where n_d and n_d(i) are, respectively, the total number of departures and the number of those departures that leave i packets in the system. From Fig. 2, it is apparent that n_d = n_a and n_d(i) = n_a(i). (Observe that these equalities also hold true for every cycle.) Therefore, we also have d(i) = a(i).

Fig. 2. Pictorial definition of na(i) and nd(i).

The probabilities p(i) and a(i) are related through arrival and departure rates, defined respectively as follows:

\lambda(i) = \frac{n_a(i)}{t(i)}, \qquad \mu(i) = \frac{n_d(i)}{t(i+1)} \quad (6)

The global arrival rate, λ, is:

\lambda = \frac{n_a(0) + n_a(1) + \cdots}{t(0) + t(1) + \cdots} = \frac{n_a}{t_{obs}} \quad (7)

Similarly, the global departure rate, μ, is:

\mu = \frac{n_d(0) + n_d(1) + \cdots}{t(1) + t(2) + \cdots} = \frac{n_d}{t_B} \quad (8)

From all the above, we readily obtain the following relationships, which will be useful in the sequel:

a(i) = \frac{n_a(i)}{n_a} = \frac{\lambda(i)\,t(i)}{\lambda\,t_{obs}} = \frac{\lambda(i)}{\lambda}\,p(i), \quad i = 0, 1, 2, \ldots \quad (9)

p(i+1) = \frac{t(i+1)}{t_{obs}} = \frac{n_d(i)}{\mu(i)}\,\frac{\lambda}{n_a} = \frac{\lambda}{\mu(i)}\,a(i), \quad i = 0, 1, 2, \ldots \quad (10)

and from both, we finally have:

p(i+1) = \frac{\lambda}{\mu(i)}\,\frac{\lambda(i)}{\lambda}\,p(i) = \frac{\lambda(i)}{\mu(i)}\,p(i), \quad i = 0, 1, 2, \ldots \quad (11)
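To make the distinction between p(i) and a(i) concrete, the following minimal Python sketch (not part of the original paper) simulates a single-server FIFO queue and estimates both distributions exactly as defined above: p(i) from the time t(i) spent in each state, and a(i) from the states seen by arriving packets. Gamma interarrivals and exponential service times are assumed here only for illustration, so that the two distributions visibly differ; any other generators could be plugged in.

```python
import random
from collections import deque, defaultdict

def simulate(n=200_000, seed=1):
    """Single-server FIFO queue with Gamma interarrivals and exponential service.
    Returns (p, a): the time-average state probabilities p(i) of Eq. (1) and the
    probabilities a(i) of the state found by an arrival, Eq. (4)."""
    rng = random.Random(seed)
    t, last_dep = 0.0, 0.0
    arrivals, departures = [], []
    for _ in range(n):
        t += rng.gammavariate(2.0, 0.6)                       # interarrival time (mean 1.2)
        last_dep = max(t, last_dep) + rng.expovariate(1.0)    # FIFO departure recursion
        arrivals.append(t)
        departures.append(last_dep)

    # a(i): packets still in the system when each new packet arrives
    a_count = defaultdict(int)
    in_system = deque()                                       # departure times, nondecreasing in FIFO order
    for arr, dep in zip(arrivals, departures):
        while in_system and in_system[0] <= arr:
            in_system.popleft()
        a_count[len(in_system)] += 1
        in_system.append(dep)

    # p(i): integrate the number-in-system trajectory over the observation window
    events = sorted([(x, +1) for x in arrivals] + [(x, -1) for x in departures])
    time_in_state = defaultdict(float)
    prev_t, state = 0.0, 0
    for ev_t, delta in events:
        time_in_state[state] += ev_t - prev_t
        prev_t, state = ev_t, state + delta
    p = {i: v / prev_t for i, v in time_in_state.items()}
    a = {i: c / n for i, c in a_count.items()}
    return p, a

if __name__ == "__main__":
    p, a = simulate()
    for i in range(6):
        print(i, round(p.get(i, 0.0), 4), round(a.get(i, 0.0), 4))
```

With these non-Poisson arrivals the printed p(i) and a(i) columns do not coincide, which is precisely the point of measuring the system at arrival instants when the quantity of interest is the loss probability.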
3.2. The computation of probabilities: a maximum entropy approach

We will distinguish two cases, corresponding to infinite or finite buffer size.

3.2.1. Infinite buffer size

As anticipated in the introduction, the problem we address in this section is the following. Suppose that the average number of packets in a transmission system (buffer and server) and the server occupancy have both been measured. As explained before, such measurements may yield different values depending on the way they are taken: at arrival times or by continuously monitoring the system. Both ways of proceeding are used in engineering practice. Time averages are very useful for operators, for instance, since they clearly indicate the utilization of network resources. The knowledge of these values is pertinent to many aspects of paramount importance like economic revenues, equipment heating and, in general, subject matters related to network exploitation. On the other hand, users are more concerned with a different set of parameters, like packet losses, delays, etc., that, in the usual parlance of traffic engineering, are grouped under the heading of Quality of Service. In this article we focus on buffer dimensioning to achieve a given packet loss probability, PL. Obviously, a packet is lost when, upon its arrival to the transmission system, no storage space is left in the buffering element. Therefore, to evaluate PL, measurements should be taken at arrival times. Since PL is normally a very small number, its reliable estimation is a lengthy process that needs many packets. To avoid excessive burden on the measuring mechanism, and to make the adjustments more dynamic, our model requires the collection of only two values, namely: the server occupancy, ρa, and the average number of packets in the system, Na, both of them seen by arrivals. Note that ρa and Na are several orders of magnitude larger than PL, and therefore they can be estimated much faster, using for instance moving averages. Then, the problem is to evaluate the probabilities of packets seen by an arrival, a(i), having at our disposal only the knowledge of those two quantities.

If the a(i) were given, ρa and Na would follow readily. Our aim is just the opposite, i.e., to obtain the a(i) from the sole knowledge of ρa and Na. Observe that by producing a(i) we are providing more information than we have available. We should, therefore, be as ''ambiguous'' as possible, but always in compliance with the observed data. Since the pioneering work of C. Shannon [30], the uncertainty of a random variable (rv) is measured by a quantity known as entropy. Focusing on our case of interest, if a rv takes on the values x1, x2, x3, ... with probabilities p1, p2, p3, ..., the entropy of this rv is given by the expression [31]:

\sum_{i=1}^{\infty} p_i \ln \frac{1}{p_i} \quad (12)

The base of the logarithms can be arbitrary and, for convenience, in the above expression we have chosen natural logarithms.

Going back to our case, since a(0) = 1 − ρa is known, the problem can be stated as follows:

Compute a(1), a(2), a(3), ... to maximize:

\sum_{i=1}^{\infty} a(i) \ln \frac{1}{a(i)} \quad (13)

subject to the conditions:

\sum_{i=1}^{\infty} a(i) = \rho_a, \qquad \sum_{i=1}^{\infty} i\,a(i) = N_a \quad (14)

Using the approach routinely employed to solve constrained optimization problems of this sort, we form the Lagrangean function [32]:

F(a(1), a(2), \ldots) = -\sum_{i=1}^{\infty} a(i) \ln a(i) + A\left(\sum_{i=1}^{\infty} a(i) - \rho_a\right) + B\left(\sum_{i=1}^{\infty} i\,a(i) - N_a\right) \quad (15)

and set its partial derivatives to 0, as follows:

\frac{\partial F}{\partial a(i)} = -(\ln a(i) + 1) + A + Bi = 0, \quad i = 1, 2, \ldots \quad (16)

Therefore

a(i) = e^{A-1+Bi} = \alpha\,\beta^i, \quad i = 1, 2, \ldots \quad (17)

where

\alpha = e^{A-1}, \qquad \beta = e^B \quad (18)

To compute α and β we use the known conditions:

\rho_a = \sum_{i=1}^{\infty} a(i) = \sum_{i=1}^{\infty} \alpha\beta^i = \frac{\alpha\beta}{1-\beta} \quad (19)

which implies:

\alpha = \rho_a\,\frac{1-\beta}{\beta} \quad (20)

Also

N_a = \sum_{i=1}^{\infty} i\,a(i) = \sum_{i=1}^{\infty} i\,\alpha\beta^i = \frac{\alpha\beta}{(1-\beta)^2} = \frac{\rho_a}{1-\beta} \quad (21)

Then,

\beta = 1 - \frac{\rho_a}{N_a} \quad (22)

and therefore, from Eq. (20):

\alpha = \frac{\rho_a^2}{N_a}\,\frac{1}{1-\frac{\rho_a}{N_a}} \quad (23)

Once α and β have been determined, the values of a(i) are given by the expressions:

a(0) = 1 - \rho_a

a(i) = \frac{\rho_a^2}{N_a}\left(1-\frac{\rho_a}{N_a}\right)^{i-1}, \quad i = 1, 2, \ldots \quad (24)

Proceeding similarly with p(i), we can write:

p(0) = 1 - \rho

p(i) = \frac{\rho^2}{N}\left(1-\frac{\rho}{N}\right)^{i-1}, \quad i = 1, 2, \ldots \quad (25)

where ρ and N are, respectively, the server occupancy and the average number of packets obtained by continuously monitoring the system.
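As a quick illustration of Eqs. (22)–(24), the short Python sketch below (ours, not taken from the paper; the function name is arbitrary) turns a measured pair (ρa, Na) into the geometric distribution a(i) and checks that the two constraints of Eq. (14) are recovered:

```python
def me_infinite(rho_a, N_a, i_max=50):
    """Maximum-entropy a(i) for the infinite-buffer case, Eqs. (22)-(24).

    rho_a : server occupancy seen by arrivals (0 < rho_a < 1)
    N_a   : average number of packets in the system seen by arrivals (N_a > rho_a)
    Returns the list [a(0), a(1), ..., a(i_max)].
    """
    beta = 1.0 - rho_a / N_a                               # Eq. (22)
    alpha = rho_a * (1.0 - beta) / beta                    # Eq. (20), equivalent to Eq. (23)
    a = [1.0 - rho_a]                                      # a(0)
    a += [alpha * beta ** i for i in range(1, i_max + 1)]  # Eq. (17)/(24)
    return a

if __name__ == "__main__":
    a = me_infinite(rho_a=0.6, N_a=1.5, i_max=200)
    print(sum(a[1:]))                               # ~ rho_a = 0.6
    print(sum(i * ai for i, ai in enumerate(a)))    # ~ N_a   = 1.5
```

For ρa = 0.6 and Na = 1.5 this gives β = 0.6 and α = 0.4, and the truncated sums reproduce the measured occupancy and mean up to the (negligible) truncation error.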
Qþ1 X
Qþ1
¼ ðln aðiÞ þ 1Þ þ A þ Bi ¼ 0; i ¼ 1; 2; . . . ð16Þ Na ¼ iaQ ðiÞ ¼ iabi
@aðiÞ
i¼1 i¼1
Therefore
qa 1  ½ðQ þ 2Þ  ðQ þ 1ÞbbQþ1
¼ ð29Þ
aðiÞ ¼ eA1þBi ¼ abi ; i ¼ 1; 2; . . . ð17Þ 1b 1  bQþ1
where Observe that when Q increases to infinity we reobtain (20) and
A1 B
(21). Eq. (29) can be rewritten as:
a¼e ; b¼e ð18Þ
Na
To compute a and b we use the known conditions: ¼ fQ ðbÞ ð30Þ
qa
X1 X1
ab
qa ¼ aðiÞ ¼ abi ¼ ð19Þ where
i¼1 i¼1
1 b
1 1  ½ðQ þ 2Þ  ðQ þ 1ÞbbQ þ1
which implies: fQ ðbÞ ¼ ð31Þ
1b 1  bQ þ1

1b is an increasing function of b and fQ(0) = 1 (see Fig. 3). Therefore,


a ¼ qa ð20Þ since (Na/qa) > 1, Eq. (31) can be easily solved for b, and a readily
b
follows from Eq. (28).
Also
X
1 X
1
ab qa 3.3. Application to the buffer sizing for the G/G/1/K queuing system
Na ¼ iaðiÞ ¼ iabi ¼ 2
¼ ð21Þ
i¼1 i¼1 ð1  bÞ 1b
Buffer overflow in data networks causes packet losses and, con-
Then, sequently, it should be evaluated and properly controlled to guar-
antee the desired level of service performance. Obviously, a packet
qa
b¼1 ð22Þ is lost when, upon its arrival to the transmission system, no storage
Na
space is left in the buffering element. Therefore, as said before, to
and therefore, from Eq. (20): evaluate this parameter, the use of the probabilities of packets
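Since f_Q(β) is increasing, β can be found with a simple bisection. The following sketch (our own illustration, not code from the paper) solves Na/ρa = f_Q(β) and then evaluates α via Eq. (28):

```python
def f_Q(beta, Q):
    """f_Q(beta) of Eq. (31)."""
    num = 1.0 - ((Q + 2) - (Q + 1) * beta) * beta ** (Q + 1)
    return num / ((1.0 - beta) * (1.0 - beta ** (Q + 1)))

def solve_beta(rho_a, N_a, Q, tol=1e-12):
    """Bisection on Eq. (30): find beta in (0, 1) with f_Q(beta) = N_a / rho_a.

    Assumes the target N_a/rho_a (> 1) is reachable for the given Q; if it is
    larger than f_Q can attain, the upper end of the bracket is returned.
    """
    target = N_a / rho_a
    lo, hi = 1e-9, 1.0 - 1e-6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_Q(mid, Q) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def me_finite(rho_a, N_a, Q):
    """Maximum-entropy a_Q(i) of Eqs. (26)-(28) for buffer size Q."""
    beta = solve_beta(rho_a, N_a, Q)
    alpha = rho_a * (1.0 - beta) / (beta * (1.0 - beta ** (Q + 1)))   # Eq. (28)
    return [1.0 - rho_a] + [alpha * beta ** i for i in range(1, Q + 2)]
```

Bisection is used only because f_Q is monotone in β; any one-dimensional root finder would serve equally well.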

Fig. 3. Calculation of β. [Plot of f_Q(β) vs. β for Q = 5, 10, 15, 20.]

3.3. Application to the buffer sizing for the G/G/1/K queuing system

Buffer overflow in data networks causes packet losses and, consequently, it should be evaluated and properly controlled to guarantee the desired level of service performance. Obviously, a packet is lost when, upon its arrival to the transmission system, no storage space is left in the buffering element. Therefore, as said before, to evaluate this parameter, the use of the probabilities of packets found in the system by an arrival (and not the probability of packets in the system) is in order. Based on the previous formulas, we propose and compare three different methods. For the first method we use the finite buffer model presented in Section 3.2.2, whereas for methods 2 and 3 we employ Eq. (24) derived in Section 3.2.1.

— Method 1: The probability of packet loss, PL, can be evaluated as the probability of being in state K, that is, Q + 1:

P_L = a_Q(Q+1) = \alpha\beta^{Q+1} \quad (32)

where α and β are computed as indicated in Section 3.2.2.

PL can also be calculated using the probabilities a(i) obtained in Section 3.2.1 for the infinite buffer size. Here, we present two methods, akin to the ones used to compute blocking probabilities for voice circuits.

— Method 2: Define:

\tilde{a}(i) = \begin{cases} \dfrac{a(i)}{\sum_{n=0}^{Q+1} a(n)}, & i \le Q+1 \\[1ex] 0, & i > Q+1 \end{cases} \quad (33)

Then, PL can be computed as follows:

P_L = \tilde{a}(Q+1) = \frac{a(Q+1)}{1 - \sum_{n=Q+2}^{\infty} a(n)} = \frac{\frac{\rho_a^2}{N_a}\left(1-\frac{\rho_a}{N_a}\right)^{Q}}{1 - \rho_a\left(1-\frac{\rho_a}{N_a}\right)^{Q+1}} \quad (34)

Observe that this method of truncation is similar to the Erlang-B approach to compute blocking probabilities for voice circuits (truncating, in that case, a Poisson distribution).

— Method 3: We now parallel Molina's [33] way of computing blocking probabilities. Therefore, we set:

P_L = \sum_{i=Q+1}^{\infty} a(i) = \sum_{i=Q+1}^{\infty} \frac{\rho_a^2}{N_a}\left(1-\frac{\rho_a}{N_a}\right)^{i-1} = \rho_a\left(1-\frac{\rho_a}{N_a}\right)^{Q} \quad (35)

With this method Q can be expressed in closed form as:

Q = \mathrm{int}\!\left(\frac{\ln\frac{P_L}{\rho_a}}{\ln\left(1-\frac{\rho_a}{N_a}\right)}\right) \quad (36)

where int() is the integer function.

As shown later, Eqs. (32) and (34) give similar results for the value of the needed buffer size. The value of Q provided by Eq. (36) is however larger than necessary, but the advantage is that Q can be expressed in closed form.

In the next section we comment on the numerical results obtained using different traffic aggregations.
Then, PL can be computed as follows:
 Q 0.6
qa
aðQ þ 1Þ q 1  Na
2 Simulation p(i)
a
~ðQ þ 1Þ ¼
PL ¼ a P ¼  Qþ1 ð34Þ Simulation a(i)
1 1 n¼Qþ2 aðnÞ Na
1  qa 1  Nqaa 0.5 Model p(i)
Model a(i)
Observe that this method of truncation is similar to the
Erlang_B approach to compute blocking probabilities for voice cir- 0.4
cuits (truncating, in that case, a Poisson distribution).
0.3
— Method 3: We now parallel Molina’s [33] way of computing
blocking probabilities. Therefore, we set:
0.2
X X    
1 1
q2a q i1 q Q
PL ¼ aðiÞ ¼ 1 a ¼ qa 1  a ð35Þ
i¼Q þ1
N
i¼Q þ1 a
Na Na
0.1

With this method Q can be expressed in closed-form as:


0   1 0
ln qPL 0 1 2 3 4 5 6
Q ¼ int@  A
a
ð36Þ i
ln 1  Nqaa
Fig. 4. Probabilities of packets in the system, p(i), and probabilities seen by arrivals,
where int() is the integer function. a(i), with infinite queue size.

Table 1
Traffic configuration parameters.

  Function                     Parameters
  Interarrival time (gamma)    a = 0.001 s, m = 2
  Packet length (pareto)       xm = 800 bits, k = 3

For a 1 Mbps channel, this produces a load of 0.6. Obviously, many other parameters and distributions could have been chosen and the results are similar. Observe how, as expected, the probabilities p(i) and a(i) differ, but the values provided by the model agree accurately with the results of the simulation runs.

Since we are interested in buffer overflow, for the rest of the computations and figures we are only concerned with the value of the probabilities seen by arrivals. To more realistically mimic the actual traffic in any network, we have considered a traffic resulting from the aggregation of four different mixes with quite different interarrival times and packet lengths. Besides the gamma distribution, already mentioned, the other distributions we have used are the following:

— Truncated exponential:

f(x) = \frac{k\,e^{-kx}}{1 - e^{-k x_{max}}}, \quad 0 \le x \le x_{max} \quad (39)

— Truncated Pareto:

f(x) = \frac{k\,x_{min}^{k}}{\left[1-\left(\frac{x_{min}}{x_{max}}\right)^{k}\right] x^{k+1}}, \quad x_{min} \le x \le x_{max} \quad (40)

— Uniform:

f(x) = \frac{1}{x_{max}-x_{min}}, \quad x_{min} \le x \le x_{max} \quad (41)

— Deterministic:

f(x) = \delta(x - x_0) \quad (42)

where δ() is the Dirac delta function.

Moreover, to fully explore the behavior under different loads, we have compared three loads: ρ = 0.4, ρ = 0.6 and ρ = 0.8. Table 2 shows the packet length distributions and Table 3 the distributions of the interarrival times for the three different loads. The channel capacity is set to 1 Gbps. The analytic computations are carried out using Method 1 presented in Section 3.3.

Table 2
Traffic streams (packet length).

  Traffic 1: Truncated Pareto, k = 1.5, xmin = 368 bits, xmax = 12,000 bits
  Traffic 2: Gamma, a = 500 bits, m = 3
  Traffic 3: Uniform, xmin = 1000 bits, xmax = 9000 bits
  Traffic 4: Truncated Pareto, k = 1.5, xmin = 368 bits, xmax = 18,000 bits

Table 3
Traffic streams (interarrival time).

  Traffic 1 (Truncated exponential): k⁻¹ = 9.0E-6 s (ρ = 0.4), 6.0E-6 s (ρ = 0.6), 4.5E-6 s (ρ = 0.8); xmax = 1.2E-4 s in all cases
  Traffic 2 (Uniform, xmin = 0 s): xmax = 3.0E-5 s (ρ = 0.4), 2.0E-5 s (ρ = 0.6), 1.5E-5 s (ρ = 0.8)
  Traffic 3 (Deterministic): x0 = 5.0E-5 s (ρ = 0.4), 3.333E-5 s (ρ = 0.6), 2.5E-5 s (ρ = 0.8)
  Traffic 4 (Gamma, k = 4): a = 2.37E-6 s (ρ = 0.4), 1.58E-6 s (ρ = 0.6), 1.18E-6 s (ρ = 0.8)
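For readers who want to reproduce this kind of aggregate input in a simulator, the sketch below shows one possible way (ours, using the ρ = 0.6 parameter values from Tables 2 and 3 purely as an example) to draw interarrival times and packet lengths for the four streams; inverse-transform sampling is used for the truncated exponential and truncated Pareto densities of Eqs. (39) and (40).

```python
import math
import random

rng = random.Random(42)

def trunc_exp(rate, xmax):
    """Truncated exponential on [0, xmax], Eq. (39), via inverse transform."""
    u = rng.random()
    return -math.log(1.0 - u * (1.0 - math.exp(-rate * xmax))) / rate

def trunc_pareto(k, xmin, xmax):
    """Truncated Pareto on [xmin, xmax], Eq. (40), via inverse transform."""
    u = rng.random()
    cdf_max = 1.0 - (xmin / xmax) ** k
    return xmin / (1.0 - u * cdf_max) ** (1.0 / k)

def next_packet(stream):
    """Return (interarrival time [s], packet length [bits]) for one of the
    four streams of the rho = 0.6 mix."""
    if stream == 1:
        return trunc_exp(rate=1 / 6e-6, xmax=1.2e-4), trunc_pareto(1.5, 368, 12_000)
    if stream == 2:
        return rng.uniform(0.0, 2e-5), rng.gammavariate(3, 500)
    if stream == 3:
        return 3.333e-5, rng.uniform(1000, 9000)
    return rng.gammavariate(4, 1.58e-6), trunc_pareto(1.5, 368, 18_000)
```

Aggregating the four streams and feeding the result to the queue of Section 3.1 is enough to regenerate inputs with the same averages used in the figures that follow.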
Fig. 5 represents the values (model and simulations) of a_Q(i) for a buffer size of 11. A good agreement can be observed for the whole spectrum of loads. Fig. 6 shows the packet loss probability as a function of the buffer size for the three different load conditions. As usual, the values obtained with the model and by means of simulation are compared.

Fig. 5. Probabilities seen by arrivals, aQ(i), with finite queue size (Q = 11).

Fig. 6. Packet loss probability vs. buffer size.

Finally, Fig. 7, although similar to Fig. 6, shows more conveniently for a practical implementation the Q needed for a given PL for the three loads. As can be seen, the agreement is again very good.

Finally, Figs. 8 and 9 show the results obtained using the three methods mentioned in Section 3.3. In order not to clutter the figures, we have selected only one load: ρ = 0.6. From those graphs, we see the agreement of Methods 1 and 2 and that, as expected, Method 3 allocates more buffer than necessary.

Fig. 7. Buffer size vs. packet loss probability.

Fig. 8. Packet loss probability vs. buffer size using the three models (ρ = 0.6).

Fig. 9. Buffer size vs. packet loss probability using the three models (ρ = 0.6).

Table 4
ns-2 MAC/PHY parameters used in simulations.

  Parameter                   Value
  CWmin                       15
  CWmax                       23
  SIFS                        16 µs
  DIFS                        34 µs
  Slot duration               9 µs
  Header duration             20 µs
  Short retry limit           7
  Long retry limit            4
  CS threshold                6.31 pW
  Pt                          0.2 mW
  Frequency                   5.18 GHz
  Noise floor                 0.25 pW
  Power monitor threshold     2.1 pW
  SINR preamble capture       2.5118
  SINR data capture           100
  Propagation loss model      Two ray ground
  Data/control OFDM rate      6 Mbps

Table 5
Packet length description for UDP traffic load composition.

  Traffic 1: Truncated Pareto, k = 1.5, xmin = 368 bits, xmax = 12,000 bits
  Traffic 2: Gamma, a = 1500, m = 3
  Traffic 3: Uniform, xmin = 1000 bits, xmax = 9000 bits
  Traffic 4: Truncated Pareto, k = 1.5, xmin = 368 bits, xmax = 18,000 bits

4. Method extension for wireless devices transmitting over shared mediums

To extend our proposal to wireless networks, where the stations (nodes) contend inside a shared medium, we need to make some additional considerations. Mainly, we must take into account the fact that, in contrast to a wired network where the node server occupancy is exclusively due to its own packet transmissions, in wireless networks a node must hold the packets in its queue while the radio channel is being used by another station inside its carrier sensing range. Actually, if we consider the CSMA/CA mechanism of IEEE 802.11-based networks, we must combine both the physical and virtual carrier sensing status information in order to determine the node server occupancy.

The physical carrier sensing mechanism reports the current state of the medium to the local MAC entity. It reports the channel as busy whenever the perceived signal strength exceeds the carrier sensing threshold or when the physical layer is in transmission state.

Table 6
Interarrival time description for UDP traffic load composition.

  Traffic 1 (Truncated exponential):
    ρa = 0.1: k⁻¹ = 6.93 ms, xmin = 1.17 ms, xmax = 39 ms
    ρa = 0.2: k⁻¹ = 3.61 ms, xmin = 1.17 ms, xmax = 13 ms
    ρa = 0.3: k⁻¹ = 2.3 ms, xmin = 0.6 ms, xmax = 14 ms
    ρa = 0.4: k⁻¹ = 1.54 ms, xmin = 0.6 ms, xmax = 14 ms
    ρa = 0.5: k⁻¹ = 1.34 ms, xmin = 0.44 ms, xmax = 14.5 ms
    ρa = 0.6: k⁻¹ = 1.1 ms, xmin = 0.46 ms, xmax = 15.3 ms
    ρa = 0.7: k⁻¹ = 0.92 ms, xmin = 0.48 ms, xmax = 16 ms
  Traffic 2 (Uniform, xmin = 0): xmax = 26 ms (ρa = 0.1), 13 ms (0.2), 9.33 ms (0.3), 7 ms (0.4), 5.8 ms (0.5), 5.1 ms (0.6), 4.6 ms (0.7)
  Traffic 3 (Deterministic): x0 = 43.33 ms (ρa = 0.1), 21.67 ms (0.2), 15.56 ms (0.3), 11.67 ms (0.4), 9.67 ms (0.5), 8.5 ms (0.6), 7.62 ms (0.7)
  Traffic 4 (Gamma, m = 4): a = 8.224 ms (ρa = 0.1), 4.1 ms (0.2), 2.95 ms (0.3), 2.212 ms (0.4), 1.83 ms (0.5), 1.61 ms (0.6), 1.45 ms (0.7)

Table 7
Measured packet loss probability values.

  Target PL    Measured PL
  1.0E-3       8.79E-04
  1.0E-4       5.30E-05
  1.0E-5       3.00E-06

Fig. 10. (a) Traffic load variation; (b) buffer size for different packet loss probability target values.

On the other hand, the virtual carrier sensing is provided by the MAC by means of the network allocation vector (NAV). 802.11-based nodes (stations or access points) use the NAV to know how long they must defer from accessing the medium because another node is using it. The duration information required to set the NAV is carried in RTS/CTS frames and in all the data and control frames interchanged during a contention period. This virtual carrier sensing reports the medium as idle when the NAV is zero and as busy otherwise. In summary, the medium will be considered idle only when the physical and the virtual carrier sensing mechanisms simultaneously report it as idle.

Additionally, we must take into account the inter-frame space time (IFS) that each node has to wait between the transmission of two consecutive frames (as stated by the IEEE 802.11 standard [22]).

With all this in mind, to measure the server occupancy we consider that a server is idle only in the case that the physical and virtual carrier sensing mechanisms report an idle state and no IFS waiting time is in progress at the node. In all other cases the node server is seen as busy.
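The occupancy criterion just described reduces to a small boolean predicate. The sketch below is our own illustration of that rule; the field names are hypothetical and are not taken from the ns-2/ns-3 implementation:

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    phy_busy: bool       # physical carrier sense: signal above the CS threshold, or PHY transmitting
    nav_until: float     # virtual carrier sense: time until which the NAV reserves the medium
    ifs_until: float     # end of the inter-frame space the node must still respect

def server_busy(state: ChannelState, now: float) -> bool:
    """Server occupancy sample: the server is idle only when physical CS,
    virtual CS (NAV) and the IFS wait all report the medium as free."""
    virtual_busy = state.nav_until > now
    ifs_pending = state.ifs_until > now
    return state.phy_busy or virtual_busy or ifs_pending
```

On every packet arrival, this busy/idle sample, together with the instantaneous number of packets in the system, is what gets averaged into ρa and Na as described in the next subsection.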
It is important to note that, besides buffer overflow, the proposed mechanism indirectly considers the other sources of packet losses experienced by wireless devices. These losses can be produced both by collisions in the shared medium and by physical channel impairments (noise, interference, fading, shadowing, etc.). As stated in [22], frames that are not correctly received at the MAC layer (of the next-hop destination nodes) are retransmitted by the source. These retransmissions imply higher channel utilization. Therefore, the value of the server occupancy measured by our mechanism will be increased accordingly, and so the buffer size will also be modified depending on the channel conditions.

4.1. First simulation scenario. Simple two-node topology

In order to validate and assess the proposed buffer sizing mechanism, it has been verified by means of extensive simulations. To be more confident in the obtained results, it has been fully implemented on both the ns-2 and ns-3 simulators.

First, a simple two-node topology has been chosen to determine the adequate configuration parameters and to illustrate their effects on the system's performance. In this case, the simulations have been carried out on ns-2 version 2.34 using the overhauled IEEE 802.11 MAC and PHY modules [40]. The organized and modular design of these MAC and PHY modules allows clearer and easier channel occupancy measurements and also improves the simulation accuracy in comparison with the native IEEE 802.11 ns-2 model. The MAC/PHY parameters used in these simulations are shown in Table 4.

4.1.1. Method validation under variable traffic load conditions

Following the traffic criteria applied in Section 3.4, it has been considered that Node 1 transmits to Node 2 a UDP traffic flow resulting from the aggregation of four different flows with different interarrival times and packet lengths.

The idea behind this is to obtain a complicated mix of traffics that do not follow a simple pattern or distribution. To explore the behavior of the proposed buffer sizing algorithm under diverse traffic loads, seven different loads have been generated (from ρa = 0.1 to ρa = 0.7) and combined for the tests. This way, increases and reductions of the system load over time have been simulated. For these experiments, the time between significant variations of the load has been set to 50 s. All the simulation rounds presented in this section were 7500 s long, with the purpose of disposing of enough transmitted packets for a reliable estimation of the packet loss probability. Table 5 shows the packet length distributions and Table 6 the interarrival time distributions used for generating these seven loads. The load variation is shown in Fig. 10a. For the sake of clarity, only a part of the simulation (between 3000 and 3500 s) is presented. Other traffic mixes and other load variations have been used to drive simulations, with very similar results.

Table 8
Simulation results for different values of THUP and PL.

  Target PL = 1.0E-3:
    THUP = 0.005: measured PL 7.04E-04, executions rate 194.9E-03, Q variation rate 40.1E-03
    THUP = 0.01:  measured PL 7.20E-04, executions rate 103.3E-03, Q variation rate 34.4E-03
    THUP = 0.05:  measured PL 9.69E-04, executions rate 24.5E-03,  Q variation rate 20.1E-03
    THUP = 0.1:   measured PL 1.33E-03, executions rate 12.7E-03,  Q variation rate 12.0E-03
    THUP = 0.15:  measured PL 2.44E-03, executions rate 10.7E-03,  Q variation rate 9.5E-03
  Target PL = 1.0E-4:
    THUP = 0.005: measured PL 4.50E-05, executions rate 194.9E-03, Q variation rate 53.5E-03
    THUP = 0.01:  measured PL 4.90E-05, executions rate 102.1E-03, Q variation rate 44.0E-03
    THUP = 0.05:  measured PL 8.60E-05, executions rate 26.5E-03,  Q variation rate 24.9E-03
    THUP = 0.1:   measured PL 1.77E-04, executions rate 13.6E-03,  Q variation rate 13.3E-03
    THUP = 0.15:  measured PL 3.01E-04, executions rate 10.7E-03,  Q variation rate 10.5E-03
  Target PL = 1.0E-5:
    THUP = 0.005: measured PL 3.00E-06, executions rate 194.9E-03, Q variation rate 63.2E-03
    THUP = 0.01:  measured PL 3.00E-06, executions rate 102.1E-03, Q variation rate 50.5E-03
    THUP = 0.05:  measured PL 8.00E-06, executions rate 25.2E-03,  Q variation rate 24.4E-03
    THUP = 0.1:   measured PL 3.50E-05, executions rate 13.6E-03,  Q variation rate 13.5E-03
    THUP = 0.15:  measured PL 6.90E-05, executions rate 10.7E-03,  Q variation rate 10.5E-03

Fig. 11. Buffer size for different PL target values and with the adequate configuration parameters (THUP = 0.05, THDW = 0.075, w = 1.0E-4).

Fig. 12. Scenario 2 ns-3 simulation topology.

As previously studied, the input parameters for the proposed algorithm are the average channel occupancy ρa and the average number of packets in the system Na. These values are measured by the node itself each time a packet arrival occurs, and they are averaged by means of an exponentially weighted moving average (EWMA).
resulting from the aggregation of four different flows with different
Fig. 10b shows the resulting buffer size Q for three different
interarrival times and packet lengths. The idea behind this is to ob-
packet loss probability target values. As it was expected the re-
tain a complicated mix of traffics that do not follow a simple pat-
quired buffer is greater when the target PL is more demanding
tern or distribution. To explore the behavior of the proposed buffer
and it continuously adapts to the traffic load variation. The mea-
sizing algorithm under diverse traffic loads, seven different loads
sured packet loss probability values for the mentioned targets
have been generated (from qa = 0.1 to qa = 0.7) and combined for
are shown in Table 7. It can be observed how the measured PL is
the tests. This way, increases and reductions of the system loads
lesser than the target PL for all the cases. Thus, the correct opera-
over time have been simulated. For these experiments, the time
tion of the proposed mechanism has been confirmed.
between significant variations of the load has been considered as
50 s. All the simulation rounds presented in this section were
7500 s long with the purpose of dispose of enough transmitted 4.1.2. Selection of the buffer update times
packets for a reliable estimation of the packet loss probability. Ta- Up to here, it has been verified that the proposed algorithm
ble 5 shows the packet length distributions and Table 6 the inter- works properly since it dynamically adapts the buffer size to
arrival times distributions used for generating these seven loads. the traffic load variations keeping the PL below a previously spec-
The load variation is shown in Fig. 10a. For the sake of clarity, only ified target value. However, due to the fact that the buffer size
a part of the simulation (between 3000 and 3500 s) is presented. updates are done at each packet arrival, it highly overloads the
Other traffic mixes and other load variations have been used to device processor. Two acting thresholds have been defined to
drive simulations with very similar results. avoid this issue by eliminating unnecessary executions of the

Table 9
ns-3 MAC/PHY parameters used in simulations.

  Parameter                                            Value
  AP beacon interval                                   10 s
  STA probe request timeout                            50 ms
  STA assoc. request timeout                           500 ms
  STA max missed beacons                               10
  STA active probing                                   No
  CTS timeout                                          75 µs
  AckTimeout                                           75 µs
  Basic Block Ack Timeout                              281 µs
  Compressed Block Ack Timeout                         99 µs
  SIFS                                                 16 µs
  DIFS                                                 34 µs
  Slot duration                                        9 µs
  Max. propagation delay                               3.333 µs
  Propagation loss model                               Log distance (exponent: 3, ref. distance: 1 m, ref. loss: 46.677 dB)
  Data/control OFDM rate                               6 Mbps
  Max No. of retransmission attempts for RTS/data pkt  7
  RTS/CTS/fragmentation threshold                      2346
  Random number generator                              MRG32k3a

The purpose of these thresholds is thus to reduce the number of algorithm executions per second, incrementing or decrementing the Q value only when the variation of the average channel occupancy exceeds an upper or a lower limit, THUP and THDW respectively. Being conservative, it is interesting not to reduce the buffer size unless the load (ρa) has decreased by around 0.08. Therefore, the value of THDW has been set to 0.075. Regarding THUP, another round of simulations has been performed to correctly establish its value. The aim is to determine the greatest THUP value that still maintains the PL under the specified target value. The larger the THUP, the lower the rate of executions of the algorithm.
Table 8 shows the simulation results for different values of THUP and target PL. For each value of target PL and THUP, the table shows the resulting PL, the rate of algorithm executions and the rate of variations of the Q value. From these results, the increasing threshold has been established as THUP = 0.05, because with this value the buffer size update rate is significantly reduced to around 25.4E-3 transitions per second (the algorithm was executed on average only once every 39.4 s) and the target PL is still accomplished. Besides, as expected, the difference between the rate of algorithm executions and the rate of effective variations of the Q value resulting from such executions decreases as the THUP value increases. This confirms the fact that originally there were too many unnecessary algorithm runs. Specifically, for THUP = 0.005, on average only 27% of algorithm executions yield a resulting Q that differs from its previous value. In contrast, for the selected value of THUP = 0.05, on average 91% of algorithm executions produce an updated, different Q value. For higher THUP values this percentage is higher, but the target PL is no longer fulfilled.

Finally, once all the configuration parameters have been properly selected (w = 1.0E-4, THUP = 0.05, THDW = 0.075), a better performance of the algorithm can be verified from Fig. 11. It shows the Q value for three different PL target values. From here, it can be concluded that the proposed buffer sizing mechanism dynamically adapts the router's buffer size according to the traffic load variations while keeping the PL under the desired value. Besides, the buffer size presents a much more stable behavior (when compared to the previous case in Fig. 10) and therefore the device processor overload is substantially reduced.

Fig. 13. Activation pattern for scenario 2 nodes. [ON/OFF timeline for nodes N1–N5 (WLAN1) and N6–N10 (WLAN2) over the 5000 s simulation, together with the total number of transmitting nodes in each WLAN.]

Fig. 14. Average channel occupancy for scenario 2 nodes: (a) WLAN1 nodes (N0 interface 1, N1–N5); (b) WLAN2 nodes (N0 interface 2, N6–N10). [Each panel plots the average channel occupancy vs. time over the 5000 s simulation.]

4.2. Second simulation scenario. Multiple WLANs topology

For this scenario, the ns-3 simulation topology shown in Fig. 12 has been utilized. It consists of two infrastructure-based WiFi networks with five stations associated with each AP. Additionally, a special node (N0), equipped with two network interface cards (IF1 and IF2) and associated to both WiFi networks, has been considered and subjected to deeper analysis. The APs are connected to the wired hosts (H1 and H2) through 500 Mbps links. Static routing has been configured in the nodes for connectivity purposes. Wireless nodes in WLAN1 (WLAN2) transmit UDP packets to H1 (H2) through AP1 (AP2). Node N0 belongs to both networks. For all the simulations under this scenario, it transmits UDP packets through its two interfaces during the entire simulation time. The other nodes transmit UDP packets according to the activation pattern shown in Fig. 13.

The ON state means that a station is transmitting UDP traffic to its respective wired host at that time instant. The figure also shows the total number of active nodes in each WLAN at any time. As can be seen, the length of the simulations in this scenario is 5000 s. The MAC/PHY configuration parameters used in these simulations are shown in Table 9.

4.2.1. Method validation under variable number of active nodes

In the previous section it has been demonstrated that the proposed mechanism works properly for a simple wireless node adapting its buffer size according to its own traffic load variation. This section starts by demonstrating that the mechanism also works for larger topologies. More specifically, it is shown that the mechanism works for the case in which the channel occupancy and the traffic variations are due to the activation or deactivation of different network nodes. Fig. 14 shows the average channel occupancy measured in both WLANs. Here it can be observed that each node in the network is able to correctly capture the average channel occupancy during its corresponding activity periods. Due to the fact that the N0 node is transmitting through its two interfaces during the whole simulation time, the first graph in Fig. 14a and b, which corresponds to each interface of the N0 node, shows the complete evolution of the average channel occupancy for each of the two WLANs (observe the correspondence with the total number of active nodes for each WLAN in Fig. 13).

With this information, every node is able to autonomously and dynamically adapt its buffer size according to the traffic and channel states. As always, this is done with the constraint of a given packet loss probability target. Fig. 15 shows the buffer size evolution of each N0 interface for three different packet loss probability targets. This confirms the proper functioning of the algorithm for bigger topologies where different nodes are contending for the channel.

4.2.2. Memory utilization efficiency

To illustrate and quantify the improvements achieved with the dynamic buffer sizing mechanism, the concept of memory utilization efficiency η has been defined as the ratio between the area under the buffer occupancy curve (current number of packets waiting in the buffer) and the area under the buffer size curve (maximum number of packets allowed in the buffer). Fig. 16 shows these two curves for interface 2 of the N0 node and for a target PL of 1E-3. For the sake of clarity, a 'zoomed' version is also provided. It can be seen that this memory utilization efficiency is expected to be very low, due to the fact that the nodes, working under normal traffic load conditions, spend a long time with empty buffers.
4.2.1. Method validation under variable number of active nodes traffic load conditions, expend long time with empty buffers.
In the previous section it has been demonstrated that the pro- A round of simulations has been carried out to compare the va-
posed mechanism works properly for a simple wireless node lue of g with the dynamic buffer sizing algorithm against the
adapting its buffer size according to its own traffic load variation. resulting g when the nodes are configured with a static buffer size.
This section starts demonstrating that the mechanism works also To do this comparison in the correct terms, the buffer size selected
for larger topologies. More specifically, it is shown that the mech- with the static allocation has been the one that produces the same
anism works for the case that the channel occupancy and the traf- loss probability than the dynamic allocation. That is, if we run the
fic variations are due to the activation or deactivation of different dynamic algorithm with a target PL equal to 1E-3, we must com-
network nodes. Fig. 14 shows the average channel occupancy mea- pare it with a static buffer size that results in the same PL. For
sured in both WLANs. Here it can be observed that each node in the the scenario under study, the correspondences are: 10 packets
network is able to correctly capture the average channel occupancy
during its corresponding activity periods. Due to the fact that the
N0 node is transmitting by its two interfaces during all the simula- 12 Buffer Size
tion time, the first graph in Fig. 14a and b, which corresponds to Buffer Occupancy
each interface of the N0 node, shows the complete evolution of 10
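As an illustration of the computation each interface can perform once these averages are available, the sketch below maps a measured channel/server occupancy rho and a measured mean number of packets in the system L to a buffer size for a given loss target. It assumes the standard maximum-entropy state distribution of a G/G/1 queue under those two constraints (cf. [12,24]) and approximates the loss probability by the tail probability P(N >= Q); the function name, the clamping bounds and this simplified tail criterion are illustrative assumptions, not the paper's exact expressions.

```python
import math

def me_buffer_size(rho, L, target_pl, q_min=1, q_max=1024):
    """Buffer size from two measured averages via the maximum-entropy
    G/G/1 state distribution:
        p(0) = 1 - rho,   p(n) = rho * (1 - x) * x**(n - 1) for n >= 1,
    with x = 1 - rho / L so that the mean number in the system is L.
    Q is chosen so that the tail mass P(N >= Q) = rho * x**(Q - 1)
    does not exceed the packet loss target (an approximation of PL)."""
    if rho <= 0.0 or L <= rho:        # idle or inconsistent measurements
        return q_min
    x = 1.0 - rho / L                 # geometric tail parameter
    q = 1 + math.ceil(math.log(target_pl / rho) / math.log(x))
    return max(q_min, min(q_max, q))

# Example: occupancy 0.6, mean system size 1.8 packets, target PL = 1E-3
print(me_buffer_size(0.6, 1.8, 1e-3))   # -> 17
```

A node can re-evaluate such an expression whenever its measured averages are refreshed, yielding a buffer size evolution of the kind shown in Fig. 15.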

4.2.2. Memory utilization efficiency

To illustrate and quantify the improvements achieved with the dynamic buffer sizing mechanism, the concept of memory utilization efficiency η has been defined as the ratio between the area under the buffer occupancy curve (current number of packets waiting in the buffer) and the area under the buffer size curve (maximum number of packets allowed in the buffer). Fig. 16 shows these two curves for interface 2 of the N0 node and for a target PL of 1E-3. For the sake of clarity, a 'zoomed' version is also provided. This memory utilization efficiency is expected to be very low, since nodes working under normal traffic load conditions spend long periods with empty buffers.

A round of simulations has been carried out to compare the value of η obtained with the dynamic buffer sizing algorithm against the resulting η when the nodes are configured with a static buffer size. To make this comparison in the correct terms, the buffer size selected for the static allocation is the one that produces the same loss probability as the dynamic allocation. That is, if we run the dynamic algorithm with a target PL equal to 1E-3, we must compare it with a static buffer size that results in the same PL.

[Fig. 15: buffer size Q [pkts] versus time (0–5000 s) for N0 Interface 1 and N0 Interface 2, with one curve per target PL = 1.0E-5, 1.0E-4 and 1.0E-3.]
Fig. 15. Buffer size for the N0 node configured with three different PL target values.

[Fig. 16: buffer size and buffer occupancy [pkts] versus time (0–5000 s) for interface 2 of the N0 node, plus a zoomed view around 1090.3–1090.9 s.]
Fig. 16. Buffer size vs. buffer occupancy for the interface 2 of the N0 node with target PL = 1E-3.
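Since η is the ratio of two time integrals, it can be computed directly from the logged step curves of buffer occupancy and buffer size. A minimal sketch follows; it assumes each curve is available as a time-sorted list of (timestamp, value) change events, and the helper names are illustrative rather than taken from the authors' tooling.

```python
def area_under_step_curve(events, t_end):
    """Integrate a right-continuous step curve given as time-sorted
    (time, value) change events, from the first event up to t_end."""
    area = 0.0
    for (t0, v0), (t1, _) in zip(events, events[1:]):
        area += v0 * (t1 - t0)
    t_last, v_last = events[-1]
    area += v_last * (t_end - t_last)   # hold the last value until t_end
    return area

def memory_utilization_efficiency(occupancy_events, size_events, t_end):
    """eta = area under buffer occupancy / area under buffer size."""
    return (area_under_step_curve(occupancy_events, t_end) /
            area_under_step_curve(size_events, t_end))

# Toy example: a 10-packet buffer held for 100 s; the occupancy curve
# integrates to 25 pkt*s, so eta = 25 / 1000 = 0.025
occ = [(0.0, 0), (40.0, 1), (45.0, 0), (70.0, 2), (80.0, 0)]
size = [(0.0, 10)]
print(memory_utilization_efficiency(occ, size, 100.0))   # -> 0.025
```

Applied to curves like those of Fig. 16, this kind of computation produces efficiency values of the order reported in Table 10.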

Table 10
Memory efficiency comparison for the node with two interfaces configured with static vs. dynamic buffer size.

              Q for static [pkts]   η (static Q)   η (dynamic Q)   % Improvement
Interface 1   10                    0.03984        0.05728          43.77
              16                    0.02669        0.03961          48.41
              22                    0.01976        0.03114          57.59
Interface 2   10                    0.02772        0.04424          59.56
              16                    0.01780        0.03234          81.69
              22                    0.01313        0.02649         101.85

For the scenario under study, the correspondences are: 10 packets for PL = 1E-3, 16 packets for PL = 1E-4 and 22 packets for PL = 1E-5. These static buffer size values have also been determined by simulation.

Table 10 summarizes the simulation results for the node with two interfaces. It can be verified that the proposed mechanism achieves more efficient memory utilization under all the simulated conditions and for both node interfaces. The efficiency improvement is around 50% for interface 1 and around 81% for interface 2 (averaged over the three PL targets). It can also be noticed that, for both the static and the dynamic buffer sizing schemes, the memory utilization efficiency decreases as the target PL becomes more stringent: larger buffer sizes are required to reach such targets, so the probability of underutilization increases.
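The '% Improvement' column of Table 10 is simply the relative gain of the dynamic efficiency over the static one; a quick check with two of the rows (values copied from Table 10) is shown below. The remaining rows follow in the same way, up to rounding of the printed η values.

```python
def improvement(eta_static, eta_dynamic):
    """Relative efficiency gain of dynamic over static sizing, in percent."""
    return 100.0 * (eta_dynamic - eta_static) / eta_static

print(round(improvement(0.02669, 0.03961), 2))   # 48.41 (interface 1, Q = 16)
print(round(improvement(0.01780, 0.03234), 2))   # 81.69 (interface 2, Q = 16)
```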
5. Conclusion and future work

It is a well-known fact that the evaluation of operating parameters (buffer occupancy, waiting times, etc.) for the G/G/1 queue requires detailed knowledge of the interarrival and service times. In practice, however, that knowledge is seldom obtainable, and the engineer has to make do with the limited amount of data available about the system.

To deal with this problem, in this paper we used a maximum entropy approach and verified that it offers promising possibilities. In fact, we have shown, from a practical point of view, that the buffer sizing problem can be solved with very good accuracy using only two measurements: the server occupancy and the average number of packets in the transmission system (both seen by an arriving packet). Based on this, we have designed and implemented a dynamic buffer sizing algorithm for wireless devices working over shared media. The behavior of the mechanism has been verified by simulation using different simulators, network scenarios and traffic load conditions. The results demonstrate that the wireless devices become capable of self-configuring their buffer sizes and of efficiently self-managing their memory resources while keeping their packet loss rate bounded. A proper setting of the configuration parameters can considerably reduce the processing load imposed by our mechanism. Results also show that the system exhibits a proper convergence rate and stable behavior in all the analyzed scenarios. In larger topologies the system correctly captures the channel occupancy and load variations, so it still fulfills its objective.

Most of the current literature associates the buffer sizing problem with the dynamics of the TCP congestion control mechanism, and consequently the performance evaluation parameters are strongly related to this type of traffic (throughput, delay). Since this work presents an alternative buffer sizing method focused on real-time traffic, new performance evaluation metrics are introduced. The results demonstrate a significant improvement in the memory utilization efficiency achieved with the proposed mechanism in comparison with a static buffer allocation. It is also shown that a lower amount of memory is required by nodes configured with dynamic buffers.

Finally, the efficiency improvement achieved with the proposed mechanism can benefit the self-organization capabilities of wireless mesh routers with multiple interfaces, and can also be valuable for resource-constrained devices such as wireless sensor nodes. The better memory utilization could also stimulate and facilitate the design and use of all-optical routers, where small buffers are appreciated.

Further work is in progress to implement and test the mechanism in real devices using the MadWifi Linux kernel drivers [41]. In addition, we are currently working on the application of the maximum entropy principle in devices where different service policies are applied to different traffic flows. Performance evaluation for TCP and for a mix of TCP and UDP flows is another work in progress. Evaluation in sensor and mesh networks with different traffic types is also planned.

Acknowledgments

This work was supported by the Spanish Research Council under projects COPPI (TEC2011-26491) and CONSEQUENCE (TEC2010-20572-C02-02), and by the Ecuadorian SENESCYT (2010) and Salesian Polytechnic University grant programs.

The authors thank the anonymous reviewers for their careful reading and insightful comments that have helped in improving the presentation of this paper.

References

[1] J. Gettys, K. Nichols, Bufferbloat: dark buffers in the internet, Commun. ACM 55 (1) (2012) 57–65.
[2] C. Kreibich, N. Weaver, B. Nechaev, V. Paxson, Netalyzr: illuminating the edge network, in: Proc. of the 10th ACM SIGCOMM Conference on Internet Measurement (IMC '10), ACM, New York, NY, USA, 2010, pp. 246–259.
[3] N. Beheshti, Y. Ganjali, R. Rajaduray, D. Blumenthal, N. McKeown, Buffer sizing in all-optical packet switches, in: Optical Fiber Communication Conference, 2006 and the 2006 National Fiber Optic Engineers Conference, OFC 2006, 2006.
[4] A. Vishwanath, V. Sivaraman, G.N. Rouskas, Anomalous loss performance for mixed real-time and TCP traffic in routers with very small buffers, IEEE/ACM Trans. Networking 19 (4) (2011) 933–946.
[5] M. Enachescu, Y. Ganjali, A. Goel, N. McKeown, T. Roughgarden, Part III: routers with very small buffers, SIGCOMM Comput. Commun. Rev. 35 (3) (2005) 83–90.
[6] C. Villamizar, C. Song, High performance TCP in ANSNET, SIGCOMM Comput. Commun. Rev. 24 (5) (1994) 45–60.
[7] L.L.H. Andrew, T. Cui, J. Sun, M. Zukerman, K.-T. Ko, S. Chan, Buffer sizing for nonhomogeneous TCP sources, IEEE Commun. Lett. 9 (6) (2005) 567–569.
[8] G. Appenzeller, I. Keslassy, N. McKeown, Sizing router buffers, SIGCOMM Comput. Commun. Rev. 34 (4) (2004) 281–292.
[9] N. Beheshti, Y. Ganjali, M. Ghobadi, N. McKeown, G. Salmon, Experimental study of router buffer sizing, in: Proceedings of the 8th ACM SIGCOMM Conference on Internet Measurement (IMC '08), ACM, New York, 2008, pp. 197–210.
[10] T. Li, D. Leith, D. Malone, Buffer sizing for 802.11-based networks, IEEE/ACM Trans. Networking 19 (1) (2011) 156–169.
[11] H. Akimaru, K. Kawashima, Teletraffic, Springer, 1999.
[12] D. Kouvatsos, A maximum entropy analysis of the G/G/1 queue at equilibrium, J. Oper. Res. Soc. 39 (2) (1988) 183–200.
[13] V. Mahendran, T. Praveen, C.S.R. Murthy, Buffer dimensioning of delay-tolerant network nodes – a large deviations approach, in: Proceedings of the 13th International Conference on Distributed Computing and Networking (ICDCN 2012), LNCS 7129, Springer-Verlag, Berlin, 2012, pp. 502–512.
[14] J. Liu, S. Chan, H. Vu, Performance modeling of broadcast polling in IEEE 802.16 networks with finite-buffered subscriber stations, IEEE Trans. Wireless Commun. 11 (12) (2012) 4514–4523.
[15] A. Dhamdhere, H. Jiang, C. Dovrolis, Buffer sizing for congested Internet links, in: Proceedings IEEE INFOCOM 2005, vol. 2, 13–17 March 2005.

[16] A. Dhamdhere, C. Dovrolis, Open issues in router buffer sizing, SIGCOMM Comput. Commun. Rev. 36 (1) (2006) 87–92.
[17] K. Jamshaid, B. Shihada, L. Xia, P. Levis, Buffer sizing in 802.11 wireless mesh networks, in: 2011 IEEE 8th International Conference on Mobile Adhoc and Sensor Systems (MASS), 2011, pp. 272–281.
[18] Y. Zhang, D. Loguinov, ABS: adaptive buffer sizing for heterogeneous networks, Comput. Networks 54 (14) (2010) 2562–2574.
[19] G. Raina, D. Towsley, D. Wischik, Part II: control theory for buffer sizing, SIGCOMM Comput. Commun. Rev. 35 (3) (2005) 79–82.
[20] C. Lee, D.K. Lee, S. Moon, Unmasking the growing UDP traffic in a campus network, in: PAM 2012, LNCS, vol. 7192, 2012, pp. 1–10.
[21] M. Zhang, M. Dusi, W. John, C. Chen, Analysis of UDP traffic usage on internet backbone links, in: Ninth Annual International Symposium on Applications and the Internet, SAINT '09, 2009, pp. 280–281.
[22] IEEE standard for information technology – telecommunications and information exchange between systems – local and metropolitan area networks – specific requirements. Part 11: wireless LAN medium access control (MAC) and physical layer (PHY) specifications, IEEE Std 802.11-2012 (Revision of IEEE Std 802.11-2007), 2012, pp. 1–2793.
[23] D.D. Kouvatsos, Maximum entropy and the G/G/1/N queue, Acta Inf. 23 (5) (1986) 545–565.
[24] M.A. El-Affendi, D.D. Kouvatsos, A maximum entropy analysis of the M/G/1 and G/M/1 queueing systems at equilibrium, Acta Inf. 19 (4) (1983) 339–355.
[25] Kuo-Hsiung Wang, Shu-Lung Chuang, Wen-Lea Pearn, Maximum entropy analysis to the N policy M/G/1 queueing system with a removable server, Appl. Math. Modell. 26 (12) (2002) 1151–1162.
[26] Kuo-Hsiung Wang, Tsung-Yin Wang, Wen-Lea Pearn, Maximum entropy analysis to the N policy M/G/1 queueing system with server breakdowns and general startup times, Appl. Math. Comput. 165 (1) (2005) 45–61.
[27] Yu Gu, A. McCallum, D. Towsley, Detecting anomalies in network traffic using maximum entropy estimation, in: IMC '05: Proceedings of the 5th ACM SIGCOMM Conference on Internet Measurement, 2005, pp. 345–350.
[28] The network simulator ns-2. Available online at: <http://www.isi.edu/nsnam/ns/>.
[29] ns-3 network simulator. Available online at: <http://www.nsnam.org/>.
[30] C. Shannon, W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, 1972.
[31] T.M. Cover, J.A. Thomas, Elements of Information Theory, John Wiley, 1991.
[32] R.K. Sundaram, A First Course in Optimization Theory, Cambridge University Press, 1996.
[33] E.C. Molina, The theory of probability applied to telephone trunking problems, Bell Syst. Tech. J. 1 (2) (1922) 69–81.
[34] W.E. Leland, M.S. Taqqu, W. Willinger, D.V. Wilson, On the self-similar nature of Ethernet traffic (extended version), IEEE/ACM Trans. Networking 2 (1) (1994) 1–15.
[35] M. Fras, J. Mohorko, Z. Cucej, Packet size process modeling of measured self-similar network traffic with defragmentation method, in: 15th International Conference on Systems, Signals and Image Processing, IWSSIP 2008, 2008, pp. 253–256.
[36] Z. Sahinoglu, S. Tekinay, On multimedia networks: self-similar traffic and network performance, IEEE Commun. Mag. 37 (1) (1999) 48–52.
[37] L.J. de la Cruz, E. Pallares, J.J. Alins, J. Mata, Self-similar traffic generation using a fractional ARIMA model. Application to the VBR MPEG video traffic, in: SBT/IEEE International Telecommunications Symposium, ITS '98 Proceedings, 1998, pp. 102–107.
[38] B.K. Ryu, A. Elwalid, The importance of long-range dependence of VBR video traffic in ATM traffic engineering: myths and realities, SIGCOMM Comput. Commun. Rev. 26 (4) (1996) 3–14.
[39] T.G. Robertazzi, Computer Networks and Systems: Queueing Theory and Performance Evaluation, third ed., Springer-Verlag, New York, 2000.
[40] Q. Chen, F. Schmidt-Eisenlohr, D. Jiang, M. Torrent-Moreno, L. Delgrossi, H. Hartenstein, Overhaul of IEEE 802.11 modeling and simulation in ns-2, in: Proceedings of the 10th ACM Symposium on Modeling, Analysis, and Simulation of Wireless and Mobile Systems (MSWiM '07), ACM, New York, 2007, pp. 159–168.
[41] The MadWifi project. Available online at: <http://madwifi-project.org/>.
