
Context Caching using Neighbor Graphs for Fast Handoffs in a Wireless Network


Arunesh Mishra Min-ho Shin William A. Arbaugh
Department of Computer Science
University of Maryland
College Park, Maryland 20742
{arunesh,mhshin,waa}@cs.umd.edu

Abstract— User mobility in wireless data networks is increasing because of technological advances and the desire for voice and multimedia applications. These applications, however, require fast handoffs between base stations to maintain the quality of the connections. Previous work on context transfer for fast handoffs has focused on reactive methods, i.e. the context transfer occurs after the mobile station has associated with the next base station or access router. In this paper, we describe the use of a novel and efficient data structure, neighbor graphs, which dynamically captures the mobility topology of a wireless network, as a means for pre-positioning the station's context, ensuring that the station's context always remains one hop ahead. From experimental and simulation results, we find that the use of neighbor graphs reduces the layer 2 handoff latency due to reassociation by an order of magnitude, from 15.37 ms to 1.69 ms, and that the effectiveness of the approach improves dramatically as user mobility increases.

Index Terms— System Design, Simulations, Experimentation with Testbed, Network Measurements.

I. INTRODUCTION

Wireless networks, specifically those based on the IEEE 802.11 standard (Wi-Fi), are experiencing rapid growth due to their low cost and unregulated bandwidth. As a result of this tremendous growth, pockets of connectivity have been created, not unlike those created during the first few years of the cellular systems. The logical next step for Wi-Fi based networks is support for fast roaming within the same administrative domain, and eventually between overlapping pockets of connectivity or different administrative domains. Thus, we expect users to become more mobile once technological advances such as multi-mode (Wi-Fi and GSM/CDMA cellular) handsets become more available, much as users became more mobile in the traditional cellular networks once handsets became smaller and more affordable.

Previous studies of wireless network mobility have shown that users tend to roam in what we call discrete mobility, where the user utilizes the network while stationary (or connected to the same base station), ceases operation before moving, and continues using the network only after reaching a new location [1], [2], [3], [4]. That is, users do not usually move while using the network, because the majority of current network applications and equipment do not easily lend themselves to what we call continuous mobility, where the user moves while utilizing the network.

Voice based applications are the usual application in continuous mobility, as seen in the current cellular networks, and we expect voice and multimedia applications will serve as the catalyst for continuous mobility in Wi-Fi networks, much as they did for the cellular networks once multi-mode handsets and end-user applications become more widely available.

Supporting voice and multimedia with continuous mobility implies that the total latency (layer 2 and layer 3) of handoffs between base stations must be low. Specifically, the overall latency should not exceed 50 ms to prevent excessive jitter [5]. Unfortunately, the vast majority of Wi-Fi based networks do not currently meet this goal, with the layer 2 latencies contributing approximately 90% of the overall latency, which exceeds 100 ms [6], [7]. [6] suggests various mechanisms to reduce the layer 2 latency to within 20 to 60 ms depending on the client. Handoffs involve the transfer of station context [8], which is the station's session, QoS and security related state information, via inter-access point communication. This transfer alone adds an average of 15.37 ms to the handoff delay.

One method of reducing the context transfer latency of handoffs is to transfer or cache context ahead of a mobile station in a pro-active fashion. Unfortunately, the previous work on context transfer has focused solely on reactive context transfers, i.e. the context transfer is initiated only after the mobile station associates with the next base station or access router, resulting in an overall increase in the latency of the handoff rather than a reduction [7], [9]. The problem with pro-active approaches, however, is how to determine the set of potential next base stations without examining the network topology and manually creating the set.

In this paper, we introduce a novel and efficient data structure, the neighbor graph, which dynamically captures the mobility topology of a wireless network through real-time examination of the handoffs occurring in the network, in either a distributed fashion, e.g. at a base station or access point, or in a centralized fashion, e.g. at the authentication server.

A neighbor graph is an undirected graph with each edge representing a mobility path between the vertices, or access points. Therefore, given any access point (vertex), its neighbors in the graph represent the set of potential next access points. While there are numerous uses for this information, we focus in this paper on using it for pro-active context transfers, ensuring that a mobile station's context is always one hop ahead.

This research was supported in part by a grant from Samsung Electronics and the US National Institute of Standards and Technology.



In this work, we have implemented neighbor graphs over the IEEE Inter-Access Point Protocol ([9]). We find that with neighbor graphs the reassociation latency reduces from 15.37 ms to 1.69 ms. We also find through simulations that as users become more mobile the effectiveness of our solution increases, i.e. the context cache hit ratio increases to over 98% in most cases with reasonable cache sizes. The proactive context caching and forwarding algorithm presented in this work has been included in the final version of the IAPP standard [10].

The next section discusses the related work and section III briefly discusses the background. Section IV presents the neighbor graph datastructure along with the proactive caching algorithm and some performance analysis. Section V discusses the experiment and simulation results of the approach. Section VI concludes the work.

II. RELATED WORK

The related work is broken into two distinct categories: context transfers, and algorithms that dynamically generate the topology of wireless networks.

The previous work on context transfers has mostly focused on the IP layer, using reactive transfer mechanisms [7] and general purpose transfer mechanisms without detailing transfer triggers [11]. The only previous work on link layer context caching was also originally reactive, until neighbor graphs were recently added [10].

The IP layer context transfer mechanisms focus solely on the transfer of context from access router to access router, although Koodli [7] briefly mentions access points, indicating that access routers and access points can be co-located. These context transfer mechanisms are designed solely for access routers and are reactive rather than pro-active as in neighbor graphs [7]. In the case of the SEAMOBY context transfer protocol, the protocol provides a generic framework for either reactive or pro-active context transfers [11]. The framework, however, does not define methods for implementing either reactive or pro-active context transfers. As a result, our approach can easily be integrated into the SEAMOBY protocol, providing a pro-active context transfer mechanism as it did with IAPP.

The previous work on topology algorithms has focused on pre-authentication, automated bridge learning, and sharing of public key certificates [12], [13], [14].

Pack proposes that pre-authentication be performed to the k most likely next access points. The k access points are selected using a weighted matrix representing the likelihood (based on the analysis of past network behavior) that a station associated to APi will move to APj. The mobile station may select only the most likely next access points to pre-authenticate, or it may select all of the potential next access points [12], [13]. Pack uses the notion of a frequent handoff region (FHR) to represent the adjacent access points, or neighbors, which is obtained by examining the weighted matrix. The weights within the matrix are based on an O(n^2) analysis of the authentication server's log information, using the inverse of the ratio of the number of handoffs from APi to APj to the time spent by the mobile station at APi prior to the handoff. While the FHR notion represents neighboring access points, it requires O(n^2) computation and space, where n is the number of access points in the network, and it must be created at the authentication server. Furthermore, the FHR notion does not quickly adapt to changes in the network topology. This is in contrast to neighbor graphs, which require O(degree(ap)) computation and storage space per AP^1 and which quickly adapt to changes in the network topology. Additionally, neighbor graphs can be utilized either in a distributed fashion, at each access point or client, or in a centralized fashion at the authentication server.

^1 The cache consumes O(1) storage and computation.

Capkun et al. leverage station mobility to create an ad-hoc public key infrastructure by neighboring stations exchanging public key certificates to create a certificate graph [14]. The idea is that a neighboring station can most likely verify the identity of another station, and after successfully doing so add the certificate to its graph. The resultant graph represents the mobility pattern with respect to other stations. While this mobility graph has a different focus and use than neighbor graphs, it nonetheless uses the notion of neighbors, and we include a discussion of it for completeness.

In the 1980's, to overcome the geographic limitation of a LAN, LANs were connected using bridges. In this approach, a bridge connecting two or more links listens promiscuously to all packets and forwards them to the link on which the destination station is known to reside. A bridge also dynamically learns the locations of stations so that it can forward traffic to the correct link. In [15], Perlman proposed a self-configuring and distributed algorithm that allows bridges to learn, by communicating with other bridges, the loop-free subset of the topology that connects all LANs. This subset is required to be loop-free (a spanning tree) to avoid unnecessary congestion caused by infinitely circulating packets. This Spanning Tree Algorithm / Protocol [16] is self-configuring because the only a priori information necessary in a bridge is its own unique ID (MAC address). The algorithm requires a very small bounded amount of memory per bridge and a bounded amount of communication bandwidth for each LAN. Furthermore, there is no requirement for modifications to stations, and the algorithm interoperates with older bridges. Neighbor graphs are also self-configuring and operate in the same manner, examining network traffic, specifically layer 2 management frames or AAA messages, to create the wireless network topology dynamically. The two algorithms, and their purposes, are nonetheless different.

III. BACKGROUND

A. IEEE 802.11 Handoffs

The IEEE 802.11 MAC specification allows for two modes of operation: ad hoc and infrastructure mode [17]. In ad hoc mode, two or more wireless stations (STAs) recognize each other (through beacons) and establish a peer-to-peer relationship.



[Fig. 1. The handoff procedure by IEEE 802.11 and IEEE 802.11f: probe request/response exchanges (messages A-D, probe delay), 802.11 authentication (messages E-F, authentication delay), and reassociation request/response (messages G-H, reassociation delay), with the IAPP Security Block, Move-Notify and Move-Response exchange between the new-AP and the old-AP occurring during reassociation.]

In infrastructure mode, an access point (AP) provides network connectivity to its associated STAs, which together form a Basic Service Set (BSS). Multiple APs that are part of the same wireless network form an Extended Service Set (ESS). Because of mobility, load conditions, or degrading signal strength, a STA might move to another AP within the same wireless network. This process is referred to as a handoff: the mechanism, or sequence of messages between a STA and the APs, that results in a transfer of physical layer connectivity and state information from one AP to another with respect to the STA. While the process involves various MAC and network layer functions, we only focus on the layer 2 aspects in this paper.

We use the following terms in the paper: STA, station, client or user refers to a computing device capable of performing the role of an 802.11 mobile station. We use old-AP to refer to the AP to which a STA was associated prior to a handoff, and new-AP to refer to the AP to which the STA is associated after the handoff. The term current-AP refers to the AP to which a STA is currently associated. The term distribution system (DS) refers to the interconnection architecture for communication between the APs and other network devices (authentication server, routers, etc.) which together form the ESS.

Figure 1 shows the sequence of steps that are designed to occur during a handoff. The first step (not indicated in the figure) is the termination of a STA's association to the current AP. Either entity can initiate a disassociation for various reasons ([17], page 53). Due to mobility or degradation of physical connectivity (signal strength), it might not be possible for the STA or the AP to send an 802.11 disassociate message. In such cases, a timeout on inactivity, communication between APs, or the receipt of an IAPP Move-Notify message (discussed later) terminates the association.

During the second step, the STA scans for APs by either sending probe request messages (active scan) or by listening for beacon messages (passive scan) broadcast by APs on channels of interest. Messages A through D in figure 1 show the active scan. For each channel, a probe request (messages A and C) is sent by the STA and probe responses (messages B and D) are received from the APs in the vicinity of the STA. After scanning all intended channels, the STA selects the new-AP based on the data rates and signal strength^2. Probe delay is the time spent by the STA in scanning and selecting the next AP. After the probe, the STA and new-AP exchange 802.11 authentication frames, and the latency incurred is the authentication delay (messages E and F). After authentication, the STA sends an 802.11 reassociation request to the AP (message G) and receives a reassociation response from the AP (message H), which completes the handoff process. The latency incurred during this exchange is the reassociation delay, and this process is called the reassociation process^3. During reassociation, the APs involved exchange station context information. This is achieved through the use of the Inter Access Point Protocol (IAPP, [10]). The next subsection discusses the broader role of IAPP in managing the distribution system (DS).

^2 The exact method is proprietary for each STA.
^3 For a detailed analysis of the probe and authentication process, see [6].

B. Inter Access Point Protocol

An early draft of the IAPP recommended best practice specified two types of interaction for completing context transfer [9]. The first form of interaction occurs between APs during a handoff and is achieved by the IAPP protocol; the second form of interaction is between an AP and the RADIUS server [18].

IAPP plays a significant role during a handoff. The two main objectives achieved by inter-access point communication are: (a) Single Association Invariant: maintaining a single association of a station with the wireless network, and (b) the secure transfer of state and context information between the APs involved in a reassociation. The client context information [8] can include, but is not limited to, IP flow context, security context, QoS (diffserv or intserv as needed), header compression and accounting/AAA information.

Association and reassociation events change a station's point of access to the network. When a station first associates to an AP, the AP broadcasts an Add-Notify message notifying all APs of the station's association. Upon receiving an Add-Notify, the APs clear all stale associations and state for the station. This enforces a unique association for the station with respect to the network. When a station reassociates to a new-AP, it informs the old-AP of the reassociation using IAPP messages, see figure 1.

At the beginning of a reassociation, the new-AP can optionally send a Security Block message to the old-AP, which the old-AP acknowledges with an Ack-Security-Block message. This message contains security information to establish a secure communication channel between the APs. The new-AP then sends a Move-Notify message to the old-AP, requesting station context information and notifying the old-AP of the reassociation. The old-AP responds by sending a Move-Response message.



For confidentiality of the context information, IAPP recommends the use of a RADIUS server (to obtain shared keys) to secure the communication between APs. The RADIUS server can also provide the mapping between the MAC addresses and the IP addresses of the APs, which is necessary for IAPP communication at the network layer.

Although the IAPP communications serve to fulfill the mandatory DS functions, they invariably increase the overall handoff latency because of their reactive nature.

IV. NEIGHBOR GRAPHS

In this section, we describe the notion of neighbor graphs and the abstractions they provide. As an application, we present a proactive context caching algorithm based on neighbor graphs to improve the reassociation latency. As seen in figure 1, the reassociation phase primarily involves the transfer of station context from the old-AP to the new-AP. In order to improve the reassociation latency, the context transfer process (using IAPP) must be separated from the reassociation process. This can be achieved by providing the new-AP with the client-context prior to the handoff, i.e. pro-actively. Since we are unable to predict the mobile station's movement, we need a method for determining the candidate set of potential new-APs to which to transfer the context prior to the handoff. The neighbor graph datastructure provides the basis for identifying this candidate set.

A. Definitions

Reassociation Relationship: Two APs, say api and apj, are said to have a reassociation relationship if it is possible for an STA to perform an 802.11 reassociation through some path of motion between the physical locations of api and apj; see the dotted lines in figure 2.

The reassociation relationship depends on the placement of APs, signal strength and other topological factors, and in most cases corresponds to the physical distance (vicinity) between the APs. Given a wireless network, we dynamically construct a datastructure called a neighbor graph which captures the reassociation relationships between access points.

Association Pattern: Define the association pattern Γ(c) for client c as {(ap1, t1), (ap2, t2), ..., (apn, tn)}, where api is the AP to which the client reassociates (new-AP) at time ti, and {(api, ti), (api+1, ti+1)} is such that the handoff occurs from api to api+1 at time ti+1; the client maintains continuous logical network connectivity from time t1 to tn.

AP Neighbor Graph: Define an undirected graph G = (V, E) where V = {ap1, ap2, ..., apn} is the set of all APs (constituting the wireless network under consideration), and there is an edge e = (api, apj) between api and apj if they have a reassociation relationship. Define Neighbor(api) = {apik : apik ∈ V, (api, apik) ∈ E}, i.e. the set of all neighbors of api in G.

The neighbor graph can be implemented either in a centralized or a distributed manner. In this work, we implement it in a distributed fashion, with each AP storing its set of neighbors. The construction and maintenance of this datastructure (in a distributed fashion) is discussed further in section IV-C.

[Fig. 2. An example placement of APs (physical topology of the wireless network) and the corresponding neighbor graph.]
B. Proactive Caching and Locality of Mobility

Caching strategies are based on some locality principle, e.g. locality of reference, locality of execution, etc. In this environment, we have locality in the client's association pattern. In this section we discuss the proactive caching strategy, which is based on locality of mobility.

We define the Locality of Mobility principle to state that, for a client c with association pattern Γ(c) as defined above, any two successive APs in Γ(c), say api and api+1, satisfy the reassociation relationship. This concept of locality is the abstraction captured by the neighbor graph as a datastructure.

The following functions/notations are used to describe the algorithm:
1) Context(c): Denotes the context information related to client c.
2) Cache(apk): Denotes the cache datastructure maintained at apk.
3) Propagate_Context(api, c, apj): Denotes the propagation of client c's context information from api to apj. This can be achieved by sending a Cache-Notify message from api to apj (as discussed later in section IV-E).
4) Obtain_Context(apfrom, c, apto): apto obtains Context(c) from apfrom using an IAPP Move-Notify message, as discussed in section III-B.
5) Remove_Context(apold, c, apnghbr): apold sends a Cache-Invalidate message to apnghbr in order to remove Context(c) from Cache(apnghbr).
6) Insert_Cache(apj, Context(c)): Insert the context of client c into the cache datastructure at apj, performing an LRU replacement if necessary.

The Proactive Caching Algorithm: The access points use the algorithm shown below (Algorithm 1) for proactive caching. At each AP, the cache replacement algorithm used is a least recently used (LRU) approach. The cache can be implemented as a hash table over a linked list sorted by insertion time; this gives a cache lookup of O(1) and a cache replacement of O(1) as well. The method Propagate_Context requires sending the context to each neighbor and hence incurs an execution cost of O(degree(apj) * propagation_time), where propagation_time is the round-trip time for communication between the two APs under consideration.



Algorithm 1 Proactive Caching Algorithm (apj, c, api)
Require: The algorithm executes on AP apj; api is the old-AP; c is the client.
 1: if client c associates to apj then
 2:   for all api ∈ Neighbor(apj) do
 3:     Propagate_Context(apj, c, api)
 4:   end for
 5: end if
 6: if client c reassociates to apj from apk then
 7:   if Context(c) not in Cache(apj) then
 8:     Obtain_Context(apk, c, apj)
 9:   end if
10:   for all api ∈ Neighbor(apj) do
11:     Propagate_Context(apj, c, api)
12:   end for
13: end if
14: if client c reassociates to apk from apj then
15:   for all api ∈ Neighbor(apj) do
16:     Remove_Context(apj, c, api)
17:   end for
18: end if
19: if apj received Context(c) from api then
20:   Insert_Cache(apj, Context(c))
21: end if
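For illustration, the following is a minimal Python sketch of Algorithm 1 together with the LRU context cache described above. The class and method names (AccessPoint, ContextCache, send_cache_notify, and so on) are assumptions made for the sketch; it is not the driver implementation used in our testbed, and the IAPP messaging is reduced to stub methods.

    from collections import OrderedDict

    class ContextCache:
        """LRU cache of client contexts: a hash table ordered by insertion time (section IV-B)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()              # client MAC -> context, oldest first

        def insert(self, client, context):
            if client in self.entries:
                del self.entries[client]
            elif len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)      # evict the oldest entry (LRU replacement)
            self.entries[client] = context            # O(1) insertion

        def lookup(self, client):
            return self.entries.get(client)           # O(1) lookup

        def remove(self, client):
            self.entries.pop(client, None)

    class AccessPoint:
        """Proactive caching behaviour of Algorithm 1 at a single AP (ap_j)."""
        def __init__(self, ap_id, cache_size=40):
            self.ap_id = ap_id
            self.neighbors = set()                    # Neighbor(ap_j), learned as in section IV-C
            self.cache = ContextCache(cache_size)

        # Lines 1-5: client c associates to ap_j.
        def on_associate(self, client, context):
            for nbr in self.neighbors:
                self.send_cache_notify(nbr, client, context)        # Propagate_Context

        # Lines 6-13: client c reassociates to ap_j from ap_k.
        def on_reassociate_in(self, client, old_ap):
            context = self.cache.lookup(client)
            if context is None:                                     # cache miss
                context = self.send_move_notify(old_ap, client)     # Obtain_Context via IAPP
            for nbr in self.neighbors:
                self.send_cache_notify(nbr, client, context)        # Propagate_Context

        # Lines 14-18: client c reassociates away from ap_j to ap_k.
        def on_reassociate_out(self, client):
            for nbr in self.neighbors:
                self.send_cache_invalidate(nbr, client)             # Remove_Context

        # Lines 19-21: ap_j receives Context(c) from a neighbor.
        def on_cache_notify(self, client, context):
            self.cache.insert(client, context)                      # Insert_Cache (LRU if needed)

        # Stubs standing in for the IAPP messages of section IV-E.
        def send_cache_notify(self, neighbor, client, context): ...
        def send_move_notify(self, old_ap, client): ...
        def send_cache_invalidate(self, neighbor, client): ...

A cache hit at the new-AP corresponds to on_cache_notify having already run at that AP before the client arrived, which is exactly the pre-positioning that the neighbor graph enables.
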
C. Generation of the Neighbor Graph

The neighbor graph can be automatically generated (i.e. learned) by the individual access points over time. There are two ways that APs can learn the edges in the graph. Firstly, when an AP receives an 802.11 reassociation request frame from a STA, the message contains the MAC address (BSSID) of the old-AP and hence establishes the reassociation relationship between the two APs. Secondly, receipt of a Move-Notify message from another AP via IAPP also establishes the relationship. These two methods of adding edges are complementary, and the graph remains undirected.

Each AP maintains its edges locally in an LRU fashion. This is necessary in order to eliminate outliers, i.e. incorrectly added edges. One situation where this can happen is a client that goes into power save mode and potentially wakes up in a different location, reassociating to some other AP on the wireless network. A timestamp based LRU approach guarantees the freshness of the neighbor graph and eliminates such outlier edges over time. The effect of the outliers on the performance of the algorithm is nominal, as an outlier edge merely results in an additional caching of a client's context for a short amount of time (until the LRU aging removes the edge).

The autonomous generation also eliminates the need for any site survey or other manual construction method. As a result, it also makes the datastructure adaptive to dynamism in the reassociation relationship (i.e. changes in AP placement, physical topology changes, etc.).

The graph is generated by executing the following steps at each AP, where aphost is the AP on which the algorithm executes:
1) Receipt of a reassociation request: When a client c reassociates to aphost from api, add the edge (aphost, api), i.e. aphost adds api to its list of neighbors.
2) Receipt of a Move-Notify: When aphost receives a Move-Notify from api, add api to the list of neighbors.

Thus the graph is constructed independently by each AP. The first client to traverse an edge incurs a high handoff latency, but the edge is then added to the graph and the cost is amortized over subsequent handoffs. Thus, after O(|E|) high latency handoffs, the algorithm converges to its expected performance. The algorithm has an O(1) running time per reassociation.
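A minimal sketch of this learning process follows, under the assumption of a per-AP adjacency table aged by timestamps; the names and the aging threshold are illustrative rather than prescribed by IAPP.

    import time

    class NeighborGraph:
        """Per-AP adjacency table: neighbor BSSID -> time the edge was last observed."""
        def __init__(self, max_edge_age=7 * 24 * 3600):    # aging threshold is illustrative
            self.edges = {}
            self.max_edge_age = max_edge_age

        def _add_edge(self, neighbor_bssid):
            self.edges[neighbor_bssid] = time.time()        # add the edge or refresh its timestamp

        # Rule 1: a reassociation request carries the old-AP's BSSID in its Current AP field.
        def on_reassociation_request(self, old_ap_bssid):
            self._add_edge(old_ap_bssid)

        # Rule 2: an IAPP Move-Notify received from another AP also establishes the relationship.
        def on_move_notify(self, sender_bssid):
            self._add_edge(sender_bssid)

        def neighbors(self):
            """Age out stale (outlier) edges, then return the current neighbor set."""
            cutoff = time.time() - self.max_edge_age
            self.edges = {bssid: t for bssid, t in self.edges.items() if t >= cutoff}
            return set(self.edges)
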
D. Performance Analysis

In this subsection we present an analysis of the proactive caching algorithm based on neighbor graphs.

1) Upper bound on the cache size: Even assuming memory is not a constraint, there is an upper bound on the cache size, i.e. the cache does not grow beyond a particular limit. Let G = (V, E) be a neighbor graph, and let Clientlist(api) denote the set of clients associated to api. If client c reassociates from api to apj, Context(c) is propagated to the caches of the APs in Neighbor(apj) and removed from the caches of the APs in Neighbor(api). Hence:

    Context(c) ∈ Cache(api)  ==>  c ∈ ∪_{apk ∈ Neighbor(api)} Clientlist(apk)        (1)

From equation 1 it follows that:

    |Cache(api)| ≤ Napi · M        (2)

where Napi = |Neighbor(api)| = degree(api) and M is the maximum number of clients associated to any AP. Summing equation 2 over all vertices, we get the total memory used by the caches over all APs:

    Σ_{api ∈ V} |Cache(api)| ≤ M · Σ_{api ∈ V} Napi = M · 2|E|        (3)
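As a brief illustration of these bounds (the numbers are assumed for the example, not measured values): an AP of degree 4 whose neighboring APs each serve at most M = 25 clients caches at most 4 · 25 = 100 contexts by equation 2, and for a network with |E| = 200 edges and the same M, equation 3 bounds the total memory used by all caches at 25 · 2 · 200 = 10,000 contexts.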



2) Characterizing the Cache Misses: As discussed earlier, the caching algorithm is based on the locality of mobility principle. Since reassociation relationships are captured in the neighbor graph and client-context is forwarded to all neighbor APs, we would technically expect a 100% cache hit ratio for reassociations. This assumes that the neighbor graph has been learned and that the cache size is unlimited (i.e. the cache at each AP is large enough according to equation 1). This assumption leads us to the two kinds of cache misses possible during a reassociation:

1) Reassociation between non-neighbor APs: When a reassociation occurs between two APs that are not neighbors, the station-context does not get forwarded and the result is a cache miss. The edge is subsequently added to the graph through the learning process. Thus, when a wireless network is first brought up (or rebooted), the initial reassociations in the network will be cache misses.
2) Context evicted by LRU replacement: This happens when the client-context is evicted at the new-AP because of other clients reassociating to neighboring APs.

As discussed in the previous section, the first type of cache miss occurs only once per edge and has a nominal effect on performance in the long run. The second type of cache miss depends on the mobility of other users, and hence dictates the performance of the algorithm. Presented below is a simple analysis of the expected performance perceived by a client (i.e. the hit ratio observed by a client) with regard to its mobility.

Let Γ(c) = {(ap1, t1), (ap2, t2), ..., (apn, tn)} be the association pattern observed for a client c. Consider a reassociation {(api, ti), (api+1, ti+1)}. Client c reassociated to api at time ti. Also at ti, the client's context was inserted into the cache at api+1. The time spent by c at api is T(c, api) = ti+1 - ti. The longer the client stays at api, the greater the chance of its context being evicted by other clients reassociating to the neighbors of api+1. Thus the probability P(c, api+1) that the client's context is evicted from the cache at api+1 is directly proportional to the time spent by the client at api:

    P(c, api+1) ∝ T(c, api)

A faster client (i.e. one with higher mobility) spends less time at each AP and hence has a higher probability of a cache hit. Thus, the performance of the algorithm perceived by a client is expected to improve with its mobility.
E. IAPP and Proactive Caching

In this section, we discuss the modifications to an early draft of IAPP to incorporate proactive caching using neighbor graphs. The modifications consist of two new messages, Cache-Notify and Cache-Response, for the purposes of implementing the Propagate_Context() method discussed in section IV-B. These changes are now included in the IAPP recommended practice [10].

Figure 3 shows the modified reassociation process (compared to figure 1). For the sake of clarity, the probe and authentication messages are not shown.
1) Cache-Notify: This message is sent from an AP to its neighbor and carries the context information pertaining to the client. It is sent following a reassociation or an association request.
2) Cache-Response: This is sent in order to acknowledge the receipt of a Cache-Notify. A timeout on this message results in removal of the edge, as the neighbor AP might not be alive.
3) Cache-Invalidate: This message is sent from an AP to its neighbor in order to remove the context information from the neighbor's cache. It is sent following a reassociation or a disassociation in which an STA leaves the AP.

As can be seen from figure 3, a cache hit avoids the Move-Notify and Security-Block communication latency during reassociation, resulting in a faster handoff.

The knowledge of neighboring APs at each AP is essential for the effective operation of proactive caching. To avoid the management overhead of manually maintained neighbor graphs, IAPP now includes the algorithms from section IV-B.
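The sketch below gives one possible shape for these messages and for the two sides of the exchange, extending the AccessPoint sketch from section IV-B with an assumed send method; the field and method names are illustrative and do not follow the 802.11f frame formats.

    from dataclasses import dataclass

    @dataclass
    class CacheNotify:        # AP -> neighbor: carries the client's context (illustrative fields)
        client_mac: str
        context: dict

    @dataclass
    class CacheResponse:      # neighbor -> AP: acknowledges a Cache-Notify; a timeout removes the edge
        client_mac: str

    @dataclass
    class CacheInvalidate:    # AP -> neighbor: drop the client's context from the neighbor's cache
        client_mac: str

    def new_ap_after_reassociation(new_ap, client_mac, context):
        """New-AP side: on a cache hit the STA is answered without any Move-Notify round trip;
        afterwards the context is pushed one hop ahead to the new-AP's neighbors."""
        for nbr in new_ap.neighbors:
            new_ap.send(nbr, CacheNotify(client_mac, context))

    def old_ap_after_station_left(old_ap, client_mac):
        """Old-AP side: once the STA has left, tell the neighbors to discard the stale context."""
        for nbr in old_ap.neighbors:
            old_ap.send(nbr, CacheInvalidate(client_mac))
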
V. EXPERIMENTS AND SIMULATIONS

We present both simulation and implementation results to demonstrate the performance of proactive caching. Section V-A discusses the implementation results, and the simulation results are presented in section V-B.

A. Experiments

In this section, we discuss the implementation of IAPP with neighbor graphs in a custom wireless testbed. We describe the testbed configuration, the process of the experiments, and the results. In brief, we measured 114 reassociations in the testbed, resulting in an average reassociation latency of 15.37 ms for a cache-miss excluding one outlier (23.58 ms including it), which is the traditional IAPP communication latency, and 1.7 ms for a cache-hit, achieving an order of magnitude improvement in the reassociation latency.

1) The Wireless Testbed: The wireless testbed spans a section of two floors (2nd and 3rd) of an office building. There were five APs on the third floor and four on the second. The geometry of the floors (L-shape and dimensions) and the topology of the nine access points are shown in figure 4. The gray circles in the figure represent APs, labeled by an identifier. Three channels, namely 1, 6 and 11, were used by the APs: four APs each on channels 1 and 11, and one AP on channel 6. These channels were assigned so as to avoid interference with other wireless networks operating in the building, resulting in a less than optimal RF design.

The APs used for the experiments were based on a Soekris [19] NET4521 board, which has a 133 MHz AMD processor, 64 MB SDRAM, two PC-Card/Cardbus slots for wireless adapters and one CompactFlash socket. A 200 mW Prism 2.5 based wireless card was used as the AP interface with a 1 ft yagi antenna. OpenBSD 3.1 with access point functionality was used as the operating system.

The IAPP protocol, neighbor graphs, and the caching algorithm were implemented in the driver (for the wireless interface) along with the AP functionality.

2) Experiment Process: To preclude possible interference, we shut down the other wireless networks in the building during the experiments. A mobile unit consisting of a client laptop and a sniffer was used in the experiments. A laptop with a Pentium III 750 MHz CPU, 256 MB RAM and a Prism 2.5 based ZoomAir wireless card was used as the client.



[Fig. 3. Message sequences during a handoff with context caching: (a) reassociation with IAPP and a cache miss; (b) reassociation with IAPP and a cache hit.]

The reassociation latencies were measured by capturing management frames on channels 1, 6 and 11. This was done by the sniffer, which had a wireless card dedicated to capturing traffic on each channel (1, 6, and 11). Since the APs were configured only on these three channels, it was guaranteed that the sniffer would capture all management frames destined to or transmitted by an AP in the testbed with respect to the STA (primarily reassociation request and response frames). Three wireless interfaces in two laptops constituted the sniffer.

Two experiments were conducted. The first experiment was conducted with fresh APs, i.e. there were no neighbor relationships prior to the start of the experiment. The goal of this experiment was to study the effect of the learning process on the reassociation latencies over time. The second experiment (following the first) was to confirm guaranteed cache hits once the neighbor graph had been learnt by the APs. We discuss the detailed setup of each experiment below.

Experiment A: The first experiment consisted of a random walk with the mobile unit through the physical span of the testbed. There were no neighbor relationships existing among APs prior to the start of the experiment. The experiment started with the client associating to AP-2 (see figure 4), and a random path of motion covered all APs on the third floor. The unit then moved to the second floor, covered all APs, and returned to the initial point of association (AP-2). This constituted one round of the experiment, and nine rounds were conducted for statistical confidence in the measurements. This resulted in one association and 114 reassociations during the entire experiment.

Experiment B: The second experiment, which followed the first, consisted of two short rounds using a different client. The purpose of this experiment was to verify the existence of the neighbor graphs (learned during the first experiment) at each AP by observing a cache hit on all reassociations.

3) Experiment Results: Figure 4 depicts the (3D) neighbor graph created during the experiment. The graph was constructed by observing the reassociation request frames captured by the sniffer. The directed edges indicate the direction of the reassociation (from the old-AP to the new-AP). The solid edges are intra-floor edges and the rest are inter-floor edges. The graph shows 23 distinct pairs of APs between which the STA could reassociate.

Experiment A: Figure 5 shows the reassociation latencies at each AP^4. The Y-axis is the latency on a logarithmic scale. The circular points represent reassociations with a cache-miss and the cross points are cache-hits. Most cache-miss latencies reside around 16 ms, except an outlier of 81 ms at AP-8. The cache-hit latencies are clustered around an average of 1.69 ms. There are a few cache-hits with latencies of more than 4 ms. We reason that these outliers (involving AP-4 and AP-5) are due to poor coverage design with respect to the building topology. AP-4 and AP-5 had a relatively small transmission range compared to the other APs and they were physically close to each other. Since they were the only APs covering a large area, the reassociation latencies were affected by packet errors/retransmits. There was another extreme outlier of 2.36 seconds with a cache-hit, caused by a sniffing error. This value was excluded from the analysis.

^4 The reassociation latency is attributed to the new-AP.

Figure 6 shows the reassociation latencies observed over time. During the experiment, there was a cache-miss for the first reassociation to each AP (except AP-2) as the neighbor graph was built. Figure 6 clearly shows how context caching decreases reassociation latencies with time. Except for the very first reassociations and a few outliers, most reassociation latencies lie below 2 ms. In total, there were 8 cache-misses



with an average of 15.37 ms^5 and 105 cache-hits with an average latency of 1.69 ms.

^5 The outlier of 81 ms has been excluded from the average calculation. We eliminated it since it would unfairly distort our result by making it higher, i.e. better, than what is clearly the average of 16 ms.

[Fig. 4. Experiment Environment and the Neighbor Graph (the two floors, the nine APs with their channel assignments on channels 1, 6 and 11, and the edges observed between them).]

[Fig. 5. Reassociation latencies at each access point (log scale; cache-miss versus cache-hit points for APs 1-9).]

Experiment B: The second experiment was done with a different client. The APs had learnt the neighbor graph, and hence during the experiment there were no cache misses. Each association/reassociation forwarded the context to the neighbors, and hence the client's context was always found in the cache during a reassociation^6. This experiment had 18 reassociations, all cache hits, resulting in an average latency of 1.5 ms.

^6 Since we had only one client in the experiment, there were no cache evictions.

Thus the experiment results show that proactive caching with neighbor graphs reduces the reassociation latency by an order of magnitude.

[Fig. 6. Reassociation Latencies with Time (log scale over roughly 5000 s of experiment time; cache misses occur early while the neighbor graph is learned).]

B. Simulations

Access points, unlike cellular base stations, are embedded systems with limited resources (computing power and memory), as vendors attempt to lower their costs. A typical access point has around 4 MB of RAM and 1 MB of flash. Client context information could potentially consist of security credentials, QoS information, etc. Thus an AP can store only a limited number of contexts in its cache (LRU cache replacement). In this section we present results on how the algorithm performs while varying the mobility, the number of clients and the number of APs in the network.

Simulation Objectives:
1) To observe the effect of cache size, number of clients and the mobility of clients on the cache hit ratio.
2) To observe the performance of caching over various neighbor graphs.

Each simulation starts with a set of APs, a neighbor graph structure connecting them, a set of clients and their initial distribution over the APs. Each client is assigned a mobility index (defined later), which dictates the mobility of the client throughout the simulation. The assumptions and the model we used are as follows.

Simulation Model and Assumptions:
1) AP Neighbor Graph does not change during the simulation: As noted earlier, changes in the AP neighbor graph would cost (in the worst case) one high latency handoff per edge, and have a nominal effect on the overall cache performance.
2) Correctness and completeness of the Neighbor Graphs: We assume that the neighbor graphs are correct and complete, i.e. the simulations do not consider any reassociations which are not covered as edges in the graph^7. This makes it sufficient to simulate reassociations according to the neighbor graph without maintaining any correspondence with the physical placement of APs (that would produce the neighbor graph).

^7 As discussed earlier, such reassociations would simply have resulted in the edge being added to the graph.



3) Initial User-AP distribution: We assume a uniform distribution of clients across the APs at the start of the simulation. Figure 7 shows the distribution of the maximum number of users associated to each AP during a simulation with 100 APs and 500 clients.
4) Roaming Model: The client roams according to the following model (a simulation sketch follows this list):
   a) Let client c have an association pattern Γ(c) = {(ap1, t1), (ap2, t2), ..., (apn, tn)}. The client c is said to roam from ap1 to apn if (i) the time associated at each api, 1 < i < n, i.e. ti+1 - ti, is of the order of a typical reassociation latency (around 100 ms, [6]), and (ii) the time the client spends on ap1 and apn is of the order of a typical client session [4]. Thus the client stays for a session-duration with an AP, roams to another AP (according to an association pattern), and stays for another session.
   b) At any given point of time during the simulation, the client is either roaming (according to the definition above) or staying associated to its current AP.
   c) The association pattern of a roam is decided randomly: if the client c is associated to api, it can move to any one of its neighbors (api1, api2, ..., apik) with equal probability.
5) User Mobility: Define the mobility index of a client as the probability that the client is roaming at any given point of time during the simulation. At the end of the simulation it converges to (total time spent roaming / total simulation time). Mobility indices are assigned to clients on a scale of 1 to 100. The distribution of mobility indices over clients is uniform.
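The following minimal Python sketch captures this roaming model and the proactive caching behaviour being simulated. The graph construction, parameter values and session timing are simplifications assumed for the sketch; it is not the simulator used to produce the results below.

    import random
    from collections import OrderedDict

    def random_connected_graph(num_aps, extra_edges):
        """Random connected neighbor graph: a random spanning tree plus a few extra edges."""
        neighbors = {ap: set() for ap in range(num_aps)}
        order = list(range(num_aps))
        random.shuffle(order)
        for i in range(1, num_aps):                      # the spanning tree keeps the graph connected
            a, b = order[i], random.choice(order[:i])
            neighbors[a].add(b); neighbors[b].add(a)
        for _ in range(extra_edges):
            a, b = random.sample(range(num_aps), 2)
            neighbors[a].add(b); neighbors[b].add(a)
        return neighbors

    def simulate(num_aps=100, num_clients=200, cache_size=20, events=100_000):
        """Cache hit ratio of proactive caching under the roaming model of this section (sketch)."""
        neighbors = random_connected_graph(num_aps, extra_edges=num_aps)
        caches = {ap: OrderedDict() for ap in range(num_aps)}                   # per-AP LRU cache
        location = {c: random.randrange(num_aps) for c in range(num_clients)}   # uniform start
        mobility = {c: random.uniform(0.01, 1.0) for c in range(num_clients)}   # mobility index / 100
        hits = misses = 0

        def cache_insert(ap, client):
            cache = caches[ap]
            cache.pop(client, None)
            if len(cache) >= cache_size:
                cache.popitem(last=False)                                       # LRU eviction
            cache[client] = True

        # Reassociation events are spread over clients in proportion to their mobility index.
        clients = list(range(num_clients))
        weights = [mobility[c] for c in clients]
        for client in random.choices(clients, weights=weights, k=events):
            old_ap = location[client]
            new_ap = random.choice(sorted(neighbors[old_ap]))                   # roam to a random neighbor
            if client in caches[new_ap]:
                hits += 1                                                       # context was pre-positioned
            else:
                misses += 1
            for nbr in neighbors[new_ap]:                                       # Cache-Notify one hop ahead
                cache_insert(nbr, client)
            for nbr in neighbors[old_ap]:                                       # Cache-Invalidate behind
                caches[nbr].pop(client, None)
            location[client] = new_ap
        return hits / (hits + misses)

    if __name__ == "__main__":
        print(f"simulated cache hit ratio: {simulate():.3f}")

Under this model, the hit ratio is expected to rise with the cache size and with client mobility, which is the qualitative trend reported in the results below.
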

[Fig. 7. Distribution of the maximum number of clients associated to an AP during a simulation with 100 APs and 500 users.]

[Fig. 8. Plot of client mobility versus the cache hit ratio achieved (cache size 20; curves for 50 users/10 vertices, 100 users/20 vertices, and 500 users/100 vertices).]

[Fig. 9. Effect of Cache Size and Client Mobility on Hit Ratio (200 users, 100 vertices; cache sizes 20, 30 and 40).]

1) Simulation Environment:
1) The simulation uses random and connected neighbor graphs with 10, 20, 50 and 100 vertices.
2) Duration of the Simulation: The simulation runs for one million reassociation events, distributed over the users according to their mobility indices. This makes the duration of the simulation large enough for statistical confidence in the results.

2) Simulation Results:
1) Mobility Improves Proactive Caching Performance: Figure 8 shows the cache hit ratio achieved by clients according to their mobility index. The figure compares the hit ratio performance over neighbor graphs of 10, 20 and 100 vertices, keeping the cache size constant. In all three curves, the hit ratio increases with client mobility, as previously discussed in section IV-D. The relative improvement diminishes with an increasing number of vertices in the neighbor graph, the prime reason being the constant cache size. Later plots elucidate this observation.
2) Effect of Cache Size and Client Mobility on Hit Ratio: Figure 9 shows the effect of the cache size on the hit ratio, keeping the number of clients and the neighbor graph the same. The graph has 100 vertices and 200 users. Clearly, an increase in the cache size has a direct impact on the cache hit ratio, to the extent that for a cache size of 40 (or 20% of the number of users), all clients have a hit ratio of 98% or better.



[Fig. 10. Effect of Cache Size and Number of Users on Hit Ratio (100 vertices; cache sizes 20-50; curves for 200, 300, 400 and 500 users).]

[Fig. 11. Variation of Cache Size (as a percentage of the Number of Users) with Hit Ratio (100 vertices).]

3) Effect of Cache Size and Number of Users on Hit Ratio: The number of clients in the network has a direct impact on the performance. Figure 10 shows the effect of these two parameters on the hit ratio, and Figure 11 shows the effect of the cache size, expressed as a percentage of the number of users, on the hit ratio. The data points were taken for cache sizes varying from 20 to 50 and for the number of users varying from 200 to 500 in increments of 100. A 15 percent cache size is sufficient for a hit ratio of 98%, while a cache size of 20 percent gives a hit ratio of 100%.

VI. CONCLUSIONS AND FUTURE WORK

In this paper, we have introduced a novel, efficient and dynamic data structure, the neighbor graph, which captures the topology of a wireless network by autonomously monitoring the handoffs. This datastructure abstracts the physical topology of the network into a neighbor relationship which can be used as a vehicle for numerous applications. Neighbor graphs provide more structure to the distribution system (DS) interconnecting the APs that form the wireless network. This structure, which provides the DS with information about the physical topology of the APs, can be leveraged for optimizations of existing algorithms (load balancing, network management, and key pre-distribution) and may lay the foundation for other interesting and novel applications.

As an example application of neighbor graphs, we implemented and studied the performance of the proactive caching algorithm for faster wireless handoffs. The caching algorithm uses neighbor graphs to send station-context to an AP's neighbors prior to the handoff and hence separates the context transfer process from reassociation. We implemented the approach using an early version of IAPP [9] running on a dedicated wireless testbed and presented results from experiments conducted on the testbed; as a result of these early experiments, proactive caching using neighbor graphs has been added to the final version of IAPP [10].

In our experiments, 114 reassociations occurred with an average reassociation latency of 23.58 ms (including the one outlier) and 15.37 ms (without the outlier) for a cache-miss (a traditional handoff), and 1.69 ms for a cache-hit, an order of magnitude improvement due to proactive caching. In our simulations, we studied the performance of the algorithm under varying network characteristics: user mobility, the number of users associated to the network, and the number of APs forming the network. We conclude that the performance of the algorithm (hit ratio) improves as user mobility increases, eventually reaching a 100% hit ratio under certain network configurations. As expected, we find that the cache size plays an important role in the performance of the algorithm, and that a cache size of 15% (of the number of users associated to the network) gives a minimum cache hit ratio of 98%.

The other applications of neighbor graphs we are working on include a comprehensive key distribution scheme for secure inter-network and intra-network roaming. We plan to investigate the application of neighbor graphs to load balancing and to the network management of APs. As a special application, neighbor graphs could potentially lead to a scalable method of organizing and managing a large scale cooperative wireless network which interconnects APs from different network domains and with different characteristics (network bandwidth, cost, etc.). Neighbor graphs can also be used to eliminate the expensive scanning operation for faster MAC layer handoffs by making an intelligent guess about the list of APs on a particular channel.

ACKNOWLEDGEMENT

The authors wish to thank Nick Petroni for providing valuable inputs on the experiments and Dr. Udaya Shankar for assistance with the simulation details. The authors would also like to thank Dr. Kyunghun Jang and Ms. Insun Lee from Samsung Electronics, Korea for the collaborative work on this project.

REFERENCES

[1] Diane Tang and Mary Baker, "Analysis of a metropolitan-area wireless network," in Mobile Computing and Networking, 1999, pp. 13-23.



[2] Kevin Lai, Mema Roussopoulos, Diane Tang, Xinhua Zhao, and Mary Baker, "Experiences with a mobile testbed," in Proceedings of The Second International Conference on Worldwide Computing and its Applications (WWCA'98), Mar 1998.
[3] A. Balachandran, G. Voelker, P. Bahl, and P. Rangan, "Characterizing user behavior and network performance in a public wireless LAN," 2002.
[4] Magdalena Balazinska and Paul Castro, "Characterizing Mobility and Network Usage in a Corporate Wireless Local-Area Network," in International Conference on Mobile Systems, Applications, and Services, May 2003.
[5] International Telecommunication Union, "General Characteristics of International Telephone Connections and International Telephone Circuits," ITU-T G.114, 1988.
[6] Arunesh Mishra, Minho Shin, and William Arbaugh, "An empirical analysis of the IEEE 802.11 MAC layer handoff process," in Computer Communications Review (ACM SIGCOMM) (To Appear), 2003.
[7] R. Koodli and C. E. Perkins, "Fast Handover and Context Relocation in Mobile Networks," ACM SIGCOMM Computer Communication Review, vol. 31, no. 5, October 2001.
[8] Pat Calhoun and James Kempf, "Context transfer, handoff candidate discovery, and dormant mode host alerting," IETF SeaMoby Working Group.
[9] IEEE, "Draft 4 Recommended Practice for Multi-Vendor Access Point Interoperability via an Inter-Access Point Protocol Across Distribution Systems Supporting IEEE 802.11 Operation," IEEE Draft 802.11f/D4, July 2002.
[10] IEEE, "Recommended Practice for Multi-Vendor Access Point Interoperability via an Inter-Access Point Protocol Across Distribution Systems Supporting IEEE 802.11 Operation," IEEE Draft 802.11f/Final Version, January 2003.
[11] M. Nakhjiri, C. Perkins, and R. Koodli, "Context Transfer Protocol," Internet Draft: draft-ietf-seamoby-ctp-01.txt, March 2003.
[12] Sangheon Pack and Yanghee Choi, "Fast Inter-AP Handoff using Predictive-Authentication Scheme in a Public Wireless LAN," IEEE Networks 2002 (To Appear), August 2002.
[13] Sangheon Pack and Yanghee Choi, "Pre-Authenticated Fast Handoff in a Public Wireless LAN based on IEEE 802.1x Model," IFIP TC6 Personal Wireless Communications 2002 (To Appear), October 2002.
[14] S. Capkun, Levente Buttyan, and Jean-Pierre Hubaux, "Self-Organized Public-Key Management for Mobile Ad Hoc Networks," to appear in IEEE Transactions on Mobile Computing, 2003.
[15] Radia Perlman, "An algorithm for distributed computation of a spanning tree in an extended LAN," pp. 44-53, 1985.
[16] Radia Perlman, Interconnections, Second Edition: Bridges, Routers, Switches and Internetworking Protocols, Pearson Education, September 1999.
[17] IEEE, "Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," IEEE Standard 802.11, 1999.
[18] C. Rigney, S. Willens, A. Rubens, and W. Simpson, "Remote Authentication Dial In User Service (RADIUS)," RFC 2865, June 2000.
[19] "Soekris Engineering," URL: http://www.soekris.com.

