Wireless Sensor Network Notes
Preface
1 Introduction
1.1 Wireless sensor networks: the vision
1.2 Networked wireless sensor devices
1.3 Applications of wireless sensor networks
1.4 Key design challenges
1.5 Organization
2 Network deployment
2.1 Overview
2.2 Structured versus randomized deployment
2.3 Network topology
2.4 Connectivity in geometric random graphs
2.5 Connectivity using power control
2.6 Coverage metrics
2.7 Mobile deployment
2.8 Summary
Exercises
3 Localization
3.1 Overview
3.2 Key issues
3.3 Localization approaches
3.4 Coarse-grained node localization using minimal information
4 Time synchronization
4.1 Overview
4.2 Key issues
4.3 Traditional approaches
4.4 Fine-grained clock synchronization
4.5 Coarse-grained data synchronization
4.6 Summary
Exercises
5 Wireless characteristics
5.1 Overview
5.2 Wireless link quality
5.3 Radio energy considerations
5.4 The SINR capture model for interference
5.5 Summary
Exercises
9 Data-centric networking
9.1 Overview
9.2 Data-centric routing
9.3 Data-gathering with compression
9.4 Querying
9.5 Data-centric storage and retrieval
9.6 The database perspective on sensor networks
9.7 Summary
Exercises
11 Conclusions
11.1 Summary
11.2 Further topics
References
Index
Introduction
Perhaps one of the earliest research efforts in this direction was the low-power wireless integrated microsensors (LWIM) project at UCLA funded by
DARPA [98]. The LWIM project focused on developing devices with low-power
electronics in order to enable large, dense wireless sensor networks. This project
was succeeded by the Wireless Integrated Networked Sensors (WINS) project
a few years later, in which researchers at UCLA collaborated with Rockwell
Science Center to develop some of the first wireless sensor devices. Other early
projects in this area, starting around 1999–2000, were also primarily in academia,
at several places including MIT, Berkeley, and USC.
Researchers at Berkeley developed embedded wireless sensor networking
devices called motes, which were made publicly available commercially, along
with TinyOS, an associated embedded operating system that facilitates the use
of these devices [81]. Figure 1.1 shows an image of a Berkeley mote device.
The availability of these devices as an easily programmable, fully functional,
relatively inexpensive platform for experimentation, and real deployment has
played a significant role in the ongoing wireless sensor networks revolution.
(Figure: components of a wireless sensor device: sensors, memory, processor, GPS, radio transceiver, power source.)
sensors used are highly dependent on the application; for example, they may
include temperature sensors, light sensors, humidity sensors, pressure sensors,
accelerometers, magnetometers, chemical sensors, acoustic sensors, or even
low-resolution imagers.
5. Geopositioning system: In many WSN applications, it is important for all
sensor measurements to be location stamped. The simplest way to obtain
positioning is to pre-configure sensor locations at deployment, but this may
only be feasible in limited deployments. Particularly for outdoor operations,
when the network is deployed in an ad hoc manner, such information is most
easily obtained via satellite-based GPS. However, even in such applications,
only a fraction of the nodes may be equipped with GPS capability, due to
environmental and economic constraints. In this case, other nodes must obtain
their locations indirectly through network localization algorithms.
6. Power source: For flexible deployment the WSN device is likely to be
battery powered (e.g. using NiMH AA batteries). While some of the nodes
may be wired to a continuous power source in some applications, and energy
harvesting techniques may provide a degree of energy renewal in some cases,
the finite battery energy is likely to be the most critical resource bottleneck
in most WSN applications.
Depending on the application, WSN devices can be networked together in a
number of ways. In basic data-gathering applications, for instance, there is a node
referred to as the sink to which all data from source sensor nodes are directed.
The simplest logical topology for communication of gathered data is a single-hop
star topology, where all nodes send their data directly to the sink. In networks
with lower transmit power settings or where nodes are deployed over a large area,
a multi-hop tree structure may be used for data-gathering. In this case, some nodes
may act both as sources themselves, as well as routers for other sources.
One interesting characteristic of wireless sensor networks is that they often
allow for the possibility of intelligent in-network processing. Intermediate nodes
along the path do not act merely as packet forwarders, but may also examine and
process the content of the packets going through them. This is often done for the
purpose of data compression or for signal processing to improve the quality of
the collected information.
inspections or occasionally through expensive and time-consuming technologies, such as X-rays and ultrasound. Unattended networked sensing techniques
can automate the process, providing rich and timely information about incipient cracks or about other structural damage. Researchers envision deploying
these sensors densely on the structure either literally embedded into the
building material such as concrete, or on the surface. Such sensor networks
have potential for monitoring the long-term wear of structures as well as
their condition after destructive events, such as earthquakes or explosions.
A particularly compelling futuristic vision for the use of sensor networks
involves the development of controllable structures, which contain actuators
that react to real-time sensor information to perform "echo-cancellation" on
seismic waves so that the structure is unaffected by any external disturbance.
4. Industrial and commercial networked sensing: In industrial manufacturing
facilities, sensors and actuators are used for process monitoring and control.
For example, in a multi-stage chemical processing plant there may be sensors
placed at different points in the process in order to monitor the temperature,
chemical concentration, pressure, etc. The information from such real-time
monitoring may be used to vary process controls, such as adjusting the amount
of a particular ingredient or changing the heat settings. The key advantage
of creating wireless networks of sensors in these environments is that they
can significantly reduce the cost and improve the flexibility associated with
installing, maintaining, and upgrading wired systems [131]. As an indication
of the commercial promise of wireless embedded networks, it should be noted
that there are already several companies developing and marketing these
products, and there is a clear ongoing drive to develop related technology
standards, such as the IEEE 802.15.4 standard [94], and collaborative industry
efforts such as the Zigbee Alliance [244].
replacing batteries for a large network, much longer lifetimes are desired.
In practice, it will be necessary in many applications to provide guarantees
that a network of unattended wireless sensors can remain operational without
any replacements for several years. Hardware improvements in battery design
and energy harvesting techniques will offer only partial solutions. This is the
reason that most protocol designs in wireless sensor networks are designed
explicitly with energy efficiency as the primary goal. Naturally, this goal
must be balanced against a number of other concerns.
2. Responsiveness: A simple solution to extending network lifetime is to operate
the nodes in a duty-cycled manner with periodic switching between sleep and
wake-up modes. While synchronization of such sleep schedules is challenging
in itself, a larger concern is that arbitrarily long sleep periods can reduce
the responsiveness and effectiveness of the sensors. In applications where
it is critical that certain events in the environment be detected and reported
rapidly, the latency induced by sleep schedules must be kept within strict
bounds, even in the presence of network congestion.
3. Robustness: The vision of wireless sensor networks is to provide large-scale, yet fine-grained coverage. This motivates the use of large numbers of
inexpensive devices. However, inexpensive devices can often be unreliable
and prone to failures. Rates of device failure will also be high whenever
the sensor devices are deployed in harsh or hostile environments. Protocol
designs must therefore have built-in mechanisms to provide robustness. It is
important to ensure that the global performance of the system is not sensitive
to individual device failures. Further, it is often desirable that the performance
of the system degrade as gracefully as possible with respect to component
failures.
4. Synergy: Moore's-law-type advances in technology have ensured that device
capabilities in terms of processing power, memory, storage, radio transceiver
performance, and even accuracy of sensing improve rapidly (given a fixed
cost). However, if economic considerations dictate that the cost per node
be reduced drastically from hundreds of dollars to less than a few cents, it
is possible that the capabilities of individual nodes will remain constrained
to some extent. The challenge is therefore to design synergistic protocols,
which ensure that the system as a whole is more capable than the sum of
the capabilities of its individual components. The protocols must provide
an efficient collaborative use of storage, computation, and communication
resources.
5. Scalability: For many envisioned applications, the combination of fine-granularity sensing and large coverage area implies that wireless sensor networks must scale to very large numbers of nodes, so protocols must be designed to perform well as the network size grows.
1.5 Organization
This book is organized in a bottom-up manner. Chapter 2 addresses tools, techniques, and metrics pertinent to network deployment. Chapter 3 and Chapter 4
present techniques for spatial localization and temporal synchronization respectively. Chapter 5 addresses a number of issues pertaining to wireless characteristics, including models for link quality, interference, and radio energy.
Algorithms for medium-access and radio sleep scheduling for energy conservation are described in Chapter 6. Topology control techniques based on sleep–active transitions are described in Chapter 7. Mechanisms for energy-efficient
and robust routing are discussed in Chapter 8, while Chapter 9 presents concepts
and techniques for data-centric routing and querying in wireless sensor networks.
Chapter 10 covers issues pertinent to congestion control and transport-layer quality of service. Finally, we present concluding comments in Chapter 11, along
with a brief survey of some important further topics.
Network deployment
2.1 Overview
The problem of deployment of a wireless sensor network could be formulated
generically as follows: given a particular application context, an operational
region, and a set of wireless sensor devices, how and where should these nodes
be placed?
The network must be deployed keeping in mind two main objectives: coverage and connectivity. Coverage pertains to the application-specific quality of
information obtained from the environment by the networked sensor devices.
Connectivity pertains to the network topology over which information routing
can take place. Other issues, such as equipment costs, energy limitations, and
the need for robustness, should also be taken into account.
A number of basic questions must be considered when deploying a wireless
sensor network:
1. Structured versus randomized deployment: Does the network involve
(a) structured placement, either by hand or via autonomous robotic nodes, or
(b) randomly scattered deployment?
2. Over-deployment versus incremental deployment: For robustness against
node failures and energy depletion, should the network be deployed a priori
with redundant nodes, or can nodes be added or replaced incrementally when
the need arises? In the former case, sleep scheduling is desirable to extend
network lifetime, a topic we will treat in Chapter 7.
3. Network topology: Is the network topology going to be a simple star topology, or a grid, or an arbitrary multi-hop mesh, or a two-level cluster hierarchy?
What kind of robust connectivity guarantees are desired?
The simplest WSN topology is the single-hop star shown in Figure 2.1(a). Every
node in this topology communicates its measurements directly to the gateway.
Wherever feasible, this approach can significantly simplify design, as the networking concerns are reduced to a minimum. However, the limitation of this
topology is its poor scalability and robustness properties. For instance, in larger
areas, nodes that are distant from the gateway will have poor-quality wireless
links.
2.3.2 Multi-hop mesh and grid
For larger areas and networks, multi-hop routing is necessary. Depending on how
they are placed, the nodes could form an arbitrary mesh graph as in Figure 2.1(b)
or they could form a more structured communication graph such as the 2D grid
structure shown in Figure 2.1(c).
2.3.3 Two-tier hierarchical cluster
Figure 2.1 Deployment topologies: (a) single-hop star, (b) multi-hop mesh, (c) 2D grid, (d) two-tier cluster hierarchy

Perhaps the most compelling architecture for a WSN is a deployment architecture where multiple nodes within each local region report to different cluster-heads [76]. There are a number of ways in which such a hierarchical architecture may be implemented. This approach becomes particularly attractive in heterogeneous settings when the cluster-head nodes are more powerful in terms of computation/communication [90, 114]. The advantage of the hierarchical cluster-based approach is that it naturally decomposes a large network into separate
zones within which data processing and aggregation can be performed locally.
Within each cluster there could be either single-hop or multi-hop communication.
Once data reach a cluster-head they would then be routed through the second-tier network formed by cluster-heads to another cluster-head or a gateway. The
second-tier network may utilize a higher bandwidth radio or it could even be
a wired network if the second-tier nodes can all be connected to the wired
infrastructure. Having a wired network for the second tier is relatively easy in
building-like environments, but not for random deployments in remote locations.
In random deployments there may be no designated cluster-heads; these may
have to be determined by some process of self-election.
Figure 2.2 Illustration of G(n, R) geometric random graphs: (a) sparse (small R) and (b) dense (large R)

Figure 2.3 shows how the probability of network connectivity varies as the radius parameter R of a geometric random graph is varied. Depending on the number of nodes n, there exist different critical radii beyond which the graph is connected with high probability. These transitions become sharper (shifting to lower radii) as the number of nodes increases.

Figure 2.4 shows the probability that the network is connected with respect to the total number of nodes, for different values of fixed transmission range in a fixed area for all nodes. It can be observed that, depending on the transmission range, there is some number of nodes beyond which there is a high probability that the network obtained is connected. This kind of analysis is relevant for random network deployment, as it provides insights into the minimum density that may be needed to ensure that the network is connected.
Figure 2.3 Probability of connectivity for a geometric random graph with respect to transmission radius (curves shown for n = 10, 20, 50, 100)

Figure 2.4 Probability of connectivity for a geometric random graph with respect to number of nodes in a unit area (curves shown for r = 0.05, 0.15, 0.25, 0.35, 0.45)
In other words, the critical transmission range for connectivity is $O\left(\sqrt{\frac{\log n}{n}}\right)$. This result is also implied by the work of Penrose [156] on the longest edge of the minimal spanning tree of a random graph. Another surprising result is that the critical radius at which a geometric random graph G(n, R) attains the property that all nodes have at least K neighbors is asymptotically equal to the critical radius at which the graph attains the property of K-connectivity¹ [157].
2.4.2 Monotone properties in G(n, R)
It is known that all monotone graph properties of G(n, R) (properties preserved when edges are added) have critical transmission ranges that are $O\left(\sqrt{\frac{\log n}{n}}\right)$ and exhibit sharp thresholds.
2.4.3 Connectivity in G(n, K)

Another geometric random graph model is G(n, K), where n nodes are placed at random in a unit area, and each node connects to its K nearest neighbors. This model potentially allows different nodes in the network to use different power levels. In this graph, it is known that K must be higher than 0.074 log n and lower than 2.72 log n in order to ensure asymptotically almost sure connectivity [232, 217].
2.4.4 Connectivity and coverage in Ggrid (n, p, R)
Yet another geometric random graph model is the unreliable sensor grid
model [191]. In this model n nodes are placed on a square grid within a unit area, p is the probability that a node is active (not failed), and R is the transmission range of each node. For this unreliable sensor grid model, the following properties have been determined:

• For the active nodes to form a connected topology, as well as to cover the unit square region, $p \cdot R^2$ must be $O\left(\frac{\log n}{n}\right)$.
• The maximum number of hops required to travel from any active node to another is $O\left(\sqrt{\frac{n}{\log n}}\right)$.
• There exists a range of p values sufficiently small such that the active nodes form a connected topology but do not cover the unit square.

¹ K-connectivity is the property that no K − 1 vertices can be removed to disconnect the graph, which, as per Menger's theorem [201], is equivalent to the property that there exist at least K vertex-disjoint paths between all pairs of nodes.
of these nodes. However, under more dynamic conditions this may not be an
issue, as load balancing may be provided through activation of different nodes
at different times.
2.5.2 Minimum common power setting (COMPOW)
The COMPOW protocol [142] ensures that the lowest common power level that
ensures maximum network connectivity is selected by all nodes. A number of
arguments can be made in favor of using a common power level that is as low as
possible (while still providing maximum connectivity) at all nodes: (i) it makes
the received signal power on all links symmetric in either direction (although
SINR may vary in each direction); (ii) it can provide for an asymptotic network
capacity which is quite close to the best capacity achievable without common
power levels; (iii) a low common power level provides low-power routes; and
(iv) a low power level minimizes contention.
The COMPOW protocol works as follows: first, multiple shortest path algorithms (e.g. the distributed Bellman–Ford algorithm) are performed, one at each
possible power level. Each node then examines the routing tables generated by
the algorithm and picks the lowest power level such that the number of reachable
nodes is the same as the number of nodes reachable with the maximum power
level.
The COMPOW algorithm can be shown to provide the lowest functional
common power level for all nodes in the network while ensuring maximum
connectivity, but does suffer from some possible drawbacks. First, it is not very
scalable, as each node must maintain state that is of the order of the number
of nodes in the entire network. Further, by strictly enforcing common powers,
it is possible that a single relatively isolated node can cause all nodes in the
network to have unnecessarily large power levels. Most of the other proposals
for topology control with variable power levels do not require common powers
on all nodes.
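To make the selection rule concrete, the following is a minimal centralized sketch of this idea; the real COMPOW protocol is distributed and runs one routing daemon per power level, and the node set, power levels, and power-to-range mapping used here are illustrative assumptions.

```python
import math

def reachable_count(positions, src, radius):
    """Count nodes reachable from src over multi-hop links of the given radius (BFS)."""
    visited = {src}
    frontier = [src]
    while frontier:
        u = frontier.pop()
        for v in positions:
            if v not in visited and math.dist(positions[u], positions[v]) <= radius:
                visited.add(v)
                frontier.append(v)
    return len(visited)

def compow_level(positions, src, power_levels, range_of):
    """Pick the lowest power level whose reachable set matches that of the
    maximum power level. `range_of` maps a power level to a communication
    radius (an assumed model; a real radio would measure reachability)."""
    max_reach = reachable_count(positions, src, range_of(max(power_levels)))
    for p in sorted(power_levels):
        if reachable_count(positions, src, range_of(p)) == max_reach:
            return p
    return max(power_levels)

# Example: four nodes on a line; assume range = 0.5 * power level.
nodes = {"A": (0, 0), "B": (1, 0), "C": (2, 0), "D": (3, 0)}
print(compow_level(nodes, "A", [1, 2, 3, 4, 5, 6], lambda p: 0.5 * p))  # -> 2
```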
2.5.3 Minimizing maximum power
The cone-based topology control (CBTC) technique [222, 117] provides a minimal direction-based distributed rule to ensure that the whole network topology
is connected, while keeping the power usage of each node as small as possible.
The cone-based topology construction is very simple in essence, and involves
only a single parameter α, the cone angle. In CBTC each node keeps increasing
its transmit power until it has at least one neighboring node in every cone or it
reaches its maximum transmission power limit. It is assumed here that the communication range (within which all nodes are reachable) increases monotonically
with transmit power.
The CBTC construction is illustrated in Figure 2.5. On the left we see an intermediate power level for a node at which there exists an α cone in which the node does not have a neighbor. Therefore, as seen on the right, the node must increase its power until at least one neighbor is present in every α cone.
The original work on CBTC [222] showed that α = 2π/3 suffices to ensure
that the network is connected. A tighter result has been obtained [117] that can
further reduce the power-level settings at each node:
Theorem 2
If α ≤ 5π/6, then the graph topology generated by CBTC is connected, so long as the original graph, where all nodes transmit at maximum power, is also connected. If α > 5π/6, then disconnected topologies may result with CBTC.
If the maximum power constraint is ignored so that any node can potentially
reach any other node in the network directly with a sufficiently high power
setting, then D'Souza et al. [41] show that α = π is a necessary and sufficient
condition for guaranteed network connectivity.
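The cone test at the heart of CBTC reduces to checking the largest angular gap between neighbor directions. Below is a sketch under an idealized disc model (communication range grows monotonically with power); the discrete power levels and range mapping are assumptions for illustration.

```python
import math

def has_neighbor_in_every_cone(node, neighbors, alpha):
    """True if every cone of angle alpha around `node` contains a neighbor,
    i.e. the maximum angular gap between consecutive neighbor directions
    (taken circularly) is less than alpha."""
    if not neighbors:
        return False
    angles = sorted(math.atan2(y - node[1], x - node[0]) for (x, y) in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < alpha

def cbtc_power(node, others, power_levels, range_of, alpha=5 * math.pi / 6):
    """Increase power until the alpha-cone condition holds or max power is reached."""
    for p in sorted(power_levels):
        nbrs = [q for q in others if math.dist(node, q) <= range_of(p)]
        if has_neighbor_in_every_cone(node, nbrs, alpha):
            return p
    return max(power_levels)
```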
This metric is applicable in contexts where there is some notion of a region being
covered by each individual sensor. A field is said to be K-covered if every point
in the field is within the overlapping coverage region of at least K sensors. We
will limit our discussion here to two dimensions.
Definition 1

Consider an operating region A with n sensor nodes, with each node i providing coverage to a node region Ai ⊆ A (the node regions can overlap). The region A is said to be K-covered if every point p ∈ A is also in at least K node regions.
At first glance, based on this definition, it may appear that the way to determine that an area is K-covered is to divide the area into a grid of very fine granularity and examine all grid points through exhaustive search to see if they are all K-covered. In an s × s unit area, with a grid of resolution ε unit distance, there will be (s/ε)² such points to examine, which can be computationally intensive. A slightly more sophisticated approach would attempt to enumerate all subregions resulting from the intersection of different sensor node-regions and verify if each of these is K-covered. In the worst case there can be O(n²) such regions and they are not straightforward to compute. Huang and Tseng [92] prove the interesting result below, which is used to derive an O(nd log d) distributed algorithm for determining K-coverage.
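For illustration, here is the naive grid-based check just described, assuming circular sensing regions of radius Rs; it is meant only to make Definition 1 concrete, not to compete with the perimeter-based algorithm.

```python
import math

def k_covered_grid(sensors, Rs, K, s=1.0, eps=0.01):
    """Brute-force K-coverage check over an s x s area: test every grid point of
    resolution eps and require it to lie within the sensing disc (radius Rs) of
    at least K sensors. Cost is O((s/eps)^2 * n); illustrative, not efficient."""
    steps = int(s / eps) + 1
    for i in range(steps):
        for j in range(steps):
            p = (i * eps, j * eps)
            if sum(1 for q in sensors if math.dist(p, q) <= Rs) < K:
                return False
    return True

# Example: a 5 x 5 grid of sensors over the unit square is 1-covered for Rs = 0.2.
sensors = [(x / 4, y / 4) for x in range(5) for y in range(5)]
print(k_covered_grid(sensors, Rs=0.2, K=1))  # -> True
```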
Definition 2
A sensor is said to be K-perimeter-covered if all points on the perimeter circle of its region
are within the perimeters of at least K other sensors.
Theorem 3
The entire region is K-covered if and only if all n sensors are K-perimeter-covered.
These results are shown to hold for the general case when different sensors
have different coverage radii. A further improvement on this result is obtained
by Wang et al. [220]. They prove the following stronger theorem (illustrated in Figure 2.6 for K = 2):
Figure 2.6 An area with 2-coverage (note that all intersection points are 2-covered)
Theorem 4
The entire region is K-covered if and only if all intersection points between the perimeters
of the n sensors (and between the perimeter of sensors and the region boundary) are
covered by at least K sensors.
Recall that the two main considerations for evaluating a given deployment
are coverage and connectivity. Wang et al. [220] also provide the following
fundamental result pertaining to the relationship between K-coverage and
K-connectivity:
Theorem 5
If a convex region A is K-covered by n sensors with sensing range Rs and communication
range Rc, their communication graph is a K-connected network graph so long as Rc ≥ 2Rs.
One class of coverage metrics that has been developed is suitable primarily for
tracking targets or other moving objects in the sensor field. A good example of
such a metric is the maximal breach distance metric [136]. Consider for instance
a WSN deployed in a rectangular operational field that a target can traverse from
left to right. The maximal breach path is the path that maximizes the distance
between the moving target and the nearest sensor during the target's point of nearest approach to any sensor. Intuitively, this metric aims to capture a worst-case notion of coverage: given a deployment, how well can an adversary with
full knowledge of the deployment avoid observation?
Given a sensor field, and a set of nodes on it, the maximal breach path is
calculated in the following manner:
1. Calculate the Voronoi tessellation of the field with respect to the deployed
nodes, and treat it as a graph. A Voronoi tessellation separates the field into
separate cells, one for each node, such that all points within each cell are
closer to that node than to any other. While the maximal breach path is not
unique, it can be shown that at least one maximal breach path must follow
Voronoi edges, because they provide points of maximal distance from a set
of nodes.
2. Label each Voronoi edge with a cost that represents the minimum distance
from any node in the field to that edge.
3. Add a starting (ending) node to the graph to represent the left (right) side of
the field, and connect it to all vertices corresponding to intersections between
Voronoi edges and the left (right) edge of the field. Label these edges with
zero cost.
Figure 2.7 Illustration of (a) maximal breach path through Voronoi cell edges and (b) minimal support path through Delaunay triangulation edges
represent the shortest way to traverse between any pair of nodes, it can be shown
that at least one maximal support path traverses only through Delaunay edges.
The edges are labelled with the maximum distance from any point on the edge
to the nearest sensor (i.e. with half the length of the edge). A graph search or
dynamic programming algorithm can then be used to find the path through the
Delaunay graph (extended to include a start and end node as before) on which
the maximum edge cost is minimized. This is illustrated in Figure 2.7(b).
2.6.3 Other metrics
2.8 Summary
We observe that the deployment of a sensor network can have a significant impact
on its operational performance and therefore requires careful planning and design.
The fundamental objective is to ensure that the network will have the desired
connectivity and application-specific coverage properties during its operational
lifetime. The two major methodologies for deployment are: (a) structured placement and (b) random scattering of nodes. Particularly for small- to medium-scale
deployments, where there are equipment cost constraints and a well-specified set
of desired sensor locations, structured placements are desirable. In other applications involving large-scale deployments of thousands of inexpensive nodes,
such as surveillance of remote environments, a random scattering of nodes may
be the most flexible and convenient option. Nodes may be over-deployed, with
redundancy for reasons of robustness, or else deployed/replaced incrementally
as nodes fail.
Geometric random graphs offer a useful methodology for analyzing and
determining density and parameter settings for random deployments of WSN.
There exist several geometric random graph models, including G(n, R), G(n, K), and Ggrid(n, p, R). One common feature of all these models is that asymptotically the condition to ensure connectivity is that each node have O(log n) neighbors on average. All monotone properties (including most coverage and connectivity properties of interest) exhibit similar sharp threshold behavior.
Exercises
2.1
Topology selection: Consider a remote deployment consisting of three sensor nodes A, B, C, and a gateway node D. The following set of stationary packet reception probabilities (i.e. the probability that a packet is received successfully) has been determined for each link from experimental measurements: [A→B: 0.65, A→C: 0.95, A→D: 0.95, B→A: 0.90, B→C: 0.3, B→D: 0.99, C→A: 0.95, C→B: 0.6, C→D: 0.3]. Assuming all traffic must originate at the sources (A, B, C) and end at the gateway (D), explain why a single-hop star topology is unsuitable for this deployment, and suggest a topology that would be more suitable.
2.2
The G(n, R) geometric random graph: In this question assume all nodes
are deployed randomly with a uniform distribution in a unit square area.
Determine the following through simulations:
(a) Estimate the probability of connectivity when n = 40, R = 0.20.
(b) Estimate the minimum number of nodes nmin that need to be deployed to guarantee network connectivity with greater than 80% probability if R = 0.2.
(c) Plot, with respect to R, the probability that each node has at least K neighbors for K = 1, 2, and 3, assuming n = 100.
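As a starting point for this exercise, the following Monte Carlo sketch estimates the connectivity probability of G(n, R) using a union-find connectivity check; the trial count is an arbitrary choice.

```python
import math
import random

def is_connected(points, R):
    """Union-find connectivity check for the geometric graph on `points`,
    where nodes within distance R share an edge."""
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= R:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))}) == 1

def connectivity_probability(n, R, trials=500):
    """Estimate Pr[G(n, R) is connected] for uniform deployment in the unit square."""
    hits = sum(
        is_connected([(random.random(), random.random()) for _ in range(n)], R)
        for _ in range(trials))
    return hits / trials

print(connectivity_probability(40, 0.20))  # part (a) of this exercise
```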
2.7
K-coverage: Consider the square region from (0,0) to (1,1) with 100 sensor nodes again located on the grid at coordinate points (m/10, n/10). Assume all nodes have the same sensing range Rs. What should Rs be in order to ensure that the area is K-covered, for K = 1, 2, 4? For these cases, give a setting of the communication range Rc that will also ensure K-connectivity.
Localization
3.1 Overview
Wireless sensor networks are fundamentally intended to provide information
about the spatio-temporal characteristics of the observed physical world. Each
individual sensor observation can be characterized essentially as a tuple of the form ⟨S, T, M⟩, where S is the spatial location of the measurement,
T the time of the measurement, and M the measurement itself. We shall address
the following fundamental question in this chapter: How can the spatial location
of nodes be determined?
The location information of nodes in the network is fundamental for a number
of reasons:
1. To provide location stamps for individual sensor measurements that are
being gathered.
2. To locate and track point objects in the environment.
3. To monitor the spatial evolution of a diffuse phenomenon over time, such
as an expanding chemical plume. For instance, this information is necessary
for in-network processing algorithms that determine and track the changing
boundaries of such a phenomenon.
4. To determine the quality of coverage. If node locations are known, the
network can keep track of the extent of spatial coverage provided by active
sensors at any time.
5. To achieve load balancing in topology control mechanisms. If nodes are
densely deployed, geographic information of nodes can be used to selectively
shut down some percentage of nodes in each geographic area to conserve
energy, and rotate these over time to achieve load balancing.
3.2 Key issues
the unknown nodes. The unknown nodes may be cooperative (e.g. participants in the network, or robots traversing the networked area) or non-cooperative (e.g. targets being surveilled). The last distinction is important
because non-cooperative nodes cannot participate actively in the localization
algorithm.
2. When to localize? In most cases, the location information is needed for
all unknown nodes at the very beginning of network operation. In static
environments, network localization may thus be a one-shot process. In other
cases, it may be necessary to provide localization on-the-fly, or refresh the
localization process as objects and network nodes move around, or improve
the localization by incorporating additional information over time. The time
scales involved may vary considerably from being of the order of minutes to
days, even months.
3. How well to localize? This pertains to the resolution of location information desired. Depending on the application, it may be required for the localization technique to provide absolute (x, y, z) coordinates, or perhaps it will suffice to provide relative coordinates (e.g. south of node 24 and east of node 22); or symbolic locations (e.g. in room A, in sector 23, near node 21).
Even in the case of absolute locations, the required accuracy may be quite different (e.g. as good as 20 cm or as rough as 10 m). The technique must
provide the desired type and accuracy of localization, taking into account the
available resources (such as computational resources, time-synchronization
capability, etc.).
4. Where to localize? The actual location computation can be performed at
several different points in the network: at a central location once all component
information such as inter-node range estimates is collected; in a distributed
iterative manner within reference nodes in the network; or in a distributed
manner within unknown nodes. The choice may be determined by several
factors: the resource constraints on various nodes, whether the node being
localized is cooperative, the localization technique employed, and, finally,
security considerations.
5. How to localize? Finally, different signal measurements can be used as
inputs to different localization techniques. The signals used can vary from
narrowband radio signal strength readings or packet-loss statistics, UWB RF signals, acoustic/ultrasound signals, or infrared. The signals may be emitted and
measured by the reference nodes, by the unknown nodes, or both. The basic
localization algorithm may be based on a number of techniques, such as
proximity, calculation of centroids, constraints, ranging, angulation, pattern
recognition, multi-dimensional scaling, and potential methods.
Perhaps the most basic location technique is that of binary proximity involving
a simple decision of whether two nodes are within reception range of each other.
A set of reference nodes are placed in the environment in some non-overlapping
(or nearly non-overlapping) manner. Either the reference nodes periodically emit
beacons, or the unknown node transmits a beacon when it needs to be localized.
If reference nodes emit beacons, these include their location IDs. The unknown
node must then determine which node it is closest to, and this provides a coarse-grained localization. Alternatively, if the unknown node emits a beacon, the
location of the unknown node.
The same proximity information can be used to greater advantage when the density of reference nodes is sufficiently high that there are several reference nodes
within the range of the unknown node. Consider a two-dimensional scenario.
Let there be n reference nodes detected within the proximity of the unknown
node, with the location of the ith such reference denoted by (xi, yi). Then, in this technique, the location of the unknown node (xu, yu) is determined as

$$x_u = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad y_u = \frac{1}{n}\sum_{i=1}^{n} y_i \qquad (3.1)$$
This simple centroid technique has been investigated using a model with each
node having a simple circular range R in an infinite square mesh of reference
nodes spaced a distance d apart [16]. It is shown through simulations that, as the
overlap ratio R/d is increased from 1 to 4, the average RMS error in localization
is reduced from 0.5d to 0.25d.
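A minimal sketch of the centroid computation in equation (3.1); the list of in-range references is assumed to have been obtained from beacon receptions.

```python
def centroid_location(references_in_range):
    """Coarse localization per equation (3.1): the unknown node's position is
    estimated as the centroid of the reference positions it can hear."""
    n = len(references_in_range)
    xu = sum(x for x, _ in references_in_range) / n
    yu = sum(y for _, y in references_in_range) / n
    return (xu, yu)

# Example: beacons heard from three reference nodes.
print(centroid_location([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
```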
3.4.3 Geometric constraints
If the bounds on radio or other signal coverage for a given node can be
described by a geometric shape, this can be used to provide location estimates by intersecting the constrained location regions implied by multiple reference nodes.

(Figure: constrained location region of an unknown node relative to a reference node, for disc, quadrant, sector, and annulus constraints.)
When the upper bounds on these regions are tight, the accuracy of this geometric approach can be further enhanced by incorporating negative information
about which reference nodes are not within range [54].
Figure 3.3 Illustration of the ID-CODE technique showing uniquely identifiable regions (regions identified by the reference sets {A,C}, {A,F}, {C,H}, {F,H})
As we have seen, time-of-flight techniques show poor performance due to precision constraints, and RSS techniques, although somewhat better, are still limited
by fading effects. A more promising technique is the combined use of ultrasound/acoustic and radio signals to estimate distances by determining the TDoA
of these signals [164, 183, 223]. This technique is conceptually quite simple, and
is illustrated in Figure 3.4. The idea is to simultaneously transmit both the radio
and acoustic signals (audible or ultrasound) and measure the times Tr and Ts of the
arrival of these signals respectively at the receiver. Since the speed of the radio signal is much larger than the speed of the acoustic signal, the distance is then simply estimated as (Ts − Tr) · Vs, where Vs is the speed of the acoustic signal.

(Figure 3.4: the transmitter emits the RF and acoustic signals simultaneously at time T0; the receiver records their respective arrival times Tr and Ts.)
One minor limitation of acoustic ranging is that it generally requires the nodes
to be in fairly close proximity to each other (within a few meters) and preferably
in line of sight. There is also some uncertainty in the calculation because the
speed of sound varies depending on many factors such as altitude, humidity,
and air temperature. Acoustic signals also show multi-path propagation effects
that may impact the accuracy of signal detection. These can be mitigated to a
large extent using simple spread-spectrum techniques, such as those described in
[61]. The basic idea is to send a pseudo-random noise sequence as the acoustic
signal and use a matched filter for detection (instead of using a simple chirp and
threshold detection).
On the whole, acoustic TDoA ranging techniques can be very accurate in practical settings. For instance, it is claimed in [183] that distance can be estimated
to within a few centimeters for node separations under 3 meters. Of course,
the tradeoff is that sensor nodes must be equipped with acoustic transceivers in
addition to RF transceivers.
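The distance computation itself is simple arithmetic. The sketch below applies the (Ts − Tr) · Vs estimate, treating the RF propagation delay as zero (the usual simplification) and using the standard first-order approximation for the speed of sound in air as a function of temperature.

```python
def tdoa_distance(t_radio, t_sound, temp_celsius=20.0):
    """Range estimate from simultaneously transmitted RF and acoustic signals:
    d = (Ts - Tr) * Vs. RF propagation delay is treated as zero, and the speed
    of sound uses the standard first-order temperature approximation."""
    v_sound = 331.3 + 0.606 * temp_celsius  # m/s in air, approximate
    return (t_sound - t_radio) * v_sound

# A 10 ms arrival gap at 20 C corresponds to roughly 3.4 m.
print(tdoa_distance(0.0, 0.010))
```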
The location of the unknown node (x0, y0) can be determined based on measured distance estimates $\hat{d}_i$ to n reference nodes {(x1, y1), …, (xi, yi), …, (xn, yn)}. This can be formulated as a least squares minimization problem.

Let di be the correct Euclidean distance to the ith of the n reference nodes, i.e.:

$$d_i = \sqrt{(x_i - x_0)^2 + (y_i - y_0)^2} \qquad (3.3)$$

Thus the difference between the measured and actual distances can be represented as

$$\rho_i = \hat{d}_i - d_i \qquad (3.4)$$

The least squares minimization problem is then to determine the (x0, y0) that minimizes $\sum_{i=1}^{n} \rho_i^2$. This problem can be solved by the use of gradient descent techniques. Alternatively, it can be linearized: squaring the distance equations gives

$$d_i^2 = (x_i - x_0)^2 + (y_i - y_0)^2, \quad i = 1, \ldots, n \qquad (3.5)$$

By subtracting out the nth equation from the rest, we would have n − 1 equations of the following form:

$$x_i^2 + y_i^2 - x_n^2 - y_n^2 - d_i^2 + d_n^2 = 2x_0(x_i - x_n) + 2y_0(y_i - y_n) \qquad (3.6)$$

This system can be written in matrix form as

$$A\mathbf{x} = B \qquad (3.7)$$

where $\mathbf{x} = (x_0, y_0)^T$, the ith row of A is $[\,2(x_i - x_n) \;\; 2(y_i - y_n)\,]$, and the ith entry of B is $x_i^2 + y_i^2 - x_n^2 - y_n^2 - d_i^2 + d_n^2$. The least squares solution is then

$$\mathbf{x} = (A^T A)^{-1} A^T B \qquad (3.8)$$

Solving for the above may not directly yield a numerical solution if the matrix A is ill-conditioned, so a recommended approach is to instead use the pseudo-inverse A+ of the matrix A:

$$\mathbf{x} = A^+ B \qquad (3.9)$$
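A sketch of this linearized solution using NumPy's pseudo-inverse, following equations (3.6)–(3.9); the reference coordinates and distances in the example are illustrative.

```python
import numpy as np

def multilaterate(refs, dists):
    """Linearized least-squares localization (equations 3.6-3.9): subtract the
    last reference's equation from the others and solve A x = B using the
    pseudo-inverse. refs: list of (x, y); dists: measured distances."""
    refs = np.asarray(refs, dtype=float)
    d = np.asarray(dists, dtype=float)
    xn, yn, dn = refs[-1, 0], refs[-1, 1], d[-1]
    A = 2.0 * (refs[:-1] - refs[-1])                 # rows: [2(xi-xn), 2(yi-yn)]
    B = (refs[:-1] ** 2).sum(axis=1) - xn**2 - yn**2 - d[:-1] ** 2 + dn**2
    return np.linalg.pinv(A) @ B                     # x = A+ B (equation 3.9)

# Example: true location (2, 3) with exact distances to four references.
refs = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = np.array([2.0, 3.0])
dists = [np.linalg.norm(true - np.array(r)) for r in refs]
print(multilaterate(refs, dists))  # approximately [2. 3.]
```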
Another possibility for localization is the use of angular estimates instead of distance estimates. Angles can potentially be estimated by using rotating directional
beacons, or by using nodes equipped with a phased array of RF or ultrasonic
receivers. A very simple localization technique, involving three rotating reference beacons at the boundary of a sensor network providing localization for all
interior nodes, is described in [143]. A more detailed description of AoA-based
triangulation techniques is provided in [147].
Angulation with ranging is a particularly powerful combination [27]. In theory,
if the angular information provided to a given reference node can be combined
with a good distance estimate to that reference node, then localization can be
performed with a single reference using polar coordinate transformation. While
the accuracy and precision with which angles in real systems can be determined
are unclear, significant improvements can be obtained by combining accurate
ranging estimates with even coarse-grained angle estimates.
An alternative to measuring distances or angles that is possible in some contexts is to use a pre-determined map of signal coverage in different locations
of the environment, and use this map to determine where a particular node is
located by performing pattern matching on its measurements. An example of
this technique is RADAR [5]. This technique requires the prior collection of empirical measurements (or a high-fidelity simulation model) of signal strength
statistics (mean, variance, median) from different reference transmitters at various locations. It is also important to take into account the directional orientation
of the receiving node, as this can result in significant variations. Once this
information is collected, any node in the area is localized by comparing its
measurements from these references to determine which location matches the
received pattern best. This technique has some advantages; in particular, as a pure RF technique it has the potential to perform better than the RSS-based distance-estimation and triangulation approach we discussed before. However, the key drawback of the technique is that it is very location specific and
requires intensive data collection prior to operation; also it may not be useful in settings where the radio characteristics of the environment are highly
dynamic.
The ecolocation technique [238] uses the relative ordering of received radio
signal strengths for different references as the basis for localization. It works as
follows:
1. The unknown node broadcasts a localization packet.
2. Multiple references record their RSSI reading for this packet and report it to
a common calculation node.
3. The multiple RSSI readings are used to determine the ordered sequence of
references from highest to lowest RSSI.
4. The region is scanned for the location for which the correct ordering of
references (as measured by Euclidean distances) has the best match to the
measured sequence. This is considered the location of the unknown node.
In an ideal environment, the measured sequence would be error free, and
ecolocation would return the correct location region. However, in real environments, because of multi-path fading effects, the measured sequence is likely to
be corrupted with errors. Some references, which are closer than others to the
true location of the unknown node, may show a lower RSSI, while others, which
are farther away, may appear earlier in the sequence. Therefore the sequence
must be decoded in the presence of errors. This is why a notion of best match
is needed.
The best match is quantified by deriving the n(n − 1)/2 pair-wise ordering constraints (e.g. reference A is closer than reference B, reference B is closer than
reference C, etc.) at each location, and determining how many of these constraints
are satisfied/violated in the measured sequence. The location which provides
the maximum number of satisfied constraints is the best match. Simulations
and experiments suggest that ecolocation can provide generally more accurate
localizations compared with other RF-only schemes, including triangulation using
distance estimates. Intuitively, this is because the ordered relative sequence of
RSSI values at the references provides robustness to fluctuations in the absolute
RSSI value.
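A brute-force sketch of this constraint-matching search over a grid of candidate locations; the grid resolution, region extent, and example RSSI values are illustrative assumptions.

```python
import math

def ecolocation(refs, rssi, grid_step=0.05, extent=1.0):
    """Sketch of ecolocation: scan a grid of candidate locations and return the
    one whose distance-based pairwise ordering of references best matches the
    ordering implied by the measured RSSI values (higher RSSI => assumed closer).

    refs: list of (x, y) reference positions; rssi: one reading per reference."""
    n = len(refs)
    # Pairwise constraints from measurements: i closer than j iff rssi[i] > rssi[j].
    measured = [(i, j) for i in range(n) for j in range(n)
                if i != j and rssi[i] > rssi[j]]
    best_loc, best_score = None, -1
    steps = int(extent / grid_step) + 1
    for a in range(steps):
        for b in range(steps):
            p = (a * grid_step, b * grid_step)
            d = [math.dist(p, r) for r in refs]
            # Count how many measured ordering constraints this location satisfies.
            score = sum(1 for (i, j) in measured if d[i] < d[j])
            if score > best_score:
                best_loc, best_score = p, score
    return best_loc

# Example: references at the corners; RSSI ordering consistent with a point
# near (0.2, 0.3) (strongest signal from the nearest reference).
print(ecolocation([(0, 0), (1, 0), (0, 1), (1, 1)], [-40, -60, -55, -70]))
```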
Geometric constraints can often be expressed in the form of linear matrix inequalities and linear constraints [40]. This applies to radial constraints (two nodes are determined to be within range R of each other), annular constraints (a node is determined to be within ranges [Rmin, Rmax] of another), angular constraints (a node is determined to be within a particular angular sector of another), as well
as other convex constraints. Information about a set of reference nodes together
with these constraints (which provide the inter-node relationships amongst reference as well as unknown nodes) describes a feasible set of constraints for a
semidefinite program. By selecting an appropriate objective function for the program, the constraining rectangle, which bounds the location for each unknown
node, can be determined.
When using bounding rectangles, a distributed iterative solution can be
used [54]. In this solution, at each step nodes broadcast to their neighbors their current constrained region, which is calculated based on the overheard information about their neighbors' constrained regions at the previous step. If continued
for a sufficient number of iterations, or until there is no longer a significant
improvement in the bounds, this can provide a solution that is near or at optimal.
Network localization can also be performed in the presence of mobile reference/target nodes [54]. If the mobile node is a reference and able to provide
an accurate location beacon, then it can substantially improve localization over
time, because each new observation of the moving beacon introduces additional
constraints. In theory the location error can be reduced to an arbitrarily small
quantity if the moving beacon is equally likely to move to any point in the
network. If the mobile node is a non-cooperative target, then the distributed
iterative algorithm can be extended to provide simultaneous network localization
and tracking with performance that improves over time.
3.6.3 RSS-based joint estimation
If radio signal strengths can be measured between all pairs of nodes in the network
that are within detection range, then a joint maximum likelihood estimation
(MLE) technique can be used to determine the location of unknown nodes in a
network [154]. In the joint MLE technique, first an expression is derived for the
likelihood that the obtained matrix of power measurements would be received
given a particular location set for all nodes; the objective is then to find the
location set that maximizes this likelihood. The performance of this joint MLE
technique has been verified through simulations and experiments to show that
localization of the order of 2 meters is possible when there is a high density of
unknown nodes, even if there are only a few reference nodes sparsely placed.
3.6.4 Iterative multilateration
approach). The algorithm is quite simple. It applies the basic triangulation technique for node localization (see Section 3.5.3 above) in an iterative manner to
determine the locations of all nodes. One begins by determining the location
of an unknown node that has the most reference nodes in its neighborhood. In
a distributed version, the location of any node with sufficient references in its
neighborhood may be calculated as the initial step. This node is then added to the
set of reference nodes and the process is repeated. Figure 3.5 shows an example
of a network with one possible sequence in which unknown nodes can each
compute their location so long as at least three of their neighbors have known or
already computed locations.
Note that a version of iterative multilateration can also be utilized if only
connectivity information is available. In such a case, a centroid calculation could
be used at each iterative step by the unknown nodes, instead of using distance-based triangulation.
The iterative multilateration technique suffers from two shortcomings: first, it
may not be applicable if there is no node that has sufficient (3 for the 2D plane)
reference nodes in its neighborhood; second, the use of localized unknown nodes
as reference nodes can introduce substantial cumulative error in the network
localization (even if the more certain high-reference neighborhood nodes are
used earlier in the iterative process).
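A compact sketch of the iterative scheme, reusing the linearized least-squares fix from Section 3.5.3; the dictionary-based network representation and example ranges are assumptions for illustration.

```python
import math
import numpy as np

def solve_position(refs, dists):
    """Linearized least-squares position fix from >= 3 references
    (the pseudo-inverse method of Section 3.5.3)."""
    refs, d = np.asarray(refs, float), np.asarray(dists, float)
    A = 2.0 * (refs[:-1] - refs[-1])
    B = ((refs[:-1] ** 2).sum(1) - (refs[-1] ** 2).sum()
         - d[:-1] ** 2 + d[-1] ** 2)
    return np.linalg.pinv(A) @ B

def iterative_multilateration(known, unknown, dist):
    """known: {node: (x, y)}; unknown: set of node ids;
    dist: {(u, v): measured distance from u to localized node v}.
    Localize any unknown node with >= 3 localized neighbors, promote it to a
    reference, and repeat until no further progress is possible."""
    known, unknown = dict(known), set(unknown)
    progress = True
    while unknown and progress:
        progress = False
        for u in list(unknown):
            nbrs = [(v, dist[(u, v)]) for v in known if (u, v) in dist]
            if len(nbrs) >= 3:
                known[u] = tuple(solve_position([known[v] for v, _ in nbrs],
                                                [r for _, r in nbrs]))
                unknown.discard(u)
                progress = True
    return known  # any ids left in `unknown` could not be localized

# Example: one unknown node "u" at (1, 1) ranging to three references.
known = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (0.0, 2.0)}
dist = {("u", v): math.dist((1.0, 1.0), p) for v, p in known.items()}
print(iterative_multilateration(known, {"u"}, dist))
```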
3.6.5 Collaborative multilateration
Figure 3.5 An example sequence of iterative multilateration (In = nth iteration)
3.6.7 Refinement
Once a possible initial estimate for the location of unknown nodes has
been determined through iterative multilateration/collaborative multilateration or
the distance-vector estimation approaches, additional refinement steps can be
applied [182]. Each node continues to iterate, obtaining its neighbors' location estimates and using them to calculate an updated location using triangulation.
After some iterations, the position updates become small and this refinement
process can be stopped.
3.6.8 Force-calculation approach
For each neighbor j, an unknown node i computes a force proportional to the discrepancy between the measured distance $\hat{d}_{i,j}$ and the distance $d_{i,j}$ implied by the current position estimates, directed along the unit vector $\hat{u}_{i,j}$ from i toward j:

$$\vec{F}_{i,j} = (\hat{d}_{i,j} - d_{i,j}) \, \hat{u}_{i,j} \qquad (3.10)$$

$$\vec{F}_i = \sum_{j \in H_i} \vec{F}_{i,j} \qquad (3.11)$$

where Hi denotes the set of neighbors of node i.
Each unknown node then updates its position in the direction of the resulting
vector force in small increments over several iterations (with the force being
recalculated at each step). However, it should be kept in mind that this technique
may be susceptible to local minima.
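A sketch of one possible update loop for equations (3.10) and (3.11); the step size and iteration count are arbitrary choices, and, as noted above, the process can settle into local minima.

```python
import math

def force_refine(pos, measured, neighbors, step=0.05, iters=200):
    """Force-based refinement per equations (3.10)-(3.11): each node moves in
    small increments along the resultant of spring-like forces that grow with
    the mismatch between measured and current inter-node distances.

    pos: {node: [x, y]} current estimates (mutated in place);
    measured: {(i, j): distance estimate}; neighbors: {i: iterable of j}."""
    for _ in range(iters):
        for i, nbrs in neighbors.items():
            fx = fy = 0.0
            for j in nbrs:
                dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
                d = math.hypot(dx, dy) or 1e-9
                mag = d - measured[(i, j)]   # positive: currently too far, pull in
                fx += mag * dx / d           # unit vector toward j, scaled
                fy += mag * dy / d
            pos[i][0] += step * fx
            pos[i][1] += step * fy
    return pos
```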
3.6.9 Multi-dimensional scaling
Given a network with a sparse set of reference nodes, and a set of pair-wise
distances between neighboring nodes (including reference and unknown nodes),
another network localization approach utilizes a data analysis technique known
as multi-dimensional scaling (MDS) [193]. It consists of the following three
steps:
1. Use a distance-vector algorithm (similar to DV-distance) to generate an n × n matrix M, whose (i, j) entry contains the estimated distance between nodes i and j.
In some scenarios, we may encounter sensor networks that are deployed in such
an ad hoc manner, without GPS capabilities, that there are no reference nodes
whatsoever. In such a case, the best that can be hoped for is to obtain the location
of the network nodes in terms of relative, instead of absolute, coordinates. While
such a map is not useful for location stamping of sensor data, it can be quite useful
for other functions, such as providing the information required to implement
geographic routing schemes.
The multi-dimensional scaling problem (described above) can provide such a
relative map, by simply eliminating step 3. Rao et al. [170] also develop such a
technique for creating a virtual coordinate system for a network where there are
no reference nodes and also where no distance estimates are available (unlike
with MDS). Their algorithm is described as a progression of three scenarios with
successively fewer assumptions:
1. All (and only) nodes at the boundary of the network are reference nodes.
2. Nodes at the boundary are aware that they are at the boundary, but are not
reference nodes.
3. There are no reference nodes in the network, and no nodes are aware that
they are at the boundary.
In the first scenario, all nodes execute a simple iterative algorithm for localization. Unknown interior nodes begin by assuming a common initial coordinate
(say [0,0]), then at each step, each unknown node determines its location as the
centroid of the locations of all its neighbors. It is shown that this algorithm tends
to stretch the locations of network nodes through the location region. When
the algorithm converges, nodes have determined a location that is close to their
nearest boundary nodes. Figure 3.6 gives an example of a final solution.
Figure 3.6 Example final solutions of the iterative virtual-coordinate algorithm: panels (a) and (b) show node positions over a 0–50 coordinate range
While the final solution is generally not accurate, it is shown that for greedy
geographic routing it results in only slightly longer routing paths and potentially
even slightly better routing success rates (as non-ideal positions can sometimes
improve over the local optima that arise in greedy geographic routing).
The second scenario can be reduced approximately to the first. This can be done by having the border nodes first flood messages to communicate with each other and determine the pair-wise hop-counts between themselves. These hop-counts are then used in a triangulation algorithm to obtain virtual coordinates for the set B of all border nodes by minimizing

$$\sum_{i,j \in B} \left( hops(i, j) - dist(i, j) \right)^2 \qquad (3.12)$$

where hops(i, j) is the number of hops between border nodes i and j, and dist(i, j) their Euclidean distance for given virtual coordinates. An additional bootstrapping mechanism ensures that all nodes calculate consistent virtual coordinates.
Finally, the third scenario can be reduced to the second. Any node that is
farthest away from a common node in terms of hop-count with respect to all
its two-hop neighbors can determine that it is on the border. This hop-count
determination is performed through a flood from one of the bootstrap nodes.
returns once each node has about 6–8 neighbors on average. It also suggests,
somewhat surprisingly, that increasing the fraction of beacon nodes from 4% to
20% does not dramatically decrease the localization error (under the assumptions of uniform placement, high density, and low ranging error).
3.7.2 Unique network localization
The key result concerning the conditions for a network to be uniquely localizable is the following:
Theorem 6
A network N is uniquely localizable if and only if the weighted grounded graph G′N corresponding to it is globally rigid.
There are two terms here that need to be explained: the weighted grounded graph and global rigidity. The weighted grounded graph G′N is constructed from the graph described by network N (with each edge weighted by the corresponding distance) by adding additional edges between all pairs of reference nodes, labelled
with the distance between them (which can be readily calculated, since reference
positions are known).
We shall give an intuitive definition of global rigidity. Consider a configuration
graph of points in general position on the plane, with edges connecting some of
them to represent distance constraints. Is there another configuration consisting
of different points on the plane that preserves all the distance constraints on
the edges (excluding trivial changes, such as translations, rotations, and mirror
images)? If there is not, the configuration graph is said to be globally rigid in
the plane. Figure 3.7 gives examples of non-globally rigid and globally rigid
configuration graphs.
There exist polynomial algorithms to determine whether a given configuration
graph is globally rigid in the plane, and hence to determine if a given network is
uniquely localizable. However, the problem of realizing globally rigid weighted
graphs (which is closely related to actually determining possible locations of
the unknown nodes in the corresponding network) is NP-hard. While this means that in the worst case there exist no known tractable algorithms to solve all instances, in the case of geometric random graphs with at least three reference nodes within range of each other, there exists a critical radius threshold that is $O\left(\sqrt{\frac{\log n}{n}}\right)$, beyond which the network is uniquely localizable with high probability.

Figure 3.7 Examples of configuration graphs that are not globally rigid ((a), (b)) and that are globally rigid ((c), (d))
3.8 Summary
Determining the geographic location of nodes in a sensor network is essential
for many aspects of system operation: data stamping, tracking, signal processing,
querying, topology control, clustering, and routing. It is important to develop
algorithms for scenarios in which only some nodes have known locations.
The design space of localization algorithms is quite large. The selection of a
suitable algorithm for a given application and its performance depends upon several key factors, such as: what information about known locations is already available, whether the problem is to locate a cooperative node, how dynamic location
changes are, the desired accuracy, and the constraints placed on hardware. On the
basis of what needs to be localized, the location algorithms that have been proposed can be broadly classified into two categories: (i) node localization algorithms,
which provide the location of a single unknown node given a number of reference nodes, and (ii) network localization algorithms, which provide the location
of multiple unknown nodes in a network given other reference nodes. The node
localization algorithms are often a building-block component of network localization algorithms. The accuracy of the localization algorithms is often dependent
crucially upon how detailed the information obtained from reference nodes is.
The node localization algorithms we discussed include centroids, the use of
overlapping geometric constraints, triangulation using distance estimates obtained
using received signal strength and time difference of arrival, as well as AoA and
pattern-matching approaches. For triangulation, TDoA techniques provide very
accurate ranging at the expense of slightly more complex hardware. For RSS-based systems, an alternative to ranging-based triangulation for dense deployments is the ecolocation technique, which uses sequence-based decoding instead
of absolute RSS values.
Network localization techniques include joint estimation techniques, iterative
and collaborative multilateration, force-calculation, and multi-dimensional scaling. Even when no reference points are available, it is possible to construct a
useful map of relative locations.
On the theoretical front, the Cramér-Rao bound on the error variance of unbiased estimators is useful in analyzing the performance of localization techniques.
Rigidity theory is useful in formalizing the necessary and sufficient conditions
for the existence of unique network localization.
Exercises
3.1
circles of radius: (a) R = d/2, (b) R = d, and (c) R = 2d. In each
case, identify the different unique centroid solutions that can be obtained
(depending on the location of the unknown node) and the corresponding
distinct regions. Estimate the worst case and average location estimate
error in each case.
3.2
3.4
3.5
RSSI-based distance estimates: Using the statistical model of equation (3.2), assuming d0 = 1 m, η = 3, σ = 7, generate a scatter plot of
RSSI-based estimated distance versus true distance. What do you observe?
3.6
are (a) 1.8, 1.2, and 3 respectively; and if they are (b) 2, 2, and 8
respectively. What is the resultant location error in each case?
3.7
Figure 3.9 A network localization problem for exercise 3.8. Shaded nodes are reference
nodes with known locations in parentheses: A (0,0), B (10,0), C (0,10), D (10,10), and
E (5,5); all other nodes (F, J, K) are unknown
3.8
3.9
3.10
Time synchronization
4.1 Overview
Given the need to coordinate the communication, computation, sensing, and
actuation of distributed nodes, and the spatio-temporal nature of the monitored
phenomena, it is no surprise that an accurate and consistent sense of time is
essential in sensor networks. In this chapter, we shall discuss the many motivations for time synchronization, the challenges involved, as well as some of the
solutions that have been proposed.
Distributed wireless sensor networks need time synchronization for a number
of good reasons, some of which are described below:
1. For time-stamping measurements: Even the simplest data collection applications of sensor networks often require that sensor readings from different
sensor nodes be provided with time stamps in addition to location information.
This is particularly true whenever there may be a significant and unpredictable
delay between when the measurement is taken at each source and when it is
delivered to the sink/base station.
2. For in-network signal processing: Time stamps are needed to determine
which information from different sources can be fused/aggregated within the
network. Many collaborative signal processing algorithms, such as those for
tracking unknown phenomena or targets, are coherent and require consistent
and accurate synchronization.
3. For localization: Time-of-flight and TDoA-based ranging techniques used
in node localization require good time synchronization.
4. For cooperative communication: Some physical layer multi-node cooperative communication techniques involve multiple transmitters transmitting
in-phase signals to a given receiver. Such techniques [105] have the potential to provide significant energy savings and robustness, but require tight
synchronization.
5. For medium-access: TDMA-based medium-access schemes also require that
nodes be synchronized so that they can be assigned distinct slots for collision-free communication.
6. For sleep scheduling: As we shall see in the following chapters, one of the
most significant sources of energy savings is turning the radios of sensor
devices off when they are not active. However, synchronization is needed
to coordinate the sleep schedules of neighboring devices, so that they can
communicate with each other efficiently.
7. For coordinated actuation: Advanced applications in which the network
includes distributed actuators in addition to sensing require synchronization
in order to coordinate the actuators through distributed control algorithms.
4.2 Key issues

The frequency of a node i's clock oscillator can be modeled as

    fi(t) = f0 + Δf + ḟ·t + rf(t)    (4.1)

where f0 is the ideal frequency, Δf the frequency offset, ḟ the frequency drift,
and rf(t) a random error term. Then, assuming t = 0 as the initial reference time,
the associated clock reads time Ci(t) at time t, which is given as:

    Ci(t) = Ci(0) + (1/f0) ∫_0^t fi(τ) dτ = Ci(0) + t + (Δf/f0)·t + (ḟ/f0)·(t²/2) + rc(t)    (4.2)
where rc(t) is the random clock error term corresponding to the error term rf(t)
in the expression for oscillator frequency. Frequency drift and the random error
term may be neglected to derive a simpler linear model for clock non-ideality:

    Ci(t) = βi + αi·t    (4.3)

where βi is the clock offset at the reference time t = 0 and αi the clock drift
(rate of change with respect to the ideal clock). The more stable and accurate the
clock, the closer βi is to 0, and the closer αi is to 1. A clock is said to be fast if
αi is greater than 1, and slow otherwise.
Manufactured clocks are often specified with a maximum drift rate parameter
ρ, such that 1 − ρ ≤ αi ≤ 1 + ρ. Motes, typical sensor nodes, have ρ values on the
order of 40 ppm (parts per million), which corresponds to a drift rate of 40 μs
per second.
Note that any two clocks that are synchronized once may drift from each other
at a rate of at most 2ρ. Hence, to keep their relative offset bounded by δ seconds
at all times, the interval τsync between successive synchronization events
between these clocks must be kept bounded: τsync ≤ δ/(2ρ).
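As a quick sanity check of this bound, the following Python sketch (the function name and the numbers are ours, chosen only for illustration) computes the maximum re-synchronization interval for a given drift specification:

```python
def max_resync_interval(rho, delta):
    """Longest allowable interval (in seconds) between synchronization
    events so that two clocks, each with maximum drift rate rho
    (dimensionless, e.g. 40e-6 for 40 ppm), stay within delta seconds
    of each other. Their relative offset grows at a rate of at most
    2*rho, hence tau_sync <= delta / (2 * rho)."""
    return delta / (2.0 * rho)

# Two 40 ppm mote clocks that must agree to within 1 ms have to be
# re-synchronized at least every 12.5 seconds.
print(max_resync_interval(rho=40e-6, delta=1e-3))  # -> 12.5
```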
Perhaps the simplest approach to time synchronization in a distributed system
is through periodic broadcasts of a consistent global clock. In the US, the National
Institute of Standards and Technology runs the radio stations WWV, WWVH, and
WWVB, which continuously broadcast timing signals based on atomic clocks. For
instance WWVB, located at Fort Collins, Colorado, broadcasts timing signals on
a 60 kHz carrier wave at high power (50 kW). Although the transmitter
has an accuracy of about 1 μs, due to communication delays, synchronization
of around only 10 ms is possible at receivers with this approach. While this can
be implemented relatively inexpensively, the accuracy may not be sufficient for
all purposes. Satellite-based GPS receivers can provide much better accuracy,
of the order of 1 μs or less, albeit at a higher expense, and they operate only
in unobstructed environments. In some deployments it may be possible to use
beacons from a subset of GPS-equipped nodes to provide synchronization to all
nodes. In yet other networks, there may be no external sources of synchronization.
The requirements for time synchronization can vary greatly from application
to application. In some cases the requirements may be very stringent (say, 1 μs
synchronization between clocks); in others, they may be very lax (of the
order of several milliseconds or even more). In some applications it will be
necessary to keep all nodes synchronized globally to an external reference, while
in others it will be sufficient to keep nodes synchronized locally and pair-wise
to their immediate neighbors. In some applications it may be necessary to keep
nodes synchronized at all times, and in other cases it may suffice only to know,
post facto, the times when particular events occurred. Additional factors that
determine the suitability of a particular synchronization approach to a given
sensor network context include the corresponding energy costs, convergence
times, and equipment costs.
The text by Tanenbaum and van Steen [209] provides a good discussion of many of these algorithms;
we shall summarize these only briefly here.
There is no guarantee that the estimates obtained of CA(t) − CB(t) and CB(t) −
CC(t) add up to the estimate obtained for CA(t) − CC(t). An alternative technique
has been developed for obtaining globally consistent minimum-variance pair-wise
synchronization estimates, based on flow techniques for resistive networks [103].
4.4.2 Pair-wise sender-receiver synchronization (TPSN)
The timing-sync protocol for sensor networks (TPSN) [55] provides for classical
sender-receiver synchronization, similar to Cristian's algorithm. As shown in
Figure 4.3, node A transmits a message that is stamped locally at node A as T1. This
is received at node B, which stamps the reception time as its local time T2. Node B
then sends the packet back to node A, marking the transmission time locally at B as
T3. This is finally received at node A, which marks the reception time as T4.
Let the clock offset between nodes A and B be Δ and the propagation delay
between them be d. Then

    T2 = T1 + Δ + d    (4.4)
    T4 = T3 − Δ + d    (4.5)

Solving these gives the offset and the delay:

    Δ = ((T2 − T1) − (T4 − T3))/2    (4.6)
    d = ((T2 − T1) + (T4 − T3))/2    (4.7)
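As a worked example of equations (4.4)-(4.7), the following Python sketch (function name and timestamps are illustrative) recovers the offset and the propagation delay from the four TPSN timestamps:

```python
def tpsn_offset_delay(t1, t2, t3, t4):
    """Solve equations (4.4)-(4.5) for the clock offset Delta and the
    propagation delay d from the four TPSN timestamps."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # equation (4.6)
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # equation (4.7)
    return offset, delay

# Made-up timestamps: B's clock is 5 units ahead of A's, and the
# one-way propagation delay is 1 unit.
print(tpsn_offset_delay(t1=0.0, t2=6.0, t3=10.0, t4=6.0))  # -> (5.0, 1.0)
```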
Figure 4.3 The two-way message exchange of TPSN: node A transmits at its local time T1;
node B receives at its local time T2 and, after a processing time, transmits at T3; node A
receives at T4
Under the linear clock model of equation (4.3), the clocks of two nodes A and B
are related as

    CA(t) = βAB + αAB·CB(t)    (4.8)

where the relative drift and relative offset are given by

    αAB = αA/αB,  βAB = βA − (αA/αB)·βB    (4.9)
Assuming the same pair-wise message exchange as in TPSN for nodes A and B,
we have that the transmission time T1 and reception time T4 are measured in node
A's local clock, while reception time T2 and transmission time T3 are measured
in node B's local clock. We therefore get the following temporal relationships:

    T1 < βAB + αAB·T2    (4.10)
    T4 > βAB + αAB·T3    (4.11)
The principle behind this approach to synchronization is to use these inequalities
to determine constraints on the clock offset and drift. Each time such a
pair-wise message exchange takes place, expressions (4.10) and (4.11) provide
additional constraints that together result in upper and lower bounds on the feasible
values of the drift and offset.
Figure 4.4 Constraints on the clock relationship: pairs of timestamps such as (T2,T1),
(T10,T9), and (T11,T12) define a constrained region through which the correct linear fit
must pass
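A minimal sketch of this constraint-intersection idea, assuming the linear clock relation of equation (4.8) and made-up timestamp data: for each candidate drift value, inequality (4.10) bounds the feasible offset from below and inequality (4.11) bounds it from above, and intersecting the bounds over all exchanges shrinks the feasible region:

```python
def feasible_offsets(exchanges, alpha):
    """For a fixed candidate drift alpha, intersect the offset bounds
    implied by inequalities (4.10)-(4.11) over all message exchanges
    (T1, T2, T3, T4). Returns the feasible (low, high) interval for the
    offset beta, or None if alpha is infeasible."""
    lo, hi = float("-inf"), float("inf")
    for t1, t2, t3, t4 in exchanges:
        lo = max(lo, t1 - alpha * t2)  # T1 < beta + alpha*T2 => beta > T1 - alpha*T2
        hi = min(hi, t4 - alpha * t3)  # T4 > beta + alpha*T3 => beta < T4 - alpha*T3
    return (lo, hi) if lo < hi else None

# Two hypothetical exchanges; scan a small grid of candidate drifts.
exchanges = [(0.0, 5.2, 10.0, 5.9), (20.0, 25.1, 30.0, 26.0)]
for alpha in (0.98, 1.00, 1.02):
    print(alpha, feasible_offsets(exchanges, alpha))
```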
The flooding time synchronization protocol (FTSP) [134] aims to further reduce
the following sources of uncertainties, which exist in both RBS and TPSN:
1. Interrupt handling time: This is the delay in waiting for the processor to
complete its current instruction before transferring the message in parts to the
radio.
The simple linear model of equation (4.3) is only reasonable for very short time
intervals. In the real world, clock drift can vary over time quite drastically due
to environmental temperature and humidity changes. This is the reason clock
drifts must be continually reassessed. The naïve approach to this reassessment
is to re-synchronize nodes periodically at the same interval. However, a static
synchronization period must be chosen conservatively to accommodate a range
of environments, and it does not take into account the possibility of temporal
variation in the drift rate.
4.5 Coarse-grained data synchronization

For many applications it suffices to know when a measurement was taken, rather
than to keep all clocks synchronized. This can be achieved by having each of the
n hops on a packet's route record the delay the packet experiences there, so that
the generation time of the enclosed measurement can be recovered at the
destination as

    t = Td − Σ_{k=1}^{n} δk    (4.12)

where Td is the delivery time at the destination and δk is the delay measured at
hop k. This approach assumes that time stamps can be added as close to the packet
transmission and reception as possible at the link layer. It is thus robust to many of
the sources of latency uncertainty that contribute to error in other synchronization
approaches.
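A minimal sketch of this calculation, under the reconstruction of equation (4.12) above (function and variable names are ours):

```python
def measurement_time(arrival_time, per_hop_delays):
    """Recover when a measurement was generated, as in equation (4.12):
    subtract the per-hop delays recorded at the link layer of each hop
    from the arrival time at the destination."""
    return arrival_time - sum(per_hop_delays)

# A packet arrives at t = 105.0 s having spent 2.0, 1.5, and 1.5 s
# (queueing plus transmission) at its three hops.
print(measurement_time(105.0, [2.0, 1.5, 1.5]))  # -> 100.0
```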
4.6 Summary
Like localization, time synchronization is also a core configuration problem
in WSNs. It is a fundamental service building block useful for many network functions, including time stamping of sensor measurements, coherent distributed signal processing, cooperative communication, medium-access, and sleep
scheduling. Synchronization is necessitated by the random clock drifts that vary
depending on hardware and environmental conditions.
Two approaches to fine-grained time synchronization are the receiver-receiver
synchronization technique of RBS, and the more traditional sender-receiver
approach of TPSN. While the latter provides for greater accuracy on a single link,
RBS has the advantage that multiple receivers can be synchronized with fewer
messages. It has been shown that these can provide synchronization of the order
of tens of microseconds. The flooding time synchronization protocol further
improves performance by another order of magnitude by reducing uncertainties
due to jitter in interrupt handling and coding/modulation. Thus it appears that even
fairly demanding synchronization requirements can be met in principle through
such algorithms. However, there is an energy-accuracy tradeoff involved in long-term
synchronization, because the accuracy is determined by the frequency with
which nodes are periodically re-synchronized. It has been shown that adaptive
prediction-based drift-estimation techniques can reduce this overhead further.
For some applications, instead of using inter-node synchronization, coarse-grained
data time stamps can be obtained by timing packets as they move through
the network and by performing a simple calculation at the final destination.
Exercises
4.1
4.2
4.3
4.4
4.5
4.6
Wireless characteristics
5.1 Overview
Wireless communication is both a blessing and a curse for sensor networks.
On the one hand, it is key to their flexible and low-cost deployment. On the
other hand, it imposes considerable challenges because wireless communication
is expensive and wireless link conditions are often harsh and vary considerably
in both space and time due to multi-path propagation effects.
Wireless communications have been studied in depth for several decades and
entire books are devoted to the subject [171, 207]. The goal of this chapter is by
no means to survey all that is known about wireless communications. Rather, we
will focus on three sets of simple models that are useful in understanding and
analyzing higher-layer networking protocols for WSN:
1. Link quality model: a realistic model showing how packet reception rate
varies statistically with distance. This incorporates both an RF propagation
model and a radio reception model.
2. Energy model: a realistic model for energy costs of radio transmissions,
receptions, and idle listening.
3. Interference model: a realistic model that incorporates the capture effect
whereby packets from high-power transmitters can be successfully received
even in the presence of simultaneous traffic.
5.2 Wireless link quality

In an idealized model, two nodes share a perfect link (100% packet reception rate)
if they are within some transmission range R, and a non-existent link (0% packet
reception rate) if they are outside this range. This ideal model, as we have already
seen in the preceding chapters, is the
basis of many algorithms and is widely used in analytical and simulation studies.
While it is useful in some contexts, the ideal model can be quite misleading when
designing and evaluating routing protocols. Therefore we seek more realistic
models based on real-world observations.
Figure 5.2 Realistic packet reception rate statistics with respect to inter-node distance
The main lesson to take away is that the transitional region is of particular
concern in WSN, as it contains high-variance, unreliable links. As we shall see,
the transitional region can be explained and understood using simple concepts
from communication theory.
Under the commonly used log-normal path-loss model, the received power at a
distance d from the transmitter is

    Pr,dB(d) = Pt,dB − PLdB(d),  where  PLdB(d) = PLdB(d0) + 10·η·log10(d/d0) + Xσ,dB    (5.2)

In the above expressions, Pr,dB(d) is the received power in dB, Pt,dB is the transmitted
power, PLdB(d) is the path-loss in dB at a distance d from the transmitter,
d0 is a reference distance, η is the path-loss exponent, which indicates the rate
at which the mean signal decays with respect to distance, and Xσ,dB is a zero-mean
Gaussian random variable (in dB) with standard deviation σ. Figure 5.3
illustrates this model.
This basic model could be extended in many ways. Obstacles such as walls
can be modeled by adding an additional absorption term to the path-loss. The
random fading term X could be modeled as a multi-dimensional random process
to incorporate both temporal and spatial correlations. Even richer models that
explicitly characterize the impact of other factors besides distance (e.g. the
antenna orientation and height) may be needed for some studies.
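The following Python sketch samples received power under the log-normal model of equation (5.2); all parameter values here are hypothetical:

```python
import math
import random

def received_power_dB(pt_dB, pl_d0_dB, eta, sigma, d, d0=1.0):
    """Sample the received power (in dBm) under equation (5.2): a mean
    decay of 10*eta*log10(d/d0) dB relative to the reference distance,
    plus a zero-mean Gaussian shadowing term of standard deviation sigma."""
    mean_loss = pl_d0_dB + 10.0 * eta * math.log10(d / d0)
    shadowing = random.gauss(0.0, sigma)
    return pt_dB - mean_loss + shadowing

# Hypothetical radio: 0 dBm transmit power, 55 dB loss at d0 = 1 m,
# eta = 3, sigma = 4 dB; sample the received power at 20 m.
print(received_power_dB(pt_dB=0.0, pl_d0_dB=55.0, eta=3.0, sigma=4.0, d=20.0))
```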
The bit error rate for a given radio is a function of the received signal-to-noise
ratio (SNR). The exact form of this function depends on the physical layer
details of the radio, in particular the modulation and encoding scheme used.
Depending on the frame size, and any frame-level encoding used, this can in turn
be used to derive the relationship between the packet reception rate (PRR) and
the receiver SNR. For instance, for a Mica2 mote, which uses a non-coherent
Figure 5.3 Illustration of received signal strength versus distance in a path-loss model with
log-normal variance
FSK radio, the packet reception rate for a packet of length L bytes is given as
the following function of the SNR [245]:

    PRR = ( 1 − (1/2)·exp( −(SNR/2)·(1/0.64) ) )^(8L)    (5.3)
Figure 5.4 shows how the PRR varies with the received signal strength, based
on both analytical derivation and empirical measurements for a typical WSN
node. The curve is sigmoidal, and the key observation is that there are two
significant radio thresholds with respect to the received signal strength: a lower
threshold below which the PRR is close to zero and a higher threshold beyond
which it is close to one.
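A small Python sketch of equation (5.3) makes the sigmoidal transition visible; it assumes the SNR in the formula is on a linear (not dB) scale:

```python
import math

def prr_mica2(snr_linear, frame_bytes):
    """Packet reception rate for a non-coherent FSK radio, equation (5.3):
    the per-bit success probability raised to the number of bits in the
    frame (8 per byte). snr_linear is the SNR on a linear scale."""
    bit_success = 1.0 - 0.5 * math.exp(-snr_linear / 2.0 / 0.64)
    return bit_success ** (8 * frame_bytes)

# PRR for a 50-byte frame across a few SNR values: near zero below the
# lower threshold, near one above the upper threshold.
for snr_dB in (2, 6, 10, 14):
    snr = 10 ** (snr_dB / 10.0)
    print(snr_dB, round(prr_mica2(snr, 50), 4))
```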
5.2.4 The transitional region
The composition of the received power versus distance curve with the upper and
lower SNR thresholds for packet receptions yields the PRR versus distance behavior for links [245]. Figure 5.5 illustrates this composition, along with the three
distinct regions observed empirically, with respect to distance: the connected
Figure 5.4 Packet reception rate versus received power (dBm) for a typical WSN node
(analytical and empirical curves)
Figure 5.5 Composition of the RF propagation model and the radio reception model
explains the empirical observations of three distinct regions
region, the transitional region, and the disconnected region. In the connected
region, which occurs at close distances, the received signal strength is sufficient
to provide an SNR higher than the upper threshold with high probability. In the
disconnected region, which occurs at far distances, the signal strength is so low
that the SNR is below the lower threshold with high probability. At distances in
between, we have the transitional region, where there are significant link quality
variations as the signal strength values cause the SNR to fluctuate between the
two thresholds due to fading effects. This approach allows a nice decoupling of
environmental parameters (η, σ) and radio parameters (lower and upper SNR
thresholds). It can be used in simulations to generate link PRR statistics, as in
Figure 5.2.
It should be noted that in the transitional region, even if there is no time
variation due to lack of mobility, different links will show very different qualities.
Minor receiver variations in the transitional region can cause high incidence of
asymmetric links if the SNR fluctuates between the radio thresholds. And, if
there is time-varying fading, its effects will be even more pronounced in the
transitional region. Thus we can see that the transitional region is of particular
concern, and has a big impact on the robustness of the wireless network.
This concern is further exacerbated by the fact that the area of the transitional
region, particularly in comparison with the connected region, can be quite large.
Even if the width of the transitional region (in distance) is the same as that of the
connected region, because of the quadratic increase in area with respect to radius
there will be three times as much area covered by the transitional region. This, in
turn, implies three times as many unreliable links as reliable links in the WSN.
Let us define the transitional region (TR) coefficient γ as the ratio of the radii
of the transitional and connected regions. In view of the relative undesirability of
the transitional region with respect to the connected region, it is generally better
for the TR coefficient to be as low as possible.
It can be shown that the TR coefficient does not vary with transmission power,
because both the connected and transitional regions grow proportionally with the
transmission power. Table 5.1 shows how the γ coefficient behaves with respect
to η and σ. It is understandable that with lower σ, when the variations due to
fading are low, the relative size of the transitional region is smaller. The table
also shows that, paradoxically, higher-η environments characterized by rapid
transmission power decay also show a lower TR coefficient, at the cost of a
smaller connected region (though this can be combatted through power control).
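The following sketch illustrates one way such region boundaries (and hence a TR coefficient) can be computed from the model, following the construction of Figure 5.5; the threshold and channel values are hypothetical, and the ±2σ margin is our reading of the figure rather than a quoted formula:

```python
import math

def region_radii(pt_dB, pl_d0_dB, eta, sigma, thr_upper_dB, thr_lower_dB):
    """Distances at which the transitional region begins and ends: the
    connected region ends where the mean received power minus 2*sigma
    crosses the upper radio threshold, and the transitional region ends
    where the mean plus 2*sigma crosses the lower threshold."""
    def dist_at_power(p_dB):
        # invert mean power P(d) = pt - pl_d0 - 10*eta*log10(d), d0 = 1 m
        return 10 ** ((pt_dB - pl_d0_dB - p_dB) / (10.0 * eta))
    r_connected = dist_at_power(thr_upper_dB + 2.0 * sigma)
    r_trans_end = dist_at_power(thr_lower_dB - 2.0 * sigma)
    return r_connected, r_trans_end

# Hypothetical numbers: thresholds at -85/-95 dBm, eta = 3, sigma = 4 dB.
r1, r2 = region_radii(0.0, 55.0, 3.0, 4.0, -85.0, -95.0)
print(r1, r2, (r2 - r1) / r1)  # TR coefficient as (r2 - r1)/r1
```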
One way to deal with the unreliability introduced by the transitional region
in practice is to periodically monitor the quality of links and blacklist any
links determined to be of poor quality (e.g. weak links with low reception rate,
Table 5.1 The TR coefficient γ for different values of the path-loss exponent η
and the shadowing standard deviation σ

            σ = 2    σ = 4    σ = 6    σ = 8
  η = 2      2.3      4.2      7.2     12.0
  η = 4      0.8      1.3      1.9      2.6
  η = 6      0.5      0.7      1.0      1.4
  η = 8      0.3      0.5      0.7      0.9
asymmetric links) [225, 204, 62]. Properly implemented, blacklisting can provide
a useful abstraction of an ideal topology for efficient communication and routing.
Table 5.2 Typical power consumption costs and startup times for different radios

  Radio                 Frequency/Data rate   Sleep        Receive   Transmit   Startup
  CC 2420 [18]          -                     60 μW        59 mW     52 mW      0.6 ms
  CC 1000 [17]          -                     0.6 μW       29 mW     50 mW      2.0 ms
  MIT μAMPS-1 [139]     -                     negligible   279 mW    330 mW     0.5 ms
  IEEE 802.11b [165]    -                     negligible   1.4 W     2.25 W     1.0 ms
5.4 The SINR capture model for interference
Figure 5.6 Interference in wireless medium: (a) an idealized model, (b) the capture effect
model. In the capture effect model simultaneous successful receptions are possible so long
as SINR is sufficiently high at each receiver
node i. Then node 1 can receive a packet successfully from node 0, even if there
is a set of interfering nodes I that are simultaneously transmitting packets, as
long as

    P0·g_{0,1} / ( Σ_{i∈I} g_{i,1}·Pi + N1 ) > c    (5.5)
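A direct Python transcription of the capture condition (5.5), with made-up powers and gains:

```python
def capture_ok(p0, g0, interferers, noise, threshold_c):
    """Check the SINR capture condition of equation (5.5): node 1 receives
    node 0's packet if the received power P0*g01 exceeds threshold_c times
    the total interference-plus-noise power. interferers is a list of
    (Pi, gi1) pairs for simultaneously transmitting nodes."""
    signal = p0 * g0
    interference = sum(pi * gi for pi, gi in interferers)
    return signal / (interference + noise) > threshold_c

# One strong and one weak interferer; capture threshold c = 4 (~6 dB).
print(capture_ok(p0=1.0, g0=1e-3, interferers=[(1.0, 1e-4), (1.0, 5e-5)],
                 noise=1e-5, threshold_c=4.0))  # -> True
```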
5.5 Summary
The wireless nature of the sensor networks we examine brings both advantages
in terms of flexibility and cost as well as great challenges, because of the harsh
radio channels. While radio propagation and fading models have been studied in
depth for many years for mobile personal communications, their implication for
multi-hop networks is only now beginning to be studied and understood.
A number of empirical studies have shown that designing and analyzing multi-hop wireless networks using high-level zero-one abstractions for links can be
quite misleading. In terms of distance, three distinct regions of link quality have
been identified: the connected region, where links are always high quality; the
disconnected region, where links rarely exist; and the intermediate transitional
region. The transitional region is of particular concern because it is considerable
in size and contains many links that are dynamically varying and asymmetric.
A simple realistic statistical model of packet reception rates with respect to
transmitterreceiver distance can be derived and used to analyze the transitional
region and to simulate realistic wireless topologies.
Communications can be a significant source of energy consumption in wireless networks, and the data suggest that it is important to minimize radio idle
receive mode time by turning off the radio as much as possible when not in
use; switching costs must also be kept in mind. The distance-independent term
in the radio energy model can be quite significant, so that tuning the transmission power down for short-range transmissions may not provide large energy
gains.
Depending on the modulation scheme used and the particular placement of
communicating nodes, simultaneous transmissions within the same neighborhood
can take place without significant packet loss. This effect is best modeled using
the SINR-based capture model.
Exercises
5.1
5.2
Neighborhood link quality: For the scenario in exercise 5.1, if nodes are
distributed uniformly with a high density, what percentage of a node's
neighbors (defined here as nodes that are within the connected or transitional
regions) will lie in the transitional region?
5.3
5.4
5.5
5.6
6.1 Overview
An essential characteristic of wireless communication is that it provides an
inherently shared medium. All medium-access control (MAC) protocols for
wireless networks manage the usage of the radio interface to ensure efficient
utilization of the shared bandwidth. MAC protocols designed for wireless sensor
networks have an additional goal of managing radio activity to conserve energy.
Thus, while traditional MAC protocols must balance throughput, delay, and fairness concerns, WSN MAC protocols place an emphasis on energy efficiency
as well.
We shall discuss in this chapter a number of contention-based as well as
schedule-based MAC protocols that have been proposed for WSN. A common
theme through all these protocols is putting radios into a low-power sleep mode,
either periodically or whenever possible, when a node is neither receiving nor
transmitting.
The simplest forms of medium-access are unslotted Aloha and slotted Aloha.
In unslotted Aloha, each node behaves independently and simply transmits a
packet whenever it arrives; if a collision occurs, the packet is retransmitted after
a random waiting period. The slotted version of Aloha works in a similar manner,
but allows transmissions only in specified synchronized slots. Another classic
MAC protocol is the carrier sense multiple access (CSMA) protocol. In CSMA,
a node that wishes to transmit first listens to the channel to assess whether it is
clear. If the channel is idle, the node proceeds to transmit. If the channel is busy,
the node waits a random back-off period and tries again. CSMA with collision
detection is the basic technique used in IEEE 802.3/Ethernet.
Figure 6.1 Problems with basic CSMA in wireless environments: (a) hidden node,
(b) exposed node
These problems are duals of each other in a sense: in the hidden node problem
packets collide because sending nodes do not know of another ongoing transmission, whereas in the exposed node problem there is a wasted opportunity to send
a packet because of misleading knowledge of a non-interfering transmission.
The key underlying mismatch is that it is not the transmitter that needs to sense
the carrier, but the receiver. Some communication between the transmitter and
receiver is needed to solve these problems.
6.2.3 Medium-access with collision avoidance (MACA)
The MACA protocol by Karn [101] introduced the use of two control messages
that can (in principle) solve the hidden and exposed node problems. The control
messages are called request to send (RTS) and clear to send (CTS). The essence
of the scheme is that when a node wishes to send a message, it issues an RTS
packet to its intended recipient. If the recipient is able to receive the packet, it
issues a CTS packet. When the sender receives the CTS, it begins to transmit the
packet. When a nearby node hears an RTS addressed to another node, it inhibits
its own transmission for a while, waiting for a CTS response. If a CTS is not
heard, the node can begin its data transmission. If a CTS is received, regardless
of whether or not an RTS is heard before, a node inhibits its own transmission
for a sufficient time to allow the corresponding data communication to complete.
Under a number of idealized assumptions (e.g., ignoring the possibility of
RTS/CTS collisions, assuming bidirectional communication, no packet losses,
no capture effect) it can be seen that the MACA scheme can solve both the
hidden node problem and the exposed node problem. Using the earlier examples,
it solves the hidden node problem because node C would have heard the CTS
message and suppressed its colliding transmission. Similarly it solves the exposed
node problem because, although node C hears node B's RTS, it would not receive
the CTS from node A and thus can transmit its packet after a sufficient wait.
6.2.4 IEEE 802.11 MAC
Closely related to MACA is the widely used IEEE 802.11 MAC standard [95].
The 802.11 device can be operated in infrastructure mode (single-hop connection
to access points) or in ad hoc mode (multi-hop network). It also includes two
mechanisms known as the distributed coordination function (DCF) and the point
coordination function (PCF). The DCF is a CSMA-CA protocol (carrier sense
multiple access with collision avoidance) with ACKs. A sender first checks to
see if it should suppress transmission and back off because the medium is busy;
if the medium is not busy, it waits a period DIFS (distributed inter-frame spacing)
before transmitting. The receiver of the message sends an ACK upon successful
reception after a period SIFS (short inter-frame spacing). The RTS/CTS virtual
carrier sensing mechanism from MACA is employed, but only for unicast packets.
Nodes which overhear RTS/CTS messages record the duration of the entire
corresponding DATA-ACK exchange in their NAV (network allocation vector)
and defer access during this duration. An exponential backoff is used (a) when
the medium is sensed busy, (b) after each retransmission (in case an ACK is not
received), and (c) after a successful transmission.
In the second mechanism, PCF, a central access point coordinates medium-access by polling the other nodes for data periodically. It is particularly useful
for real-time applications because it can be used to guarantee worst-case delay
bounds.
The IEEE 802.15.4 standard is designed for use in low-rate wireless personal
area networks (LR-WPAN), including embedded sensing applications [94]. Most
of its unique features are for a beacon-enabled mode in a star topology.
In the beacon-enabled mode for the star topology, the IEEE 802.15.4 MAC
uses a superframe structure shown in Figure 6.2. A superframe is defined by a
periodic beacon signal sent by the PAN coordinator. Within the superframe there
is an active phase for communication between nodes and the PAN coordinator
and an inactive phase, which can be adjusted depending on the sleep duty
cycle desired. The active period has 16 slots that consist of three parts: the
beacon, a contention access period (CAP), and a collision-free period (CFP)
that allows for the allocation of guaranteed time slots (GTS). The presence of
the collision-free period allows for reservation-based scheduled access. Nodes
which communicate only on guaranteed time slots can remain asleep and need
Figure 6.2 The IEEE 802.15.4 superframe: a periodic beacon followed by a contention
access period (CAP) and a collision-free period (CFP) containing guaranteed time slots
(GTS) in the active phase, then an inactive sleep phase
only wake-up just before their assigned GTS slots. The communication during
CAP is a simple CSMA-CA algorithm, which allows for a small backoff period
to reduce idle listening energy consumption. A performance evaluation of this
protocol and its various settings and parameters can be found in [126].
While IEEE 802.15.4 can, in theory, be used for other topologies, the
beacon-enabled mode is not defined for them. In the rest of the chapter we
will concern ourselves with both contention-based and schedule-based energy-efficient MAC protocols that are relevant to multi-hop wireless networks.
There exist power management options in the infrastructure mode for 802.11.
Nodes inform the access point (AP) when they wish to enter sleep mode so
that any messages for them can be buffered at the AP. The nodes periodically
wake-up to check for these buffered messages. Energy savings are thus provided
at the expense of lower throughput and higher latency.
6.3.2 Power aware medium-access with signalling (PAMAS)
network are not affected adversely. However, there can still be considerable
energy wastage in the idle reception mode (i.e. the condition when a node has
no packets to send and there is no activity on the channel).
6.3.3 Minimizing the idle reception energy costs
Nodes need to be able to sleep to save energy when they do not have any communication activity and be awake to participate in any necessary communications.
The first solution is a hardware one: equipping each sensor node with two
radios. In such a hardware design, the primary radio is the main data radio, which
remains asleep by default. The secondary radio is a low-power wake-up radio that
remains on at all times. Such an idea is described in the PicoRadio project [166],
as well as by Shih et al. [195]. If the wake-up radio of a node receives a wake-up
signal from another node, it responds by waking up the primary radio to begin
receiving. This ensures that the primary radio is active only when the node has
data to send or receive. The underlying assumption motivating such a design is
that, since the wake-up radio need not do much sophisticated signal processing,
it can be designed to be extremely low power. A tradeoff, however, is that all
nodes in the broadcast domain of the transmitting node may be woken up.
El Hoiydi [83] and Hill and Culler [82] independently developed a similar rendezvous mechanism for waking up sleeping radios. In this technique, referred to
as preamble sampling or low-power listening, the receivers periodically wake-up
to sense the channel. If no activity is found, they go back to sleep. If a node
wishes to transmit, it sends a preamble signal prior to packet transmission. Upon
detecting such a preamble, the receiving node will change to a fully active receive
mode. The technique is illustrated in Figure 6.3.
The wake-up signal could potentially be sent over a high-level packet interface;
however, a more efficient approach is to implement this directly in the physical
layer thus the wake-up signal may be no more than a long RF pulse. The
detecting node then only checks for the radio energy on the channel to determine
whether the signal is present. Hill and Culler argue that this can reduce the
receiver check duty cycle to as low as 0.00125%, allowing an almost 2000-fold
improvement in lifetime compared with a packet-level wake-up (from about a
week to about 38 years). We should note that this scheme will also potentially
wake up all possible receivers in a given transmitter's neighborhood, though
mechanisms such as information in the header can be used to put them back to
sleep if the communication is not intended for them.
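The arithmetic behind the quoted lifetime gain is easy to verify; the baseline duty cycle below is our assumption, back-derived from the reported 2000-fold figure:

```python
# If idle listening dominates energy use, lifetime scales roughly
# inversely with the receiver check duty cycle.
packet_level_duty = 2.5e-2   # assumed packet-level wake-up baseline (2.5%)
pulse_check_duty = 1.25e-5   # 0.00125%, as reported by Hill and Culler

improvement = packet_level_duty / pulse_check_duty
print(improvement)                  # -> 2000.0
print(7 * improvement / 365.25)     # one week of lifetime scaled up, ~38 years
```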
Figure 6.3 Preamble sampling (low-power listening): the receiver periodically samples the
channel and switches to fully active reception when it detects a sender's preamble

6.4.3 WiseMAC
WiseMAC improves on preamble sampling as follows: through additional contents
of ACK packets, each node learns the periodic sampling times
of its neighboring nodes, and uses this information to send a shorter wake-up
preamble at just the right time. The preamble duration is determined by the
potential clock drift since the last synchronization. Let TW be the receiver sampling
period, θ the clock drift, and L the interval between communications; then the
duration of the preamble TP need only be:

    TP = min(4θL, TW)    (6.1)
The packets in WiseMAC also contain a 'more' bit (this is also found in the
IEEE 802.11 power save protocol), which the transmitter uses to signal to the
receiver that it needs to stay awake a little longer in order to receive additional
packets intended for it.
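A one-line transcription of equation (6.1) shows how much shorter the preamble can be than a full sampling period; the numbers are illustrative:

```python
def wisemac_preamble(theta, L, T_W):
    """Preamble duration from equation (6.1): long enough to cover the
    receiver's sampling-point uncertainty 4*theta*L accumulated since the
    last synchronization, but never longer than one sampling period."""
    return min(4.0 * theta * L, T_W)

# 40 ppm drift, 10 s since the last exchange, 500 ms sampling period:
# the wake-up preamble need only be 1.6 ms instead of the full 500 ms.
print(wisemac_preamble(theta=40e-6, L=10.0, T_W=0.5))  # -> 0.0016
```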
6.4.4 Transmitter/receiver-initiated cycle receptions (TICER/RICER)
Figure 6.4 Asynchronous sleep using (a) TICER and (b) RICER
In the transmitter-initiated cycle receiver technique (TICER), illustrated in
Figure 6.4(a), a sender with data to send transmits a sequence of short
RTS signals, each followed by a short time when it monitors the channel. When
the receiver detects an RTS, it responds right away with a CTS signal. If the
sender detects a CTS signal in response to its RTS, it begins transmission
of the packet. Thus the key difference from preamble sampling is that in TICER
the sender sends a sequence of interrupted signals instead of a single long
preamble, and waits for an explicit signal from the receiver before transmitting.
In the receiver-initiated cycle receiver technique (RICER), illustrated in
Figure 6.4(b), a receiving node periodically wakes up to execute a three-phase
monitor / send wake-up beacon / monitor sequence. A source that wishes to transmit
wakes up and stays in a monitoring state. When it hears a wake-up beacon
from a receiver, it begins transmission of the data. The receiver in a monitor
state that sees the start of a data packet remains on until the packet reception is
completed.
However, one subtlety pertaining to the TICER/RICER techniques is that,
while the RTS/CTS/wake-up signals involved are easy to implement at a higher
packet level, it can be more challenging to do so at a lower-power RF analog
level. This is because to match the transmission to the correct receiver, the
receiver needs to uniquely identify itself to the transmitter.
6.4.5 Reconfigurable MAC protocol (B-MAC)
  B-MAC configuration                        ROM (bytes)   RAM (bytes)
  Basic B-MAC                                3046          166
  Basic B-MAC + LPL                          4092          170
  Basic B-MAC + LPL + ACK                    4386          172
  Basic B-MAC + LPL + ACK + RTS/CTS          4616          277

Figure 6.5 Components of B-MAC and their memory requirements (in bytes)
The S-MAC protocol [234, 237] is a wireless MAC protocol designed specifically
for WSN. As shown in Figure 6.6, it employs a periodic cycle, where each
node sleeps for a while, and then wakes up to listen for an interval. The duty cycle
of this listen-sleep schedule, which is assumed to be the same for all nodes,
provides for a guaranteed reduction in energy consumption. During initialization,
nodes remain awake and wait a random period to listen for a message providing
the sleep-listen schedule of one of their neighbors. If they do not receive such
Figure 6.6 The periodic on-off (listen-sleep) schedules followed by S-MAC nodes
a message, they become synchronizer nodes, picking their own schedules and
broadcasting them to their neighbors. Nodes that hear a neighbor's schedule
adopt that schedule and are called follower nodes. Some boundary nodes may
need to either adopt multiple schedules or adopt the schedule of one neighbor (in
the latter case, in order to deliver messages successfully, the boundary nodes will
need to know all neighbor node schedules). The nodes periodically transmit these
schedules to accommodate any new nodes joining the network. Although nodes
must still periodically exchange packets with neighbors for synchronization, this
is not a major concern because the listening period is typically expected to be
very large (on the order of a second) compared with clock drifts. Sleep schedules
are not followed during data transmission. An extension to the basic S-MAC
scheme called adaptive listening [237] allows the active period to be of variable
length, in order to mitigate sleep latency to some extent.
Aside from the sleep scheduling, S-MAC is quite similar to the medium-access
contention in IEEE 802.11, in that it utilizes RTS/CTS packets. Both physical
carrier sense and the virtual carrier sense based on NAV are employed. S-MAC
implements overhearing avoidance, whereby interfering nodes are sent to sleep
so long as the NAV is non-zero (the NAV, as in 802.11, is set upon reception
of RTS/CTS packets corresponding to the ongoing transmission). S-MAC also
provides for fragmentation of larger data packets into several small ones, for all
of which only one RTS/CTS exchange is used.
It should be noted that the energy savings in S-MAC come at the expense
of potentially significant sleep latency: a packet travelling across the network
will need to pause (every few hops, depending on the settings) during the sleep
period of intermediate nodes.
to send any message to its intended receiver to interrupt its timeout. By the time
the sender can send, at the end of the contention period, the intended receiver
is already in sleep mode. Two possible solutions to the early sleep problem
are proposed and studied in [37]; we mention them only briefly here. The first
solution uses an explicit short FRTS (future request to send) control message
that can be communicated to the intended recipient asking it to wait for an
additional timeout period. The second solution is called full buffer priority,
in which a node prefers sending to receiving when its buffer is almost full.
With this scheme, a node has higher priority to send its own packet instead
of receiving another packet, and is able to interrupt the timeout of its intended
receiver.
For packets that need to traverse multiple hops, both S-MAC and T-MAC provide
energy savings at the expense of increased delay. This is because the packet can
traverse only a few hops in each cycle before it reaches a node that must go to
sleep. This is referred to as the data-forwarding interruption problem.
An application-specific solution to this problem is provided by the D-MAC
(data-gathering MAC) protocol [127], which applies only to flows on a predetermined data-gathering tree going up from the various network nodes to a
common sink. D-MAC essentially applies a staggered sleep schedule, where
nodes at each successive level up the tree follow a receive-transmit-sleep
sequence that is shifted to the right. These cycles are aligned so that a node at
level k is in the receiving mode when the node below it on the tree at level k + 1
is transmitting. This is illustrated in Figure 6.7.
The staggered schedule of D-MAC has many advantages: it allows data and
control packets (such as requests for adaptive extensions of the active period)
to sequentially traverse all the way up a tree with minimum delay; it allows
Figure 6.7 The staggered receive-transmit-sleep schedules of D-MAC at successive levels
of a data-gathering tree
requests for adaptive extensions of the active period to be propagated all the way
up the tree; it reduces interference by separating active periods at the different
levels; and it is also shown to reduce the number of nodes that need to be awake
when cycle adaptation occurs. To deal with contention and interference, D-MAC
also includes optional components referred to as data prediction and the use of
more-to-send (MTS) packets.
Despite these advantages, D-MAC in itself is not a general purpose MAC as
it applies only to one-way data-gathering trees. However, as we discuss next, the
notion of scheduling the wake-up times in such a way as to minimize delay can
be extended to other settings.
6.5.4 Delay-efficient sleep scheduling (DESS)
It has been shown that DESS is an NP-hard problem for arbitrary graphs, but
optimal solutions are known in the case of tree- and ring-based graphs and good
approximations can be found for a grid network. If nodes are allowed to adopt
multiple schedules, then even more significant improvements in delay can be
obtained; e.g., on a grid a node can be assigned four schedules of k slots each (one
each for transmissions to the left, right, top, and bottom neighbors), which are
periodically repeated. On a grid, using multiple schedules can reduce the delay
between two nodes that are d hops apart on the graph to O(d + k), while the average
fraction of wake-up times is still 1/k. On an arbitrary communication graph
with n nodes, the use of two schedules on an embedded tree enables delays that
are provably O((d + k)·log n), while maintaining the same 1/k wake-sleep ratio.
6.5.5 Asynchronous sleep schedules
This problem can be formulated for asymmetric designs (where different nodes
can have different wake-up slot functions) as well as symmetric designs (where
all nodes have the same WSF, except for cyclic time shifts). The authors show
the following lower bound for the number of active slots in each cycle.
Theorem 7
For any arbitrary WSF design, a necessary condition for C(u, v) ≥ m is that ku·kv ≥ m·T.
Thus, even for a single-slot overlap, the design must have a number of active
slots that is at least the square root of the total number of slots in the cycle. The
WSF design problem is related to the theory of combinatorial block designs. In
particular, a (T, k, m) symmetric block design is equivalent to a symmetric WSF
design that has T slots, with k active slots, such that any two schedules overlap in
m of them. It is further shown that techniques from block design theory
can be used to compute the desired WSF designs. In particular, it is shown that,
if p is a power of a prime, there exists a (p² + p + 1, p + 1, 1) design. Figure 6.8
shows an example of a (7,3,1) design.
Figure 6.8 An example (7,3,1) wake-up schedule design across seven nodes
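The difference-set construction behind such designs is easy to check in code. The sketch below builds a symmetric design from the (7,3,1) difference set {0, 1, 3} and verifies that any two schedules overlap in exactly one active slot; the base set is a standard example, not necessarily the one used in Figure 6.8:

```python
def cyclic_wsf_design(T, base_slots):
    """Build a symmetric wake-up schedule design by cyclically shifting a
    base active-slot set (a difference set) modulo T, one shift per node."""
    return [{(s + shift) % T for s in base_slots} for shift in range(T)]

# {0, 1, 3} is a (7,3,1) difference set: any two cyclic shifts of it
# share exactly one slot, so any two nodes' schedules always overlap
# in an active slot even without synchronization.
schedules = cyclic_wsf_design(7, {0, 1, 3})
overlaps = {len(a & b) for i, a in enumerate(schedules)
            for b in schedules[i + 1:]}
print(overlaps)  # -> {1}
```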
assignment offline, then distribute it back to the network [31]. However, such
solutions do not scale well with network size, particularly in the case of dynamic
environments. Decentralized approaches are therefore called for.
6.6.1 Stationary MAC and startup (SMACS)
One decentralized approach is the stationary MAC and startup algorithm proposed
in [203]. In this algorithm, each node need only maintain local synchronization.
During the starting phase, each node decides on a common communication slot
with a neighboring node through handshaking on a common control channel.
Each link also utilizes a unique randomly chosen frequency or CDMA frequency-hopping code. It is assumed that there are sufficiently many frequencies/codes to
ensure that there are no common frequency/time assignments within interference
range, and hence there is no contention. The slot is then used periodically, once
each cycle, for communication between the two nodes.
6.6.2 BFS/DFS-based scheduling
Figure 6.9 Time slot allocations for a data-gathering tree: (a) BFS, (b) DFS
from active to sleep and vice versa are large, since it minimizes such transitions
at each node. With DFS, each node does not have contiguous slots, but the slots
from each sensor source to the sink are contiguous, ensuring that intermediate
node buffers are not filled up during data-gathering. This provides low latency
to some extent; although the data must remain queued at the sensor node until
its transmission slot, once the data is transmitted it is guaranteed to reach the
sink in as many steps as the number of hops.
BFS has also been used for more tightly packed channel assignment for a
data-gathering tree in a scenario where interference constraints are taken into
account to provide spatial reuse [205]. In this slot-allocation scheme, each node
performs slot assignment sequentially. At each node's turn, it chooses time slots
from the earliest possible slot number for its children. Local message exchanges
ensure that the slot does not interfere with any already assigned to nodes within
two hops of the child. The number of slots to be assigned to each node is
pre-determined based on an earlier max-min fair bandwidth allocation phase.
With these techniques, though, it should be kept in mind that they generally
require global synchronization.
A significant concern with many of the TDMA schemes that provide guaranteed
slots to all nodes is that they are not flexible in terms of allowing the traffic from
each node to change over time. Reservation-based schemes such as ReSync [32]
provide greater flexibility.
In ReSync, each node in the network maintains the notion of an epoch based
on its local time alone, but it is assumed that each node's epoch lasts the
same duration (or can be synchronized with nearby neighbors accordingly). Each
node picks a regular time each epoch based on its local clock to send a short
intent message. It is assumed that this selection can be done without significant
collisions because the intent message duration is very short. By listening for
sufficiently long, each node must further learn when its neighbors send intent
messages so that it can wake-up in time to listen to them. When a node wishes
to transmit to another node, it indicates in the intent message when it plans to
transmit the data (this data transmission time is chosen randomly and indicated
as an increment rather than in absolute terms). The intended recipient will then
wake-up at the corresponding time (which it can calculate based on its own
local clock) in order to receive the message. ReSync does not incorporate an
RTS/CTS mechanism to prevent message collisions due to the hidden node
problem; however, since the data transmissions are scheduled randomly, any
collisions are not persistent.
6.6.4 Traffic-adaptive medium access (TRAMA)
absolute winner is the assumed transmitter unless the alternate winner is hidden
from the absolute winner and is in the possible transmitter set, in which case the
assumed transmitter is the alternate winner. Whenever the assumed transmitter
gives up, the need transmitter is the true transmitter. Nodes that are not in the
schedule listed by the transmitter can shift to sleep mode to save energy, while
the relevant transmitter and receiver must stay awake to complete the pertinent
communication. It is proved by Rajendran et al. [167] that TRAMA is a correct
protocol in that it avoids packet losses due to collisions or due to transmission
to sleeping nodes.
6.7 Summary
In wireless sensor networks, medium-access protocols provide a dual functionality, not only providing arbitration for access to the channel as in traditional
MAC protocols, but also providing energy efficiency by putting the radio to
sleep during periods of communication inactivity.
Given the diverse application contexts and the need for tunable tradeoffs in
WSN, the B-MAC protocol is a particularly elegant building-block approach
to provide basic functionality. B-MAC includes several distinct components
including low-power wake-up, clear channel assessment, acknowledgements, and
random backoff, that can be individually turned off or tuned by higher layers
as desired. Because of its flexibility, sophisticated MAC schemes, including
TDMA and sleep-scheduled contention-based schemes, can be easily built on
top of it.
Energy efficiency is a key concern for sensor network MAC protocols. As
we noted in the previous chapter, significant energy savings are possible by
avoiding idle listening. While there have been proposals to use a secondary
low-power wake-up radio to achieve this, the simpler low-power listening/preamble
sampling technique provides effectively the same benefit. Further savings are
possible by setting nodes on periodic sleep-wake cycles, as proposed in the
S-MAC technique. S-MAC has been extended to incorporate adaptive listening
to reduce the sleep latency caused by the periodic wake-up/sleep schedule, while
an enhancement to make S-MAC adaptive to traffic variation is addressed in
T-MAC. The problem of minimizing end-to-end latency while using sleep modes
has been addressed in the D-MAC protocol and the DESS work. Sleep-scheduling
techniques that eliminate the need for inter-node synchronization have also been
developed.
An important alternative to the above contention-based techniques is TDMA-based MAC protocols. It is trivial in TDMA protocols to avoid idle listening to
provide energy efficiency, since all communications in TDMA are pre-scheduled.
However, the tradeoff is that TDMA techniques involve higher-complexity
distributed algorithms and impose tight synchronization requirements. They are
also generally best suited when communication flows are somewhat predictable,
although schemes like TRAMA incorporate periodic rescheduling to handle
dynamic traffic conditions.
Exercises
6.1
6.2
6.3
6.4
DESS: For a 5 × 5 grid of sensors, propose a delay-efficient (DESS) allocation of reception wake-up slots if k = 5. How does the delay for this
scheme compare with that for the S-MAC style allocation of the same
active slot to all nodes?
6.5
Asynchronous slot design: Give a (13,4,1) design for the slotted asynchronous wake-up scheme.
6.6
8.1 Overview
Information routing in wireless sensor networks can be made robust and
energy-efficient by taking into account a number of pieces of state information
available locally within the network.
1. Link quality: As we discussed in Chapter 5, link quality metrics (e.g. packet
reception rates) obtained through periodic monitoring are very useful in making routing decisions.
2. Link distance: Particularly in the case of highly dynamic, rapidly fading environments, if link monitoring incurs too high an overhead, link distances can
be useful indicators of link quality and energy consumption.
3. Residual energy: In order to extend network lifetimes it may be desirable to
avoid routing through nodes with low residual energy.
4. Location information: If relative or absolute location information is available,
geographic routing techniques may be used to minimize routing overhead.
5. Mobility information: Recorded information about the nearest static sensor
node near a mobile node is also useful for routing.
We examine in this chapter several routing techniques that utilize such information to provide energy efficiency and robustness.
8.2 Metric-based approaches
If all wireless links are considered to be ideal error-free links, then routing
data through the network along shortest hop-count paths may be appropriate.
However, the use of shortest hop-count paths requires the distances of the
component links to be large. In a practical wireless system, such long links are
highly likely to be error-prone and to lie in the transitional region. Therefore, the
shortest hop-count path strategy will perform quite poorly in realistic settings. This has
been verified in a study [39], which presents a metric suitable for robust routing
in a wireless network. This metric, called ETX, minimizes the expected number
of total transmissions on a path. Independently, an almost identical metric called
the minimum transmission metric was also developed for WSN [225].
It is assumed that all transmissions are performed with ARQ in the form
of simple ACK signals for each successfully delivered packet. Let df be the
packet reception rate (probability of successful delivery) on a link in the forward
direction, and dr the probability that the corresponding ACK is received in the
reverse direction. Then, assuming each packet transmission can be treated as
a Bernoulli trial, the expected number of transmissions required for successful
delivery of a packet on the link is:
    ETX = 1/(df·dr)    (8.1)
This metric for a single link can then be incorporated into any relevant routing
protocol, so that end-to-end paths are constructed to minimize the sum of ETX
on each link on the path, i.e. the total expected number of transmissions on the
route.
Figure 8.1 shows three routes between a given source A and destination B,
each with a different number of hops and with labelled link qualities (only the
forward probabilities are shown; assume the reverse probabilities are all dr = 1).
Figure 8.1 Three routes from source A to destination B: a direct one-hop route with
dAB = 0.1, a two-hop route through F with dAF = dFB = 0.8, and a four-hop route through
C, D, and E with dAC = dCD = dDE = dEB = 0.9
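Working through equation (8.1) for the three routes of Figure 8.1 (a small Python sketch with our own function name):

```python
def path_etx(forward_rates, reverse_rates=None):
    """Total ETX of a path, equation (8.1) summed over its links: each
    link contributes 1/(df*dr) expected transmissions."""
    if reverse_rates is None:
        reverse_rates = [1.0] * len(forward_rates)  # perfect ACKs
    return sum(1.0 / (df * dr)
               for df, dr in zip(forward_rates, reverse_rates))

# The three routes of Figure 8.1 (reverse rates all 1):
print(path_etx([0.1]))                  # direct:       10.0 transmissions
print(path_etx([0.8, 0.8]))             # via F:         2.5 transmissions
print(path_etx([0.9, 0.9, 0.9, 0.9]))   # via C, D, E:  ~4.44 transmissions
```

Note that neither the lossy one-hop route nor the long high-quality route minimizes the expected number of transmissions; the two-hop route through F does.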
If the environment contains highly mobile objects, or if the nodes are themselves
mobile, the quality of links may fluctuate quite rapidly. In this case, use of
ETX-like metrics based on the periodic collection of packet reception rates may
not be useful/feasible.
Reliable routing metrics for wireless networks with rapid link quality fluctuations
have been derived analytically [106]. They explicitly model the wireless
channel as having multi-path fading with Rayleigh statistics (fluctuating
over time), and take an outage probability approach to reliability. Let d represent
the distance between transmitter and receiver, η the path-loss exponent, SNR the
normalized signal-to-noise ratio without fading, and f the fading state of the channel;
then the instantaneous capacity of the channel is described as:

    C = log( 1 + SNR·|f|²/d^η )    (8.2)

The outage probability Pout is defined as the probability that the instantaneous
capacity of the channel falls below the transmission rate R. It is shown that

    Pout = 1 − exp( −(2^R − 1)·d^η / SNR )    (8.3)
In this case the metric for each link is d^η with a proportional power setting,
which is referred to as the minimum energy route (MER) metric. It should
be noted, however, that here the energy that is being minimized is only the
distance-dependent output power term not the cost of receptions or other
distance-independent electronics terms. It turns out that the MER metric can also
be used to determine the route which maximizes the end-to-end reliability metric,
subject to a total power constraint.
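A small numerical illustration of the outage expression (8.3) as reconstructed above, assuming the capacity in equation (8.2) is measured in bits (log base 2), so that an outage occurs when the fading term falls below (2^R − 1)·d^η/SNR:

```python
import math

def outage_probability(R, snr, d, eta):
    """Outage probability under Rayleigh fading, equation (8.3): the
    chance that the instantaneous capacity log2(1 + SNR*|f|^2/d^eta)
    falls below the transmission rate R."""
    return 1.0 - math.exp(-(2 ** R - 1) * (d ** eta) / snr)

# Doubling the link distance with eta = 4 raises the outage sharply,
# which is why the d^eta (MER) link metric favors short hops.
for d in (1.0, 2.0):
    print(d, round(outage_probability(R=1.0, snr=1000.0, d=d, eta=4.0), 4))
```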
One key difference between MOR/MER metrics and the ETX metric is that
they do not require the collection of link quality metrics (which can change quite
rapidly in dynamic environments), but assume that the fading can be modelled by
a Rayleigh distribution. Also, unlike ETX, this work does not take into account
the use of acknowledgements.
The authors of [106] also propose and analyze the reliabilityenergy tradeoffs
for a simple technique for providing diversity in wireless routing that exploits
the wireless broadcast advantage. This is illustrated in Figure 8.2 by a simple
two-hop route from A to B to C. With traditional routing, the reliability of the
Figure 8.2 Illustration of relay diversity: (a) traditionally, C receives a packet from A successfully only if the transmissions from A to B and from B to C are both successful; (b) with relay diversity, delivery can also succeed if the transmission from A to B is directly overheard by C
A related innovative network-layer approach to robust routing that takes unique advantage of the broadcast wireless channel for diversity is the extremely opportunistic routing (ExOR) technique [10]. Unlike traditional routing techniques, in ExOR the identity of the node that is to forward a packet is not pre-determined before the packet is transmitted. Instead, the protocol ensures that the node closest to the destination that receives a given packet will forward the packet further. While this
technique does not explicitly use metric-based routing, the protocol is designed
to minimize the number of transmissions as well as the end-to-end routing delay
(Figure 8.3).
Figure 8.3 An ExOR example: a directed graph from source A to destination B, with links labelled by reception probabilities (0.9, 0.6, 0.5, 0.2) and candidate receivers B, D, E, C ordered by priority
Nodes farther from the source (yet closer to the destination) are less likely to receive a packet but, whenever
they do, they are in a position to act as forwarders. The almost counter-intuitive
approach of routing without pre-specifying the forwarding node thus saves on
expected delay as well as the number of transmissions.
As with the relay diversity technique described before, ExOR also requires a
larger set of receivers to be active, which may have an energy penalty. Moreover,
to determine the priority ordering of candidate receivers, the inter-node delivery
ratios need to be tracked and maintained.
The Gradient Broadcast mechanism (GRAB) [236] enhances the GRAd approach
by incorporating a tunable energy–robustness tradeoff through the use of credits.
Similar to GRAd, GRAB also maintains a cost field through all nodes in the
network. The packets travel from a source to the sink, with a credit value that is
decremented at each step depending on the hop cost. An implicit credit-sharing
mechanism ensures that earlier hops receive a larger share of the total credit in a
packet, while the later hops receive a smaller share of the credit. An intermediate
forwarding node with greater credit can consume a larger budget and send the
packet to a larger set of forwarding eligible neighbors. This allows for greater
spreading out of paths initially, while ensuring that the diverse paths converge
to the sink location efficiently. This is illustrated in Figure 8.4, which shows the
set of nodes that may be used for forwarding between a given source and the
sink.
The GRAB-forwarding algorithm works as follows. Each packet contains three fields: (i) R_o, the credit assigned at the originating node; (ii) C_o, the cost-to-sink at the originating node; and (iii) U, the budget already consumed from the source to the current hop. The first two fields never change in the packet, while
the last is incremented at each step, depending on the cost of packet transmission
(e.g., it could be related to the power setting of the transmission). To prevent
routing loops, only receivers with lower costs can be candidates for forwarding.
Each candidate receiver i with a cost-to-sink of C_i computes a metric α and a threshold θ as follows:

\[ \alpha = 1 - \frac{R_{oi}}{R_o} \qquad (8.4) \]

\[ \theta = \left(\frac{C_i}{C_o}\right)^{2} \qquad (8.5) \]

where

\[ R_{oi} = U - (C_o - C_i) \qquad (8.6) \]

The expression R_{oi} determines how much credit has already been used up in traversing from the origin to the current node. The metric α, therefore, is an estimate of the remaining credit of the packet. The threshold θ is a measure of the remaining distance to the sink. The candidate node will forward the message so long as α > θ. The square gives the threshold a weighting, so that the threshold is more likely to be exceeded in the early hops than the later hops, as desired.
The authors of GRAB show that the choice of the initial credit R_o provides a tunable parameter to increase robustness at the expense of greater energy consumption.
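A minimal sketch of the per-receiver forwarding test of Eqs. (8.4)–(8.6); the numeric values in the usage line are illustrative:

    def grab_should_forward(R_o: float, C_o: float, C_i: float, U: float) -> bool:
        """GRAB candidate-receiver test: forward while the remaining-credit
        estimate (alpha) exceeds the squared remaining-distance threshold (theta)."""
        R_oi = U - (C_o - C_i)        # credit consumed beyond the minimum, Eq. (8.6)
        alpha = 1.0 - R_oi / R_o      # estimated remaining credit, Eq. (8.4)
        theta = (C_i / C_o) ** 2      # remaining-distance threshold, Eq. (8.5)
        return alpha > theta

    # Early hop with little excess credit spent: alpha = 0.9 > theta = 0.81.
    print(grab_should_forward(R_o=10.0, C_o=100.0, C_i=90.0, U=11.0))  # True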
8.5 Lifetime-maximizing energy-aware routing
In an ideal, lightly loaded environment, assuming all links require the same
energy for the transmission of a packet, the traditional minimum hop-count
routing approach will generally result in minimum energy expended per packet.
If different links have uneven transmission costs, then the route that minimizes the energy expended in end-to-end delivery of a packet would be the shortest-path route computed using the metric T_{i,j}, the transmission energy for each link (i, j). However, in networks with heterogeneous energy levels, this may not be the best strategy to extend the network lifetime (defined, for instance, as the time till the first node exhausts its energy).
The basic power-aware routing scheme [200] selects routes in such a way as to prefer nodes with longer remaining battery lifetime as intermediate nodes. Specifically, let R_i be the remaining energy of an intermediate node i; then the link metric used is c_{i,j} = 1/R_i. Thus, the path P (indicating the sequence of transmitting nodes for each hop) selected by a shortest-cost route determination algorithm (such as Dijkstra's or Bellman–Ford) would be one that minimizes

\[ \sum_{i \in P} \frac{1}{R_i} \qquad (8.7) \]
This metric can be generalized to a parametrized family of link costs of the form c_{i,j} = e_{i,j}^a R_i^{-b} E_i^c, where e_{i,j} is the transmission energy of link (i, j), R_i the residual energy of node i, and E_i its initial energy; this form is consistent with the special cases enumerated next.
This general formulation captures a wide range of metrics. If (a, b, c) = (0, 0, 0), we have a minimum-hop metric; if (a, b, c) = (1, 0, 0), we have the minimum energy-per-packet metric; if b = c, then normalized residual energies are used, while c = 0 implies that absolute residual energies are used; if (a, b, c) = (0, 1, 0), we have the inverse-residual-energy metric suggested in [200]. However, simulation results in [20] suggest that a non-zero a and relatively large b = c terms provide the best performance (e.g. (1, 50, 50)).
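As a hedged illustration, the parametrized cost can be written as a single function; the functional form below is our reconstruction from the special cases listed above, not a formula quoted verbatim from [20]:

    def link_cost(e_ij: float, R_i: float, E_i: float,
                  a: float, b: float, c: float) -> float:
        """Parametrized energy-aware link cost c_ij = e_ij^a * R_i^(-b) * E_i^c
        (reconstructed form; parameter roles follow the surrounding discussion)."""
        return (e_ij ** a) * (R_i ** -b) * (E_i ** c)

    print(link_cost(2.0, 5.0, 10.0, 0, 0, 0))  # 1.0 -> minimum-hop metric
    print(link_cost(2.0, 5.0, 10.0, 1, 0, 0))  # 2.0 -> minimum energy per packet
    print(link_cost(2.0, 5.0, 10.0, 0, 1, 0))  # 0.2 -> inverse residual energy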
8.5.3 Load-balanced energy-aware routing
In this scheme, each node i maintains a cost estimate C_{i,j} for delivering a packet to the destination through each neighbor j in its neighbor set N_i, and assigns to each neighbor a forwarding probability inversely proportional to that cost:

\[ P_{i,j} = \frac{C_{i,j}^{-1}}{\sum_{k \in N_i} C_{i,k}^{-1}} \qquad (8.8) \]

Node i then calculates the expected minimum cost to destination for itself as

\[ C_i = \sum_{j \in N_i} P_{i,j}\, C_{i,j} \qquad (8.9) \]
Each time the node needs to route any packet, it forwards to any of its
neighbors randomly with the corresponding probability. This provides for load
balancing, preventing a single path from rapidly draining energy.
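A minimal sketch of this probabilistic next-hop selection, per Eq. (8.8); the neighbor names and costs are illustrative:

    import random

    def pick_next_hop(costs: dict) -> str:
        """Sample a forwarding neighbor with probability inversely
        proportional to its cost-to-destination, per Eq. (8.8)."""
        inv = {j: 1.0 / c for j, c in costs.items()}
        total = sum(inv.values())
        r, acc = random.uniform(0, total), 0.0
        for j, w in inv.items():
            acc += w
            if r <= acc:
                return j
        return j  # fallback for floating-point edge cases

    print(pick_next_hop({"B": 2.0, "C": 4.0, "D": 8.0}))  # B chosen ~4x as often as D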
8.5.4 Flow optimization formulations
Chang and Tassiulas [21] also formulate the global problem of maximizing the
network lifetime with known origin–destination flow requirements as a linear
program (LP), and propose a flow augmentation heuristic technique based on
iterated saturation of shortest-cost paths to solve it. The basic idea is that in each
iteration every origin node computes the shortest cost path to its destination,
and augments the flow on this path by a small step. After each step the costs
are recalculated, and the process repeated until any node runs out of its initial
energy Ei .
We should note that such LP formulations have been widely used by several
authors in the literature to study performance bounds and derive optimal routes.
Bhardwaj et al. use LP formulations to derive bounds on the lifetime of sensor
networks [9]. LP-based flow augmentation techniques are used by Sadagopan
and Krishnamachari [179] for a related problem involving the maximization
of total data gathered for a finite energy budget. Techniques to convert multi-session flows obtained from such linear programming formulations into single-session flows are discussed in [151]. Kalpakis et al. [99] also present an integer
flow formulation for maximum lifetime data-gathering with aggregation, along
with near-optimal heuristics. Ordonez and Krishnamachari present non-linear
flow optimization problems that arise when variable power-based rate control
is considered, and compare the gains obtained in energy-efficient data-gathering
with and without power control [110].
To illustrate this approach, consider the following simple flow-based linear program. Let there be n source nodes in the network, numbered 1 through n, and a sink labelled n + 1. Let f_{i,j} be the data rate on the link from node i to node j, C_{i,j} the cost of transmitting a bit on that link, R the reception cost per bit at any node, T the total time of operation under consideration, E_i the available energy at each node i, and B_i the total bandwidth available at each node i.
\[ \max \; \sum_{i=1}^{n} f_{i,n+1}\, T \]

subject to, for all i ≠ n + 1:

\[ \sum_{j=1}^{n+1} f_{i,j} \;-\; \sum_{j=1}^{n} f_{j,i} \;\geq\; 0 \qquad \text{(a)} \]

\[ \left( \sum_{j=1}^{n+1} f_{i,j}\, C_{i,j} \;+\; \sum_{j=1}^{n} f_{j,i}\, R \right) T \;\leq\; E_i \qquad \text{(b)} \]

\[ \sum_{j=1}^{n+1} f_{i,j} \;+\; \sum_{j=1}^{n} f_{j,i} \;\leq\; B_i \qquad \text{(c)} \]
This linear program maximizes the total data gathered during the time duration T . It incorporates (a) a flow conservation constraint, (b) a per-node energy
constraint, and (c) a shared bandwidth constraint.
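As a sketch of how such a formulation can be solved in practice, the following sets up a toy two-source instance with SciPy's linear programming solver; all numeric values are illustrative assumptions:

    # Toy instance of the Section 8.5.4 linear program: sources 1, 2 and sink 3.
    # Decision variables are the link rates x = [f12, f13, f21, f23].
    from scipy.optimize import linprog

    T = 100.0                                           # operation time
    C = {"12": 2.0, "13": 5.0, "21": 2.0, "23": 5.0}    # per-bit transmit costs
    R = 1.0                                             # per-bit reception cost
    E = {1: 500.0, 2: 500.0}                            # node energy budgets
    B = {1: 10.0, 2: 10.0}                              # node bandwidth limits

    c = [0.0, -T, 0.0, -T]            # maximize T*(f13 + f23) -> minimize -T*(...)
    A_ub = [
        [-1, -1,  1,  0],                         # (a) node 1: outflow >= inflow
        [ 1,  0, -1, -1],                          # (a) node 2
        [T * C["12"], T * C["13"], T * R, 0],      # (b) node 1 energy
        [T * R, 0, T * C["21"], T * C["23"]],      # (b) node 2 energy
        [1, 1, 1, 0],                              # (c) node 1 bandwidth
        [1, 0, 1, 1],                              # (c) node 2 bandwidth
    ]
    b_ub = [0, 0, E[1], E[2], B[1], B[2]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
    print(res.x, -res.fun)            # optimal link rates and total data gathered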
8.6 Geographic routing
In applications where nodes must deliver location-stamped data or satisfy location-based queries, geographic routing techniques [135] are often a natural choice.
8.6.1 Local position-based forwarding
One major shortcoming of the greedy forwarding technique is that it is possible for it to get stuck in local maxima/dead-ends. Such dead-ends occur (see
Figure 8.5) when a node has no neighbors that are closer to the destination than
itself. For this case, the greedy-face-greedy (GFG) algorithm [14], which is the
basis of the greedy perimeter stateless routing protocol (GPSR) [102], routes
along the face of a planar sub-graph using the right-hand rule. The planar sub-graph can be obtained using localized constructions such as the Gabriel graph and the relative neighborhood graph. A packet switches from greedy
to face-routing mode whenever it reaches a dead end, is then routed using face
routing, and then reverts back to greedy mode as soon as it reaches a node that
is closer to the destination than the dead-end node. Other studies have examined
ways to improve upon and get provable efficiency guarantees with face-routing
approaches. It should be kept in mind, however, that the likelihood that such dead ends exist decreases with network density; it can be shown that, if the graph is dense enough that each interior node has a neighbor in every 2π/3 angular sector, then greedy forwarding will always succeed [49].
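A minimal sketch of one greedy forwarding step with dead-end detection; the positions and the planar-mode fallback comment are illustrative assumptions:

    import math

    def greedy_next_hop(node, neighbors, dest):
        """One step of greedy geographic forwarding: pick the neighbor
        closest to the destination; return None at a dead end."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        best = min(neighbors, key=lambda n: dist(n, dest), default=None)
        if best is None or dist(best, dest) >= dist(node, dest):
            return None  # dead end: a GFG/GPSR-style scheme switches to face routing
        return best

    print(greedy_next_hop((0, 0), [(1, 0), (0.5, 0.8)], (3, 0)))  # -> (1, 0)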
Figure 8.5 A dead end for greedy forwarding: the dead-end node has no neighbor closer to the destination than itself
Seada et al. show in [187] that greedy geographic forwarding techniques exhibit a distance–hop–energy tradeoff over real wireless links. If at each step the packet travels a long distance (as in the basic greedy and MFR techniques), the total number of hops is minimized; however, because of the distance, each hop is likely to be a weak link, with a poor reception rate requiring multiple transmissions for successful delivery. At the other extreme, if the packet travels only a short distance at each hop (as with the NFP technique), the links are likely to be of good quality; however, there are multiple hops to traverse, which also increases the number of transmissions. Through extensive simulations, real experiments, and analysis, it is shown that the localized metric that maximizes energy efficiency while providing good end-to-end delivery reliability is the product of the packet reception rate on the link (probability of successful delivery) and the distance improvement towards the destination D, known as the PRR × D metric. Thus, when it has a packet to transmit to a given destination, each node selects the neighbor with the highest PRR × D metric to forward the message further.
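A minimal sketch of PRR × D neighbor selection; the neighbor positions and reception-rate estimates are illustrative:

    import math

    def prr_d_next_hop(node, dest, neighbors):
        """Choose the neighbor maximizing PRR x D: link reception rate times
        distance improvement toward the destination.
        `neighbors` maps position -> estimated packet reception rate."""
        def dist(p, q):
            return math.hypot(p[0] - q[0], p[1] - q[1])
        d_self = dist(node, dest)
        return max(neighbors,
                   key=lambda n: neighbors[n] * (d_self - dist(n, dest)))

    nbrs = {(1.0, 0.0): 0.4, (0.5, 0.0): 0.95}   # far/weak vs. near/strong link
    print(prr_d_next_hop((0, 0), (3, 0), nbrs))  # -> (0.5, 0.0): 0.95*0.5 > 0.4*1.0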
8.6.4 Geographical energy-aware routing (GEAR)
The geographical energy-aware routing (GEAR) technique routes packets toward a target geographic region by combining geographic progress with energy awareness: each node maintains a learned cost for each neighbor that is modified over time to provide a balance between reachability and energy efficiency. This approach also provides robustness to dead ends. When the packet reaches the destination region, a recursive forwarding technique is employed as follows. The region is split into k sub-regions, and a copy of the packet is forwarded to each sub-region. This split–forward sequence is repeated until a region contains only one node, at which point the packet has been successfully propagated to all nodes within the query region.
Finally, we should mention in this section the trajectory-based forwarding
technique (TBF) [148], which is also an important geographic routing technique
for sensor networks. As a significant application of TBF applies to the routing
of queries, however, we shall defer its description to the next chapter.
8.7 Routing to mobile sinks
8.7.1 Two-tier data dissemination (TTDD)
In the two-tier data dissemination (TTDD) approach [239], all nodes in the
network are static, except for the sinks that are assumed to be mobile with
unknown/uncontrolled mobility. The data about each event are assumed to originate from a single source. Each active source creates a grid overlay dissemination
network over the static network, with grid points acting as dissemination nodes
(see Figure 8.6). A mobile sink, when it issues queries for information, sends
out a locally controlled flood that discovers its nearest dissemination point. The
query is then routed to the source through the overlay network. The sink includes
in the query packet information about its nearest static neighbor, which acts as
a primary agent. An alternative immediate agent is also chosen when the sink
is about to go out of reach of the primary agent for robust delivery. The source
sends data to the sink through the overlay dissemination network to its closest
grid dissemination node, which then forwards it to its primary agent. As the sink
moves through the network, new primary agents are selected and the old ones
time out; when a sink moves out of reach of its nearest dissemination node, a
new dissemination node is discovered and the process continues.
8.7.2 Asynchronous dissemination to mobile sinks (SEAD)
The scalable energy-efficient asynchronous dissemination technique (SEAD) presented in [107] provides for communication of information from a given source
in a static sensor network to multiple mobile sinks. Each mobile sink selects a
nearby static access node to communicate information to and from the source.
Only the access node keeps track of sink movement, so long as it does not
move too far away. When the hop-count between the sink and the nearest access
point exceeds a threshold, a new access node is selected by the sink. Data are
sent from the source first to the various access nodes through a dynamically constructed and maintained dissemination tree, and from each access node onward to its mobile sink.

Figure 8.6 The TTDD grid overlay dissemination structure, showing an active source S, a primary agent, and a mobile sink
8.7.3 Data MULEs
For sparsely deployed sensor networks (e.g. deployed over large areas), the network may never be truly connected; in the most extreme case no two sensor
devices may be within radio range of each other. The MULE (mobile ubiquitous
LAN extensions) architecture [190] aims to provide connectivity in this environment through the use of mobile nodes that may help transfer data between
sensors and static access points, or may themselves act as mobile sinks.
It is assumed that MULE nodes do not have controlled mobility and that their
movements are random and unpredictable. Whenever a MULE node comes into
contact with a sensor node, it is assumed that all of the sensor's data are transferred
over to the MULE. Whenever the MULE comes into contact with an access
point, it transfers all information to the access point. It is assumed that there is
sufficient density of MULE nodes, with sufficiently uniform mobility, so that
all sensor nodes can be served, although delays are likely to be quite high. Both
MULEs and sensors are assumed to have limited buffers, so that new data are
stored or transferred only if there is buffer space available.
The MULE architecture has been analyzed using random walks on a grid [190].
The analysis provides an insight into the scaling behavior of this system with
respect to number of sensors, MULEs, and access points. One conclusion of
the study worth noting is that the buffer size of MULE nodes can be increased
to compensate for fewer MULE nodes in terms of delivery rates (albeit at
the expense of increased latency), but increasing sensor buffers alone does not
necessarily have a similar effect.
8.7.4 Learning enforced time domain routing
Figure: Learning enforced time domain routing. S is the source; A, B, C, R are relays; M1, M2, M3 are moles along the sink's trajectory; T is the current time domain; P(T) and N(T) denote positive and negative reinforcement, respectively.
In this approach, the sink tours the network along a roughly periodic trajectory, and designated nodes called moles are those that the sink regularly passes in their vicinity. The periodic time between tours of the sink is divided into multiple
domains, such that the sink may be more likely to be in the vicinity of one set of
moles in one time domain, and in the vicinity of another set of moles in another
time domain.
For each time domain, local forwarding probabilities are maintained at intermediate nodes. When data are generated, depending on the time, they are routed
through the intermediate nodes based on these probabilities to try and reach a
mole that the sink will pass by. Initially, the probability weights at nodes are
all equal, resulting in unbiased random walks. Over time, these weights are
reinforced positively or negatively by moles, depending on the sink probability
distribution and success of the data delivery. Multiplicative weight update rules
for reinforcements are found to be most efficient and robust. A few iterations
may suffice to determine efficient routes for data to reach a mole that is highly
likely to encounter the sink and be delivered successfully.
8.8 Summary
We have examined a number of issues and design concepts relevant to reliable,
energy-efficient routing in wireless sensor networks: selection of routing metrics,
multi-path routing, geographic routing, and delivery of data to mobile nodes.
Exercises
8.1 ETX: For the directed graph labelled with reception probabilities shown in Figure 8.3 (ignore the priorities associated with ExOR, and assume d_r = 1 on all links), determine the optimal ETX route from node A to node B.
8.2 Relay diversity and MAC: Explain why the relay diversity scheme may not work well with some sleep-oriented MAC protocols proposed for sensor networks.
8.3 Relay diversity: For the example of relay diversity shown in Figure 8.2, say the probabilities of reception for the links A–B, B–C, and A–C were 0.8, 0.8, and 0.6 respectively. What is the probability of successful reception at C without and with relay diversity?
8.4
8.5 Flow formulation: Adapt the linear program given in Section 8.5.4 for a fairness-oriented objective function that maximizes the minimum flow rate from all sources.
8.6 Greedy geographic routing fails when a forwarding node on the path finds no neighbors within range that are closer than itself to the destination. Prove that this implies the existence of a 2π/3 angular sector centered at this node in which it has no neighbors.
8.7 MULE simulation study: On a 10 × 10 square grid, place the sink node at the bottom-left-most square, and ten sources at random squares. Simulate the movement of k MULE nodes (with varying k), such that all execute independent unbiased random walks on the grid, moving to a neighboring cell at each time step. Assume that a MULE picks up one unit of information from a source when they are in the same grid square, and drops off all the information it carries when it arrives at the sink. Assume that the sources always have data available for pick-up, and that both sources and MULEs have infinite buffers. Analyze significant metrics, such as the average time delay between visits to the sink, the average size of the MULE buffers, and the average throughput between the sources and the sink, as functions of the number of MULE nodes. What is the impact of placing additional static sink nodes?
Data-centric networking
9.1 Overview
A fundamental innovation in the area of wireless sensor networks has been the concept of data-centric networking. In a nutshell, the idea is this: routing, storage, and querying techniques for sensor networks can all be made more efficient if communication is based directly on application-specific data content instead of traditional IP-style addressing [74].
Consider the World Wide Web. When one searches for information on a popular search site, it is possible to enter a query directly for the content of interest, find
a hit, and then click to view that content. While this process is quite fast, it does
involve several levels of indirection and names: ranging from high-level names,
like the query string itself, to domain names, to IP (internet protocol) addresses,
and MAC addresses. The routing mechanism that supports the whole search process is based on the hierarchical IP addressing scheme, and does not directly take into account the content that is being requested. This is advantageous because IP is designed to support a huge range of applications, not just web searching. But it comes with increased indirection overhead, in the form of the communication and processing necessary for binding; for instance, the search engine must go through its index to return web-page location names as the response to the query string, and the domain names must be translated to IP addresses through DNS. This
tradeoff is still quite acceptable, since the Internet is not resource constrained.
Wireless sensor networks, however, are qualitatively different. They are application specific, so the data content that the sensors can provide is relatively well defined a priori. It is therefore possible to implement network operations (which are all restricted to querying and transport of raw and processed sensor data and events) directly in terms of named content. This data-centric
approach to networking has two great advantages in terms of efficiency:
1. Communication overhead for binding, which could cause significant energy
wastage, is minimized.
2. In-network processing is enabled because the content moving through the
network is identifiable by intermediate nodes. This allows further energy
savings through data aggregation and compression.
9.2 Data-centric routing
One of the first proposed event-based data-centric routing protocols for WSN is the directed diffusion technique (Figure 9.1) [96, 97].
This protocol uses simple attribute-based naming as the fundamental building
block. Both requests for information (called interests) and the notifications of
observed events are described through sets of attribute–value pairs. Thus, a
request for 10 seconds worth of data from temperature sensors within a particular
rectangular region may be expressed as follows:
    type     = temperature        // type of sensor data
    start    = 01:00:00           // starting time
    interval = 1s                 // once every second
    duration = 10s                // for ten seconds
    location = [24, 48, 36, 40]   // within this region
And one of the data responses from a particular node may be:
    type      = temperature
    value     = 38.3
    timestamp = 01:02:00
    location  = [30, 38]
Figure 9.1 Illustration of directed diffusion between a source and a sink: (a) interest propagation, (b) initial gradient setup, (c) reinforcement, and (d) data delivery along the reinforced path

The mechanism operates in the following steps:
1. The sink floods its interest through the network.
2. Each node that receives the interest sets up a gradient toward the neighbors from which it received the interest. The sink's ID/network address is not available and hence not recorded; however, the local neighbors are assumed to be uniquely identifiable through some link-layer address. The gradient also specifies a value (which could be an event rate, for instance).
3. A node which obtains sensor data that matches the interest begins sending its
data to all neighbors it has gradients toward. If the gradient values stand for
event rates then the rate to each neighbor must satisfy the gradients on the
respective link. All received data are cached in intermediate nodes to prevent
routing loops.
4. Once the sink starts receiving response data to its interest from multiple
neighbors, it begins reinforcing one particular neighbor (or k neighbors, in
case multi-path routing is desired), requesting it to increase the gradient
value (event rate). These reinforcements are propagated hop by hop back to
the source. The determination of which neighbor to reinforce can take into
account other considerations such as delay, link quality, etc. Nodes continue
to send data along the outgoing gradients, depending on their values.
5. (Optional) Negative reinforcements are used for adaptability. If a reinforced
link is no longer useful/efficient, then negative reinforcements are sent to
reduce the gradient (rate) on that link. The negative reinforcements could be
implemented by timing out existing gradients, or by re-sending interests with
a lower gradient value.
Essentially, what directed diffusion does is: (a) the sink lets all nodes in the network know what it is looking for; (b) those with corresponding data respond by sending their information through multiple paths; and (c) these paths are pruned via reinforcement so that an efficient routing path is obtained.
The directed diffusion mechanism presented here is highly versatile. It can
be extended easily to provide multi-path routing (by changing the number of
reinforced neighbors) as well as routing with multiple sinks/sources. It also
allows for data aggregation, as the data arriving at any intermediate node from
multiple sources can be processed/combined together if they correspond to the
same interest.
The basic version of directed diffusion described above can be viewed as a two-phase pull mechanism. In phase 1, the sink pulls for information from sources
with relevant information by propagating the interest, and sources respond along
multiple paths; in phase 2, the sink initiates reinforcement, then sources continue
data transfer over the reinforced path. Other variants of directed diffusion include
the one-phase pull and the push diffusion mechanisms [75].
The two-phase pull diffusion can be simplified to a one-phase pull mechanism
by eliminating the reinforcements as a separate phase. In one-phase pull diffusion,
the sink propagates the interest along multiple paths, and the matching source
directly picks the best of its gradient links to send data and so on up the reverse
path back to the sink. While potentially more efficient than two-phase pull,
this reverse-path selection assumes some form of bidirectionality in the links, or
sufficient knowledge of the link qualities/properties in each direction.
In push diffusion, the sink does not issue its interests. Instead sources with
event detections send exploratory data through the network along multiple paths.
The sink, if it has a corresponding interest, reinforces one of these paths and the
data-forwarding path is thus established.
The push and pull variants of diffusion have been compared and analyzed
empirically [75] as well as through a simple mathematical model [111]. The
results quantify the intuition that the pull and push variants are each appropriate
for different kinds of applications. In terms of the route setup overhead, pull
diffusion is more energy-efficient than push diffusion whenever there are many
sources that are highly active generating data but there are few, infrequently
interested sinks; while push diffusion is more energy-efficient whenever there
are few infrequently active sources but there are many frequently interested
sinks.
The threshold-sensitive energy-efficient sensor network protocol (TEEN) [132]
is another example of a push-based data-centric protocol. In TEEN, nodes react immediately to drastic changes in the value of a sensed attribute and, when the change exceeds a given threshold, communicate their value to a cluster-head for forwarding to the sink.
The LEACH protocol [76] is a simple routing mechanism proposed for continuous data-gathering applications. In LEACH, illustrated in Figure 9.2, the nodes self-organize into local clusters, with randomly rotated cluster-heads that gather the data from the nodes in their cluster and relay them directly to the base station.

Figure 9.2 The LEACH clustering architecture, with cluster-heads relaying data from their clusters to the base station
9.3 Data-gathering with compression
The gains due to in-network compression are best demonstrated in the extreme case where the data from any number of sources can be combined into a single packet (e.g. duplicate suppression, when the sources generate identical data). In this case, if there are k sources, all located close to each other and far from the sink, then a route that combines their information close to the sources can achieve a k-fold reduction in transmissions, as compared with each node sending its information separately without compression. In general, the optimal joint routing–compression structure for this case is a minimum Steiner tree construction problem, which is known to be NP-hard. However, there exist polynomial solutions for special cases where the sources are close to each other [109].
9.3.4 Network correlated data-gathering
Cristescu, Beferull-Lozano, and Vetterli [35] consider the case where all nodes are sources but the level of correlation can vary. When the data are completely uncorrelated, the shortest path tree (SPT) provides the best solution (in minimizing the total transmission cost). The general case is treated by choosing a particular correlation model that keeps the problem tractable: only nodes at the leaves of the tree need to provide R bits, while all other interior nodes, which have side information from other nodes, need only generate r bits of additional information. The quantity ρ = 1 − r/R is referred to as the correlation coefficient. It can be shown that a travelling salesman path (one that visits all nodes exactly once) provides an arbitrarily more efficient solution than shortest path trees as ρ increases. The problem is shown to be NP-hard for arbitrary ρ values.
A good approximation solution for the problem is the following combination
of SPT and travelling salesman paths. All nodes within some range of the sink (the larger ρ, the smaller this range) are connected through shortest path trees, and beyond that each strand of the SPT is successively grown by adding nearby nodes, an approximate way to construct the travelling salesman paths. Thus the
data from distant nodes are compressed sequentially up to a point, and then sent
to the sink using shortest paths.
9.3.5 Simultaneous optimization for concave costs
Goel and Estrin [64] treat the case when the exact reduction in data that can be
obtained by compressing k sources is not known. The only assumption that is
made is that the amount of compression is concave with respect to k. This is a
very reasonable assumption, as it essentially establishes a notion of monotonically
diminishing contributions to the total non-redundant information; it simply means that the additional non-redundant information in the (j + 1)th source is smaller than that of the jth source. A random tree construction is developed for this problem that ensures that the expected cost is within a factor O(log k) of the optimal, regardless of what the exact concave compression function is.
9.3.6 Scale-free aggregation
This work assumes a square grid network in which the sink is located at the origin, on the bottom-left corner. The routing technique proposed is a randomized one: a node at location (x, y) forwards its data, after combining them with those of any preceding sources sending data through it, with probability x/(x + y) to its left neighbor and with probability y/(x + y) to its bottom neighbor. It is shown that this randomized routing technique provides a constant-factor approximation, in expectation, to the optimal solution.
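A minimal sketch of this randomized grid forwarding rule; the starting grid coordinates are illustrative:

    import random

    def next_hop(x: int, y: int):
        """Randomized grid forwarding toward the sink at the origin:
        go left with probability x/(x+y), down with probability y/(x+y)."""
        if x == 0 and y == 0:
            return None                  # already at the sink
        if random.random() < x / (x + y):
            return (x - 1, y)            # forward to the left neighbor
        return (x, y - 1)                # forward to the bottom neighbor

    # Trace one packet from grid position (3, 2) down to the origin.
    pos = (3, 2)
    while pos is not None:
        print(pos)
        pos = next_hop(*pos)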
9.3.7 Impact of spatial correlations on routing with compression
In [153], an empirically derived approximation is used to quantify spatial correlation in terms of joint entropies. The total joint information generated by an arbitrary set of nodes is obtained using an approximate incremental construction. At each step of this construction, the next nearest node, at a minimum distance d_min from the current set of nodes, is considered. This node contributes an amount of uncorrelated data equal to (d_min/(c + d_min)) H_1, where H_1 is the entropy of a single source and c a constant that characterizes the degree of spatial correlation. In the simplest case, when all nodes are located on a line with equal spacing d, this procedure yields the following expression for the joint entropy of n nodes:

\[ H_n = H_1 + (n-1)\,\frac{d}{c+d}\,H_1 \qquad (9.1) \]
Consider first the two extremes: (i) when c = 0, H_n = nH_1, and the nodes are completely uncorrelated; (ii) when c → ∞, H_n = H_1, and the nodes are completely correlated.
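As a worked instance of Eq. (9.1): for n = 5 equally spaced nodes with c = d, H_5 = H_1 + 4 · (d/(c + d)) · H_1 = H_1 + 4 · (1/2) · H_1 = 3H_1; that is, the five correlated sources jointly carry only as much information as three independent ones.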
Under this model, it becomes easy to quantify the total transmission cost of
any tree structure where routing is combined with en route compression. An
example scenario is considered with a linear set of sources at one end of a
square grid communicating their data to a sink at the other end. An idealized
distributed source coding is used as a lower bound for the communication costs
in this setting. It is shown that at one extreme, when the data are completely uncorrelated (c = 0), the best solution is shortest path routing (since there is no possible benefit from compression). At the other extreme, when the data are perfectly correlated (c → ∞), the best solution is to route the data among the sources first so that they are all compressed, before sending the combined information directly to the sink. For in-between scenarios, a clustering strategy is
advocated such that the data from s nearby sources are first compressed together,
then routed to the sink along the shortest path. It is shown that there is an optimal
cluster size corresponding to each value of the correlation parameter. The higher
the level of correlation, the larger the optimal cluster size. However, surprisingly,
it is also found that there exists a near-optimal cluster size that depends on the
topology and sink placement but is insensitive to the exact correlation level.
This result has a practical implication, because it suggests that a LEACH-like
clustering strategy combined with compression at the cluster-heads can provide an
efficient solution even in the case of heterogeneous or time-varying correlations.
9.3.8 Prediction-based compression
Another approach to combining routing and compression is to perform prediction-based monitoring [63]. The essence of this idea is that the base station (or a
cluster-head for a region of the network) periodically gathers data from all nodes
in the network, and uses them to make a prediction for data to be generated until
the next period. In the simplest case, the prediction may simply be that the data do
not change. More sophisticated predictions may indicate how the data will change
over time (e.g. the predictions may be based on the expected movement trajectory
of a target node, or in the case of diffuse phenomena such as heat and chemical
plumes, these predictions may even be based on partial differential equations
with known or estimated parameters [176]). This prediction is then broadcast to
all nodes within the region. During the rest of the period, the component nodes
only transmit information to the base station if their measurements differ from
the predicted measurements.
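A minimal sketch of the suppression logic at a component node; the tolerance threshold is an assumption, since the text only requires transmission when measurements differ from the prediction:

    def node_report(measurement: float, predicted: float, tol: float = 0.5):
        """Prediction-based suppression: transmit only when the measurement
        deviates from the broadcast prediction by more than a tolerance."""
        if abs(measurement - predicted) > tol:
            return measurement   # send the surprising reading to the base station
        return None              # suppress: the base station's prediction stands

    print(node_report(21.1, 21.0))  # None: within tolerance, nothing transmitted
    print(node_report(24.2, 21.0))  # 24.2: deviation reported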
9.3.9 Distributed regression
9.4 Querying
In basic data-gathering scenarios, such as those discussed above in connection
with compression, information from all nodes needs to be provided continuously
to the sink. In many other settings, the sink may not be interested in all the
information that is sensed within the network. In such cases, the nodes may store
the sensed information locally and only transmit it in response to a query issued
by the sink. Therefore the querying of sensors for desired information is a fundamental networking operation in WSN. Queries can be classified in many ways:
1. Continuous versus one-shot queries: depending on whether the queries are
requesting a long duration flow or a single datum.
2. Simple versus complex queries: complex queries are combinations of multiple simple sub-queries (a simple sub-query being one for a single attribute type); e.g. "What are the location and temperature readings of those nodes in the network where (a) the light intensity is at least w and the humidity level is between x and y, OR (b) the light intensity is at least z?" Complex queries may also be aggregate queries that require the aggregation of information from several sources; e.g. "report the average temperature reading from all nodes in region R1."
3. Queries for replicated versus queries for unique data: depending on
whether the queries can be satisfied at multiple nodes in the network or only
at one such node.
4. Queries for historic versus current/future data: depending on whether the data being queried for were obtained in the past and stored (either locally at the same node or elsewhere in the network), or whether the query is for current/future data. In the latter case the data do not need to be retrieved from storage.
When the queries are for truly long-term continuous flows, the cost of the
initial querying may be relatively insignificant, even if that takes place through
naive flooding (as for instance, with the basic directed diffusion). However,
when they are for one-shot data, the costs and overheads of flooding can be prohibitively expensive. Similarly, if the queries are for replicated data, flooding may return multiple responses when only one is necessary. Thus alternatives to flooding-based queries (FBQ) are clearly desirable.
9.4.1 Expanding ring search
One option is the use of an expanding ring search, illustrated in Figure 9.3.
An expanding ring search proceeds as a sequence of controlled floods, with the
radius of the flood (i.e. the maximum hop-count of the flooded packet) increasing
at each step if the query has not been resolved at the previous step. The choice
of the number of hops to search at each step is a design parameter that can be
optimized to minimize the expected search cost using a dynamic programming
technique [22].
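A minimal sketch of the control loop, where `flood` is a caller-supplied controlled-flood primitive (a hypothetical helper returning a result and its cost) and the radius schedule is the tunable design parameter:

    def expanding_ring_search(query, flood, radii=(1, 2, 3)):
        """Expanding ring search: issue controlled floods of increasing hop
        radius until the query resolves (radius schedule is illustrative)."""
        cost = 0
        for r in radii:
            result, flood_cost = flood(query, max_hops=r)
            cost += flood_cost
            if result is not None:
                return result, cost     # resolved at radius r
        return None, cost               # unresolved: fall back to a full flood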
Figure 9.3 An expanding ring search, with successive controlled floods of 1, 2, and 3 hops
The information-driven sensor querying (IDSQ) approach [28] suggests an incremental approach to sensor tasking that is suitable for resource-constrained, dynamic environments. The problem of how to route the query to a node with the maximum information gain is a core one, addressed by the constrained
anisotropic diffusion routing (CADR) technique. CADR essentially routes the
query through a greedy search, making a sequence of local decisions at intermediate steps, based on sensor values of neighboring nodes. A composite objective
function that combines the information utility and communication costs is first
defined. These decisions can be made in different ways:
• by forwarding the query to the neighboring node with the highest objective function;
• by forwarding the query to the neighboring node with the steepest (local) gradient in the objective function;
In WSN where all nodes have reasonably accurate location information (either directly through GPS or through a network localization technique), a unique approach to efficient querying is the use of pre-programmed paths embedded into the query packet. The geographic
trajectory-based forwarding (TBF) technique [148] provides this functionality. The source encodes a trajectory for the query packet into the header. The trajectory could be anything that can be represented in a parametric form (x(t), y(t)) (though non-parametric representations are also possible in principle). For instance, a packet to be sent along a sinusoidal curve in a single direction would have the trajectory encoding (x(t) = t, y(t) = A sin t); and, to travel on a straight line with slope θ, it would have the encoding (x(t) = t cos θ, y(t) = t sin θ). During the course of the forwarding, the ith node that receives the packet with the encoded trajectory determines the corresponding time t_i as the value of t that corresponds to the point of the curve closest to its location (if the curve passes by the same location more than once, then additional information, such as the parameter value chosen by the previous node in the forwarding, may be utilized to determine t_i). This node then examines its neighboring nodes to determine which of them would be most suitable to forward the packet to, depending on x(t), y(t), and t_i. To make progress on the trajectory, the next-hop neighbor must have a parameter value t_{i+1} higher than t_i.
The next hop can be determined in many ways depending on design considerations, such as by (a) picking the neighbor offering the maximum distance improvement, (b) picking the neighbor that offers the minimum deviation from the encoded trajectory, (c) picking the node closest to the centroid
of the candidate neighbors, and (d) picking the node with maximum energy.
Repeating this process at each step, the packet will follow a trajectory close
to that specified by the parametric expression in the packet. This is illustrated
in Figure 9.4. Good features of this technique are that the trajectory information can often be represented quite compactly, a number of different types
of trajectories can be encoded, and the forwarding decisions at each step are
local and dynamic. The denser the network, the more accurately the actual trajectory will match the desired one.
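A minimal sketch of a TBF forwarding step using the minimum-deviation rule; the parameter sampling step and look-ahead horizon are implementation assumptions:

    import math

    def tbf_next_hop(neighbors, t_i, x, y, dt=0.05, horizon=1.0):
        """Pick the neighbor nearest to the encoded trajectory (x(t), y(t))
        over parameter values ahead of the current t_i."""
        best, best_dev = None, float("inf")
        for n in neighbors:                 # n = (nx, ny)
            t = t_i + dt
            while t <= t_i + horizon:       # sample the curve ahead of t_i
                dev = math.hypot(n[0] - x(t), n[1] - y(t))
                if dev < best_dev:
                    best, best_dev = n, dev
                t += dt
        return best

    # Sinusoidal trajectory x(t) = t, y(t) = 0.3 sin t.
    nbrs = [(0.6, 0.18), (0.5, -0.4)]
    print(tbf_next_hop(nbrs, t_i=0.2,
                       x=lambda t: t, y=lambda t: 0.3 * math.sin(t)))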
While it has many possible applications, TBF is uniquely suited for propagating queries within the network. When a set of possible locations must all be
visited, TBF provides an efficient way to guide the query.
Figure: in ACQUIRE, an active query is forwarded onward if it remains unresolved

In the active query forwarding (ACQUIRE) technique, a complex one-shot query is carried by a single active node at a time and resolved incrementally as it moves through the network. At each step:
1. Update: the current active node requests information from all nodes within d hops (a controlled flood with look-ahead parameter d), refreshing any stale cached data.
2. Resolve: the active node resolves the query as far as possible using this updated information.
3. Forward: if the query has not already been fully resolved, it is then forwarded to another active node (chosen either randomly or through some guided mechanism such as TBF) a sufficient number of hops away, so that successive controlled flood phases do not overlap significantly.
A key observation about ACQUIRE is that the look-ahead parameter offers a
tunable tradeoff between a trajectory-based query when d = 0 (which could be
either a random walk or a guided trajectory, depending on the implementation)
and a full flood when d = D, the diameter of the network. There is a tradeoff for
different values of the look-ahead parameter d; when the value of d is small, the
query needs to be forwarded more often, but there are fewer update messages at
each step. When d is large, fewer forwarding steps are involved, but there are
more update messages at each step.
The optimal choice of d in ACQUIRE depends most on sensor data dynamics,
which can be captured by the ratio of the rate at which data change in the network
to the rate at which queries are generated. When the data dynamics are low,
caches remain valid for a long time and therefore the cost of a large d flood can
be amortized over several queries; however, when the data dynamics are very
high, repeated flooding is required, and hence a small d is favored.
9.4.6 Rumor routing
In rumor routing, sources send event notifications along random-walk trajectories through the network, leaving behind pointers to the source at the nodes traversed; sinks send their queries along random trajectories as well, and a query is answered when it intersects a node lying on a notification trail (Figure 9.6). For suitable event and query frequencies, significant savings in the energy costs can be obtained by rumor routing compared with the two extremes of query flooding (pull) and event flooding (push).

Figure 9.6 Rumor routing: a query trajectory from the sink intersects an event notification trail, whose nodes hold pointers to the source
9.4.7 The comb-needle technique
The same approach as followed by rumor routing, of combining push and pull
by looking at intersections of queries and event notifications, is also the basis of
the comb-needle technique [124].
In the basic version of this technique, illustrated in Figure 9.7, the queries build a horizontal comb-like routing structure, while events follow a vertical needle-like trajectory to meet the teeth of the comb. A key tunable parameter in this construction is the spacing between branches of the comb and, correspondingly, the length of the trajectory traversed by event notifications, which can be adjusted depending on the frequency of queries as well as the frequency of events. To minimize the average total cost per query, the comb inter-spacing as well as the length of the event trajectories should be smaller when the event-to-query ratio is higher (more pull, less push); when the event-to-query ratio is lower, the comb inter-spacing as well as the distance traversed by event notifications should be higher (less pull, more push).
In practice, the frequency of both queries and events is likely to fluctuate
over time. An adaptive version of the algorithm [124] handles this scenario. In
this adaptive technique, the inter-comb spacing and needle trajectory length are
Figure 9.7 The comb–needle technique: queries from the sink construct a horizontal comb structure, while event notifications from sources travel along vertical needles to meet the teeth of the comb
A related analytical study compares search strategies in terms of the decay rate of the probability that the query is unsuccessful with respect to t, the time duration of the query. Intuitively, the faster this decay rate, the more efficient the query, as a small time duration will suffice to locate the desired information with high probability. It is shown that the simple source-driven search decays as (log t)^{-1}; with distributed replication, it decays approximately as t^{-1}; while, with the sticky search, the decay is given as t^{-5k/8}. Thus the sticky search outperforms even distributed replication, so long as the number of push/pull strands k is at least 2. This study therefore provides analytical support for the rumor routing and comb–needle approaches discussed above.
9.5 Data-centric storage and retrieval
9.5.1 Geographic hash tables (GHT)
The use of geographic hash tables (GHT) [172] provides a simple way to combine data-centric storage with geographic routing. It is quite simple in essence and works as follows. Every unique event or data attribute name that can be queried for is assigned a unique key k, and each data value v is stored jointly with the name of the data as a key–value pair (k, v). Two high-level operations are provided: Put(k, v) and Get(k). A geographic hash function is used to hash each key to a unique geographic location (an (x, y) coordinate) within the sensor network coverage region. The node in the network whose location is closest to this hashed location (known as the home location for the key) is the intended storage point for the data.
When a sensor node generates a new value, the Put operation is invoked, which
uses the hash function to determine the corresponding unique location and uses
the GPSR geographic routing protocol to route the information to the home node.
When the sink(s) issue a Get(k) query, it is sent directly to the same location.
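A minimal sketch of the hashing step; the hash function and area dimensions below are illustrative, and the actual GHT pairs such a hash with GPSR routing and its perimeter refresh protocol:

    import hashlib

    def ght_location(key: str, width: float, height: float):
        """Hash a data name to a deterministic (x, y) point in the
        deployment area; Put and Get both route toward this point."""
        h = hashlib.sha256(key.encode()).digest()
        x = int.from_bytes(h[:4], "big") / 2**32 * width
        y = int.from_bytes(h[4:8], "big") / 2**32 * height
        return (x, y)

    # Every node computes the same home location for the same key.
    print(ght_location("elephant-sighting", 100.0, 100.0))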
To ensure that the geographic routing consistently finds the same node for a
key, and to provide robustness to topology changes, a perimeter refresh protocol is
provided in GHT. To provide load balancing in large-scale networks, particularly
for high-rate events, GHT also provides a structured replication mode. In this mode, instead of a single location, a number of symmetric hierarchical mirror locations are chosen throughout the network for each unique key. When a node generates data corresponding to the key, it stores them at the closest mirror location, while queries are propagated to all mirror locations in a hierarchical manner.
9.5.2 Distributed index for multi-dimensional data (DIM)
DIM [119] is a storage and retrieval mechanism uniquely geared towards multi-dimensional range queries. An example of a multi-dimensional query is "list events such that the temperature value is between 20 and 30 degrees, and the light reading is between 100 and 120 units". It comprises two key mappings:
1. All multi-dimensional values are mapped (many-to-one) to a k-bit binary
vector.
2. Each of the 2k possible binary codes is mapped to a unique zone in the
network area.
Assume that all values are normalized to be between 0 and 1. The k-bit vector is generated by a simple round-robin technique. If the data are m-dimensional, the first m bits indicate whether the corresponding values are below or above 0.5; the second m bits indicate whether the corresponding values are in the ranges [0–0.25, 0.5–0.75] or in the ranges [0.25–0.5, 0.75–1] (with disambiguation within the ranges provided by the first set of bits); and so on. Consider two examples with k = 4 and m = 2: the value (0.23, 0.15) is denoted by the binary vector 0000 (which fits all values in the multi-dimensional range (0–0.25, 0–0.25)); and the value (0.35, 0.6) is denoted by 0110 (which fits all values in the multi-dimensional range (0.25–0.5, 0.5–0.75)).
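A minimal sketch of this round-robin bit-generation mapping:

    def dim_code(values, k):
        """Round-robin DIM encoding: emit one bisection bit per dimension
        in turn until k bits are produced (values normalized to [0, 1))."""
        lo = [0.0] * len(values)
        hi = [1.0] * len(values)
        bits = []
        while len(bits) < k:
            for d in range(len(values)):
                if len(bits) == k:
                    break
                mid = (lo[d] + hi[d]) / 2
                if values[d] < mid:
                    bits.append("0"); hi[d] = mid   # value in the lower half
                else:
                    bits.append("1"); lo[d] = mid   # value in the upper half
        return "".join(bits)

    print(dim_code((0.23, 0.15), 4))  # 0000
    print(dim_code((0.35, 0.60), 4))  # 0110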
The mapping of binary codes to zones in a rectangular 2D network area A is performed by the following simple splitting construction: for each successive division,
split the region A into two equal-size rectangles, alternating between vertical and
horizontal splits. Each division corresponds to a successive bit. If the split-line is
vertical, by convention, a 0 codes for the left half, and if the split-line is horizontal,
a 0 codes for the top half. This construction, illustrated in Figure 9.8, uniquely
identifies a zone with each possible binary vector. In a manner similar to GHT, the
node closest to the centroid of the corresponding zone may be regarded as the home
node, and treated as the unique point for storage and retrieval.
9.5.3 Distributed index for features (DIFS)
DIFS [67] is a technique suitable for index-based storage and retrieval of information in response to range queries (e.g. "did any sensors report temperature readings within 20–30 degrees?").

Figure 9.8 Zone creation based on the binary index for storing multi-dimensional range data

DIFS constructs a multiply rooted hierarchical index as follows. Nodes store information for a range of values in a given
geographic region. Nodes at low levels cover a wide range of values within a
small region, while nodes at the higher levels cover a small range of values
within a larger region. In DIFS, each parent has exactly four children, while each
child has k parents. Each parent holds information on 1/k of the values that a
child does, but covers four times its geographic range. A source node measuring
an event sends it first to the nearby local index node (determined by a suitable
hash function) with a small area coverage and largest range of values. This node
then propagates a histogram of observed values to the particular parent at the
next higher level with a smaller range of values covering that value, and so on.
The leaf index nodes at level 0 point directly to storage nodes, while nodes at
level 1 and higher each store four histograms pointing to each of the lower-level
index nodes covering smaller areas. DIFS searches may enter at any level of the
index structure (and often at multiple points), depending on the spatial extent
and value range requested in the query, and drill down to obtain events satisfying
the query. The histograms are also useful in resolving more sophisticated queries
involving distributions.
9.5.4 DIMENSIONS
A multi-resolution storage and retrieval functionality suitable for spatio-temporally correlated data is provided by the DIMENSIONS architecture [59, 60]. DIMENSIONS incorporates three key components:
1. Multi-resolution hierarchical storage: In DIMENSIONS, the lowest levels
of the hierarchy store high-resolution information, while the highest levels
store lossy compressed coarse-grained information. Specifically, at the lowest
level of the hierarchy, individual nodes store time series data, possibly with
local lossless compression. At each progressively higher level, nodes receive
lossy compressed data from multiple children that they uncompress, combine
together, and compress to a higher lossy compression ratio, using wavelet
compression to send up to the node at the next level. Thus the nodes at higher
levels have information about the larger geographic range, but at a coarse
grain, while nodes at lower levels have information about smaller regions at
a finer granularity.
2. Drill-down querying: Over this hierarchical structure, queries are resolved in
a drill-down manner from the top. First responses at the coarsest grain are
used to determine which low-level branch is likely to resolve the query, and
this process is repeated until the query moves sufficiently down the structure
to be resolved.
3. Progressive aging: In practice, such a storage system will face the practical
limitation of finite storage. The design principle advocated for such storage
in DIMENSIONS is the concept of graceful aging. The extensive fine-grained data at the lowest levels of the hierarchy age and are replaced with incoming new data faster, while the coarse-grained compressed information at higher layers is replaced more slowly. Thus, the farther back in
time the data being queried for, the more likely it is that they will be obtained
in a summarized form; queries for more recent data are answered with finer
granularity.
9.6 The database perspective on sensor networks
It has been shown that a simple SQL-like declarative language with some extensions can be quite powerful for phrasing a diverse set of queries relevant to sensor network applications (including, for example, long-running queries in which the nodes near a moving target are activated and provide information on it as it moves). This suggests the possibility of easy high-level programming of sensor networks using an SQL-like language.
9.6.2 Aggregate queries
In TAG, the responses to queries are routed up a tree, with aggregation operators such as MAX, SUM, COUNT, etc. applied at each step within the network. Aggregates are implemented via three functions: a merging function f, an initializer i, and an evaluator e. For example, if f is AVERAGE, then given partial state records PSR1 = <S1, C1> and PSR2 = <S2, C2> (S and C standing for sum and count respectively), PSR3 = f(PSR1, PSR2) = <S1 + S2, C1 + C2>. The initializer i gives the partial state record for a single sensor reading; e.g., if the single sensor value is x, then i(x) returns <x, 1>. The evaluator e performs the computation on a state record to return the final result; e.g. e(<S, C>) = S/C.
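A minimal sketch of the (i, f, e) triple for the AVERAGE aggregate described above, with partial state records represented as (sum, count) pairs:

    def init(x):                 # i: one sensor reading -> partial state record
        return (x, 1)

    def merge(a, b):             # f: combine two partial state records
        return (a[0] + b[0], a[1] + b[1])

    def evaluate(psr):           # e: final result computed at the root
        return psr[0] / psr[1]

    # Readings from three children merged on the way up the aggregation tree.
    psr = merge(merge(init(20.0), init(22.0)), init(27.0))
    print(evaluate(psr))         # 23.0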
The communication savings due to aggregation within the network depend
very much on the type of aggregate used. Aggregates such as COUNT, MIN,
MAX, SUM, AVERAGE, MEDIAN, HISTOGRAM, etc. all have different
behaviors. A classification of these aggregates along multiple dimensions, such
as duplicate sensitivity and the size of partial state records, is given in [129] and
used to compare aggregation performance.
9.6.3 Other work
There are several other interesting works pertinent to the database perspective on
WSN. In GADT [50], a probabilistic abstract data type suitable for describing
and aggregating uncertain sensor information is defined. The temporal coherency-aware network aggregation (TiNA) technique [194] provides additional communication optimization through temporal aggregation: data values that do not change from the previous value by more than a tolerance level are not communicated. Shrivastava et al. propose and analyze data aggregation schemes
suitable for medians and other more sophisticated aggregates, such as histogram
and range queries [197]. The problem of aggregation operators over lossy networks is addressed by Nath et al. [144], who provide an analysis of synopsis diffusion techniques that offer robust, order- and duplicate-insensitive aggregation decoupled from the underlying multi-path routing structure.
Yao and Gehrke [233] discuss taking into account available metadata about
the state of different parts of the network to provide an optimized query plan
distributed across query proxies on sensor nodes. The query plan describes both
the data flow within the network as well as the computational flow within the
node. Bonfils and Bonnet [12] address the problem of optimizing the placement
of query operators for long-standing queries autonomously within the network
through an exploratory search process.
9.7 Summary
Unlike traditional communication networks that must support a wide range of
applications (some not even known at design time), WSNs are much more
application specific in nature. Communication in a WSN is most often pertinent
to the information available at sensors or desired by an external user. A datacentric approach, where the routing is based on named data rather than addresses,
can be advantageous for two reasons: (a) it eliminates the overhead associated
with name binding and (b) it allows for energy efficiency through in-network
processing, including compression and aggregation of information. The directed
diffusion routing mechanism is unique in routing based on named attributes
rather than traditional IP-style addressing.
Several studies, including the cluster-based LEACH protocol and many analytical studies, have examined the problem of routing with in-network compression in sensor networks. These studies suggest that, while finding optimal joint routing–compression routes may be difficult, good approximations are possible. Near-optimal energy performance for routing with compression can be achieved with a simple LEACH-like clustering technique that is not correlation aware.
Besides end-to-end routing, data discovery and querying form an important
communication primitive in sensor networks. Alternatives to the high-overhead
naive flooding approach are desired. Several querying techniques have been
proposed and analyzed, including expanding ring search, IDSQ, and ACQUIRE.
Rumor routing and the comb–needle approach both advocate hybrid push–pull
rendezvous techniques, where query trajectories from sinks intersect with event
notification trajectories from sources, and show that they can offer significant
gains.
Data-centric storage techniques including GHT, DIM, and DIFS offer another
alternative by decoupling the location of data storage from the location where data
are generated. Data are indexed at locations that depend upon the named content,
which makes retrieval much easier with lower overheads than blind querying.
The DIMENSIONS technique advocates multi-resolution storage with graceful
aging so that more recent fresh information is available at a finer granularity
than older data.
Exercises
9.1
9.2
9.3 Trajectory-based forwarding: How should a circle be represented in parametric (x(t), y(t)) form? Simulate the deployment of a 100-node random G(n, R) network with R = 0.2 in a unit area. Using any convenient forwarding rule, show the nodes visited by a TBF query that aims to follow the large inscribed circle centered in the middle of the area with radius 0.5.
9.4
9.5 DIM binary code mapping: Give the binary codes that correspond to the following values if k = 8: (a) (0.23, 0.15), (b) (0.35, 0.6), (c) (0.83, 0.29).
9.6 DIM zone creation: In a square area, draw the regions that correspond to the following codes: (a) 1010, (b) 1101, (c) 00001.