Network Deployment
Deployment objectives and constraints:
❑ Equipment costs
❑ Energy limitations
❑ Need for robustness
Structured versus randomized deployment:
❑ The randomized deployment approach is appealing for futuristic applications of a large scale,
where nodes are dropped from aircraft or mixed into concrete before being embedded in a smart
structure.
❑ However, many small–medium-scale WSNs are likely to be deployed in a structured manner via
careful hand placement of network nodes.
❑ In both cases, the cost and availability of equipment will often be a significant constraint.
Methodology for structured placement:
1. Place the sink/gateway device at a location that provides the desired wired network and power connectivity.
2. Place sensor nodes in a prioritized manner at locations of the operational area where sensor
measurements are needed.
Step 2 can be challenging if it is not clear exactly where sensor measurements are needed, in which case
a uniform or grid-like placement could be a suitable choice.
Adding nodes for ensuring sufficient wireless network connectivity can also be a non-trivial challenge,
particularly when there are location constraints in a given environment that dictate where nodes can or
cannot be placed.
If the number of available nodes is small with respect to the size of the operational area and required
coverage, a delicate balance has to be struck between how many nodes can be allocated for sensor
measurements and how many nodes are needed for routing connectivity.
Methodology for randomized placement:
❑ Randomized sensor deployment can be even more challenging in some respects, since there is no
way to configure a priori the exact location of each device.
❑ In the case of a uniform random deployment, the only parameters that can be controlled a priori are the
number of nodes and some related settings on these nodes, such as their transmission range.
❑ Random Graph Theory provides useful insights into the settings of these parameters.
Connectivity in geometric random graphs: Random Graph Theory
A random graph model is essentially a systematic description of some random experiment that can be
used to generate graph instances.
These models usually contain a tuning parameter that varies the average density of the constructed
random graph.
The Bernoulli random graphs G(n, p), studied in traditional Random Graph Theory, are formed by taking n
vertices and placing random edges between each pair of vertices independently with probability p.
A random graph model that more closely represents wireless multi-hop networks is the geometric random
graph G(n, R).
In a G(n, R) geometric random graph, n nodes are placed at random with uniform distribution in a square
area of unit size (more generally, a d-dimensional cube).
There is an edge (u, v) between any pair of nodes u and v, if the Euclidean distance between them
is less than R.
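The G(n, R) construction described above can be sketched in a few lines of Python; `geometric_random_graph` and `is_connected` are illustrative names, and the unit-square uniform placement follows the definition just given.

```python
import math
import random

def geometric_random_graph(n, R, seed=0):
    """Place n nodes uniformly at random in the unit square;
    connect every pair of nodes closer than R (Euclidean distance)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if math.dist(pts[u], pts[v]) < R:
                edges.add((u, v))
    return pts, edges

def is_connected(n, edges):
    """Flood fill from node 0 to test whether the graph is connected."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n
```

For example, `geometric_random_graph(40, 0.3)` generates an instance comparable to those in Figure 1.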
Euclidean distance:
In mathematics, the Euclidean distance between two points in Euclidean space is the length of a line
segment between the two points. It can be calculated from the Cartesian coordinates of the points using
the Pythagorean theorem, and is therefore occasionally called the Pythagorean distance. These names
come from the ancient Greek mathematicians Euclid and Pythagoras.
Distance formulas:
One dimension: if p and q are two points on the real line, then d(p, q) = |p − q|.
Two dimensions: if p has Cartesian coordinates (p1, p2) and q has coordinates (q1, q2), then
d(p, q) = sqrt((q1 − p1)^2 + (q2 − p2)^2).
❑ Figure 1 illustrates G(n, R) for n = 40 at two different R values.
A graph property is called monotone if, once it holds for a graph, it continues to hold when additional
edges are added. Nearly all graph properties of interest from a networking perspective, such as
K-connectivity, Hamiltonicity, K-colorability, etc., are monotone.
A key theoretical result pertaining to G(n, R) geometric random graphs is that all monotone properties
show critical phase transitions.
All monotone properties are satisfied with high probability within a critical transmission range.
A related model is the nearest-neighbor graph G(n, K), in which each of the n randomly placed nodes
connects to its K nearest neighbors; this model potentially allows different nodes in the network to use
different powers. For G(n, K), it is known that K must be higher than 0.074 log n and lower than
2.72 log n in order to ensure asymptotically almost sure connectivity.
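The phase-transition behavior can be observed empirically with a Monte Carlo sketch (the function name and trial count are arbitrary illustrative choices): for a fixed n, sweeping R shows the probability of connectivity jumping sharply from near 0 to near 1.

```python
import math
import random

def connectivity_probability(n, R, trials=200, seed=1):
    """Monte Carlo estimate of Pr[G(n, R) is connected] in the unit square."""
    rng = random.Random(seed)
    connected = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(u + 1, n):
                if math.dist(pts[u], pts[v]) < R:
                    adj[u].append(v)
                    adj[v].append(u)
        # flood fill from node 0 to test connectivity
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        connected += (len(seen) == n)
    return connected / trials
```

Plotting the estimate for, say, n = 100 as R sweeps from 0 to 1 exhibits the critical transition discussed above.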
Connectivity and coverage in G_grid(n, p, R):
Connectivity using power control:
❑ Regardless of whether randomized or structured deployment is performed, once the nodes are in place
there is an additional tunable parameter that can be used to adjust the connectivity properties of the
deployed network.
❑ This parameter is the radio transmission power setting for all nodes in the network.
❑ Power control is quite a complex and challenging cross-layer issue. Increasing radio transmission power
has a number of interrelated consequences: it extends communication range and improves connectivity,
but it also increases energy consumption and interference with other transmissions.
❑ Most of the literature on power-based topology control has been developed for general ad hoc wireless
networks, but these results are very much central to the configuration of WSNs.
❑ Some key results and proposed techniques are discussed here.
❑ Some of these distributed algorithms aim to develop topologies that minimize total power consumption.
Minimum energy connected network construction
(MECN)
Consider the problem of deriving a minimum power network topology for a given deployment of wireless
nodes that ensures that the total energy usage for each possible communication path is minimized.
A graph topology is defined to be a minimum power topology, if for any pair of nodes there exists a path in
the graph that consumes the least energy compared with any other possible path.
The construction of such a topology is the goal of the MECN (minimum energy communication network)
algorithm.
Each node’s enclosure is defined as the region around it such that, for the neighboring nodes within that
region, it is always more energy-efficient to transmit directly than to relay through another node.
Enclosure graph is defined as the graph that contains all links between each node and its neighboring nodes
in the corresponding enclosure region.
The MECN topology control algorithm first constructs the enclosure graph in a distributed manner, then
prunes it using a link energy cost-based Bellman–Ford algorithm to determine the minimum power topology.
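The enclosure test can be sketched under a d^alpha path-loss energy model with an optional fixed per-hop receiver cost c. Both the model parameters and the function names (`relay_beats_direct`, `in_enclosure`) are illustrative assumptions, not the exact formulation of the MECN paper.

```python
import math

def relay_beats_direct(u, w, v, alpha=4.0, c=0.0):
    """True if relaying u -> w -> v costs less energy than transmitting
    u -> v directly, under a d^alpha path-loss model with fixed
    per-hop receiver cost c."""
    d = math.dist
    return d(u, w) ** alpha + d(w, v) ** alpha + c < d(u, v) ** alpha

def in_enclosure(u, v, candidates, alpha=4.0, c=0.0):
    """v lies inside u's enclosure if no candidate relay node makes
    reaching v cheaper than transmitting directly."""
    return not any(relay_beats_direct(u, w, v, alpha, c)
                   for w in candidates if w != u and w != v)
```

For alpha = 4 a midpoint relay halves each hop distance, so two hops cost 2·(d/2)^4 = d^4/8, much less than d^4 direct; this is why distant nodes fall outside the enclosure.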
The MECN algorithm does not yield a connected topology with the smallest number of edges.
Let C(u, v) be the energy cost for a direct transmission between nodes u and v in the MECN-generated
topology.
It is possible that there exists another route r between these nodes such that the total cost of routing
on that path Cr < C(u, v); in this case the edge (u, v) is redundant.
It has been shown that a topology where no such redundant edges exist is the smallest graph having the
minimum power topology property.
The small minimum energy communication network (SMECN) distributed protocol, while still suboptimal,
provides a provably smaller topology with the minimum power property compared to MECN.
The advantage of such a topology with a smaller number of edges is primarily a reduced cost for link
maintenance.
Minimum common power setting (COMPOW)
The COMPOW protocol ensures that all nodes select the lowest common power level that provides
maximum network connectivity.
A number of arguments can be made in favor of using a common power level that is as low as
possible (while still providing maximum connectivity) at all nodes:
(i) it makes the received signal power on all links symmetric in either direction (SINR may vary in each
direction);
(ii) it can provide for an asymptotic network capacity which is quite close to the best capacity achievable
without common power levels;
(iii) a low common power level provides low-power routes; and
(iv) a low power level minimizes contention.
❑ Multiple shortest path algorithms (e.g. the distributed Bellman–Ford algorithm) are performed, one at each
possible power level.
❑ Each node then examines the routing tables generated by the algorithm and picks the lowest power level
such that the number of reachable nodes is the same as the number of nodes reachable with the
maximum power level.
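The selection rule above can be sketched as follows. For simplicity, the radio range is assumed equal to the power level (an illustrative simplification), and `compow_level` is a hypothetical name; the real protocol computes reachability via distributed routing tables rather than centrally.

```python
import math

def compow_level(positions, power_levels):
    """COMPOW sketch: choose the lowest common power level whose per-node
    reachable-node counts match those at maximum power.
    Assumes radio range equals the power level (illustrative model)."""
    n = len(positions)

    def reach_counts(R):
        adj = [[v for v in range(n)
                if v != u and math.dist(positions[u], positions[v]) <= R]
               for u in range(n)]
        counts = []
        for s in range(n):
            seen, stack = {s}, [s]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            counts.append(len(seen))
        return counts

    target = reach_counts(max(power_levels))   # connectivity at max power
    for p in sorted(power_levels):             # try levels from lowest up
        if reach_counts(p) == target:
            return p
```

For three collinear nodes spaced one unit apart, the lowest level that preserves full reachability is the one whose range just bridges adjacent neighbors.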
Drawbacks of COMPOW algorithm:
❑ The COMPOW algorithm provides the lowest functional common power level for all nodes in the
network, ensuring maximum connectivity, but it does suffer from some possible drawbacks:
1. It is not very scalable, as each node must maintain a state that is of the order of the number of
nodes in the entire network.
2. Further, by strictly enforcing common powers, a single relatively isolated node can cause all nodes
in the network to have unnecessarily large power levels.
3. Most of the other proposals for topology control with variable power levels do not require common
powers on all nodes.
Minimizing maximum power
A work by Ramanathan and Rosales-Hain presents exact (centralized) as well as heuristic (distributed)
algorithms that seek to generate a connected topology with non-uniform power levels, such that the
maximum power level among all nodes in the network is minimized.
They also present algorithms to ensure a biconnected topology, while minimizing the maximum power
level.
This approach is best suited for the situation where all nodes have the same initial energy level, as it
tries to minimize the energy burden on the most loaded device.
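In the special case of a common range, the minimized maximum power has a clean characterization: a topology with common radio range R is connected exactly when R is at least the longest edge of the Euclidean minimum spanning tree. A centralized sketch using Prim's algorithm (not the distributed heuristics of the paper):

```python
import math

def min_range_for_connectivity(points):
    """Smallest common radio range yielding a connected topology:
    the longest edge of the Euclidean MST, found via Prim's algorithm."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known edge connecting each node to the tree
    best[0] = 0.0
    longest = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        longest = max(longest, best[u])   # track the bottleneck edge
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(points[u], points[v]))
    return longest
```

For nodes at x = 0, 1, and 3 on a line, the MST edges have lengths 1 and 2, so a common range of 2 is both necessary and sufficient.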
Cone-based topology control (CBTC):
The cone-based topology control (CBTC) technique provides a minimal direction-based distributed rule
to ensure that the whole network topology is connected, while keeping the power usage of each node
as small as possible.
The cone-based topology construction is very simple, and involves only a single parameter α, the cone
angle.
In CBTC each node keeps increasing its transmit power until it has at least one neighboring node in
every cone of angle α, or it reaches its maximum transmission power limit. It is assumed here that the
communication range (within which all nodes are reachable) increases monotonically with transmit
power.
In Figure 4 on the left we see an intermediate power level for a node at which there exists a cone in
which the node does not have a neighbor.
Theorem 2
If α ≤ 5π/6, then the graph topology generated by CBTC is connected, so long as the original graph,
where all nodes transmit at maximum power, is also connected. If α > 5π/6, disconnected topologies
may result with CBTC.
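The per-node stopping condition, a neighbor in every cone of angle α, reduces to checking that the largest angular gap between consecutive neighbor directions is smaller than α. A sketch (the function name is illustrative):

```python
import math

def has_neighbor_in_every_cone(node, neighbors, alpha):
    """True if every cone of angle alpha at `node` contains a neighbor,
    i.e. the maximum angular gap between consecutive neighbor
    directions (including the wrap-around gap) is below alpha."""
    if not neighbors:
        return False
    angles = sorted(math.atan2(y - node[1], x - node[0]) for x, y in neighbors)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * math.pi - (angles[-1] - angles[0]))  # wrap-around gap
    return max(gaps) < alpha
```

Three neighbors spaced 120° apart, for instance, satisfy the condition for α = 5π/6 (150°) but not for α = 100°, so a CBTC node in that situation would stop raising its power only in the former case.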
If the maximum power constraint is ignored, so that any node can potentially reach any other node in
the network directly with a sufficiently high power setting, D’Souza et al. show that α = π is a
necessary and sufficient condition for guaranteed network connectivity.
Local minimum spanning tree construction
(LMST):
Another approach is to construct a consistent global spanning tree topology in a completely
distributed manner.
This scheme first runs a local minimum spanning tree (LMST) construction for the portion of the
graph that is within visible (max power) range.
The local graph is modified with suitable weights to ensure uniqueness, so that all nodes in the
network effectively construct consistent LMSTs such that the resultant network topology is
connected.
The technique ensures that the resulting degree of any node is bounded by 6, and has the property
that the topology generated can be pruned to contain only bidirectional links.
Simulations have suggested that the technique can outperform both CBTC and MECN in terms of
average node degree.
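The local step of LMST can be sketched as follows: each node builds an MST over its visible (max-power) neighborhood with Prim's algorithm and keeps only the tree links incident to itself. The function name and the centralized formulation are illustrative; the actual protocol also applies weight tie-breaking to guarantee consistency.

```python
import math

def lmst_neighbors(u, nodes, max_range):
    """LMST step at node u: build an MST over u's visible neighborhood
    (all nodes within max_range) and return u's tree neighbors."""
    visible = [p for p in nodes if math.dist(u, p) <= max_range]
    if u not in visible:
        visible.append(u)
    n = len(visible)
    in_tree = [False] * n
    cost = [math.inf] * n
    parent = [None] * n
    root = visible.index(u)
    cost[root] = 0.0
    links = []
    for _ in range(n):  # Prim's algorithm over the visible subgraph
        a = min((i for i in range(n) if not in_tree[i]), key=lambda i: cost[i])
        in_tree[a] = True
        if parent[a] is not None:
            links.append((visible[parent[a]], visible[a]))
        for b in range(n):
            d = math.dist(visible[a], visible[b])
            if not in_tree[b] and d < cost[b]:
                cost[b] = d
                parent[b] = a
    # u keeps only the tree links incident to itself
    return [q for p, q in links if p == u] + [p for p, q in links if q == u]
```

For three collinear nodes, the leftmost node keeps only its immediate neighbor, illustrating how LMST discards the long direct link to the far node.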
Coverage metrics:
Connectivity metrics are generally application independent: the objective is simply to ensure that there
exists a path between every pair of nodes. A stronger metric, K-connectivity (whether there exist K
disjoint paths between any pair of nodes), may also be used.
Coverage metrics, by contrast, are applicable in contexts where there is some notion of a region being
covered by each individual sensor.
A field is said to be K-covered if every point in the field is within the overlapping coverage region of
at least K sensors.
Definition 1
Consider an operating region A with n sensor nodes, with each node i providing coverage to a node
region Ai ⊆ A (the node regions can overlap). The region A is said to be K-covered if every point p
∈ A is also in at least K node regions.
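Definition 1 can be checked approximately by sampling, assuming disk-shaped sensing regions of radius Rs. This is a Monte Carlo sketch, not the exact perimeter-based test of the theorems below, and the function name is illustrative.

```python
import math
import random

def is_k_covered(sensors, Rs, k, region=(1.0, 1.0), samples=2000, seed=0):
    """Monte Carlo check of K-coverage: every sampled point of the
    rectangular region must lie in the sensing disks (radius Rs)
    of at least k sensors."""
    rng = random.Random(seed)
    w, h = region
    for _ in range(samples):
        p = (rng.random() * w, rng.random() * h)
        covering = sum(math.dist(p, s) <= Rs for s in sensors)
        if covering < k:
            return False
    return True
```

A single sensor at the center of the unit square with Rs = 1 reaches every corner (distance ≈ 0.707), so the region is 1-covered but not 2-covered.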
Definition 2
A sensor is said to be K-perimeter-covered if all points on the perimeter circle of its sensing region are
within the sensing regions of at least K other sensors.
Theorem 3
The entire region is K-covered if and only if all n sensors are K-perimeter-covered.
Theorem 4
The entire region is K-covered if and only if all intersection points between the perimeters of the n sensors
(and between the perimeter of sensors and the region boundary) are covered by at least K sensors.
Theorem 5
If a convex region A is K-covered by n sensors with sensing range Rs and communication range Rc, their
communication graph is a K-connected network graph so long as Rc ≥ 2Rs.
Path observation
❑ One class of coverage metrics that has been developed is suitable primarily for tracking
targets or other moving objects in the sensor field.
❑ Consider for instance a WSN deployed in a rectangular operational field that a target can traverse
from left to right. The maximal breach path is the path that maximizes the distance between the
moving target and the nearest sensor at the target’s point of nearest approach to any sensor.
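The maximal breach path can be approximated on a grid: a bottleneck ("widest path") variant of Dijkstra maximizes the minimum distance to any sensor along a left-to-right traversal. The unit-square field, grid resolution, and function name are illustrative assumptions.

```python
import heapq
import math

def maximal_breach_value(sensors, grid=20):
    """Approximate maximal breach value over the unit square: the target
    enters on the left edge, exits on the right, and follows the grid path
    maximizing the minimum distance to any sensor (bottleneck Dijkstra)."""
    step = 1.0 / (grid - 1)

    def clearance(i, j):
        p = (i * step, j * step)
        return min(math.dist(p, s) for s in sensors)

    best = [[-1.0] * grid for _ in range(grid)]
    heap = []
    for j in range(grid):  # all left-edge cells are possible entry points
        c = clearance(0, j)
        best[0][j] = c
        heapq.heappush(heap, (-c, 0, j))
    while heap:
        negc, i, j = heapq.heappop(heap)
        c = -negc
        if c < best[i][j]:
            continue  # stale heap entry
        if i == grid - 1:
            return c  # first right-edge pop has the maximal bottleneck
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < grid and 0 <= nj < grid:
                nc = min(c, clearance(ni, nj))
                if nc > best[ni][nj]:
                    best[ni][nj] = nc
                    heapq.heappush(heap, (-nc, ni, nj))
    return 0.0
```

With a single sensor at the center, the breach path hugs the top or bottom edge and its value is 0.5, the sensor's distance to that edge.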
Other metrics
Coverage requirements and metrics can vary from application to application. Some other metrics are the
following:
❑ Percentage of desired points covered: given a set of desired points in the region where sensor
measurements need to be taken, determine the fraction of these within range of the sensors.
❑ Average coverage overlap: the average number of sensors covering each point in a given region.
❑ Maximum/average inter-node distance: coverage can also be measured in terms of the maximum or
average distance between any pair of nodes.
❑ Minimum/average probability of detection: given a model of how placement of nodes affects the chances
of detecting a target at different locations, the minimum or average of this probability in the area.
Mobile deployment
One approach to ensuring nonoverlapping coverage with mobile nodes is the use of potential field
techniques,
whereby the nodes spread out in an area by using virtual repulsive forces to push away from each other.
This technique has the great advantage of being completely distributed and localized, and hence easily
scales to very large numbers.
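The virtual repulsive force idea can be sketched with a simple 1/d^2 force law and clamping to the unit square. The force law, step size, and function name are illustrative choices, not a specific published scheme.

```python
import math

def spread_nodes(nodes, steps=200, strength=0.01, min_dist=1e-6):
    """Potential-field sketch: each node is pushed away from every other
    node by a repulsive force decaying as 1/d^2, then positions are
    clamped to the unit square."""
    nodes = [list(p) for p in nodes]
    for _ in range(steps):
        forces = [[0.0, 0.0] for _ in nodes]
        for i, p in enumerate(nodes):
            for j, q in enumerate(nodes):
                if i == j:
                    continue
                dx, dy = p[0] - q[0], p[1] - q[1]
                d = max(math.hypot(dx, dy), min_dist)  # avoid divide-by-zero
                f = strength / (d * d)
                forces[i][0] += f * dx / d
                forces[i][1] += f * dy / d
        for p, f in zip(nodes, forces):
            p[0] = min(1.0, max(0.0, p[0] + f[0]))
            p[1] = min(1.0, max(0.0, p[1] + f[1]))
    return [tuple(p) for p in nodes]
```

Two nodes that start close together push each other apart until the repulsion becomes negligible or the boundary is reached, which is the basic spreading mechanism described above.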
In the incremental self-deployment algorithm, a new location for placement is calculated at each step based
on the current deployment, and the nodes are sequentially shifted so that a new deployment is created, with
a node moving into the new location and other nodes moving one by one to fill any gaps.
In the bidding protocol (for deployment of a mixture of mobile and static nodes), after an initial deployment
of the static nodes, coverage holes are determined and the mobile nodes move to fill these holes.
In a combination of static sensor nodes and mobile nodes, a robotic node’s mobile explorations help
determine where static nodes are to be deployed, and the deployed static node then provides guidance to
the robot’s exploration.