iMouse
1. INTRODUCTION
The applications for wireless sensor networks (WSNs) are many and varied. They are used
in commercial and industrial applications to monitor data that would be difficult or expensive to monitor
using wired sensors. They could be deployed in wilderness areas, where they would
remain for many years (monitoring some environmental variables) without the need to
recharge/replace their power supplies. They could form a perimeter around a property and
monitor the progress of intruders, passing information from one node to the next.
A sensor node, also known as a mote, is a node in a wireless sensor network that
is capable of performing some processing, gathering sensory information and
communicating with other connected nodes in the network. The main components of a
sensor node are a microcontroller, a transceiver, external memory, a power source, and
one or more sensors.
Transceiver: Sensor nodes use a transceiver, a single device that combines the
functionality of both transmitter and receiver. Transceivers lack a unique identifier. The
operational states are Transmit, Receive, Idle, and Sleep.
Power Source: A sensor node consumes power for sensing, communication, and data
processing. Data communication requires the most energy; sensing and data processing
consume far less. Power is stored in either batteries or capacitors, with batteries being
the main power supply for sensor nodes. Current sensors are being developed that can
renew their energy from solar or vibration sources. The two major power-saving policies
are Dynamic Power Management (DPM) and Dynamic Voltage Scaling (DVS). DPM
shuts down parts of the sensor node that are not currently used or active. DVS varies the
power levels depending on the non-deterministic workload. By varying the voltage along
with the frequency, it is possible to obtain a quadratic reduction in power consumption.
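The quadratic saving follows from the standard CMOS dynamic-power model, P = C·V²·f:
the energy spent per clock cycle is P/f = C·V², so lowering the supply voltage (and the
clock frequency along with it) reduces energy per operation quadratically. A minimal
sketch, with invented illustrative values rather than real mote parameters:

```python
def dynamic_power(c_eff, voltage, frequency):
    """Switching power of CMOS logic: P = C_eff * V^2 * f."""
    return c_eff * voltage**2 * frequency

# Illustrative values only (not from any real sensor node):
C = 1e-9                               # effective switched capacitance, farads
P_full = dynamic_power(C, 3.0, 8e6)    # full voltage, full clock
P_half = dynamic_power(C, 1.5, 4e6)    # half voltage, half clock

# Energy per clock cycle E = P / f = C * V^2 -- quadratic in V:
E_full = P_full / 8e6
E_half = P_half / 4e6
print(E_half / E_full)  # 0.25: halving the voltage quarters the energy per cycle
```

Power itself drops by a factor of eight here (half voltage squared, times half frequency),
while the energy for a fixed amount of computation drops by the quadratic factor of four.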
The iMouse system offers several advantages:
• It’s event-driven: a mobile sensor is dispatched to capture images of an event only
when that event occurs. Thus, iMouse avoids recording unnecessary images when
nothing happens.
• The more expensive mobile sensors are dispatched to the event locations. They
don’t need to cover the whole sensing field, so only a small number of them are
required.
• It’s both modular and scalable. Adding more sophisticated devices to the mobile
sensors can strengthen their sensing capability without substituting existing static
sensors.
Some researchers use static WSNs for object tracking. These systems assume that
objects can emit signals that sensors can track. However, results reported from a WSN are
typically brief and lack in-depth information. Edoardo Ardizzone and his colleagues
propose a video-based surveillance system for capturing intrusions by merging WSNs
and video processing techniques. The system complements data from WSNs with videos
to capture the possible scenes with intruders. However, cameras in this system lack
mobility, so they can only monitor some locations.
3. SYSTEM DESIGN
Figure 1 shows the iMouse architecture. The three main components of the iMouse
system are:
• Static sensors
• Mobile sensors
• External server.
The following steps show the operations performed in Figure 1.
(1) The user issues commands to the network through the server.
(2) Static sensors monitor the environment and report events.
(3) When notified of an unusual event, the server notifies the user and dispatches mobile
sensors.
(4) The mobile sensors move to the emergency sites and collect data.
(5) The mobile sensors report back to the server after collecting data.
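The steps above can be sketched as a minimal simulation. All class names, the
light threshold, and the round-robin dispatch policy here are illustrative stand-ins,
not the actual iMouse implementation:

```python
# Minimal sketch of one iMouse operating cycle; names are illustrative.

class StaticSensor:
    def __init__(self, sid, reading, threshold=800):
        self.sid, self.reading, self.threshold = sid, reading, threshold
        self.event_flag = False            # True once the event is reported

    def detects_event(self):
        return self.reading < self.threshold   # low light = covered sensor

    def report(self):
        self.event_flag = True
        return self.sid

def imouse_cycle(static_sensors, mobile_sensors):
    # (2) Static sensors monitor the environment and report events once.
    events = [s.report() for s in static_sensors
              if s.detects_event() and not s.event_flag]
    # (3) The server dispatches mobile sensors (round-robin, for brevity).
    assignments = {m: [] for m in mobile_sensors}
    for i, site in enumerate(events):
        assignments[mobile_sensors[i % len(mobile_sensors)]].append(site)
    # (4)+(5) Each mobile sensor visits its sites and reports back.
    return {m: sites for m, sites in assignments.items() if sites}

sensors = [StaticSensor(0, 900), StaticSensor(1, 500), StaticSensor(2, 300)]
print(imouse_cycle(sensors, ["mobileA", "mobileB"]))
# {'mobileA': [1], 'mobileB': [2]}
```

The event flag prevents a sensor from reporting the same event twice; a mobile
sensor clears it when it visits the site, as described in the algorithm section below.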
The static sensors form a WSN to monitor the environment and notify the server
of unusual events. Each static sensor comprises a sensing board and a mote for
communication. In our current prototype, the sensing board can collect three types of
data: light, sound, and temperature. We assume that the sensors are in known locations,
which users can establish through manual setting, GPS, or any localization schemes.
An event occurs when the sensory input is higher or lower than a predefined
threshold. Sensors can combine inputs to define a new event. For example, a sensor can
interpret a combination of light and temperature readings as a potential fire emergency.
To detect an explosion, a sensor can use a combination of temperature and sound
readings. Or, for home security, it can use an unusual sound or light reading. To conserve
static sensors’ energy, event reporting is reactive.
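This composite-event logic can be sketched as follows; the threshold values and
event labels are invented for illustration and are not the values used in iMouse:

```python
# Illustrative thresholds only; real deployments would calibrate these.
THRESHOLDS = {"light": 800, "temperature": 60, "sound": 70}

def classify(readings):
    """Map raw sensory input to a possible event type."""
    hot = readings.get("temperature", 0) > THRESHOLDS["temperature"]
    dark = readings.get("light", 1000) < THRESHOLDS["light"]
    loud = readings.get("sound", 0) > THRESHOLDS["sound"]
    if hot and dark:
        return "potential fire"       # light + temperature combination
    if hot and loud:
        return "possible explosion"   # temperature + sound combination
    if loud or dark:
        return "security alert"       # unusual sound or light alone
    return None

print(classify({"temperature": 80, "light": 200}))  # potential fire
print(classify({"temperature": 20, "sound": 90}))   # security alert
```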
Mobile sensors can move to event locations, exchange messages with other
sensors, take snapshots of event scenes, and transmit images to the server. As Figure 2
shows, each mobile sensor is equipped with a Stargate processing board connected to a
mote, a webcam, and an IEEE 802.11 WLAN card; a Lego car provides mobility. The
Stargate controls the movement of the Lego car and the webcam.
Figure 2: The mobile sensor. Attached to the Stargate processing board are a mote, a
webcam, and an IEEE 802.11 WLAN card. A Lego car provides mobility.
The external server provides an interface through which users can obtain the
system status and issue commands. It also maintains the network and interprets the
meanings of events from sensors. On detecting a potential emergency, the server
dispatches mobile sensors to visit emergency sites to obtain high-resolution images of the
scene. The dispatch algorithm also runs on the server.
To illustrate how iMouse works, we use a fire emergency scenario, as Figure 1 shows.
On receiving the server’s command, the static sensors form a treelike network to
collect sensing data. Suppose static sensors A and C report unusually high temperatures,
which the server suspects to indicate a fire emergency in the sensors’ neighborhoods.
The server notifies the users and dispatches mobile sensors to visit the sites. On
visiting A and C, the mobile sensors take snapshots and perform in-depth analyses. For
example, the reported images might indicate the fire’s source or identify inflammable
material in the vicinity and locate people left in the building.
Each static sensor runs the algorithm in Figure 3. The server periodically floods a
tree-maintenance message to maintain the WSN. It also records each static sensor’s
location and state, which is initially set to normal. Tree maintenance messages help the
static sensors track their parent nodes. To distinguish new from old messages, tree-
maintenance messages are associated with unique sequence numbers. The goal is to form
a spanning tree in the WSN.
When a sensor receives an input above a threshold, indicating an event, the sensor
reports that event to the server. To avoid sending duplicate messages, each sensor keeps a
variable event flag to indicate whether it has already reported that event. When a sensor
detects an event and the event flag is false, the sensor reports that event and sets the flag
to true. The server collects multiple events and assigns them to mobile sensors in batches.
When a mobile sensor visits an event site, it asks the local sensor to clear its event flag.
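The static-sensor logic just described (sequence-numbered tree maintenance plus a
once-per-event report flag) can be sketched as below. The class and message names are
assumptions for illustration; the real sensors run firmware on motes:

```python
# Sketch of the static-sensor behavior from Figure 3; names are illustrative.
class StaticSensorNode:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_seq = -1        # newest tree-maintenance sequence seen
        self.parent = None
        self.event_flag = False   # True once the current event is reported

    def on_tree_maintenance(self, seq, sender):
        """Adopt the sender as parent; forward only new (higher-seq) floods."""
        if seq > self.last_seq:          # ignore old or duplicate messages
            self.last_seq = seq
            self.parent = sender
            return True                  # caller should rebroadcast
        return False

    def on_sensory_input(self, reading):
        """Report an event at most once until a mobile sensor clears the flag."""
        if reading > self.threshold and not self.event_flag:
            self.event_flag = True
            return ("EVENT", reading)    # forwarded toward the server
        return None

    def on_mobile_visit(self):
        self.event_flag = False          # future events may be reported again

node = StaticSensorNode(threshold=60)
node.on_tree_maintenance(seq=1, sender="server")
print(node.on_sensory_input(75))   # ('EVENT', 75)
print(node.on_sensory_input(80))   # None -- already reported
node.on_mobile_visit()
print(node.on_sensory_input(80))   # ('EVENT', 80)
```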
Figure 3: The algorithm executed by static sensors. Three types of messages activate a
static sensor: tree-maintenance message, sensory input, and event message.
Because mobile sensors are battery powered, we assign them to emergency sites
to conserve their energy as much as possible. Specifically, we consider a set L of m
emergency sites to be visited by a set S of n mobile sensors, where each site must be
visited by one mobile sensor. We allow an arbitrary relationship between m and n. The
goal is to maximize the mobile sensors’ total remaining energy after sites are visited.
When m > n, some mobile sensors must visit multiple sites. To solve this problem,
we divide emergency sites into n clusters (for example, by the classical K-means method)
and assign each cluster to one mobile sensor. In this case, each mobile sensor’s cost
includes moving to the closest site in its cluster and then traversing the rest of the sites
one by one. Given a set of locations to be visited, we can use a heuristic for the traveling
salesman problem to determine the traversal order.
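The dispatch step can be sketched as below. For brevity, the clustering here assigns
each site to its nearest mobile sensor rather than running full K-means, and a
nearest-neighbor heuristic orders each sensor's tour; both are simplified stand-ins
for the methods named above:

```python
from math import dist  # Euclidean distance, Python 3.8+

def dispatch(mobiles, sites):
    """mobiles, sites: lists of (x, y). Returns one ordered tour per mobile."""
    # Cluster: each emergency site joins the cluster of its closest sensor.
    clusters = {i: [] for i in range(len(mobiles))}
    for site in sites:
        nearest = min(range(len(mobiles)), key=lambda i: dist(mobiles[i], site))
        clusters[nearest].append(site)
    # Order each cluster with the nearest-neighbor TSP heuristic:
    # repeatedly move to the closest unvisited site in the cluster.
    tours = {}
    for i, group in clusters.items():
        pos, tour, remaining = mobiles[i], [], list(group)
        while remaining:
            nxt = min(remaining, key=lambda s: dist(pos, s))
            remaining.remove(nxt)
            tour.append(nxt)
            pos = nxt
        tours[i] = tour
    return tours

print(dispatch([(0, 0), (5, 5)], [(1, 0), (2, 0), (5, 4)]))
# {0: [(1, 0), (2, 0)], 1: [(5, 4)]}
```

Minimizing movement this way directly serves the stated goal of maximizing the
mobile sensors' total remaining energy after all sites are visited.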
4. IMPLEMENTATION
Our static sensors are MICAz motes. A MICAz is a 2.4-GHz, IEEE 802.15.4-
compliant module allowing low-power operations and offering a 250-kbps data rate with
a direct sequence spread-spectrum (DSSS) radio.
Our current prototype uses the light sensors on the Lego car to navigate mobile
sensors. We stick different colors of tape on the ground, which lets us easily navigate the
Lego car on a board. In our prototype, we implemented an experimental 6 × 6 grid-like
sensing field, as Figure 4 shows. Black tape represents roads, and golden tape represents
intersections. We constructed the system by placing two mobile sensors and 17 static
sensors on the sensing field. For static sensors, a light reading below a threshold of 800
simulates an event, so we cover a static sensor with a box to model a potential emergency.
We use a grid-like sensing field and a grid-like static sensor deployment only for
ease of implementation. In general, the static WSN’s topology can be irregular.
A mobile sensor’s service time is determined by three measured factors:
• The time that a mobile sensor takes to cross one grid unit (about 26 centimeters),
• The time that a mobile sensor takes to make a 90-degree turn, and
• The time that a mobile sensor takes to make snapshots and report the results.
In our current prototype, the times are 2.5, 2.2, and 4.0 seconds, respectively.
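Given these measured constants, the time a mobile sensor needs to service a path on
the grid can be estimated as follows (the example path itself is hypothetical):

```python
# Measured constants from our prototype, in seconds.
T_MOVE = 2.5   # cross one grid unit (about 26 cm)
T_TURN = 2.2   # make a 90-degree turn
T_SNAP = 4.0   # take snapshots and report the results

def service_time(grid_units, turns, snapshots):
    """Total time to traverse the path and photograph each event site."""
    return grid_units * T_MOVE + turns * T_TURN + snapshots * T_SNAP

# Hypothetical path: 8 grid units, 2 right-angle turns, 2 event sites.
t = service_time(8, 2, 2)   # 8*2.5 + 2*2.2 + 2*4.0 = 32.4 seconds
```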
Figure 4: A 6 × 6 grid-like sensing field used in our experiment. Colored tape placed on
the floor (black for roads, golden for intersections) is used to navigate the Lego car.
5. EXPERIMENTAL RESULTS
Figure 5 shows experimental results with one mobile sensor initially placed at (0,
0) and two mobile sensors placed at (0, 0) and (5, 5). We generate some random events
and evaluate the dispatch time (from when the server is notified of these events to when
all event sites are visited) and the average waiting time of each site (from when an event
is detected to when a mobile sensor visits the site). Clearly, using two mobile sensors
significantly reduces dispatch and waiting times.
Figure 5: Experimental performance of (a) dispatch time and (b) average waiting time.
As the graphs show, using two mobile sensors reduces dispatch and waiting times.
At the external server, users monitor the system’s status and control mobile sensors
through a user interface, as Figure 6 shows.
Figure 6: User interface at the external server. The interface consists of six areas
through which the user can monitor the system’s status and control the mobile
sensors.
• The configure area lets users input system configuration information, such as
mobile sensors’ IP addresses, ports, and sensors’ positions.
• The system-command area provides an interface to let users control the overall
system, such as issuing a tree-maintenance message, adjusting the WSN’s
topology, and connecting and disconnecting a specified mobile sensor.
• The sensor-status area shows the current status of a static sensor being queried.
• The action-control area lets users control the mobile sensors’ actions, including
movement and taking snapshots.
• The monitor area shows the WSN’s network topology and the mobile sensors’
patrolling paths. When a sensor detects an event, a fire icon appears in the
corresponding site.
7. CONCLUSION
The iMouse system integrates a static wireless sensor network with mobile,
camera-equipped sensors to provide event-driven visual surveillance. Because the more
expensive mobile sensors are dispatched only to event sites, a small number of them
suffices, and our prototype shows that adding mobile sensors further reduces dispatch
and waiting times.
8. REFERENCE
ACKNOWLEDGEMENT
I take this opportunity to thank the Almighty for keeping me on the right path and
for the immense blessings toward the successful completion of my seminar.
Finally, I thank all my teachers and friends who extended every possible assistance
they could.
RABILASH. C.H
CONTENTS
1. Introduction
3. System Design
4. Implementation
5. Experimental Results
7. Conclusion
8. Reference