Unit 2 Full ECSD

Electronics and Communication for Sustainable Development

UNIT – II
1. Recent remarkable progress in computing power, sensors and embedded devices, and wireless networking, combined with data mining and cloud computing paradigms, has enabled researchers and practitioners to create smart environments for useful applications.
2. Existing designs have only been tested in size-limited settings. In this chapter, we discuss recent technological advances in smart environment design and data mining techniques that allow the technologies to scale more easily.
3. We also describe new analyses that can be performed on smart environment sensor data when such scaling is made possible.
INTRODUCTION

1. Computing technologies have matured to the point where they can provide context-aware, automated support in our everyday environments. One physical embodiment of such a system is a smart environment.
• In these environments, computer software playing the role of an intelligent agent perceives the state of the physical environment and its residents using sensors, reasons about this state using artificial intelligence techniques, and then takes actions to achieve specified goals.
• During perception, sensors embedded in the environment generate readings while residents perform their daily routines. The sensor readings are collected by a computer network and stored in a database that an intelligent agent uses to generate useful knowledge such as patterns, predictions, and trends.
• On the basis of this information, a smart environment can select and automate actions that meet the goals of the application.
Smart environment technology for applications: health monitoring and energy-efficient automation

• Most implementations of this technology to date are somewhat narrow and are performed in controlled laboratory settings. These limitations are due in large part to the difficulty of creating a fully functional smart environment infrastructure.
• While realistic smart environment prototypes have been designed, implementing these smart environments is so cumbersome that meetings have been organized to discuss ways to scale such pervasive computing systems and to share valuable data that have been successfully captured in such settings.
1. In order to scale, attention needs to be given to designing smart environment infrastructures that are lightweight and easy to install, which will allow the number of smart environment deployments to grow dramatically.
2. In addition, smart environment capabilities such as activity discovery, recognition, and tracking need to work out of the box with minimal user training.
3. This chapter describes methods to achieve these goals. By scaling smart environments, we also demonstrate the new types of data collection and analysis that can be performed at a population-wide scale.
4. All of the ideas are evaluated using data collected from the CASAS smart home project at Washington State University.
SCALING SMART ENVIRONMENT DESIGN
In order to scale the number of environments that employ ambient intelligence
technologies, smart environment infrastructures need to be designed that are easy to
install and ready to use out of the box. The CASAS smart home in a box (SHiB) is
designed to do this
CASAS SHiB Design :
The CASAS SHiB software architecture components are shown in Figure 3.1.
During perception, control flows up from the physical components through the
middleware. When taking an action, control moves down from the application layer to
the physical components that automate the action.
Our goal is that each of the layers is lightweight, extensible, and ready to use as is,
without additional customization or training.
The CASAS architecture is easily maintained, easily extended, and easily scaled. The architecture is easily maintained because the communication bridges use lightweight APIs that support a wide variety of messages in a free-form manner. As a result, the middleware is compact and stable.
The CASAS physical layer contains hardware components including sensors
and actuators. The architecture utilizes a ZigBee wireless mesh that
communicates directly with the hardware components.
The middleware layer is governed by a publish/subscribe manager. The
manager provides named broadcast channels that allow component bridges
to publish and receive messages.
In addition, the middleware provides valuable services including adding time stamps to events, assigning UUIDs, and maintaining site-wide sensor state. Every component of the CASAS architecture communicates via a customized XMPP bridge to this manager.
Examples of such bridges are the ZigBee bridge, the Scribe bridge that archives messages in permanent storage, and bridges for each of the software components in the application layer.
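The publish/subscribe pattern described above can be illustrated with a minimal sketch. This is not the CASAS middleware itself (which communicates over XMPP bridges); the class and field names below are invented for illustration, showing only how named channels, time stamps, and UUIDs fit together:

```python
import uuid
from datetime import datetime, timezone
from collections import defaultdict

class PubSubManager:
    """Minimal named-channel publish/subscribe manager (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # channel name -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Enrich the event the way the middleware is described to:
        # add a time stamp and a unique identifier before delivery.
        event = dict(message)
        event["uuid"] = str(uuid.uuid4())
        event["timestamp"] = datetime.now(timezone.utc).isoformat()
        for callback in self._subscribers[channel]:
            callback(event)
        return event

manager = PubSubManager()
received = []
manager.subscribe("zigbee", received.append)  # e.g., a Scribe-like archiver
event = manager.publish("zigbee", {"sensor": "M012", "message": "ON"})
```

A bridge for each hardware or software component would subscribe to the channels it cares about, so new components can be added without changing the manager.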

• All of the CASAS components fit within a single small box, as is shown in Figure
• The current box contains physical components in the form of sensors that are pre-labeled with their intended locations. Additional sensors and controllers can be included when needed.
• The middleware, database, and application components reside on a small, low-power server with an ITX form factor (ITX is a family of small motherboard form factors developed by VIA Technologies).
• While this layout is designed to allow each smart home to run independently and locally,
smart homes can also securely upload events to be stored in a relational database or in
the cloud.
• Bridges can link multiple smart homes together, which allows CASAS to scale to
communities of smart homes.
• The simplicity of the CASAS SHiB design has made it possible for our research group to
install a large number of smart home testbeds.
• A total of 19 datasets represent single-resident sites, 4 represent sites with two
residents, and the rest house larger families or residents with pets.
• With the CASAS streamlined design, our team can install a new smart home in approximately 2 hours and can remove the equipment in 30 minutes, with no changes or damage to the home.
• The design of the CASAS smart home also keeps installation costs down.
• The CASAS SHiB includes a software agent that alerts residents if sensor battery levels
are getting low or if a sensor suddenly stops reporting events.
• Intelligent systems that focus on the needs of a human require information about
the activities being performed by the human. At the core of these systems, then, is
activity recognition, which is a challenging and well-researched problem. Sensors
in a smart home generate events that consist of a date, a time, a sensor identifier,
and a sensor message.
The generally accepted approach to activity recognition is to design and/or use machine
learning techniques to map a sequence of sensor data to a corresponding activity label.
Online activity recognition, or recognizing activities in real time from streaming data,
introduces challenges that do not occur in the case of offline learning with pre-segmented
data.
However, online recognition is the approach that needs to be considered in order to scale the capabilities of smart environments.
The CASAS activity recognition software, called AR, provides real-time activity labeling as
sensor events arrive in a stream.
To do this, we formulate the learning problem as that of mapping the sequence of the k
most recent sensor events to a label that indicates the activity corresponding to the last
(most recent) event in the sequence. The sensor events preceding the last event define the
context for this last event.
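The sliding-window formulation above can be sketched as follows. The event format and the sensor/activity names are hypothetical, and a real AR pipeline would extract richer features from each window than the raw sensor identifiers used here:

```python
def context_windows(events, k):
    """Yield (window, target) pairs: each window holds the k most recent
    sensor events, and the label belongs to the last (most recent) event.
    `events` is a list of (sensor_id, activity_label) tuples (illustrative)."""
    for i in range(k - 1, len(events)):
        window = [sensor for sensor, _ in events[i - k + 1 : i + 1]]
        label = events[i][1]  # activity of the most recent event
        yield window, label

stream = [("M01", "Sleep"), ("M02", "Sleep"), ("M07", "Cook"),
          ("M08", "Cook"), ("M07", "Cook")]
pairs = list(context_windows(stream, k=3))
# first pair: (["M01", "M02", "M07"], "Cook")
```

Because each window is labeled by its final event, the classifier can emit a label as soon as each new sensor event arrives in the stream, which is what makes the approach usable online.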
Sensor data: Researchers have found that different types of sensor information are effective for classifying different types of activities.
• When trying to recognize ambulatory movements (e.g., walking, running, sitting, climbing stairs, and falling), data collected from accelerometers positioned on the body have been used.
• A smartphone can act as a wearable/carryable sensor with accelerometer and gyroscope capabilities; researchers have used phones to recognize gesture and motion patterns.
• Objects can be tagged with shake sensors or RFID tags, selected based on the activities that will be monitored.
Activity models: The machine learning models that have been used for activity recognition vary as greatly as the sensor data types that have been explored.
• Naive Bayes (NB) classifiers have been used with promising results for offline learning of activities when large amounts of sample data are available.
• Decision trees have been used to learn logical descriptions of the activities.
• k-nearest neighbor (kNN) methods take a slightly different approach, looking for emerging frequent sensor sequences that can be associated with activities and can aid with recognition.
• Probabilistic graphs, Markov models, dynamic Bayes networks, and conditional random fields (CRFs) have also been applied.

• The approach we describe for online activity recognition can be adapted to many different classifiers. Here, results for NB, hidden Markov model (HMM), CRF, and support vector machine (SVM) classifiers are considered, because these classifiers are traditionally robust in the presence of a moderate amount of noise and are designed to handle sequential data.
• Among these choices, there is no clear best model to employ; each offers strengths and weaknesses for the task at hand.
• The NB classifier uses relative frequencies of feature values, together with the frequency of activity labels found in sample training data, to learn a mapping from activity features D to an activity label a.
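As a rough illustration of this relative-frequency approach, here is a minimal NB classifier. The feature layout and the small smoothing floor are assumptions for this sketch, not the authors' implementation:

```python
from collections import Counter, defaultdict

def train_nb(samples):
    """Train a naive Bayes model from (feature_tuple, label) samples,
    using relative frequencies as probability estimates (illustrative)."""
    label_counts = Counter(label for _, label in samples)
    feature_counts = defaultdict(Counter)  # (position, label) -> Counter of values
    for features, label in samples:
        for pos, value in enumerate(features):
            feature_counts[(pos, label)][value] += 1
    return label_counts, feature_counts, len(samples)

def classify_nb(model, features):
    label_counts, feature_counts, n = model
    best_label, best_score = None, 0.0
    for label, count in label_counts.items():
        score = count / n  # label prior from relative frequency
        for pos, value in enumerate(features):
            counter = feature_counts[(pos, label)]
            # Relative frequency with a tiny floor to avoid zeroing out.
            score *= (counter[value] + 1e-6) / (count + 1e-6)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

samples = [(("kitchen", "morning"), "Cook"), (("kitchen", "morning"), "Cook"),
           (("bedroom", "night"), "Sleep")]
model = train_nb(samples)
print(classify_nb(model, ("kitchen", "morning")))  # → Cook
```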
• The HMM is a statistical approach in which the underlying model is a stochastic
Markovian process that is not observable (i.e., hidden), which can be observed through
other processes that produce the sequence of observed features.
• Like the HMM, the CRF model makes use of transition likelihoods between states as well
as emission likelihoods between activity states and observable states to output a label for
the current data point. The CRF learns a label sequence that corresponds to the observed
sequence of features
• SVMs identify class boundaries that maximize the size of the gap between the class
boundary and the training data points.
Comparing the performance of the machine learning models using data collected over 6 months in three separate smart home environments, each housing one resident, Table 3.1 summarizes the recognition accuracy using threefold cross-validation.
All of the classifiers perform well at recognizing the 10 predefined activities that are listed (not including the Other class) and plotted in Figure 3.3. The SVM performs consistently best, however, so we focus on this classifier when evaluating the approach for scalability.
• As the matrix indicates, some activities are easier to recognize than others. This is because
some activities, such as cooking, have a fairly unique spatial–temporal signature. Other
activities are more challenging because they overlap with other activity classes or not
enough training data are available to learn the model.
• The weighted average accuracy is 84%, which indicates that the models are fairly robust
even when they are used out of the box in new, distinct home settings

SCALING BEHAVIOR MODELING WITH ACTIVITY DISCOVERY


• Recognizing activities from streaming data introduces new challenges because data that do not belong to any of the targeted activity classes must also be processed.
• One way to handle unlabeled data is to design an unsupervised learning algorithm to discover activities from unlabeled sensor data. Segmenting unlabeled data into smaller classes improves activity recognition performance because the Other class is no longer dominant in terms of size, as frequently happens in activity recognition datasets.
• Another important reason to discover activity patterns from unlabeled data is to
characterize and analyze as much behavioral data as possible, not just predefined activity
classes. Such unlabeled data need to be examined and modeled in order to get a complete
view of everyday life.
Activity Discovery with AD
A pattern in AD consists of a sequence definition and all of its occurrences in
the data.
The initial state of the search algorithm is the set of pattern candidates
consisting of all uniquely labeled sensor identifiers. The only operators of the
search are the ExtendSequence operator and the EvaluatePattern operator.
The ExtendSequence operator extends a pattern definition by growing it to
include the sensor event that occurs before or after any of the instances of the
pattern.
During discovery, the entire dataset is scanned to create initial patterns of
length one. After this initial pass, the whole dataset does not need to be
scanned again.
AD extends the patterns discovered in the previous iteration using the
ExtendSequence operator and will match the extended pattern against the
patterns already discovered in the current iteration to see if it is a variation of
a previous pattern or is a new pattern.
• In addition, AD employs an optional pruning heuristic that removes patterns from
consideration if the newly extended child pattern evaluates to a value that is less than the
value of its parent pattern.
• AD uses a beam search to identify candidate sequence patterns by applying the
ExtendSequence operator to each pattern that is currently in the open list of candidate
patterns. The patterns are stored in a beam-limited open list and are ordered based on
their value.
The search terminates upon exhaustion of the search space. Once the search terminates
and AD reports the best patterns that were found, the sensor event data can be compressed
using the best pattern
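A greatly simplified stand-in for this discover-and-compress loop can be sketched as follows. Real AD uses a beam search with ExtendSequence and an evaluation measure over variable-length patterns; this sketch only finds and compresses the most frequent adjacent pair, so it is illustrative of the compression step rather than the search itself:

```python
from collections import Counter

def best_pattern(events):
    """Return the most frequent adjacent sensor-event pair: a greatly
    simplified stand-in for AD's ExtendSequence/EvaluatePattern search."""
    pairs = Counter(zip(events, events[1:]))
    return pairs.most_common(1)[0][0]

def compress(events, pattern, symbol="P"):
    """Replace non-overlapping occurrences of the pattern with one symbol,
    mimicking AD's compression of the dataset with the best pattern."""
    out, i = [], 0
    while i < len(events):
        if tuple(events[i:i + 2]) == pattern:
            out.append(symbol)
            i += 2
        else:
            out.append(events[i])
            i += 1
    return out

events = ["M1", "M2", "M5", "M1", "M2", "M7", "M1", "M2"]
p = best_pattern(events)          # ("M1", "M2") occurs three times
compressed = compress(events, p)  # ["P", "M5", "P", "M7", "P"]
```

Running discovery again on the compressed sequence is what lets AD find hierarchical patterns such as the P′ described next.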
As an example, Figure 3.5 shows a dataset where the sensor identifiers are represented by
varying patterns.
AD discovers four instances of the pattern P in the data that are sufficiently similar to the
pattern definition. The resulting compressed dataset is shown as well as the pattern P′ that
is found in the new compressed dataset.
Figure 3.6 provides a visualization of the three top activity patterns that are discovered
when AD is applied to a dataset that combines sensor events from the three testbeds.
The pattern in (a) contains a sequence consisting of motion in the bedroom followed by the living room and back to the bedroom, around 10:20 in the evening. Many of these events occur prior to sleeping and may represent getting ready for bed. The pattern in (b) consists of a front door closing followed by a series of kitchen events and then a living room event, usually in the late morning or midafternoon. This could represent a number of different activities that occur after returning home, such as putting away groceries or getting a drink.
Activity discovery can also be used to improve the performance of activity recognition algorithms.
Section 3.3.3 shows that classifiers can identify activities, even in real time, when applied to predefined activities alone.
On the other hand, Table 3.3 summarizes the performance of the SVM when the Other class is included.
The problem is that approximately half of the sensor events are unlabeled, so the Other
class dominates the dataset
CONCLUSIONS AND FUTURE WORK

1) We highlight the capabilities of a smart home system that can be deployed, evaluated, and scaled when the smart environment architecture is made simple and lightweight.
2) We would like to evaluate the ease with which additional sensor modalities (e.g., RFID, smartphones) can be incorporated into the architecture, and will design applications that more extensively utilize device controllers.
3) We would also like to expand the scope of the data collection to include a greater diversity of resident demographics and to perform longitudinal studies.
4) Finally, we would like to design smart environment automation strategies that provide safe and energy-efficient support of resident daily activities.
Localization of a Wireless Sensor Network for Environment Monitoring Using Maximum Likelihood Estimation with Negative Constraints
ABSTRACT
•In many environmental monitoring applications, the location of the sensor node is
important information. Due to the large number of sensor nodes to be deployed, it is not
practical to equip them with global positioning system (GPS) or manually determine
their locations. In this chapter, a smart localization algorithm using maximum likelihood
estimation (MLE) with negative constraints (NCs) is proposed.
•Unlike most of the existing methods that only utilize positive constraint information such
as internode distances or connectivity, the proposed algorithm also utilizes NC
information to achieve more accurate localization.
•The distribution of sensor nodes’ communication ranges is first studied, and the
likelihood function of sensor nodes’ positions is derived based on both the positive and
negative constraints.
• To reduce the computational cost, a novel iterative optimization procedure is also proposed to find the MLE. Simulation and experimental work shows that the proposed MLE localization algorithm with NCs improves the localization accuracy by 20% as compared to the conventional MLE approach.
INTRODUCTION
One of the key challenges in wireless sensor networks (WSNs) is to determine the sensor nodes' physical locations. This can be achieved by attaching a global positioning system (GPS) receiver to each sensor node. However, such an approach is costly, consumes more power, and is subject to the availability of the GPS signal. To overcome these limitations, a number of GPS-less localization systems have been investigated for WSNs.
Based on the computational architecture, they can be classified as:
1) Distributed algorithms
2) Collaborative centralized algorithms
• In distributed algorithms, computation is distributed across the network. Each node is responsible for computing its own estimated position using local information.
• Collaborative centralized algorithms assume there is a central node that collects information across the network; the estimated positions are computed in the central node, and the whole network is localized collaboratively.

Based on the information used, the localization algorithms can also be classified as:
• Range-free algorithms
• Range-based algorithms
Range-free algorithms assume that distance or angle information is not available to the sensor node. They use the network connectivity to approximate the nodes' locations.
Range-based algorithms require distance measurements between neighboring sensor nodes. They usually use multilateration or maximum likelihood estimation techniques to find the locations of unknown nodes.
• Besides the network connectivity information or internode distance
measurements, some works also use negative constraints (NCs) to improve
the localization accuracy.
• The NCs use the observation that if there is no communication link between
two sensor nodes, then the distance between them should be longer than their
communication range.
• In Xiao et al., an anchor node applies a repulsive virtual force to the estimated position of an unknown node if it is out of the anchor node's communication range. As it is a distributed algorithm, it only uses the NCs between an unknown node and an anchor node within two hops.
• The NCs between two unknown nodes are not utilized.
• Consider the centralized algorithm: the NCs can be used more advantageously, since the central node has knowledge of the whole network. However, this issue has not been well addressed, and there is a lack of work on the effect of the NCs on the WSN localization performance.


1) A maximum likelihood estimation (MLE) WSN localization algorithm that uses the
NC and internode distance information from received signal strength indicator
(RSSI) measurements is proposed.
2) Different from other works that assume the communication coverage of the sensor node is a perfect circle, this chapter studies the communication range distribution of the sensor node based on the log-shadowing model.
3) The likelihood function of the positions of the network is derived with the NC and
internode distance information. To find the MLE of the network positions, a least
square (LS) optimization problem needs to be solved.
4) The complexity of the optimization problem is largely dependent on the number
of NCs being used. To reduce the computational costs, a novel iterative
optimization procedure is proposed. In each procedure loop, only the important
NCs are being used for the localization.
5) Simulation and experimental work shows that the proposed MLE localization algorithm with NCs improves the localization accuracy by 20% as compared to the conventional MLE localization without NCs.
MLE LOCALIZATION ALGORITHM WITH NEGATIVE CONSTRAINTS
The MLE method is a popular approach in obtaining practical estimators
MLE without Negative Constraints
In wireless systems, RSSI can readily be measured after two nodes have established a wireless communication connection. Due to its low cost, RSSI has become a widely used ranging technology in WSN localization. From the log-shadowing model [18] for the path loss, it can be derived that the estimated distance d_ij between two nodes i and j is lognormally distributed.
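This lognormal behavior can be illustrated numerically. The parameterization below (a log-domain noise level sigma) is an assumption consistent with the model, not the chapter's exact equation:

```python
import math
import random

def rssi_distance_estimate(true_distance, sigma, rng):
    """Draw one lognormally distributed distance estimate, as implied by the
    log-shadowing path-loss model: log(d_est) is normal around log(d_true).
    sigma is the log-domain noise level (illustrative parameterization)."""
    return true_distance * math.exp(rng.gauss(0.0, sigma))

rng = random.Random(42)
estimates = [rssi_distance_estimate(10.0, 0.2, rng) for _ in range(5000)]
mean_log = sum(math.log(d) for d in estimates) / len(estimates)
# The mean of the log-estimates should be close to log(10) ≈ 2.303.
```

Note that the estimates are always positive and their distribution is skewed, which is why the likelihood function is formulated over log-distances rather than raw distances.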
Negative Constraints and Modeling
•Consider Figure 4.1 with only the distance information available to anchor nodes A1 and
A2; there are two possible estimated positions for sensor node i, namely, i1 and i2.
•With additional information that node i is out of sensor node j’s communication range, i’s
real position is more likely near to i1 than i2. This information is termed negative
constraint (NC).
• On the other hand, an RSSI-measured internode distance is called a positive constraint.
• Empirical studies on real test beds have shown that the assumption of a perfect circular radio range is not
accurate.
• The communication range of a sensor node usually varies in different directions. When two sensor nodes cannot establish communication, the signal strength between the two nodes i and j is below a certain threshold η.
• The likelihood function of node i's position, based on the negative constraint that i is out of j's communication range, can be formulated as

Combining the positive constraint and NC information, the overall likelihood function is
calculated.
POSITION ESTIMATION:
The optimization problems defined can be viewed as nonlinear least squares (LS) problems. There are several iterative numerical optimization algorithms that can be used for such nonlinear LS problems, for example, the Gauss–Newton, line-search, and trust-region methods. An initial value is required for such iterative algorithms.
In general, a good initial value results in faster convergence to the global optimum. This can be achieved by estimating the initial positions using low-cost localization algorithms such as DV-hop, DV-distance, or multidimensional scaling (MDS).
Compared to the objective function without NCs, the objective function with NCs is more complicated, especially in sparse networks.
For a network of 100 sensor nodes with a connectivity of less than 10, direct minimization would require 10 times more computation. As many of the NCs provide little information for the position estimation, not all the NCs are included in the objective function.
For example, if two estimated positions are already very far from each other in the initial value, there is almost no difference between the results from MLE with and without the corresponding NC.
Thus, an iterative procedure is developed in this study to select the NCs and find the position estimate. The procedure is described in the following:
Step 1: Obtain initial value
Obtain initial value by localizing the network from other low-cost algorithms (DV-distance,
MDS, etc.). DV-distance is used in the following discussions.
Step 2: MLE
Estimate the sensor nodes’ positions without considering any NCs, through minimizing the
objective function and using the initial value obtained from Step 1.
Step 3: NC selection
Check if any NCs are violated by the estimated positions. If no new NC is violated, go to
Step 6. Otherwise, go to Step 4.
An NC is said to be violated if
1. There is no communication link between its associated two sensor nodes, and
2. The distance between the two estimated positions is smaller than a predefined distance.
Usually, the predefined distance is related to the nominal communication range R; that is, if d_ij(X̂) < k · R, the NC between nodes i and j is violated, where k is termed the NC selection factor.
The factor k needs to be chosen carefully before the localization. If k is too small, very few
NCs will be violated and the localization result will be very similar to the result without
NCs. On the other hand, if k is too large, too many NCs will be included and the
computational cost for the optimization will be increased significantly
Step 4: Objective function update
Update the objective function by including the new violated NCs.
Step 5: NC optimization
Use the current estimated position as the initial value to minimize the new objective
function and find the new estimated positions. Go to Step 6 if it exceeds the maximum
number of loops; otherwise, go to Step 3.
Step 6: End of the optimization procedure
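Step 3's violation test can be sketched as follows. The data layout and function name are invented for illustration; a real implementation would operate on the network's adjacency and estimated position arrays:

```python
import math

def violated_ncs(positions, linked_pairs, R, k=1.1):
    """Return the NC pairs violated by the current position estimate (Step 3).
    An NC between nodes i and j is violated when there is no communication
    link, yet the estimated distance d_ij is below k * R (illustrative)."""
    violated = []
    nodes = sorted(positions)
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            i, j = nodes[a], nodes[b]
            if (i, j) in linked_pairs or (j, i) in linked_pairs:
                continue  # a communication link exists: positive constraint
            (xi, yi), (xj, yj) = positions[i], positions[j]
            if math.hypot(xi - xj, yi - yj) < k * R:
                violated.append((i, j))
    return violated

positions = {"n1": (0.0, 0.0), "n2": (5.0, 0.0), "n3": (30.0, 0.0)}
links = {("n1", "n2")}
# n1-n2 are linked, and n3 is estimated far from both, so no NC is violated.
print(violated_ncs(positions, links, R=10.0, k=1.1))  # → []
```

If n3 were instead estimated at (8.0, 0.0), the unlinked pairs n1–n3 and n2–n3 would both fall inside k·R and be added to the objective function in Step 4.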

SYSTEM PERFORMANCE EVALUATION


The performance of the proposed localization scheme is evaluated through various simulation studies. The first simulated network is an isotropic network placed within a 100 × 100 unit sensor field. The sensor field is equally partitioned into 10 × 10 squares, and one sensor is randomly placed in each square. There are therefore a total of 100 sensor nodes; 10% of the nodes are assumed to be anchor nodes and are randomly chosen. The range measurement is assumed lognormally distributed, and σ ranges from 0.05 to 0.5 in different simulations.
Choice of the Negative Constraints Selection Factor k
The NC selection factor k is an important parameter. If k is too small, very few NCs will be violated and the optimization problem is close to the one without NCs; consequently, the localization result will be similar to the result without NCs. On the other hand, a large k leads to too many NCs being included.
The effect of k on localization accuracy and the number of violated NCs are plotted in Figures
4.2 and 4.3, respectively. The data points on the graph are the averages of 50 simulation
trials.
From Figure 4.2, it is observed that the localization error decreases as k increases for all the network topologies and range measurements. This is because more NCs are included to constrain the position estimation for larger k. However, the reduction in localization error is less significant when k is larger than 1.1.
On the other hand, it is also observed from Figure 4.3 that the number of violated NCs increases significantly when k is larger than 1.1.
Therefore, it may be concluded from these results that a value of 1.1 is a reasonable choice for the NC selection factor k. In the later discussion, k is fixed at 1.1.

EXPERIMENTAL RESULTS
Experimental measurement has been conducted in a 90 m × 90 m park located on the university campus. The network consists of 25 sensor nodes. Four of them are anchor nodes and are placed at the corners of the sensor field. Each sensor node is equipped with an XBee ZNet 2.5 OEM RF module, which is able to measure the RSSI. MLE localization was performed both without and with the NCs; the average localization errors are 5.27 and 4.14 m, respectively.
Thus, the NCs reduce the localization error by about 21%.
Reconfigurable Intelligent Space and the Mobile Module for Flexible Smart Space
• Reconfigurable Intelligent Space (R+iSpace) is introduced.
• R+iSpace was proposed to overcome the inadequacies of conventional smart space.
• The devices in R+iSpace can change their position according to the current requirement of the
space.
• By changing their position, the performance of the entire system can be improved.
• The Mobile Module, which is called MoMo, is a wall/ceiling surface robot to suit the
requirements of R+iSpace. In this chapter, the structure of the prototype MoMo is also
described
• Intelligent Space (iSpace) was first proposed in 1996 by the Hashimoto Laboratory at the
University of Tokyo.
• iSpace is a system that provides appropriate services to users in the space by using various
devices and agent robots. Figure 5.2 is a conceptual diagram of iSpace. As shown in the figure,
lots of DINDs (Distributed Intelligent Networked Devices) can be seen installed on the ceiling
and walls of iSpace.
• The DIND is a device that includes a processor for information data handling, a network
communication device, and sensors.
• By using DINDs, iSpace is able to recognize the user’s demands.
• When a user requires a nonphysical service or an information provision, iSpace offers the
appropriate service by using devices such as projectors and speakers.
• When a user requires physical service, iSpace offers the appropriate service by using
agent robots.

• NEW SYSTEM
• Concept of R+iSpace
• R+iSpace is an extended system of iSpace that can overcome the problems mentioned
earlier.
• R+iSpace stands for Reconfigurable Intelligent Space. R+iSpace can rearrange the
position of devices according to the current situation.
• To provide mobility for its devices, the R+iSpace adopts the originally designed
wall/ceiling moving Mobile Module (MoMo). The devices for iSpace are mounted on the
MoMo.
• The MoMo satisfies all the necessary conditions.
• The architecture of MoMo is explained in Section 5.3.
Since the MoMo moves on the wall and ceiling, the MoMo does not interfere with people in
the same space.
The R+iSpace works as follows.
When a user comes into the R+iSpace, the sensor devices such as cameras measure the user’s
direction and position. Other devices share this information, and they compute their optimal
positions to rearrange their position. Thus, the R+iSpace is able to reconfigure the character of
the environment.

Electrical Structure of R+iSpace


Basically, the devices of R+iSpace have the same structure as DINDs in the iSpace.
The DIND is composed of a processor part, a sensor (or an output device) part, and a
communication part. The actuator part is added to the DIND for mobility in R+iSpace.
The devices mounted on the MoMo have to rearrange their position according to the situation.
Some devices use a DC power source, but others require an AC power source. Therefore, wireless communication, which does not limit the movement area, and a stable power supply are needed. The electrical structure of R+iSpace and the MoMo is shown in Figure 5.9.
Software Algorithm of R+iSpace
•Sensor devices measure the user’s absolute position and direction for generating optimal
positions for the MoMos.
• Each device calculates the duration of movement until its optimal position is reached and shares this information with the other devices. Devices that support the same application execute it, taking into consideration the moving time of the MoMo and the importance of the device.
• For example, suppose there were two camera devices in the R+iSpace and three applications to perform: a gesture recognition application, a facial expression recognition application, and a recording application for the overall space.
• First, according to the priority of the applications, R+iSpace selects the applications that have to be executed with priority.
• Then, R+iSpace distributes each application to a device depending on how short the device's movement duration to the target position is. Figure 5.10 shows the entire software sequence of the R+iSpace. All devices perform Sequence 1, Sequence 2, and Sequence 3 in consecutive order.
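This priority-then-shortest-movement assignment can be sketched as follows. The data layout and the greedy strategy are assumptions for illustration, not the authors' exact algorithm:

```python
def assign_applications(applications, devices):
    """Greedily assign applications to devices: applications are handled in
    priority order, and each goes to the free device with the shortest
    movement duration to its target position (illustrative data layout).
    `applications`: list of (name, priority) with lower = higher priority;
    `devices`: dict mapping device -> {app name: movement duration in s}."""
    assignment = {}
    free = set(devices)
    for app, _prio in sorted(applications, key=lambda ap: ap[1]):
        candidates = [(devices[d][app], d) for d in free]
        if not candidates:
            break  # fewer devices than applications: lowest priorities dropped
        _duration, device = min(candidates)
        assignment[app] = device
        free.remove(device)
    return assignment

apps = [("gesture", 1), ("facial", 2), ("recording", 3)]
devices = {"cam1": {"gesture": 4.0, "facial": 9.0, "recording": 2.0},
           "cam2": {"gesture": 6.0, "facial": 3.0, "recording": 5.0}}
result = assign_applications(apps, devices)
# With two cameras and three applications, "recording" is dropped.
```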
PROTOTYPE MOMO

•The prototype MoMo is composed of pinning parts and the panning part.
•The role of the pinning part is to fix the MoMo to the wall or the ceiling. The panning part,
which plays the role of legs of the MoMo, controls the pinning part that is to be located on
the next nut hole.
•The panning part is connected to the pinning parts through four panning actuators that
rotate the pinning parts, and each pinning part has one actuator for fixing to a nut hole.
• The panning part also rotates the MoMo's body by rotating all the panning actuators in the same direction simultaneously. The CAD image of a prototype MoMo is shown in Figure 5.12.
•Pinning Part of the Prototype MoMo
The pinning part consists of an actuator, an actuator gear, a screw body, a bridge part, a screw gear, and a sponge. The structure of the pinning part is shown in Figure 5.13.
The MoMo is a very useful system to rearrange the device in iSpace.

However, the MoMo is also found to have several problems:


• The first problem is that the movement of the prototype MoMo requires a lot of time. In particular, the motion of loosening or tightening screws needs approximately 4 s. The prototype MoMo checks the torque load data from the actuator to estimate its state.
• The second problem is that the trajectory of the mounted device is too large while the MoMo moves, so the device cannot be used while it is moving. Thus, a new mechanism that can overcome these problems is required.

The concept of R+iSpace and MoMo is valid not only for iSpace but also for other sensor network–based environmental
systems, and future research will aim at improving the performance.
