Embedded Systems Application Areas
The ability to automatically track the progress of a task is an important aspect of many
wearable computing applications. In an industrial maintenance scenario, for example, it
would allow the correct manual sections to be delivered automatically (e.g., on a head-
mounted display), as well as automated logging of the procedure for later verification.
Pervasive Healthcare Systems:
Beyond pure technology, the issue of convincing real-life applications is increasingly
becoming a central topic in the field of Pervasive Computing. Here, the area of health
and healthcare has emerged as a promising domain. So-called Pervasive Healthcare
Systems encompass a broad range of topics, such as advanced hospital information
and logistics systems, mobile health monitoring, assisted living for the elderly and the
disabled, and lifestyle- and wellness-related personal systems. All of the above are
highly relevant both socially and economically. The demographic trend towards an
ageing society and rising healthcare costs create a strong demand for solutions
that provide adequate care at affordable cost. Furthermore, consumers are increasingly
health-conscious and looking for lifestyle, wellness and health-related products.
Pervasive computing has begun to transform the previously inanimate physical objects
in the environment around us into aware, physical information resources. Similarly, the
objects that we wear and carry in our pockets are becoming small computational
devices that are capable of storing information, sensing, and communicating wirelessly.
This project explores how to utilize those 'objects' to support a diverse range of human
activities, and how to manage complexity by moving away from dedicated sensory inputs
towards coordination and activity recognition in a more opportunistic fashion.
Emergence is an appealing concept for many reasons. For one, it simplifies the system
description and achieves complex system behaviour with simple rules governing the
interactions between the parts of the system.
The Context Recognition Network (CRN) Toolbox makes it possible to quickly build
distributed, multi-modal context recognition systems by simply plugging together
reusable, parameterizable components. The toolbox thus simplifies the steps from
prototypes to final implementations that may have to fulfill real-time constraints on
low-power mobile devices. Moreover, it facilitates portability between platforms and
fosters easy adaptation and extensibility. The toolbox also provides a set of
ready-to-use parameterizable algorithms, including different filters, feature
computations and classifiers; a runtime environment that supports complex synchronous
and asynchronous data flows; encapsulation of hardware-specific aspects, including
sensors and data types (e.g., int vs. float); and the ability to outsource parts of the
computation to remote devices.
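A minimal sketch of the plug-together idea, assuming a hypothetical component interface: the class and method names below are invented for illustration and do not reflect the real CRN Toolbox API.

```python
from statistics import mean, variance

class Component:
    """Base class for a parameterizable processing component (hypothetical API)."""
    def process(self, data):
        raise NotImplementedError

class MovingAverageFilter(Component):
    """Smooths a signal with a sliding mean of the given width."""
    def __init__(self, width):
        self.width = width
    def process(self, data):
        return [mean(data[max(0, i - self.width + 1):i + 1])
                for i in range(len(data))]

class VarianceFeature(Component):
    """Reduces a window to a single feature: its sample variance."""
    def process(self, data):
        return [variance(data)]

class ThresholdClassifier(Component):
    """Maps the feature vector to a label by comparing against a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold
    def process(self, data):
        return "active" if data[0] > self.threshold else "idle"

def run_pipeline(components, window):
    """Push one window of samples through the plugged-together components."""
    for c in components:
        window = c.process(window)
    return window

pipeline = [MovingAverageFilter(width=3), VarianceFeature(),
            ThresholdClassifier(threshold=0.2)]
print(run_pipeline(pipeline, [0.0, 0.1, 2.0, 2.1, 0.0, 0.1]))
```

Because each stage only exposes `process`, stages can be swapped or re-parameterized without touching the rest of the chain, which is the property the toolbox exploits for fast prototyping.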
Video analysis and motion capturing are standard tools in professional sports to monitor
and improve athletic performance by recognizing and fine-tuning the quality of
movement. Cutting-edge systems with high-quality sensors hardly suffice to fulfill these
professionals’ needs. Quite often, trainers and other experts still process the recorded
data by hand. The whole setup and procedure are not only expensive and time-
consuming but also error-prone in the sense that the effectiveness of the analysis
depends on the humans doing it. Hence, the large-scale use of similar analyses for the
hobbyist and gaming masses requires a completely different approach. We envision
adding context awareness and ambient intelligence to edutainment and computer
sports/gaming applications in general. This requires mixed-reality setups and ever-
higher levels of immersive human-computer interaction. We focus on the automatic
recognition of natural human hand gestures recorded by inexpensive motion sensors
that people wear on their bodies (e.g., integrated into their clothes and other personal
accessories like watches or jewelry).
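The windowed-feature approach common in such gesture recognition could be sketched as follows; the feature set, centroids and labels here are illustrative assumptions, not values from any real dataset or classifier used in this work.

```python
import math

def window_features(samples):
    """Per-axis mean and standard deviation over a window of (x, y, z) samples."""
    n = len(samples)
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        m = sum(vals) / n
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / n)
        feats += [m, sd]
    return feats

def nearest_centroid(feats, centroids):
    """Assign the label whose feature centroid is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feats, centroids[label]))

# Hypothetical centroids, as they might be learned from labelled recordings;
# feature order is [mean_x, sd_x, mean_y, sd_y, mean_z, sd_z].
centroids = {
    "wave":  [0.0, 0.1, 0.0, 0.7, 0.0, 0.1],
    "still": [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
}
window = [(0.1, 0.9, 0.0)] * 4 + [(0.1, -0.7, 0.0)] * 4  # oscillating y-axis
print(nearest_centroid(window_features(window), centroids))
```

The variance-style features capture oscillation, which is why a back-and-forth "wave" separates cleanly from a "still" window even with such a crude classifier.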
Many motion-based wearable activity recognition systems rely on a combination of a
3D accelerometer and a 3D gyroscope (and sometimes also a 3D magnetic field sensor).
While this sensor combination has proven useful in different applications, it is known to
have a number of problems. In particular, tracking absolute orientation and subtle
motion variants is difficult. As a consequence, our group is currently investigating two
alternative sensing modalities. The first is monitoring of muscular activity using force-
sensitive resistors placed on the muscle's surface. Such sensors are very thin and
power-efficient and have also been demonstrated as pure textile devices, so that they
can easily be integrated into garments such as elastic underwear or tight shorts and
shirts. The method relies on the fact that muscle contractions are accompanied by
changes in muscle shape; at the same time, very subtle differences between motions
are often associated with clearly distinguishable muscle activation patterns. The second
uses the same physical principle as large stationary motion trackers, but applies it in a
way suitable for a small, low-power wearable system.
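A toy detector for the force-sensitive-resistor idea might simply threshold a smoothed reading; the threshold and smoothing width below are assumed, uncalibrated values, not parameters from the actual prototypes.

```python
def contraction_intervals(readings, threshold, smooth=3):
    """Detect index intervals where the smoothed force-sensitive-resistor (FSR)
    reading exceeds a threshold, taken as a proxy for muscle contraction."""
    smoothed = []
    for i in range(len(readings)):
        window = readings[max(0, i - smooth + 1):i + 1]
        smoothed.append(sum(window) / len(window))
    intervals, start = [], None
    for i, v in enumerate(smoothed):
        if v > threshold and start is None:
            start = i                         # contraction begins
        elif v <= threshold and start is not None:
            intervals.append((start, i - 1))  # contraction ends
            start = None
    if start is not None:
        intervals.append((start, len(smoothed) - 1))
    return intervals

print(contraction_intervals([0, 0, 5, 6, 7, 6, 0, 0, 0, 8, 9, 0], threshold=3))
```

The smoothing step matters because FSRs pressed against the skin pick up short mechanical spikes that would otherwise fragment a single contraction into many intervals.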
To date, initial prototypes have been demonstrated for both of the above sensors. Work
is currently ongoing on second-generation hardware and on using the sensors for more
complex activity recognition.
Activity recognition and reasoning about human behaviour is a central goal in artificial
intelligence. To understand what humans are doing, an intelligent system has to
match the physical activities (collected sensor data) with appropriate labels. The labels
are derived from textual descriptions of human activities. The approach is to build up an
activity vocabulary from written text corpora, using techniques from computational
linguistics.
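In toy form, building such a vocabulary could look like the following; the seed verb list and the crude suffix handling are illustrative assumptions standing in for the real corpus-driven, part-of-speech-based techniques.

```python
import re
from collections import Counter

# Hypothetical seed list of activity verbs; a real system would derive such a
# vocabulary with part-of-speech tagging and lemmatization over large corpora.
ACTIVITY_VERBS = {"walk", "run", "cook", "clean", "open", "close", "drink", "eat"}

def activity_vocabulary(corpus):
    """Count occurrences of known activity verbs, with crude handling of
    -s/-ed/-ing inflections (a toy stand-in for real lemmatization)."""
    counts = Counter()
    for tok in re.findall(r"[a-z]+", corpus.lower()):
        for stem in (tok, tok[:-1], tok[:-2], tok[:-3]):
            if stem in ACTIVITY_VERBS:
                counts[stem] += 1
                break
    return counts

text = "She walks to the kitchen, opens the fridge and drinks water while cooking."
print(activity_vocabulary(text))
```

Even this crude count already yields the kind of verb inventory that can serve as candidate labels for matching against sensor-derived activity segments.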
Proximity Estimation in Sensor Networks
This work evaluates how the Particle hardware platform can be used for distance
estimation. Sound-based distance estimation, RSSI-based distance estimation and
limitation of the RF reception area have been evaluated. The data collected from
measurements is used to train classifiers. This work is also part of the RELATE project.
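The two ranging principles mentioned can be sketched with standard models; the reference power at 1 m and the path-loss exponent below are assumed values that a real deployment would calibrate from measurements, not parameters from this evaluation.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.0):
    """Invert the log-distance path-loss model
    rssi = rssi_at_1m - 10 * n * log10(d) to estimate d in metres.
    Reference power and exponent n are assumed, uncalibrated values."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

def sound_distance(time_of_flight_s, speed_of_sound=343.0):
    """Distance from acoustic time of flight (speed of sound in air at ~20 C)."""
    return speed_of_sound * time_of_flight_s

print(rssi_to_distance(-65.0))  # 20 dB below the 1 m reference
print(sound_distance(0.01))     # 10 ms acoustic time of flight
```

In practice RSSI-based estimates are far noisier than acoustic ones, which is one reason the collected measurements are fed to trained classifiers rather than used through the model directly.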
For most of our waking time we are in contact with the floor, so why not use the floor as
a sensor system? The first step in this project was building a simulation tool that allows
one to create floor plans, simulate human movement in them, and generate realistic
sensor output. The next step is tracking these movements for further applications, such
as predicting movement intentions.
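A minimal sketch of such a simulation and tracker, assuming a grid of binary pressure sensors and a single walker with a one-cell footprint (both simplifying assumptions, not the tool's actual model):

```python
def simulate_walk(grid_w, grid_h, path):
    """Produce one binary sensor frame per time step for a walker following
    `path`, a list of (x, y) grid positions; toy footprint is a single cell."""
    frames = []
    for x, y in path:
        frame = [[0] * grid_w for _ in range(grid_h)]
        if 0 <= x < grid_w and 0 <= y < grid_h:
            frame[y][x] = 1  # pressure sensor under the current position fires
        frames.append(frame)
    return frames

def track(frames):
    """Recover the trajectory by locating the active cell in each frame."""
    trajectory = []
    for frame in frames:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if v:
                    trajectory.append((x, y))
    return trajectory

path = [(0, 0), (1, 0), (1, 1), (2, 1)]
print(track(simulate_walk(3, 3, path)))
```

Recovered trajectories like this are the input on which intention-prediction experiments could then be run.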