A Survey On Activity Detection and Classification Using Wearable Sensors
Abstract— Activity detection and classification are very important for autonomous monitoring of humans for applications, including assistive living, rehabilitation, and surveillance. Wearable sensors have found wide-spread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, such as accelerometer, gyroscope, and camera, it has become more feasible to develop activity monitoring algorithms employing one or more of these sensors with increased accessibility. We provide a complete and comprehensive survey on activity classification with wearable sensors, covering a variety of sensing modalities, including accelerometer, gyroscope, pressure sensors, and camera- and depth-based systems. We discuss differences in activity types tackled by this breadth of sensing modalities. For example, accelerometer, gyroscope, and magnetometer systems have a history of addressing whole body motion or global type activities, whereas camera systems provide the context necessary to classify local interactions, or interactions of individuals with objects. We also found that these single sensing modalities laid the foundation for hybrid works that tackle a mix of global and local interaction-type activities. In addition to the type of sensors and type of activities classified, we provide details on each wearable system that include on-body sensor location, employed learning approach, and extent of experimental setup. We further discuss where the processing is performed, i.e., local versus remote processing, for different systems. This is one of the first surveys to provide such breadth of coverage across different wearable sensor systems for activity classification.

Index Terms— Wearable, sensors, survey, activity detection, activity classification, monitoring.

I. ACTIVITY DETECTION AND CLASSIFICATION

require the human to manually record the event(s). However, this only serves as motivation for more automated human activity classification systems and approaches. This survey, therefore, comes at a critical time for forging a clear understanding of the current state of activity monitoring systems employing wearable sensors, so that future advances can be made.

Existing activity monitoring systems can be broadly classified into two categories based on the manner in which sensors are employed in an environment: (1) fixed sensor setting, where information is gathered from static sensors mounted at fixed locations, and (2) mobile sensor setting, where the sensors are wearable and thus mobile.

Fixed-sensor approaches involve acoustic [1]–[7], vibrational [8]–[11], or other ambient-based sensors, or static cameras installed at fixed locations [12]–[16]. Acoustic, vibrational, and other ambient sensor-based methods record major events such as walking or falling based on their characteristic vibration patterns. This type of activity monitoring is confined to the limited environments where the sensors are installed, captures all individuals in the environment, and usually involves the installation of specialized sensing equipment.

The alternative to fixed-sensor settings for activity monitoring is mobile sensing, where the sensors are wearable and activity is monitored from a more first-person perspective. Before the last decade and a half, this type of sensing was on par with fixed-sensor settings in terms of availability and cost. Wearable technologies were often large and cumbersome devices more often found in specialized healthcare facilities. What then changed in the last decade and a half? The change was spurred
TABLE I: Papers Classified by Activity Type (Global Body Motion or Local Interaction) Using A/M/G Sensors

TABLE II: Papers Classified by Activity Type (Global Body Motion or Local Interaction) Using Camera
TABLE III: Papers Classified by Activity Type (Global Body Motion or Local Interaction) Using a Hybrid Set of Sensors
interaction type activities tend to place the sensors on the extremities.

A. Global Body Motion Activity Classification

1) Waist-Mounted: Numerous works place accelerometer-based sensors at the waist of the subject. In the detection of global body motion, this central location is in many cases more stable than placing the sensor on one of the extremities, since the extremities do not necessarily move in conjunction with the rest of the body. Mounting at the waist also made belt mounting of the sensor convenient. Initially, works employing this setup used very limited feature sets. Works such as those by Mantyjarvi et al. [25], Song et al. [40], He et al. [48], and He and Jin [49] employed these limited feature sets with a single selected classification algorithm.

Some other earlier works did not even use a classification algorithm, but rather employed empirically determined thresholds for classification [38], [39], [173]. Later works expanded on this threshold-based reasoning, employing a hierarchical methodology to rule out larger classes of activities first. For example, a threshold would be used to determine whether an activity was a motion-type activity, such as walking or running, or a motionless activity, such as standing or sitting. After determining which of the two larger classes (motion or motionless) an activity belonged to, another method would often be used to make the final sub-classification decision of walking, running, sitting, or standing. A rule-based method is used in [41] to initially distinguish between motion and motionless activities, and a final classification decision is then made using a Support Vector Machine (SVM) trained on 3-axis acceleration measurements. Weng et al. [47] use an SVM both to distinguish between static and motion activities and to make the final classification within the subcategories.
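To make the two-stage idea concrete, the sketch below shows a minimal hierarchical pipeline of the kind described above: an empirically chosen variance threshold first separates motion from motionless windows, and SVMs trained on simple 3-axis acceleration features then make the final sub-classification. The window length, threshold value, and feature set are illustrative assumptions, not the settings used in [41] or [47].

```python
import numpy as np
from sklearn.svm import SVC

WINDOW = 128          # samples per window (assumed: 2.56 s at 50 Hz)
VAR_THRESHOLD = 0.05  # empirically chosen motion/motionless threshold (illustrative)

def windows(acc, size=WINDOW):
    """Split an (N, 3) acceleration stream into fixed-size windows."""
    n = len(acc) // size
    return acc[:n * size].reshape(n, size, 3)

def is_motion(window):
    """Stage 1: threshold on the mean per-axis variance separates motion from motionless."""
    return window.var(axis=0).mean() > VAR_THRESHOLD

def features(window):
    """Simple 3-axis features (mean and standard deviation per axis)."""
    return np.hstack([window.mean(axis=0), window.std(axis=0)])

# Stage 2: one SVM per coarse class refines the decision.
motion_svm = SVC(kernel="rbf")   # e.g., walking vs. running
static_svm = SVC(kernel="rbf")   # e.g., sitting vs. standing

def train(acc_windows, labels):
    X = np.array([features(w) for w in acc_windows])
    moving = np.array([is_motion(w) for w in acc_windows])
    motion_svm.fit(X[moving], labels[moving])
    static_svm.fit(X[~moving], labels[~moving])

def classify(window):
    x = features(window).reshape(1, -1)
    return (motion_svm if is_motion(window) else static_svm).predict(x)[0]
```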
Ataya et al. [50] also employ a hierarchically structured approach, first separating static from dynamic activities. They compare numerous classifiers for making the final classification decision and determine Random Forests to be an improvement over SVMs as well as other techniques. Unique to [50], Ataya et al. study the temporal coherence of activities over time. In other words, a Hidden Markov Model (HMM) is used to determine whether or not it is plausible for someone to be lying down immediately after running.
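A minimal way to encode this temporal-coherence idea is an HMM-style transition model over activity labels: transitions that are physically implausible (e.g., running directly to lying down) receive near-zero probability, and the per-window classifier confidences are smoothed with a Viterbi pass. The states, transition values, and confidences below are illustrative assumptions, not the parameters of [50].

```python
import numpy as np

STATES = ["lying", "sitting", "standing", "walking", "running"]

# Hand-tuned transition matrix: rows = previous state, columns = next state.
# A zero entry makes e.g. running -> lying highly implausible (illustrative values).
A = np.array([
    [0.90, 0.08, 0.02, 0.00, 0.00],   # lying
    [0.05, 0.85, 0.08, 0.02, 0.00],   # sitting
    [0.01, 0.08, 0.80, 0.10, 0.01],   # standing
    [0.00, 0.01, 0.09, 0.80, 0.10],   # walking
    [0.00, 0.00, 0.02, 0.18, 0.80],   # running
])
pi = np.full(len(STATES), 1.0 / len(STATES))   # uniform initial distribution

def smooth(frame_probs):
    """Viterbi decoding; frame_probs is a (T, 5) array of per-window classifier confidences."""
    T, N = frame_probs.shape
    logd = np.log(pi + 1e-12) + np.log(frame_probs[0] + 1e-12)
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A + 1e-12)
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(frame_probs[t] + 1e-12)
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Example: a lone "lying" detection sandwiched between running windows is smoothed away.
probs = np.array([[0.0, 0.0, 0.1, 0.2, 0.7],
                  [0.6, 0.0, 0.1, 0.1, 0.2],   # noisy frame
                  [0.0, 0.0, 0.1, 0.2, 0.7]])
print(smooth(probs))   # -> ['running', 'running', 'running']
```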
Similar to [50], another major trend in this area, outside of hierarchical approaches, was comparing numerous classifiers in the design of a system. Ravi et al. [51] compared numerous classification techniques using a small combination of 12 time- and frequency-domain features. They not only compared classifiers but also attempted to fuse numerous different combinations of classifiers. One of the novelties of [51] is effectively performing activity classification with small feature sets using a combination of low-power classifiers. Lara and Labrador [56] compare numerous techniques and create a code base for an Android-based operating system. Kwapisz et al. [42] compared numerous classifiers using a smartphone-based platform, but placed the emphasis on the diversity of a 43-element feature set.
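The kinds of time- and frequency-domain descriptors mentioned above are easy to compute from a single accelerometer window. The sketch below builds a small illustrative feature vector (per-axis mean, standard deviation, energy, pairwise axis correlation, and dominant FFT frequency); published feature sets such as those in [42], [51] differ in their exact composition and size.

```python
import numpy as np

def window_features(window, fs=50.0):
    """Small time/frequency feature vector from one (N, 3) accelerometer window.

    The specific features are illustrative; published sets range from a dozen
    to more than a hundred entries."""
    feats = []
    feats.extend(window.mean(axis=0))            # 3 per-axis means
    feats.extend(window.std(axis=0))             # 3 per-axis standard deviations
    feats.extend((window ** 2).mean(axis=0))     # 3 per-axis energy terms
    c = np.corrcoef(window.T)                    # pairwise axis correlations (xy, xz, yz)
    feats.extend([c[0, 1], c[0, 2], c[1, 2]])
    spectrum = np.abs(np.fft.rfft(window, axis=0))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    feats.extend(freqs[spectrum[1:].argmax(axis=0) + 1])   # dominant non-DC frequency per axis
    return np.array(feats)                       # 15 features per window
```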
With improvements in processing power, more recent works seemed to follow this trend of improving features as opposed to classifiers. References [43], [46], and [84] all compare various classifiers while leveraging unique feature sets that range from 76 to 110 features. Feature selection was also a part of some of these works; in other words, the improvements in accuracy seemed to stem not from a particular classifier, but rather from the input provided to the classifier. Still others focused on novelties outside of improving activity classification accuracy through feature sets. Alshurafa et al. [57] focused on distinguishing intensity levels of different activities, whereas [44], [45] attempt to maintain classification accuracy while improving energy consumption.

2) Mounted in Other Locations: After 2006, the placement and number of accelerometer-based sensors appeared to expand for a single system. With an increasing number of sensors at different locations, the average number of activities classified also rose. Many of the waist-mounted works tackled activity sets of around five, yet with variation in placement and more sensors, many of the works to be discussed in this section classified a larger range of activities (up to 20), with the average number of classified activities being closer to seven or eight.
However, some works still prescribed to a more central body location, such as the chest [59], [67], [174] or lower back [77]. Chung et al. [59] use frequency domain features from a chest-mounted sensor to distinguish between rest, walking and running. They use a rough hierarchical threshold-based technique on the Signal Magnitude Area (SMA). Curone et al. [67] likewise use SMA from an accelerometer attached to the upper body trunk with a hierarchical threshold-based technique to distinguish moving from non-moving activities. However, Curone et al. [67] extend the application to tackle a larger number of classes compared to [59]. Other works, such as [174], use a chest-mounted sensor with SMA features but extend the simple threshold-based decision to an HMM.
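The Signal Magnitude Area used in [59] and [67] is straightforward to compute; a common discrete form sums the absolute values of the three (typically gravity-compensated) acceleration components over a window and normalizes by the window duration. The sketch below follows that common definition; the threshold value is an illustrative assumption that depends on units and preprocessing.

```python
import numpy as np

def signal_magnitude_area(acc, fs=50.0):
    """Discrete Signal Magnitude Area of an (N, 3) acceleration window:
    SMA = (1 / T) * sum_t (|ax(t)| + |ay(t)| + |az(t)|) * dt,
    where T is the window duration in seconds."""
    dt = 1.0 / fs
    duration = len(acc) * dt
    return np.abs(acc).sum() * dt / duration

def is_moving(acc, threshold=0.135):
    """Threshold test of the kind used to split moving from non-moving activities.
    The threshold is illustrative and assumes gravity has been removed."""
    return signal_magnitude_area(acc) > threshold
```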
There are also works placing the sensors on the extremities of the body. This placement has become more common recently with arm-band-mounted smartphones and wrist-based hardware such as e-watches. Additionally, these works tend to be more complex in terms of the learning algorithm applied to detect certain activities. That is, unlike the threshold-based techniques applied to the central body locations, these papers employ supervised learning techniques to address the classification problem. References [68] and [175] employ thigh-based deployments of single 3D accelerometers, and both use multiple hierarchically-structured algorithms to make their classification decisions. Mannini and Sabatini [68] apply two cascaded SVMs, whereas Zhu and Sheng [175] use two Neural Networks in conjunction with an HMM. Another common extremity placement is on the wrist [69], [70], [73]. Chernbumroong et al. [69] compare numerous different feature combinations, as well as two learning algorithms, namely C4.5 decision trees and Artificial Neural Networks (ANNs). Andreu et al. [70] propose a system using an evolving self-learning fuzzy rule-based classifier and address a more comprehensive activity set than [69], including vacuuming, stretching and bicycling. Margarito et al. [73] likewise classify some exercise activities, such as cross training, rowing, squatting and weightlifting, in addition to the standard set of walking, running, and sitting. They apply a template matching technique to the acceleration signals to make the classification decision. Lee et al. [74] explore a sensor placement different from thigh and wrist altogether. They introduce a decision-tree-based model created from a foot-mounted gyroscope to classify walking, running, and going upstairs and downstairs.

There are other works that did not choose a single location for an accelerometer, gyroscope or a magnetometer. Dobrucali and Barshan [85], as well as [58], evaluated different sensor body placements and selected the location with the best data. Yet, while [58] and [85] chose the best locations, other systems were created that use data from multiple orientation-based sensors. The simplest combination is the use of pairs of sensor locations. Some of the works applying pairs of sensor placements, such as [55], [80], and [81], only tackle the same average activity sets of around five or fewer activities. In their defense, in [80] and [81] the focus was on specific applications such as the ability of the elderly to ambulate and fall detection, so the goal was not classifying a larger set of activities. On the other hand, numerous works have been presented discriminating a larger set of activities by using information from two sensor locations. Lustrek et al. [71], Xu et al. [75], and Ghasemzadeh and Jafari [83] were able to discriminate between 11, 14, and 25 activities, respectively.

Multi-sensor approaches were not limited to pairs of sensors. Krishnan and Panchanathan [60], Trabelsi et al. [79], and Mortazavi et al. [86] use data from sensors at three different locations. Ermes et al. [176], Ugulino et al. [177], and Zhu and Sheng [26], [63] use data from four orientation-based sensors. Atallah et al. [32], Rednic et al. [78], and Huynh and Schiele [82] use seven or more orientation-based sensors. For all of these arrangements containing more than two sensors, the number of classified activities is also significant, ranging from 5 to 20, with the average number of classes being around 10.

The works that are not focused on multi-sensor combinations or placements can be distinguished by their hardware implementation. Earlier works developed their own specialized hardware. Wang et al. [62] develop a new hardware instantiation of self-organizing maps or neural network algorithms. Hao et al. [64] propose onboard human activity classification using a 3D accelerometer with a GPS module placed anywhere on the body. Basterretxea et al. [53] present an onboard Field Programmable Gate Array (FPGA)-based human activity classification. Numerous later works [47], [52], [54], [65], [72], [76], [178]–[182] attempt to make use of other commercially available hardware, such as mobile platforms or smartphones. Ryder et al. [65] propose a tool for monitoring mobility patterns over time using a mobile phone. Others tried to tolerate different placements of these mobile platforms, trying to prevent constraints on how an individual carries their mobile platform. Berchtold et al. [76], Siirtola and Roning [72], and Anjum and Ilyas [52] propose smartphone systems with various different locations.

B. Local Interaction Activity Classification

In addition to the works classifying ambulatory movements, there are recent works focusing on other types of activities. This has also been motivated by the introduction of smart watches and FitBit®-like devices. Almost all of these works place the sensors on a portion of the arm in order to detect a user's interactions with the environment. These works vary based on the types of local interactions that they are trying to detect and the number of sensors employed.

Activities of interest to be classified with a single accelerometer-based sensor, mounted at a location on the arm, typically the wrist, have included eating and drinking [96], dynamic and static activities [22], work types of activities [97], different arm movements [99], and long-term motions [98]. Min et al. [88]–[90] were interested in classifying morning activities. In their earlier work, Min et al. [88] use a single 2D wrist-worn accelerometer and apply a Gaussian Mixture Model (GMM). In [89], Min et al. also use a GMM, but extend the work by adding a fixed sensor. Min et al. [90] additionally explore using an audio recorder with a single 2D accelerometer. In this work [90], the addition of another sensing modality allows the extension from three activities to six, including brushing teeth, washing, shaving, electric brushing, and electric shaving, with the GMM approach. However, placement of a single sensor was not limited to the arm. Fortune et al. [91] extract three features from a single accelerometer and gyroscope located on the thigh. They classify other types of daily activities including dressing, walking, sitting, reading, washing dishes, climbing stairs, writing, dusting, and folding laundry.
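One way to realize the GMM-based recognition described above is to fit one mixture per activity on wrist-accelerometer feature windows and assign new windows to the class with the highest likelihood. The sketch below uses scikit-learn's GaussianMixture with illustrative settings; it is a generic rendering, not the exact model of [88]–[90].

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMActivityClassifier:
    """One Gaussian mixture per activity; windows are assigned to the most likely class."""

    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}

    def fit(self, feature_windows, labels):
        for activity in np.unique(labels):
            gmm = GaussianMixture(n_components=self.n_components, covariance_type="diag")
            gmm.fit(feature_windows[labels == activity])
            self.models[activity] = gmm

    def predict(self, feature_windows):
        names = list(self.models)
        scores = np.column_stack(
            [self.models[a].score_samples(feature_windows) for a in names])
        return np.array(names)[scores.argmax(axis=1)]
```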
In order to create more comprehensive systems with the ability to discriminate an increasing number of activities, other works have explored multi-accelerometer placements [92], [94], [95]. As with global body motion described above, the introduction of multiple accelerometer-based sensors for local interaction classification brought the necessity to use more advanced supervised learning methods. In other words, many of the works employing a single sensor used threshold-based techniques, GMMs, or other types of models such as HMMs. Works such as [92], [94], [95] use more complex learning methods compared to single-sensor works. Lin et al. [94] present a wearable module and neural-network-based activity classification for daily energy expenditure estimation. They propose to place accelerometers on an individual's waist, wrists and ankles. They construct energy expenditure regression models from 12 acquired and nine derived features to classify 14 different activities. Zhelong et al. [95] deploy five 3D-accelerometers on the wrists, ankles and trunk to classify numerous cleaning, eating, and at least 10 other daily life activities. Five features are extracted for each axis per sensor. An overall accuracy of 91.3% is achieved using a probabilistic neural network (PNN) and adjustable fuzzy clustering (AFC). The contribution of [95] is an iterative learning method for adding new training samples and new activity classes to the already learned AFC and PNN model. Stickic et al. [93] looked at improving training by proposing a framework for learning activities from labeled and unlabeled data. This framework involves using experience sampling to classify activities such as meal preparation, dishwashing, grooming, etc. Min and Cho [92] also deploy five sets of accelerometers and gyroscopes. They place the sensors on the hands to classify 19 activities that include clapping, shaking, teeth-brushing, and other hand manipulation tasks.

III. RECENT WORK USING ONLY WEARABLE CAMERAS

Over the years, the flexibility of wearable camera systems has evolved thanks to decreasing camera and processor sizes and increasing resolution and processing power. Table II groups camera-based works into two classes (based on whether they focus on global body motion or local interaction with objects), and lists the different activity types that these works address. Initially, many of the efforts involved backpack-based camera systems such as those in [151]. Camera technologies have now been developed to the point where image capture can be accomplished by less intrusive setups. Numerous demonstration systems, especially with the introduction of Google Glass®, have prototyped algorithms for activity classification based on images captured from an eye-glass-worn camera [21], [103]. Others have adopted head- [106], [114], [115] or chest-mounted cameras [111], [124]. Bambach [183] summarizes the advances in computer vision algorithms for egocentric video.

While these wearable camera-based activity classification systems can be distinguished from one another in terms of the placement of the camera on the body, it can be seen from Table IV that the majority of these camera-based setups choose either a head- or torso-mounted configuration. Additionally, Table IV shows that all works include only a single camera and often do not specify the number of subjects or have fewer than 10 subjects. The high-level distinguishing factor between these camera-based systems is the type of features extracted from the visual scene. These features may be more general, high-level scene features that quantify the motion of the scene, or more fine-grained details that try to capture interactions of the subject with objects in the scene. We therefore, similarly to Section II, group the wearable camera-based activity classification work into two categories: global body motion and local interaction activity classification, described in Sections III-A and III-B, respectively.

A. Global Body Motion Activity Classification

While there are some works in the area of global motion activity classification based on wearable cameras, one will notice that this section has comparatively fewer works than global body motion activity detection using A/M/G. Nonetheless, there are some works that have explored global motion using camera-based techniques. Global body motion is often inferred from images based on an estimate of the camera motion. The techniques for estimating this camera motion often fall into two main groups: (i) techniques that leverage methods based on point-based features [101], [184]–[186], and (ii) techniques that employ optical-flow-like features [103]–[106], [108].

While some works employing point-based features used standard detection algorithms such as Speeded Up Robust Features (SURF) [101], others focused on developing novel point-based features. Zhang et al. [185], [186] use a novel point feature detection based on defining interest points from the covariance matrix of intensities at each pixel. They then calculate motion histograms based on the point features to capture the global motion distribution in the video.
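As a rough illustration of the point-feature route to camera-motion estimation, the sketch below matches ORB keypoints between consecutive frames with OpenCV and recovers a similarity transform whose translation, rotation and scale can serve as simple global-motion features. ORB and these specific OpenCV calls are substitutes chosen for availability; [101] uses SURF, and [185], [186] use their own covariance-based interest points.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def ego_motion(prev_gray, curr_gray):
    """Estimate inter-frame camera motion from matched keypoints.

    Returns (dx, dy, rotation_deg, scale) from a partial affine (similarity)
    transform, or None if too few matches are found."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 10:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return None
    dx, dy = M[0, 2], M[1, 2]
    scale = np.hypot(M[0, 0], M[1, 0])
    rotation = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return dx, dy, rotation, scale
```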
The optical flow based techniques appear to differ based on the learning algorithm employed. Kitani et al. [105] investigate the use of motion-based histograms from optical flow vectors and unsupervised Dirichlet learning on classification of 11 sports-based ego-actions. Zhan et al. [103] extract optical flow features for each frame, and pool the optical flow vectors over numerous frames to provide temporal context. They compare three classification approaches, namely K-Nearest Neighbor (KNN), SVM, and LogitBoost. The approach also includes a comparison with and without an HMM. Yin et al. [104] also use an optical flow technique, but employ an SVM.
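A simple version of this optical-flow pipeline computes dense Farnebäck flow per frame pair, bins the flow directions (weighted by magnitude) into a histogram, averages the histograms over a clip to add temporal context, and hands the pooled descriptor to an off-the-shelf classifier such as KNN or an SVM. The bin count, pooling window, and classifier settings below are illustrative assumptions rather than the configurations of [103]–[105].

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

N_BINS = 8  # direction bins for the flow histogram (illustrative)

def flow_histogram(prev_gray, curr_gray, bins=N_BINS):
    """Magnitude-weighted histogram of dense optical-flow directions for one frame pair."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def clip_descriptor(gray_frames):
    """Pool per-frame histograms over a clip to provide temporal context."""
    hists = [flow_histogram(a, b) for a, b in zip(gray_frames[:-1], gray_frames[1:])]
    return np.mean(hists, axis=0)

# Either classifier can then be trained on pooled descriptors from labeled clips.
knn = KNeighborsClassifier(n_neighbors=5)
svm = SVC(kernel="rbf")
```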
There are still other works that do not exactly fall into describing global motion based on local point features or optical-flow-based features. Ozcan et al. [107] propose a wearable camera-based fall detection and activity classification system without applying learning-based methods. They use separate edge strength and orientation histograms for fall detection. Song and Chen [100] also use a histogram of oriented gradients approach to detect humans from their camera-based system. However, in the case of [100], the system uses a mobile camera on a robot watching the human. Similarly, Watanabe and Hatanaka [102] also use a different camera setup, wherein a single hip- or waist-mounted camera is employed to describe the gait of an individual. Unlike other works, the camera is facing downward and motion is relative to the waist position. A walking state at each frame is composed of the position of the waist and the relative positions of the joints, as well as the angular speed of the camera. It is assumed that the intrinsic parameters of the camera and the world coordinates of the observed points are known.

B. Local Interaction Activity Classification

The works described in Section III-A, focusing on higher-level, global body motion activities, while comparable to and competitive with the accelerometer-based approaches, do not necessarily take advantage of the visual details captured by cameras. Thus, other researchers have considered the ability to classify more fine-grained activities [113], [114], [116], [118], [119]. While some of these works, like the local interaction papers of Section II-B, look at distinguishing between activities such as brushing teeth, washing dishes and vacuuming, the visual information allows for even more fine-grained classification, such as distinguishing between making coffee or making tea. To capture this granularity of classification, the works in this section view activities as involving a subject and object interaction or the interaction of multiple subjects.

The majority of the works discussed in this section will be on classifying activities that involve object and subject interactions; however, we do not want to ignore works such as [113] and [121] that classify interactions between multiple subjects. Aghazadeh et al. [121] propose novelty detection in egocentric video by measuring appearance and geometric similarity of individual frames and exploiting the invariant temporal order of the activity to determine if a subject runs into a friend, gives directions, or goes to an ice cream shop. Fathi et al. [113] similarly classify the type of interaction multiple subjects are involved in, such as a dialogue, monologue, or discussion.

To begin to understand these interactions, there have been supporting works that contributed to individual aspects of being able to tackle the larger problem of activity classification. These supporting works have addressed problems such as detecting a hand [120], better recognizing an object from an egoview [187], methods for gaze/attention focus detection and 3D estimation of hand-held objects [188], or linking an object to a location [189]. Another area of work has been in the form of video summarization, which contributes to activity classification by developing techniques for temporally segmenting a video into distinct actions. The ability to properly segment regions of activity has been furthered by several works of Grauman et al. [190]–[194] that proposed algorithms for video event summarization for ego-centric videos, including predicting important objects and people [191], [192], extracting frames that might be a snapshot from ego-centric video [190], and video summarization of daily-activity ego-centric videos based on detected objects and their relations and co-occurrences [193]. In addition to temporal segmentation, Lee and Grauman [194] also explored within-frame segmentation using region cues to emphasize high-level saliency.

Attempts at activity classification began with works such as [118] by Mayol and Murray, who use a shoulder-mounted camera to recognize manipulations of objects by an individual's hands. This work, however, is constrained to a static workspace view, and manipulations are described as actions carried out by the subject's hands. In fact, to first determine the center of focus, they use the center of mass of a detected skin region, where these skin regions are assumed to be either one or both hands. More recent works began to use less constrained experimental setups, where the camera is in fact mobile. Sundaram and Cuevas [116] use a shoulder-mounted camera on a moving subject. They, like their predecessors, also attempt to detect the hands and represent activities as manipulations by hands. They divide their system into two parts: the first, where vision is relied on to recognize the manipulation motions, and the second, where a Dynamic Bayesian Network (DBN) is used to infer objects and activities from the motions of the first part. An accuracy of 60.99% is obtained for manipulation motions that include making a cup of coffee, making a cup of tea, washing a dish, and so on.
Fathi et al. [114] also use manipulations by the hand to classify fine-grained activities and similarly focus on food preparation tasks such as making a hotdog, a sandwich, coffee, etc. They propose a framework for describing an activity as a combination of numerous actions. For example, actions of the coffee-making activity include opening the coffee jar, pouring water into the coffee maker, and so on. They attempt to assign action labels to each segmented image interval and an object, hand, or background label to each super-pixel in each frame. The action models are learned using Adaboost [195] and a set of features that include information such as object frequency, object optical flow, hand optical flow, and so on. The action verbs are estimated first, then the object classes are inferred with a probabilistic model. The final steps involve refining decisions from previous stages based on the final decisions of activities, using a conditional random field. This work is evaluated on the GeorgiaTech egocentric activity dataset, which includes video from a head-mounted camera. Frame-based action recognition accuracy is 45%, whereas activity classification accuracy is 32.4%. They further demonstrate that object recognition is in fact improved based on action classification results. In [115], Fathi et al. demonstrate improved results on the same GeorgiaTech dataset by modeling actions through the state changes that are caused on an object or materials. This is in contrast to their initial work, which uses information from all frames. To detect changed regions, the authors sample frames from the beginning and end of an action sequence. Change is measured in terms of the color difference of matched pixels. An SVM is trained to detect regions that correspond to specific actions. Using state change detection, 39.7% accuracy is achieved over 61 classes on the GeorgiaTech egocentric activity dataset. The activity segmentation results in 42% accuracy.

As accuracies in object detection works began to improve, others began to use these more specific object detection models for activity classification. Pirsiavash and Ramanan [110] use fully supervised learning with dense labels to train deformable parts-based object detectors. With the additional label information, this enables a more complex understanding of multiple objects interacting in a single scene. They also recognize that changes in the objects occur throughout an action sequence, and develop a notion of active versus passive objects and create separate detectors for objects in each state. To evaluate their system, they created and annotated a 1 million-frame dataset, and obtain results of around 30% frame classification accuracy. However, with an assumed ideal object detector, they demonstrate that this method would obtain a 60% frame classification accuracy. The created dataset is from a chest-mounted GoPro camera and contains label information about activities such as brushing teeth, combing hair, making coffee, and so on. Numerous works leverage this dataset to evaluate their approaches, including [109], [111], [112]. McCandless and Grauman [109] propose to learn spatio-temporal discriminative partitions for egocentric videos and apply object detection to obtain a strong classifier for recognition of 18 activities, achieving an overall 38.7% F score.

Matsuo et al. [111] further extend the work in [110] through visual attention and saliency, accounting for cases where a hand may not be involved in the interaction. The groundwork for using visual attention and saliency can be linked to other works, including [196], [117], and [197]. Matsuo et al. [111] propose a method for quantifying visual saliency, not only through the static saliency of an image, but through the ego-motions of the first-person viewer. The results are based on the same dataset used in [110]. The results improve the average recognition accuracy over all activities from 36.9% to 43.3% and decrease the variance in accuracy over numerous subjects from 9.8% to 7.1%.

IV. RECENT WORK USING OTHER SINGLE TYPE SENSORS

This section will summarize works that only use a single type of sensor (other than A/M/G or camera). These sensor types include Electrocardiogram (ECG), Photoplethysmogram (PPG), some acoustic sensors, custom-built sensors, etc. This section does not have as many papers as the other sections. The reason is that many works use these types of sensors in conjunction with other sensor types; we refer to these as works using hybrid sensor modalities and summarize them in Section V.

As the single sensor type, Pawar et al. [198], Kher et al. [199] and Keskar et al. [200] use an ECG. El Achkar et al. [130] use force sensors on the bottom of an individual's feet. The total force is estimated and a linear regression model of the total force was created to classify each activity. Accuracy obtained was over 93% for sitting, standing, and walking. Hardegger et al. [201] proposed the LocAFusion algorithm for improving activity recognition while using multiple Inertial Measurement Unit (IMU) sensors located at different parts of the body, such as the foot and the wrist. Bulling et al. [202] proposed a method for human activity recognition using an electrooculography system that records eye movement. Types of activities include copying text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web.

Mukhopadhyay [203] provides a recent review of wearable sensors for activity monitoring, focusing mostly on healthcare applications using recent wearable sensors, including some works that use custom-designed single sensors. Cheng et al. [204] proposed capacitive sensors built from conductive textile patches that can monitor heart and breathing rate, hand gesture recognition, swallowing monitoring and gait analysis. Wujcik et al. [205] developed an ion sensor for quantifying the amount of sodium ions in sweat in real time. Veltink and Rossi [206] present e-textile and micromechanical sensors for accurate analysis of movement characteristics during activities of daily living. Shaltis et al. [207] proposed a wearable cuff-less photoplethysmographic monitor for measuring blood pressure. Corbishley and Rodriguez-Villegas [208] measured breathing rate, with accuracy of more than 90%, using a wearable acoustic sensor such as a microphone. A heart rate monitoring application is proposed by Patterson et al. [209], which is based on a flexible and low-power PPG sensor. Dudde et al. [210] developed a wearable closed-loop drug infusion system that analyzes blood glucose levels and infuses insulin appropriately. Ahn et al. [211] developed a low-cost, disposable biochip for biochemical detection of parameters including glucose and blood gas concentration.

Recently, flexible sensors have also been presented. Trung et al. [212] provided a review of flexible and stretchable physical sensors for applications of activity monitoring and personal healthcare. Chen et al. [213] proposed a body temperature sensor that is wearable, breathable and stretchable for healthcare monitoring. So far, such sensors are capable of measuring temperature, pressure and strain, and thus offer limited capability for recognizing various daily activities. However, they promise applicability towards activity detection and recognition if their sensing capabilities are improved to the level of A/M/G sensors. These custom-built single sensors allow accurate and robust measurements for healthcare applications.

V. RECENT WORK USING HYBRID SENSOR MODALITIES

With the advances in single-sensor-type systems, and sensor technologies becoming smaller, cheaper and more commercialized, there have been many research efforts that now look not only at a single sensor modality but rather at a hybrid of multiple types of sensors. Table III specifies the activities addressed by each hybrid sensor work. These approaches are not as easily separated into the classification of global body and local interaction type activities, as can be seen in Table III, and a single work in this area can often address activities in both categories. Hybrid sensing systems, with multiple sensors, also often employ supervised learning methods, as shown in Tables IV and V. In the next two subsections, we discuss different combinations of sensor modalities. Section V-A discusses works that contain at least one imaging sensor, while Section V-B discusses systems which do not contain camera sensors or imaging capabilities.
A. Works Using Hybrid Sensor Modalities With Cameras

There are multiple hybrid systems that include vision-based sensors in addition to other sensor types [122], [124], [140]. While Nam et al. [124] simply fuse information from a camera and accelerometer to increase the accuracy of classification for ambulatory activities, Doherty et al. [140] and Wu et al. [122] use the context provided by cameras to identify the specific class of activity once an accelerometer has identified the level of activity being undertaken.

Many of these works favor a camera and orientation-based sensor combination. Spriggs et al. [214] focus on temporal segmentation of an activity in order to recognize different temporal parts. The authors explore the usage of GMMs, HMMs, and K-Nearest Neighbors for segmenting and classifying various actions involved in the cooking of different meals. The results demonstrate that using both IMU and camera data improves results over single-modality sensing. Additionally, the best results were obtained using a K-Nearest Neighbor approach, whose success was largely attributed to the fact that the feature vectors being used had high dimensionality. Li et al. [215] utilize a gyroscope to obtain candidate boundaries between different daily activities, and extract visual features from images for better video segmentation. Ozcan and Velipasalar [149] proposed a wearable camera- and accelerometer-based fall detection algorithm for portable systems that can eliminate false positives significantly, compared to camera-only and accelerometer-only systems, while preserving the correct detection rate.

More recently, a common proposal for these camera and orientation-based sensors has been an eyeglass-mounted system [21], [134], [135], [216]. Windau and Itti [21] use the inertial sensing from a gyroscope and accelerometer to normalize the coordinate system for different head orientations. For the inertial sensors, energy consumption and movement intensity, and the mean and variance of the sensor readings, are extracted. As for the camera-based features, a single GIST vector per frame is calculated. The video data is also used to classify an indoor versus outdoor environment using the GIST vectors. The activities classified include lying down, walking, jogging, biking, washing dishes, brushing teeth, etc. The results show an 81.5% classification accuracy on 20 different activities. Zhan et al. [135] also design an eyeglasses-like system with a first-person-view camera and accelerometer. This work shows that while accelerometer-only classification has proven effective for dynamic activities, the camera is more suitable for static activities. The extracted accelerometer features include both frequency and time domain information. The extracted video features include optical flow vectors. The authors compare classification results for different sensors using LogitBoost and SVM with numerous different kernels. Combining the classifiers through structured prediction using a Conditional Random Field (CRF) with Tree-Reweighted Belief Propagation, they obtain an overall accuracy of 84.45% on 12 activities. Hernandez et al. [134] propose real-time human activity recognition of sitting, standing and supine postures with a glasses-based system with onboard processing.

Ishimaru et al. [216] additionally make use of an Infrared (IR) proximity sensor on a glasses-based system. The IR proximity sensor is used to measure the distance between the eyes and the eyewear in order to perform blink detection. Moreover, the average variance of a 3D-accelerometer is calculated to construct a head motion model. They try to distinguish between the activities of watching, reading, solving, sawing and talking. An overall accuracy of 82% is achieved on an 8-person dataset for these five activities. Other works have used IR, orientation, and camera systems. Fleury et al. [148] use a webcam, a 3D-accelerometer, a 3D-magnetometer, an IR sensor, and a microphone. They apply Principal Component Analysis (PCA) on extracted features to obtain ten dominant features that are fed through a multi-class SVM for 35 activities, reaching 86% accuracy. Punangpakisiri et al. [144] deploy a SenseCam with a 3D accelerometer, a light sensor, and a passive IR sensor to classify ten activities.
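The PCA-plus-SVM arrangement reported for [148] maps naturally onto a standard pipeline: standardize the concatenated multi-sensor features, keep the ten dominant principal components, and train a multi-class SVM on the reduced vectors. The sketch below is a generic scikit-learn rendering under those assumptions, not the authors' implementation.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Features from the camera, accelerometer, magnetometer, IR sensor and microphone
# are assumed to be concatenated into one vector per window (X_train), with one
# activity label per window (y_train).
model = make_pipeline(
    StandardScaler(),                                   # put heterogeneous features on one scale
    PCA(n_components=10),                               # keep the ten dominant components
    SVC(kernel="rbf", decision_function_shape="ovr"),   # one-vs-rest multi-class SVM
)

# model.fit(X_train, y_train)
# predictions = model.predict(X_test)
```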
With advances in gaming systems such as the Kinect, Red, Green, Blue, and Depth (RGB-D) sensors have also surged in popularity in the last few years. Bahle et al. [145], Damen et al. [151], and Moghimi et al. [137] explore the use of RGB-D sensors that provide both a regular imaging capability and depth information. Bahle et al. [145] reach 92% overall accuracy by using camera and depth-based information with a Dynamic Time Warping technique when classifying walking, writing, using stairs, and using the dishwasher. A helmet-mounted RGB-D camera is employed in [137]. The features used include GIST and a skin segmentation algorithm. To classify the activities, learning-based methods such as bags of SIFT words, Convolutional Neural Networks (CNNs), and SVMs were explored. They conclude that CNN-based features provide the best representation for finer-grained activity recognition or tasks that involve manipulations of objects by hands. These works, however, are still relatively new and have explored limited activity sets compared to other hybrid approaches, with around three to six activities classified in each of these works.
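Dynamic Time Warping, as used in [145], aligns a test sequence against stored per-activity templates and picks the label of the closest template, which tolerates differences in execution speed. The sketch below is a small, generic DTW nearest-template classifier over per-frame feature sequences (e.g., concatenated accelerometer and depth descriptors); the frame distance and template selection are illustrative choices, not those of [145].

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(len(a) * len(b)) dynamic time warping with Euclidean frame distance."""
    a, b = np.asarray(seq_a, float), np.asarray(seq_b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_by_template(sequence, templates):
    """templates: dict mapping activity label -> list of example feature sequences."""
    best_label, best_dist = None, np.inf
    for label, examples in templates.items():
        for example in examples:
            d = dtw_distance(sequence, example)
            if d < best_dist:
                best_label, best_dist = label, d
    return best_label, best_dist
```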
B. Works Using Hybrid Sensor Modalities Without Cameras

There are yet other works that rely on a combination of numerous sensors but do not use an imaging sensor. To acquire the context otherwise extracted from a camera, such as object type or an indication of location in an indoor space, one common sensor suite is one that contains a Radio Frequency Identification (RFID) sensor in addition to other modalities. Hong et al. [123], [125] use an accelerometer combined with an RFID sensor to identify object types and ultimately provide the context for classification. Shimokawara et al. [150] use tagging technology to recognize what area of a house the individual is in and then apply accelerometer-based information for classification. Torres et al. [126] also use a 3D-accelerometer and RFID to classify motions in and out of bed. Ranasinghe et al. [217] use RFID information to monitor the motions of patients in a hospital.
TABLE IV: Papers Classified by Sensor Location, Number, and Type, Number of Subjects Used for Evaluation, and Variant of Learning Algorithm Used

TABLE V: Papers Classified by Sensor Location, Number, and Type, and Variant of Learning Algorithm Used (Continued)

TABLE VI: Papers Classified According to Activity and Sensor Type and Processing Location
Another more popular combination for these hybrid systems appears to be an ECG sensor used together with other sensor types [24], [66], [128], [129], [141], [142]. Li et al. [66], Jia and Liu [129], and Banerjee et al. [24] use an ECG sensor together with an accelerometer. Li et al. [66] and [129] both use an SVM approach to classify ambulatory type activities, differing in the size of the feature sets used for the classification. Reference [129] improves the accuracy over [66] by extracting 262 features and selecting the six most relevant by performing Linear Discriminant Analysis (LDA). Others apply ECG sensor combinations for different applications. Tapia et al. [127] use an accelerometer in conjunction with a heart rate monitor to classify physical activities and the intensity of the activity. Lin et al. [94] use an accelerometer and heart rate sensor to classify not only the intensity level but also specific daily life activities.
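One common way to realize an LDA-based reduction of a large ECG-plus-accelerometer feature vector is to use it as a supervised projection onto a handful of discriminant components before a classifier. The scikit-learn sketch below follows that pattern; note that this is a projection of the feature space rather than a literal selection of six original features, and the exact procedure of [129] may differ.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# X_train: one high-dimensional (e.g., ~262-element) feature vector per window,
# y_train: activity labels. LDA keeps at most (n_classes - 1) components, so six
# components assume at least seven activity classes (an illustrative assumption).
pipeline = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=6),
    SVC(kernel="rbf"),
)

# pipeline.fit(X_train, y_train)
# labels = pipeline.predict(X_test)
```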
These applications of ECG sensor combinations extend to applying activity classification in the healthcare field to automatically monitor patients. Yuan and Herbert [218] use position, ECG, blood pressure, and blood oxygenation sensors for the general monitoring of individuals. Keskar et al. [200] propose a wearable on-board processing chip with ECG sensors, using an HMM to classify ECG signals to detect abnormality. Patel et al. [219] use multiple gyroscope and accelerometer sensors with heart rate and respiration rate data to monitor the activity of a person with Chronic Obstructive Pulmonary Disease (COPD). Fletcher et al. [220] use an ankle-mounted sensor capable of measuring electrodermal activity (EDA), 3-axis acceleration, and temperature, along with a chest-mounted ECG. These sensors are used in conjunction with Android-based software, applying the system to military Post-Traumatic Stress Syndrome to determine if a user is agitated.
Pressure sensors also appear frequently in the literature [132], [138], [130], [143], [147], [221]. Edgar et al. [147] design a system with three pressure sensors on both shoes and a 3D accelerometer on the wrist to classify household and athletic activities. Sazonov et al. [138] use five force-sensitive pressure sensors and a 3D-accelerometer on each shoe to classify sitting, standing, walking, ascending and descending stairs, and cycling. Initially, they extract eight features to train an SVM and reach an average accuracy of 95.2% [138], but later achieve a 99.8% overall accuracy using SVM with rejection and ANN with rejection on the six-activity set [132]. Sazonov et al. [133] later extend the activity set to 15 different activities and apply SVM, Multinomial Logistic Discrimination (MLD), and Multilayer Perceptrons (MLP).

There are yet other sensor combinations explored in [136], [139], [146], [152], [153], and [221]. Basak et al. [152] use a wearable microphone on kids in day-care, and classify four activities: cough, sneeze, cry and toilet flush. De et al. [131] and [221] use an accelerometer, a gyroscope, a temperature/humidity sensor, and an atmospheric pressure sensor to classify a combination of ambulatory and daily life activities. Pärkkä et al. [136] explore not only combinations of two different sensors, but employ 22 wearable sensors. They classify lying, rowing, cycling, sit/stand, running, nordic walk, or walking by introducing a custom automatically generated decision tree and an ANN. Riboni and Bettini [139] use a 3D accelerometer, temperature, and light sensor to classify a mix of global body motion and local interaction activities through ontological reasoning. Tacconi et al. [146] use an accelerometer and microphone on the wrist and hip, classifying emotion and activity, reaching 47.1% accuracy. These works show that the hybrid sensing modalities used are dictated by the application the authors are attempting to address. That is, an ECG is often used in the healthcare field or to measure the intensity level of an activity, RFIDs are used in combination with other sensors to provide locational or object-type context to the classification, and others may apply a mix of sensors to classify both local and global type interactions.

VI. CONCLUSION

A review of wearable sensor approaches for activity detection and classification has been presented. Different from existing surveys, which focus on only certain types of sensors, we have covered a breadth of wearable sensing modalities, including accelerometer, gyroscope, pressure sensors, depth-based, and hybrid modality systems, to provide a complete and comprehensive survey. In addition to the type, number and placement of sensors, and the type of activities that are detected, we have classified the large set of research papers based on the employed learning approach, the extent of experimentation and evaluation, and whether the processing is performed onboard or remotely. Another goal is to motivate additional work to extend wearable camera-based systems.

We have covered a wide range of wearable sensor approaches by also discussing the progression of how methods using different sensor modalities approach various different problems. Initial orientation-based sensing approaches started with stable sensors at the waist and classified a limited set of activities. As orientation-based sensors and processing power advanced, other works explored orientation-based sensors on the extremities. With this advancement to the extremities, the number of classes that an approach was able to distinguish increased. These approaches additionally saw improvements and can be distinguished from one another in terms of the increasing feature sets used. Moving these orientation sensors to the extremities also enabled the ability to approach new activity types that involve more localized motions, such as brushing teeth, vacuuming, and so on. However, orientation-based sensors are limited by the information they provide, and with improvements in camera technology, others have been able to show that cameras can provide details that these other orientation sensors cannot. That is, images and videos can provide more detailed information and context for a specific action sequence, and can distinguish not only making food, but also making specific types of food. Yet, despite the additional activities these camera-only systems can tackle, the accuracies of these systems, often below 60%, are far lower than those of the other works discussed. Hence, more recent hybrid works have provided the ability to classify both global and local activities while maintaining higher accuracies of above 80%. However, it can be argued that these single-sensor-based approaches have laid the groundwork for more advanced hybrid approaches. These hybrid approaches have also tended to be more extensive in their experimentation, often using more realistic experimentation approaches with more subjects. With community datasets sparse in all but the camera-only works, the experimentation is often specific to each work. Hence, a potential contribution to activity classification using wearable sensors would be the creation of a standard dataset. Table VI also reflects that there is still work to do to develop methods that perform processing onboard in real-time versus works where the analysis is processed remotely.
REFERENCES

[1] K. Yatani and K. N. Truong, "BodyScope: A wearable acoustic sensor for activity recognition," in Proc. Ubicomp, 2012, pp. 341–350.
[2] O. Amft, M. Stäger, P. Lukowicz, and G. Tröster, "Analysis of chewing sounds for dietary monitoring," in Proc. 7th Int. Conf. Ubiquitous Comput., Berlin, Germany, 2005, pp. 56–72.
[3] G. J. Brown and M. Cooke, "Computational auditory scene analysis," Comput. Speech Lang., vol. 8, no. 4, pp. 297–336, 1994.
[4] J. Chen, A. H. Kam, J. Zhang, N. Liu, and L. Shue, Bathroom Activity Monitoring Based on Sound. Berlin, Germany: Springer-Verlag, 2005, pp. 47–61.
[5] C. Brian, N. Sawhney, and A. Pentland. (1998). Auditory Context Awareness Via Wearable Computing. [Online]. Available: http://citeseer.ist.psu.edu/brian98auditory.html
[6] H. Lu, W. Pan, N. D. Lane, T. Choudhury, and A. T. Campbell, "Soundsense: Scalable sound sensing for people-centric applications on mobile phones," in Proc. 7th Int. Conf. Mobile Syst., Appl., Services, 2009, pp. 165–178.
[7] V. Peltonen, J. Tuomi, A. Klapuri, J. Huopaniemi, and T. Sorsa, "Computational auditory scene recognition," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), May 2002, pp. II-1941–II-1944.
[8] A. Yazar, A. E. Çetin, and B. U. Töreyin, "Human activity classification using vibration and PIR sensors," in Proc. 20th Signal Process. Commun. Appl. Conf. (SIU), Apr. 2012, pp. 1–4.
[9] D. Gordon, F. Witt, H. Schmidtke, and M. Beigl, "A long-term sensory logging device for subject monitoring," in Proc. 4th Int. Conf. Pervasive Comput. Technol. Healthcare, Mar. 2010, pp. 1–4.
[10] K. Van Laerhoven and H. Gellersen, "Spine versus porcupine: A study in distributed wearable activity recognition," in Proc. 8th Int. Symp. Wearable Comput., Oct. 2004, pp. 142–149.
[11] K. Van Laerhoven, D. Kilian, and B. Schiele, "Using rhythm awareness in long-term activity recognition," in Proc. 12th IEEE Int. Symp. Wearable Comput. (ISWC), Sep. 2008, pp. 63–66.
[12] B. Mirmahboub, S. Samavi, N. Karimi, and S. Shirani, "Automatic monocular system for human fall detection based on variations in silhouette area," IEEE Trans. Biomed. Eng., vol. 60, no. 2, pp. 427–436, Feb. 2013.
[13] H. Nait-Charif and S. McKenna, "Activity summarisation and fall detection in a supportive home environment," in Proc. 17th Int. Conf. Pattern Recognit., vol. 4, Aug. 2004, pp. 323–326.
[14] X. Zou and B. Bhanu, "Anomalous activity classification in the distributed camera network," in Proc. 15th IEEE Int. Conf. Image Process. (ICIP), Oct. 2008, pp. 781–784.
[15] G. Panahandeh, N. Mohammadiha, A. Leijon, and P. Handel, "Continuous hidden Markov model for pedestrian activity classification and gait analysis," IEEE Trans. Instrum. Meas., vol. 62, no. 5, pp. 1073–1083, May 2013.
[16] G. Srivastava, H. Iwaki, J. Park, and A. C. Kak, "Distributed and lightweight multi-camera human activity classification," in Proc. ACM/IEEE Int. Conf. Distrib. Smart Cameras, Aug. 2009, pp. 1–8.
[17] Census: Homes with Cell Phones Nearly Double in First Half of Decade. Accessed on Sep. 30, 2010. [Online]. Available: https://www.census.gov/newsroom/releases/archives/income_wealth/cb09-174.html
[18] O. D. Lara and M. A. Labrador, "A survey on human activity recognition using wearable sensors," IEEE Commun. Surveys Tuts., vol. 15, no. 3, pp. 1192–1209, Jul. 2013.
[19] B. Bruno, F. Mastrogiovanni, A. Sgorbissa, T. Vernazza, and R. Zaccaria, "Analysis of human behavior recognition algorithms based on acceleration data," in Proc. IEEE Int. Conf. Robot. Autom., May 2013, pp. 1602–1607.
[20] S. Chernbumroong, S. Cang, A. Atkins, and H. Yu, "Elderly activities recognition and classification for applications in assisted living," Expert Syst. Appl., vol. 40, no. 5, pp. 1662–1674, Apr. 2013.
[21] J. Windau and L. Itti, "Situation awareness via sensor-equipped eyeglasses," in Proc. IEEE Int. Conf. Intell. Robots Syst., Nov. 2013, pp. 5674–5679.
[22] J.-Y. Yang, J. S. Wang, and Y. P. Chen, "Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers," Pattern Recognit. Lett., vol. 29, no. 16, pp. 2213–2220, 2008.
[23] S. Dernbach, B. Das, N. C. Krishnan, B. L. Thomas, and D. J. Cook, "Simple and complex activity recognition through smart phones," in Proc. 8th Int. Conf. Intell. Environ., 2012, pp. 214–221.
[24] D. Banerjee, S. Biswas, C. Daigle, and J. M. Siegford, "Remote activity classification of hens using wireless body mounted sensors," in Proc. Int. Workshop Wearable Implant. Body Sensor Netw., 2012, pp. 107–112.
[25] J. Mantyjarvi, J. Himberg, and T. Seppanen, "Recognizing human motion with multiple acceleration sensors," in Proc. IEEE Int. Conf. Syst., Man, Cybern., vol. 2, Oct. 2001, pp. 2–7.
[26] C. Zhu and W. Sheng, "Wearable sensor-based hand gesture and daily activity recognition for robot-assisted living," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 41, no. 3, pp. 569–573, May 2011.
[27] K. Taylor, U. A. Abdull, R. J. N. Helmer, J. Lee, and I. Blanchonette, "Activity classification with smart phones for sports activities," Proce-
[35] R. Saeedi, J. Purath, K. Venkatasubramanian, and H. Ghasemzadeh, "Toward seamless wearable sensing: Automatic on-body sensor localization for physical activity monitoring," in Proc. 36th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Aug. 2014, pp. 5385–5388.
[36] A. M. Khan, Y. K. Lee, and S. Y. Lee, "Accelerometer's position free human activity recognition using a hierarchical recognition model," in Proc. 12th IEEE Int. Conf. e-Health Netw. Appl. Services (Healthcom), Jul. 2010, pp. 296–301.
[37] A. Betancourt, P. Morerio, C. S. Regazzoni, and M. Rauterberg, "The evolution of first person vision methods: A survey," IEEE Trans. Circuits Syst. Video Technol., vol. 25, no. 5, pp. 744–760, May 2015.
[38] S.-W. Lee and K. Mase, "Activity and location recognition using wearable sensors," IEEE Pervasive Comput., vol. 1, no. 3, pp. 24–32, Jul./Sep. 2002.
[39] D. M. Karantonis, M. R. Narayanan, M. Mathie, N. H. Lovell, and B. G. Celler, "Implementation of a real-time human movement classifier using a triaxial accelerometer for ambulatory monitoring," IEEE Trans. Inf. Technol. Biomed., vol. 10, no. 1, pp. 156–167, Jan. 2006.
[40] S.-K. Song, J. Jang, and S. Park, "A phone for human activity recognition using triaxial acceleration sensor," in Dig. Tech. Papers, Jan. 2008, pp. 1–2.
[41] S. Zhang, P. McCullagh, C. Nugent, and H. Zheng, "Activity monitoring using a smart phone's accelerometer with hierarchical classification," in Proc. 6th Int. Conf. Intell. Environ. (IE), Jul. 2010, pp. 158–163.
[42] J. R. Kwapisz, G. M. Weiss, and S. A. Moore, "Activity recognition using cell phone accelerometers," ACM SIGKDD Explorations Newslett., vol. 12, no. 2, pp. 74–82, Dec. 2010.
[43] M. Arif, M. Bilal, A. Kattan, and S. I. Ahamed, "Better physical activity classification using smartphone acceleration sensor," J. Med. Syst., vol. 38, no. 95, pp. 1–10, Jul. 2014.
[44] Z. Yan, V. Subbaraju, D. Chakraborty, A. Misra, and K. Aberer, "Energy-efficient continuous activity recognition on mobile phones: An activity-adaptive approach," in Proc. 16th Int. Symp. Wearable Comput., Jun. 2012, pp. 17–24.
[45] D. Anguita, A. Ghio, L. Oneto, X. Parra, and J. L. Reyes-Ortiz, "Energy efficient smartphone-based activity recognition using fixed-point arithmetic," J. Universal Comput. Sci., vol. 19, no. 9, pp. 1295–1314, 2013.
[46] M. Zhang and A. A. Sawchuk, "Human daily activity recognition with sparse representation using wearable sensors," IEEE J. Biomed. Health Informat., vol. 17, no. 3, pp. 553–560, May 2013.
[47] S. Weng et al., "A low power and high accuracy MEMS sensor based activity recognition algorithm," in Proc. IEEE Int. Conf. Bioinformat. Biomed. (BIBM), Nov. 2014, pp. 33–38.
[48] Z. He, Z. Liu, L. Jin, L.-X. Zhen, and J.-C. Huang, "Weightlessness feature—a novel feature for single tri-axial accelerometer based activity recognition," in Proc. 19th Int. Conf. Pattern Recognit., 2008, pp. 1–4.
[49] Z. He and L. Jin, "Activity recognition from acceleration data based on discrete consine transform and SVM," in Proc. IEEE Int. Conf. Syst.,
dia Eng., vol. 13, pp. 428–433, Jul. 2011. Man, Cybern., Oct. 2009, pp. 5041–5044.
[28] D. Coskun, O. D. Incel, and A. Ozgovde, “Phone position/placement [50] A. Ataya, P. Jallon, P. Bianchi, and M. Doron, “Improving activity
detection using accelerometer: Impact on activity recognition,” in recognition using temporal coherence,” in Proc. Int. Conf. IEEE Eng.
Proc. IEEE 10th Int. Conf. Intell. Sensors, Sensor Netw. Inf. Process. Med. Biol. Soc., Jul. 2013, pp. 4215–4218.
(ISSNIP), Apr. 2015, pp. 1–6. [51] N. Ravi, N. Dandekar, P. Mysore, and M. L. Littman, “Activity
recognition from accelerometer data,” in Proc. 17th Conf. Innov. Appl.
[29] S. J. Preece, J. Y. Goulermas, L. P. J. Kenney, and D. Howard, “A
Artif. Intell., 2005, pp. 1541–1546.
comparison of feature extraction methods for the classification of
[52] A. Anjum and M. U. Ilyas, “Activity recognition using smartphone
dynamic activities from accelerometer data,” IEEE Trans. Biomed.
sensors,” in Proc. IEEE 10th Consum. Commun. Netw. Conf., (CCNC),
Eng., vol. 56, no. 3, pp. 871–879, Mar. 2009.
Jun. 2013, pp. 914–919.
[30] K. Kunze and P. Lukowicz, “Sensor placement variations in wear- [53] K. Basterretxea, J. Echanobe, and I. del Campo, “A wearable human
able activity recognition,” IEEE Pervasive Comput., vol. 13, no. 4, activity recognition system on a chip,” in Proc. Conf. Design Archit.
pp. 32–41, Dec. 2014. Signal Image Process. (DASIP), Oct. 2014, pp. 1–8.
[31] L. Atallah, B. Lo, R. King, and G.-Z. Y. G.-Z. Yang, “Sensor placement [54] R. Braojos, I. Beretta, J. Constantin, A. Burg, and D. Atienza, “A
for activity detection using wearable accelerometers,” in Proc. Int. wireless body sensor network for activity monitoring with low trans-
Conf. Body Sensor Netw. (BSN), Jul. 2010, pp. 24–29. mission overhead,” in Proc. 12th IEEE Int. Conf. Embedded Ubiquitous
[32] L. Atallah, B. Lo, R. King, and G.-Z. Yang, “Sensor positioning Comput., Aug. 2014, pp. 265–272.
for activity recognition using wearable accelerometers,” IEEE Trans. [55] F.-C. Chuang, J.-S. Wang, Y.-T. Yang, and T.-P. Kao, “A wearable
Biomed. Circuits Syst., vol. 5, no. 4, pp. 320–329, Aug. 2011. activity sensor system and its physical activity classification scheme,”
[33] Y. Prathivadi, J. Wu, T. R. Bennett, and R. Jafari, “Robust activity in Proc. Int. Joint Conf. Neural Netw., Jun. 2012, pp. 10–15.
recognition using wearable IMU sensors,” in Proc. IEEE SENSORS, [56] Ó. D. Lara and M. A. Labrador, “A mobile platform for real-time
Nov. 2014, pp. 486–489. human activity recognition,” in Proc. IEEE Consum. Commun. Netw.
[34] M. Kreil, B. Sick, and P. Lukowicz, “Dealing with human variability Conf. (CCNC), Jun. 2012, pp. 667–671.
in motion based, wearable activity recognition,” in Proc. IEEE Int. [57] N. Alshurafa et al., “Robust human intensity-varying activity recog-
Conf. Pervasive Comput. Commun. Workshops (PERCOM Workshops), nition using stochastic approximation in wearable sensors,” in Proc.
Mar. 2014, pp. 36–40. IEEE Int. Conf. Body Sensor Netw., May 2013, pp. 1–6.
Authorized licensed use limited to: Tampere University. Downloaded on August 03,2021 at 16:35:44 UTC from IEEE Xplore. Restrictions apply.
CORNACCHIA et al.: SURVEY ON ACTIVITY DETECTION AND CLASSIFICATION USING WEARABLE SENSORS 399
[58] U. Maurer, A. Smailagic, D. P. Siewiorek, and M. Deisher, “Activity [79] D. Trabelsi, S. Mohammed, F. Chamroukhi, L. Oukhellou, and
recognition and monitoring using multiple sensors on different body Y. Amirat, “An unsupervised approach for automatic activity recogni-
positions,” in Proc. Int. Workshop Wearable Implant. Body Sensor Netw. tion based on hidden Markov model regression,” IEEE Trans. Autom.
(BSN), Apr. 2006, pp. 4–7. Sci. Eng., vol. 10, no. 3, pp. 829–835, Jul. 2013.
[59] W.-Y. Chung, A. Purwar, and A. Sharma, “Frequency domain [80] H. Gjoreski, S. Kozina, M. Gams, and M. Lustrek, “RAReFall—Real-
approach for activity classification using accelerometer,” in Proc. 30th time activity recognition and fall detection system,” in Proc. IEEE Int.
Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Aug. 2008, Conf. Pervasive Comput. Commun. Workshops (PERCOM Workshops),
pp. 1120–1123. Mar. 2014, pp. 145–147.
[60] N. C. Krishnan and S. Panchanathan, “Analysis of low resolution [81] T. Hester, D. M. Sherrill, M. Hamel, K. Perreault, P. Boissy, and
accelerometer data for continuous human activity recognition,” in Proc. P. Bonato, “Using wearable sensors to analyze the quality of use of
IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Mar. 2008, mobility assistive devices,” in Proc. Int. Workshop Wearable Implant.
pp. 3337–3340. Body Sensor Netw., 2006, pp. 6–9.
[61] M. Ermes, J. Pärkkä, J. Mäntyjärvi, and I. Korhonen, “Detection of [82] T. Huynh and B. Schiele, “Towards less supervision in activity
daily activities and sports with wearable sensors in controlled and recognition from wearable sensors,” in Proc. Int. Symp. Wearable
uncontrolled conditions,” IEEE Trans. Inf. Technol. Biomed., vol. 12, Comput. (ISWC), 2007, pp. 3–10.
no. 1, pp. 20–26, Jun. 2008. [83] H. Ghasemzadeh and R. Jafari, “Physical movement monitoring using
[62] L. Wang, S. Thiemjarus, B. Lo, and G.-Z. Yang, “Toward A mixed- body sensor networks: A phonological approach to construct spatial
signal reconfigurable ASIC for real-time activity recognition,” in decision trees,” IEEE Trans. Ind. Informat., vol. 7, no. 1, pp. 66–77,
Proc. 5th Int. Workshop Wearable Implant. Body Sensor Netw., 2008, Feb. 2011.
pp. 227–230. [84] N. A. Capela, E. D. Lemaire, and N. Baddour, “Feature selection
[63] C. Zhu and W. Sheng, “Human daily activity recognition in robot- for wearable smartphone-based human activity recognition with able
assisted living using multi-sensor fusion,” in Proc. IEEE Int. Conf. bodied, elderly, and stroke patients,” PLoS One, vol. 10, no. 4,
Robot. Autom., May 2009, pp. 2154–2159. pp. 1–18, 2015.
[64] T. Hao, L. Pang, X. Li, and S. Xing, “Wearable activity recognition [85] O. Dobrucali and B. Barshan, “The analysis of wearable motion sensors
for automatic microblog updates,” in Proc. IEEE/ASME Int. Conf. Adv. in human activity recognition based on mutual information criterion,”
Intell. Mechatron., Jul. 2009, pp. 1720–1723. in Proc. 22nd Signal Process. Commun. Appl. Conf. (SIU), Apr. 2014,
[65] J. Ryder, B. Longstaff, S. Reddy, and D. Estrin, “Ambulation: A tool pp. 1938–1941.
for monitoring mobility patterns over time using mobile phones,” in [86] B. Mortazavi, S. Nyamathi, S. I. Lee, T. Wilkerson, H. Ghasemzadeh,
Proc. Int. Conf. Comput. Sci. Eng., Aug. 2009, pp. 927–931. and M. Sarrafzadeh, “Near-realistic mobile exergames with wireless
[66] M. Li et al., “Multimodal physical activity recognition by fusing wearable sensors,” IEEE J. Biomed. Health Inform., vol. 18, no. 2,
temporal and cepstral information,” IEEE Trans. Neural Syst. Rehabil. pp. 449–456, Mar. 2014.
Eng., vol. 18, no. 4, pp. 369–380, Aug. 2010. [87] L. Zhang, X. Wu, and D. Luo, “Improving activity recognition with
[67] D. Curone, G. M. Bertolotti, A. Cristiani, E. L. Secco, and context information,” in Proc. IEEE Int. Conf. Mechatron. Autom.
G. Magenes, “A real-time and self-calibrating algorithm based on (ICMA), Aug. 2015, pp. 1241–1246.
triaxial accelerometer signals for the detection of human posture [88] C. H. Min, N. F. Ince, and A. H. Tewfik, “Generalization capability
and activity,” IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 4, of a wearable early morning activity detection system,” in Proc. 15th
pp. 1098–1105, Jul. 2010. Eur. Signal Process. Conf., Sep. 2007, pp. 1556–1560.
[68] A. Mannini and A. M. Sabatini, “On-line classification of human [89] C.-H. Min, N. F. Ince, and A. H. Tewfik, “Classification of continuously
activity and estimation of walk-run speed from acceleration data using executed early morning activities using wearable wireless sensors,”
support vector machines,” in Proc. Int. Conf. IEEE Eng. Med. Biol. in Proc. 30th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS),
Soc., Sep. 2011, pp. 3302–3305. Aug. 2008, pp. 5192–5195.
[69] S. Chernbumroong, A. S. Atkins, and H. Yu, “Activity classification [90] C.-H. Min, N. F. Ince, and A. H. Tewfik, “Early morning activity
using a single wrist-worn accelerometer,” in Proc. 5th Int. Conf. Softw., detection using acoustics and wearable wireless sensors,” in Proc. 16th
Knowledge Inf. Ind. Manag. Appl. (SKIMA), 2011, pp. 1–6. Eur. Signal Process. Conf., Aug. 2008, pp. 1–5.
[70] J. Andreu, R. D. Baruah, and P. Angelov, “Real time recognition of [91] E. Fortune, M. Tierney, C. N. Scanaill, A. Bourke, N. Kennedy, and
human activities from wearable sensors by evolving classifiers,” in J. Nelson, “Activity level classification algorithm using SHIMMER
Proc. IEEE Int. Conf. Fuzzy Syst., Jun. 2011, pp. 2786–2793. wearable sensors for individuals with rheumatoid arthritis,” in Proc.
[71] M. Luštrek, B. Cvetković, and S. Kozina, “Energy expenditure estima- Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., (EMBS), Aug. 2011,
tion with wearable accelerometers,” in Proc. IEEE Int. Symp. Circuits pp. 3059–3062.
Syst. (ISCAS), May 2012, pp. 5–8. [92] J.-K. Min and S.-B. Cho, “Activity recognition based on wearable
[72] P. Siirtola and J. Röning, “Ready-to-use activity recognition for smart- sensors using selection/fusion hybrid ensemble,” in Proc. IEEE Int.
phones,” in Proc. IEEE Symp. Comput. Intell. Data Mining, Apr. 2013, Conf. Syst., Man, Cybern. (SMC), Oct. 2011, pp. 1319–1324.
pp. 59–64. [93] M. Stikic, D. Larlus, S. Ebert, and B. Schiele, “Weakly supervised
[73] J. Margarito, R. Helaoui, A. M. Bianchi, F. Sartor, and A. Bonomi, recognition of daily life activities with wearable sensors,” IEEE Trans.
“User-independent recognition of sports activities from a single wrist- Pattern Anal. Mach. Intell., vol. 33, no. 12, pp. 2521–2537, Dec. 2011.
worn accelerometer: A template-matching-based approach,” IEEE [94] C.-W. Lin, Y.-T. C. Yang, J.-S. Wang, and Y.-C. Yang, “A wear-
Trans. Biomed. Eng., vol. 63, no. 4, pp. 788–796, Apr. 2016. able sensor module with a neural-network-based activity classification
[74] J. Lee, M. McCarthy, D. Rowlands, and D. James, “Decision-tree- algorithm for daily energy expenditure estimation,” IEEE Trans. Inf.
based human activity classification algorithm using single-channel foot- Technol. Biomed., vol. 16, no. 5, pp. 991–998, Sep. 2012.
mounted gyroscope,” Electron. Lett., vol. 51, no. 9, pp. 675–676, [95] Z. Wang, M. Jiang, Y. Hu, and H. Li, “An incremental learning method
2015. based on probabilistic neural networks and adjustable fuzzy clustering
[75] W. Xu, M. Zhang, A. A. Sawchuk, and M. Sarrafzadeh, “Co- for human activity recognition by using wearable sensors,” IEEE Trans.
recognition of human activity and sensor location via compressed Inf. Technol. Biomed., vol. 16, no. 4, pp. 691–699, Jul. 2012.
sensing in wearable body sensor networks,” in Proc. Int. Workshop [96] S. Zhang, M. H. Ang, W. Xiao, and C. K. Tham, “Detection of activities
Wearable Implant. Body Sensor Netw., 2012, pp. 124–129. for daily life surveillance: Eating and drinking,” in Proc. 10th IEEE
[76] M. Berchtold, M. Budde, D. Gordon, H. Schmidtke, and M. Beigl, Int. Conf. e-Health Netw., Appl. Service, (HEALTHCOM), Jul. 2008,
“ActiServ: Activity Recognition Service for mobile phones,” in Proc. pp. 171–176.
Int. Symp. Wearable Comput. (ISWC), Oct. 2010, pp. 1–8. [97] H. Koskimaki, V. Huikari, P. Siirtola, P. Laurinen, and J. Roning,
[77] D. Naranjo-Hernández, L. M. Roa, J. Reina-Tosina, and “Activity recognition using a wrist-worn inertial measurement unit: A
M. Á. Estudillo-Valderrama, “SoM: A smart sensor for human case study for industrial assembly lines,” in Proc. 17th Medit. Conf.
activity monitoring and assisted healthy ageing,” IEEE Trans. Biomed. Control Autom., 2009, pp. 401–405.
Eng., vol. 59, no. 11, pp. 3177–3184, Nov. 2012. [98] S.-J. Ryu and J.-H. Kim, “Classification of long-term motions using
[78] R. Rednic, E. Gaura, J. Brusey, and J. Kemp, “Wearable posture a two-layered hidden Markov model in a wearable sensor system,”
recognition systems: Factors affecting performance,” in Proc. IEEE- in Proc. IEEE Int. Conf. Robot. Biomimetics, (ROBIO), Dec. 2011,
EMBS Int. Conf. Biomed. Health Inf. (BHI), Jan. 2012, pp. 200–203. pp. 2975–2980.
Authorized licensed use limited to: Tampere University. Downloaded on August 03,2021 at 16:35:44 UTC from IEEE Xplore. Restrictions apply.
400 IEEE SENSORS JOURNAL, VOL. 17, NO. 2, JANUARY 15, 2017
[99] P. Sarcevic, Z. Kincses, and S. Pletl, “Comparison of different classi- [123] Y.-J. Hong, I.-J. Kim, S. C. Ahn, and H.-G. Kim, “Mobile health
fiers in movement recognition using WSN-based wrist-mounted sen- monitoring system based on activity recognition using accelerometer,”
sors,” in Proc. IEEE Sensors Appl. Symp. (SAS), Apr. 2015, pp. 1–6. Simul. Model. Pract. Theory, vol. 18, no. 4, pp. 446–455, 2010.
[100] K.-T. Song and W.-J. Chen, “Human activity recognition using a mobile [124] Y. Nam, S. Rho, and C. Lee, “Physical activity recognition using mul-
camera,” in Proc. 8th Int. Conf. Ubiquitous Robots Ambient Intel., tiple sensors embedded in a wearable device,” ACM Trans. Embedded
Nov. 2011, pp. 3–8. Comput. Syst., vol. 12, no. 2, pp. 1–14, 2013.
[101] L. Li, H. Zhang, W. Jia, Z.-H. Mao, Y. You, and M. Sun, “Indi- [125] Y.-J. Hong, I.-J. Kim, S. C. Ahn, and H.-G. Kim, “Activity recognition
rect activity recognition using a target-mounted camera,” in Proc. using wearable sensors for elder care,” in Proc. 2nd Int. Conf. Future
4th Int. Congress Image Signal Process. (CISP), vol. 1, Oct. 2011, Generat. Commun. Netw., vol. 2, Dec. 2008, pp. 302–305.
pp. 487–491. [126] R. L. S. Torres, D. C. Ranasinghe, Q. Shi, and A. P. Sample, “Sensor
[102] Y. Watanabe, T. Hatanaka, T. Komuro, and M. Ishikawa, “Human gait enabled wearable RFID technology for mitigating the risk of falls near
estimation using a wearable camera,” in Proc. IEEE Workshop Appl. beds,” in Proc. IEEE Intl. Conf. (RFID), Apr. 2013, pp. 191–198.
Comput. Vis., Jan. 2011, pp. 276–281. [127] E. M. Tapia et al., “Real-time recognition of physical activities and
[103] K. Zhan, F. Ramos, and S. Faux, “Activity recognition from a wearable their intensitiies using wireless accelerometers and a heart monitor,” in
camera,” in Proc. 12th Int. Conf. Control Autom. Robot. Vis. (ICARCV), Proc. Int. Symp. Wearable Comput., 2007, pp. 37–40.
Dec. 2012, pp. 365–370. [128] J. Surana, C. S. Hemalatha, V. Vaidehi, S. A. Palavesam, and
[104] B. Yin, W. Qi, Z. Wei, and J. Nie, “Indirect human activity recognition M. J. A. Khan, “Adaptive learning based human activity and fall
based on optical flow method,” in Proc. 5th Int. Congress Image Signal detection using fuzzy frequent pattern mining,” in Proc. Int. Conf.
Process. (CISP), Oct. 2012, pp. 99–103. Recent Trends Inf. Technol. (ICRTIT), Jul. 2013, pp. 744–749.
[105] K. M. Kitani, T. Okabe, Y. Sato, and A. Sugimoto, “Fast unsu- [129] R. Jia and B. Liu, “Human daily activity recognition by fus-
pervised ego-action learning for first-person sports videos,” in Proc. ing accelerometer and multi-lead ECG data,” in Proc. IEEE Int.
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2011, Conf., Signal Process., Commun. Comput. (ICSPCC), Aug. 2013,
pp. 3241–3248. pp. 1–4.
[106] M. S. Ryoo and L. Matthies, “First-person activity recognition: What [130] C. M. El Achkar, F. Masse, A. Arami, and K. Aminian, “Physical
are they doing to me?” in Proc. IEEE Conf. Comput. Vis. Pattern activity recognition via minimal in-shoes force sensor configuration,” in
Recognit., Jun. 2013, pp. 2730–2737. Proc. 7th Int. Conf. Pervasive Comput. Technol. Healthcare Workshops,
[107] K. Ozcan, A. K. Mahabalagiri, M. Casares, and S. Velipasalar, (PervasiveHealth), May 2013, pp. 256–259.
“Automatic fall detection and activity classification by a wearable [131] D. De, P. Bharti, S. K. Das, and S. Chellappan, “Multimodal wear-
embedded smart camera,” IEEE J. Emerging Sel. Topics Circuits Syst., able sensing for fine-grained activity recognition in healthcare,” IEEE
vol. 3, no. 2, pp. 125–136, Jun. 2013. Internet Comput., vol. 19, no. 5, pp. 26–35, Sep./Oct. 2015.
[108] D. Sun, S. Roth, and M. J. Black, “Secrets of optical flow estima- [132] W. Tang and E. S. Sazonov, “Highly accurate recognition of human
tion and their principles,” in Proc. IEEE Conf. Comput. Vis. Pattern postures and activities through classification with rejection,” IEEE J.
Recognit., Jun. 2010, pp. 2432–2439. Biomed. Health Inform., vol. 18, no. 1, pp. 309–315, Jan. 2014.
[109] T. McCandless and K. Grauman, “Object-centric spatio-temporal pyra- [133] E. Sazonov, N. Hegde, R. C. Browning, E. L. Melanson, and
mids for egocentric activity recognition,” in Proc. Brit. Mach. Vis. N. A. Sazonova, “Posture and activity recognition and energy expen-
Conf., 2013, pp. 30.1–30.11. diture prediction in a wearable platform,” IEEE J. Biomed. Health
[110] H. Pirsiavash and D. Ramanan, “Detecting activities of daily living in Informat., vol. 19, no. 4, pp. 1339–1346, Jul. 2015.
first-person camera views,” in Proc. IEEE Conf. Comput. Vis. Pattern [134] J. Hernandez, Y. Li, J. M. Rehg, and R. W. Picard, “BioGlass: Physi-
Recognit. (CVPR), Jun. 2012, pp. 2847–2854. ological parameter estimation using a head-mounted wearable device,”
[111] K. Matsuo, K. Yamada, S. Ueno, and S. Naito, “An attention-based in Proc. EAI 4th Int. Conf. Wireless Mobile Commun. Healthcare
activity recognition for egocentric video,” in Proc. IEEE Conf. Comput. (Mobihealth), Nov. 2014, pp. 55–58.
Vis. Pattern Recognit. Workshops, Jun. 2014, pp. 565–570. [135] K. Zhan, S. Faux, and F. Ramos, “Multi-scale conditional random fields
[112] Y. Yan, E. Ricci, G. Liu, and N. Sebe, “Egocentric daily activity for first-person activity recognition,” in Proc. IEEE Int. Conf. Pervas.
recognition via multitask clustering,” IEEE Trans. Image Process., Comput. Commun. (PerCom), Mar. 2014, pp. 51–59.
vol. 24, no. 10, pp. 2984–2995, Oct. 2015. [136] J. Pärkkä, M. Ermes, P. Korpipää, J. Mäntyjärvi, J. Peltola, and
[113] A. Fathi, X. Ren, and J. M. Rehg, “Learning to recognize objects I. Korhonen, “Activity classification using realistic data from wear-
in egocentric activities,” in Proc. IEEE Conf. Comput. Vis. Pattern able sensors,” IEEE Trans. Inf. Technol. Biomed., vol. 10, no. 1,
Recognit., Jun. 2011, pp. 3281–3288. pp. 119–128, Jan. 2006.
[114] A. Fathi, A. Farhadi, and J. M. Rehg, “Understanding egocen- [137] M. Moghimi, P. Azagra, L. Montesano, A. C. Murillo, and S. Belongie,
tric activities,” in Proc. IEEE Int. Conf. Comput. Vis., Nov. 2011, “Experiments on an RGB-D wearable vision system for egocentric
pp. 407–414. activity recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog-
[115] A. Fathi and J. M. Rehg, “Modeling actions through state changes,” nit. Workshops, Jun. 2014, pp. 611–617.
in Proc. IEEE Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013, [138] E. S. Sazonov, G. Fulk, J. Hill, Y. Schutz, and R. Browning,
pp. 2579–2586. “Monitoring of posture allocations and activities by a shoe-based
[116] S. Sundaram and W. W. M. Cuevas, “High level activity recognition wearable sensor,” IEEE Trans. Biomed. Eng., vol. 58, no. 4,
using low resolution wearable vision,” in Proc. IEEE Conf. Comput. pp. 983–990, Apr. 2011.
Vis. Pattern Recognit. Workshops, Jun. 2009, pp. 25–32. [139] D. Riboni and C. Bettini, “COSAR: Hybrid reasoning for context-
[117] Y. Li, A. Fathi, and J. M. Rehg, “Learning to predict gaze in egocentric Aware activity recognition,” Pers. Ubiquitous Comput., vol. 15, no. 3,
video,” in Proc. IEEE ICCV, Dec. 2013, pp. 3216–3223. pp. 271–289, 2011.
[118] W. W. Mayol and D. W. Murray, “Wearable hand activity recognition [140] A. R. Doherty et al., “Using wearable cameras to categorise type
for event summarization,” in Proc. 9th IEEE Int. Symp. Wearable and context of accelerometer-identified episodes of physical activity,”
Comput., (ISWC), Oct. 2005, pp. 122–129. Int. J. Behavioral Nutrition Phys. Activity, vol. 10, no. 1, p. 22,
[119] A. Fathi, J. K. Hodgins, and J. M. Rehg, “Social interactions: A first- Jan. 2013.
person perspective,” in Proc. IEEE Comput. Soc. Conf. Comput. Vis. [141] T. Fujimoto et al., “Wearable human activity recognition by electrocar-
Pattern Recognit., Jun. 2012, pp. 1226–1233. diograph and accelerometer,” in Proc. IEEE 43rd Int. Symp. Multiple-
[120] T. Starner, J. Weaver, and A. Pentland, “Real-time American sign Valued Logic (ISMVL), May 2013, pp. 12–17.
language recognition using desk and wearable computer based [142] K. Ouchi, “Smartphone-based monitoring system for activities of daily
video,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, living for elderly people and their relatives etc,” in Proc. UbiComp,
pp. 1371–1375, Dec. 1998. 2013, pp. 103–106.
[121] O. Aghazadeh, J. Sullivan, and S. Carlsson, “Novelty detection from [143] A. M. Khan, A. Tufail, A. M. Khattak, and T. H. Laine, “Activity
an ego-centric perspective,” in Proc. IEEE Conf. Comput. Vis. Pattern recognition on smartphones via sensor-fusion and KDA-based SVMs,”
Recognit., Jun. 2011, pp. 3297–3304. Int. J. Distrib. Sensor Netw., vol. 10, May 2014, Art. no. 503291.
[122] H. H. Wu, E. D. Lemaire, and N. Baddour, “Change-of-state determina- [144] W. Puangpakisiri, T. Yamasaki, and K. Aizawa, “High level activity
tion to recognize mobility activities using a blackberry smartphone,” in annotation of daily experiences by a combination of a wearable
Proc. Ann. Int. Conf. IEEE Eng. Med. Biol. Soc., (EMBS), Aug. 2011, device and Wi-Fi based positioning system,” in Proc. IEEE Intl. Conf.
pp. 5252–5255. Multimedia Expo, (ICME), vol. 9, Jun. 2008, pp. 1421–1424.
Authorized licensed use limited to: Tampere University. Downloaded on August 03,2021 at 16:35:44 UTC from IEEE Xplore. Restrictions apply.
CORNACCHIA et al.: SURVEY ON ACTIVITY DETECTION AND CLASSIFICATION USING WEARABLE SENSORS 401
[145] G. Bahle, P. Lukowicz, K. Kunze, and K. Kise, “I see you: How to [167] S. H. Fang, Y. C. Liang, and K. M. Chiu, “Developing a mobile phone-
improve wearable activity recognition by leveraging information from based fall detection system on Android platform,” in Proc. Comput.,
environmental cameras,” in Proc. IEEE Int. Conf. Pervasive Comput. Commun. Appl. Conf., 2012, pp. 143–146.
Commun. Workshops, Mar. 2013, pp. 409–412. [168] J. Dai, X. Bai, Z. Yang, Z. Shen, and D. Xuan, “PerFallD: A pervasive
[146] D. Tacconi et al., “Activity and emotion recognition to support early fall detection system using mobile phones,” in Proc. IEEE Int. Conf.
diagnosis of psychiatric diseases,” in Proc. 2nd Int. Conf. Pervasive Pervasive Comput. Commun. Workshops, Mar. 2010, pp. 292–297.
Comput. Technol. Healthcare, 2008, pp. 100–102. [169] M. Shoaib, S. Bosch, O. D. Incel, H. Scholten, and P. J. M. Havinga, “A
[147] S. R. Edgar, G. D. Fulk, and E. S. Sazonov, “Recognition of household survey of Online activity recognition using mobile phones,” Sensors,
and athletic activities using smartshoe,” in Proc. Annu. Int. Conf. IEEE vol. 15, no. 1, pp. 2059–2085, 2015.
Eng. Med. Biol. Soc., Aug. 2012, pp. 6382–6385. [170] P. Siirtola and J. Roning, “Recognizing human activities user-
[148] A. Fleury, M. Vacher, and N. Noury, “SVM-based multimodal clas- independently on smartphones based on accelerometer data,”
sification of activities of daily living in health smart homes: Sensors, Int. J. Interact. Multimedia Artif. Intell., vol. 1, no. 5, p. 38, 2012.
algorithms, and first experimental results,” IEEE Trans. Inf. Technol. [171] X. Long, B. Yin, and R. M. Aarts, “Single-accelerometer-based daily
Biomed., vol. 14, no. 2, pp. 274–283, Mar. 2010. physical activity classification,” in Proc. Int. Conf. IEEE Eng. Med.
[149] K. Ozcan and S. Velipasalar, “Wearable camera-and accelerometer- Biol. Soc., Sep. 2009, pp. 6107–6110.
based fall detection on portable devices,” IEEE Embedded Syst. Lett., [172] M. Muehlbauer, G. Bahle, and P. Lukowicz, “What can an arm holster
vol. 8, no. 1, pp. 6–9, Mar. 2016. worn smart phone do for activity recognition?” in Proc. Int. Symp.
[150] E. Shimokawara, T. Kaneko, T. Yamaguchi, M. Mizukawa, and Wearable Comput. (ISWC), 2011, pp. 79–82.
N. Matsuhira, “Estimation of basic activities of daily living using zig- [173] J. Chen, K. Kwong, D. Chang, J. Luk, and R. Bajcsy, “Wearable sensors
bee 3D accelerometer sensor network,” in Proc. Int. Conf. Biometrics for reliable fall detection,” in Proc. Int. Conf. IEEE Eng. Med. Biol.
Kansei Eng., pp. 251–256, Jul. 2013. Soc., vol. 4, Mar. 2005, pp. 3551–3554.
[151] D. Damen, A. Gee, W. Mayol-Cuevas, and A. Calway, “Ego- [174] A. Li, L. Ji, S. Wang, and J. Wu, “Physical activity classification using
centric real-time workspace monitoring using an RGB-D camera,” a single triaxial accelerometer based on HMM,” in Proc. IET Int. Conf.
in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 2012, Wireless Sensor Netw., Nov. 2010, pp. 155–160.
pp. 1029–1036. [175] C. Zhu and W. Sheng, “Motion–and location-based Online human
[152] A. Basak, S. Narasimhan, and S. Bhunia, “KiMS: Kids’ health monitor- daily activity recognition,” Pervasive Mobile Comput., vol. 7, no. 2,
ing system at day-care centers using wearable sensors and vocabulary- pp. 256–269, 2011.
based acoustic signal processing,” in Proc. IEEE 13th Int. Conf. [176] M. Ermes, J. Parkka, and L. Cluitmans, “Advancing from offline to
e-Health Netw., Appl. Services, (HEALTHCOM), Jun. 2011, pp. 1–8. Online activity recognition with wearable sensors,” in Proc. Int. Conf.
[153] T. Tagniguchi, K. Hamahata, and N. Iwahashi, “Unsupervised segmen- IEEE Eng. Med. Biol. Soc., Aug. 2008, pp. 4451–4454.
tation of human motion data using sticky HDP-HMM and MDL-based [177] W. Ugulino, D. Cardador, K. Vega, E. Velloso, R. Milidiü, and H. Fuks,
chunking method for imitation learning,” Adv. Robot., vol. 25, no. 17, “Wearable computing: Accelerometers,” in Proc. Data Classification
pp. 2143–2172, 2011. Body Postures Movements, 2012, pp. 52–61. [Online]. Available:
[154] N. Noury, A. Galay, J. Pasquier, and M. Ballussaud, “Preliminary http://dx.doi.org/10.1007/978-3-642-34459-6_6
investigation into the use of autonomous fall detectors,” in Proc. Int. [178] E. Mitchell, D. Monaghan, and N. E. O’Connor, “Classification of
Conf. IEEE Eng. Med. Biol. Soc., Aug. 2008, pp. 2828–2831. sporting activities using smartphone accelerometers,” Sensors (Basel,
[155] C. C. Wang et al., “Development of a fall detecting system for the Switzerland), vol. 13, no. 4, pp. 5317–5337, 2013.
elderly residents,” in Proc. 2nd Int. Conf. Bioinform. Biomed. Eng., [179] C. Torres-Huitzil and M. Nuno-Maganda, “Robust smartphone-based
2008, pp. 1359–1362. human activity recognition using a TRI-axial accelerometer,” in Proc.
[156] T. Tamura, “Wearable accelerometer in clinical use,” in Proc. Int. Conf. Circuits Syst., IEEE Latin Amer. Symp., Feb. 2015, pp. 1–4.
IEEE Eng. Med. Biol. Soc., vol. 7, Jan. 2005, pp. 7165–7166. [180] C. K. Schindhelm, “Activity recognition and step detection with
[157] Y. Cao, Y. Yang, and W. H. Liu, “E-FallD: A fall detection system smartphones: Towards terminal based indoor positioning system,” in
using Android-based smartphone,” in Proc. 9th Int. Conf. Fuzzy Syst. Proc. IEEE Int. Symp. Pers., Indoor Mobile Radio Commun. (PIMRC),
Knowl. Discovery, May 2012, pp. 1509–1513. Sep. 2012, pp. 2454–2459.
[158] M. R. Narayanan, S. R. Lord, M. M. Budge, B. G. Celler, and [181] R. C. Wagenaar, I. Sapir, Y. Zhang, S. Markovic, L. M. Vaina, and
N. H. Lovell, “Falls management: Detection and prevention, using a T. D. C. Little, “Continuous monitoring of functional activities using
waist-mounted triaxial accelerometer,” in Proc. 29th Annu. Int. Conf. wearable, wireless gyroscope and accelerometer technology,” in Proc.
IEEE Eng. Med. Biol. Soc. (EMBS), Aug. 2007, pp. 4037–4040. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Aug. 2011,
[159] W. Wu, S. Dasgupta, E. E. Ramirez, C. Peterson, and pp. 4844–4847.
J. G. Norman, “Classification accuracies of physical activities [182] S. Inoue and Y. Hattori, “Toward high-level activity recognition
using smartphone motion sensors,” J. Med. Internet Res., from accelerometers on mobile phones,” in Proc. IEEE Int. Conf.
vol. 14, no. 5, p. e130, Oct. 2012. [Online]. Available: Internet Things Cyber, Phys. Soc. Comput., iThings/CPSCom,
http://www.ncbi.nlm.nih.gov/pubmed/23041431 Oct. 2011, pp. 225–231.
[160] I. Mandal, S. L. Happy, D. P. Behera, and A. Routray, “A framework for [183] S. Bambach. (Jan. 2015). “A survey on recent advances of com-
human activity recognition based on accelerometer data,” in Proc. 5th puter vision algorithms for egocentric video.” [Online]. Available:
Int. Conf. Next Generat. Inf. Technol. Summit (Confluence), Sep. 2014, https://arxiv.org/abs/1501.02825
pp. 600–603. [184] L. Li, H. Zhang, W. Jia, J. Nie, W. Zhang, and M. Sun, “Automatic
[161] L. Gao, A. K. Bourke, and J. Nelson, “Evaluation of accelerometer video analysis and motion estimation for physical activity classifica-
based multi-sensor versus single-sensor activity recognition systems,” tion,” in Proc. IEEE 36th Northeast Bioeng. Conf., Mar. 2010, pp. 1–2.
Med. Eng. Phys., vol. 36, no. 6, pp. 779–785, Jun. 2014. [185] H. Zhang, L. Li, W. Jia, J. D. Fernstrom, R. J. Sclabassi, and
[162] M. Yu, A. Rhuma, S. Naqvi, L. Wang, and J. Chambers, “Posture M. Sun, “Recognizing physical activity from Ego-motion of a camera,”
recognition based fall detection system for monitoring an elderly person in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Jun. 2010,
in a smart home environment,” IEEE Trans. Inf. Technol. Biomed., pp. 5569–5572.
vol. 16, no. 6, p. 1, Nov. 2012. [186] H. Zhang et al., “Physical activity recognition based on motion in
[163] J. Zheng, G. Zhang, and T. Wu, “Design of automatic fall detector images acquired by a wearable camera,” Neurocomputing, vol. 74,
for elderly based on triaxial accelerometer,” in Proc. 3rd Int. Conf. pp. 2184–2192, Jun. 2011.
Bioinformati. Biomed. Eng. (ICBBE), Jun. 2009, pp. 1–4. [187] X. Ren and C. Gu, “Figure-ground segmentation improves handled
[164] T. Degen, H. Jaeckel, M. Rufer, and S. Wyss, “SPEEDY: A fall detector object recognition in egocentric video,” in Proc. IEEE Comput. Soc.
in a wrist watch,” in Proc. 7th IEEE Int. Wearable Comput. Symp., Conf. Comput. Vis. Pattern Recognit., Jul. 2010, pp. 3137–3144.
Oct. 2003, pp. 184–187. [188] K. Sumi, M. Toda, A. Sugimoto, T. Matsuyama, and S. Tsukizawa,
[165] J. Yang, S. Wang, N. Chen, X. Chen, and P. Shi, “Wearable accelerom- “Active wearable vision sensor: Recognition of human activities and
eter based extendable activity recognition system,” in Proc. IEEE Int. environments,” in Proc. Int. Conf. Inf. Res. Develop. Knowl. Soc.
Conf. Robot. Autom., May 2010, pp. 3641–3647. Infrastructure, 2004, pp. 15–22.
[166] G. A. Koshmak, M. Linden, and A. Loutfi, “Evaluation of the Android- [189] R. O. Castle, D. J. Gawley, G. Klein, and D. W. Murray, “Towards
based fall detection system with physiological data monitoring,” simultaneous recognition, localization and mapping for hand-held and
in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBS), Jul. 2013, wearable cameras,” in Proc. IEEE Int. Conf. Robot. Autom., Mar. 2007,
pp. 1164–1168. pp. 4102–4107.
Authorized licensed use limited to: Tampere University. Downloaded on August 03,2021 at 16:35:44 UTC from IEEE Xplore. Restrictions apply.
402 IEEE SENSORS JOURNAL, VOL. 17, NO. 2, JANUARY 15, 2017
[190] B. Xiong and K. Grauman, “Detecting snap points in egocentric [212] T. Q. Trung, S. Ramasundaram, B.-U. Hwang, and N.-E. Lee,
video with a Web photo prior,” in Proc. Comput. Vis. (ECCV), 2014, “An all-elastomeric transparent and stretchable temperature
pp. 282–298. sensor for body-attachable wearable electronics,” Adv. Mater.,
[191] Y. Lee and K. Grauman, “Predicting important objects for egocentric vol. 28, no. 3, pp. 502–509, 2016. [Online]. Available:
video summarization,” Int. J. Comput. Vis., vol. 114, no. 1, pp. 38–55, http://dx.doi.org/10.1002/adma.201504441
2015. [213] Y. Chen, B. Lu, Y. Chen, and X. Feng, “Breathable and stretchable
[192] Y. J. Lee, J. Ghosh, and K. Grauman, “Discovering important people temperature sensors inspired by skin,” Sci. Rep., vol. 5, p. 11505,
and objects for egocentric video summarization,” in Proc. IEEE Conf. Jul. 2015. [Online]. Available: http://dx.doi.org/10.1038/srep11505
CVPR, Jun. 2012, pp. 1346–1353. [214] E. H. Spriggs, F. De La Torre, and M. Hebert, “Temporal segmentation
[193] Z. Lu and K. Grauman, “Story-driven summarization for egocentric and activity classification from first-person sensing,” in Proc. IEEE
video,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Conf. Comput. Vis. Pattern Recognit., Aug. 2009, pp. 17–24.
Jun. 2013, pp. 2714–2721. [215] Z. Li, Z. Wei, W. Jia, and M. Sun, “Daily life event segmentation for
[194] Y. J. Lee and K. Grauman, “Predicting Important Objects lifestyle evaluation based on multi-sensor data recorded by a wearable
for Egocentric Video Summarization,” Int. J. Comput. Vis., device,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., Jun. 2013,
vol. 114, no. 1, pp. 38–55, Jan. 2015. [Online]. Available: pp. 2858–2861.
http://link.springer.com/10.1007/s11263-014-0794-5 [216] S. Ishimaru, K. Kunze, and K. Kise, “In the blink of an eye: Combining
[195] R. E. Schapire, Explaining AdaBoost. Berlin, Heidelberg, head motion and eye blink frequency for activity recognition with
Germany: Springer, 2013, pp. 37–52. [Online]. Available: Google Glass,” in Proc. 5th Augmented Human Int. Conf., 2014,
http://dx.doi.org/10.1007/978-3-642-41136-6_5 p. 15.
[196] K. Yamada, Y. Sugano, T. Okabe, Y. Sato, A. Sugimoto, and [217] D. C. Ranasinghe, R. L. S. Torres, A. P. Sample, J. R. Smith, K. Hill,
K. Hiraki, “Attention prediction in egocentric video using motion and R. Visvanathan, “Towards falls prevention: A wearable wireless
and visual saliency,” in Proc. Lecture Notes Bioinformat., 2011, and battery-less sensing and automatic identification tag for real time
pp. 277–288. monitoring of human movements,” in Proc. Annu. Int. Conf. IEEE Eng.
[197] K. Ogaki, K. M. Kitani, Y. Sugano, and Y. Sato, “Coupling eye-motion Med. Biol. Soc., Aug. 2012, pp. 6402–6405.
and ego-motion features for first-person activity recognition,” in Proc. [218] B. Yuan and J. Herbert, “Web-based real-time remote monitoring for
IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops, pervasive healthcare,” in Proc. Pervasive Comput. Commun. Work-
Jun. 2012, pp. 1–7. shops, IEEE Int. Conf., Mar. 2011, pp. 625–629.
[198] T. Pawar, N. S. Anantakrishnan, S. Chaudhuri, and S. P. Duttagupta, [219] S. Patel, C. Mancinelli, P. Bonato, J. Healey, and M. Moy, “Using
“Transition detection in body movement activities for wearable wearable sensors to monitor physical activities of patients with COPD:
ECG,” IEEE Trans. Biomed. Eng., vol. 54, no. 6, pp. 1149–1152, A comparison of classifier performance,” in Proc. 6th Int. Workshop
Jun. 2007. Wearable Implant. Body Sensor Netw., 2009, pp. 234–239.
[199] R. Kher, T. Pawar, and V. Thakar, “Combining accelerometer data [220] R. R. Fletcher, S. Tam, O. Omojola, R. Redemske, and J. Kwan,
with Gabor energy feature vectors for body movements classification in “Wearable sensor platform and mobile application for use in cognitive
ambulatory ECG signals,” in Proc. Int. Conf. Biomed. Eng. Informat., behavioral therapy for drug addiction and PTSD,” in Proc. Annu. Int.
2013, pp. 413–417. Conf. IEEE Eng. Med. Biol. Soc., Jun. 2011, pp. 1802–1805.
[221] D. De, P. Bharti, S. K. Das, and S. Chellappan, “Multimodal wear-
[200] S. Keskar, R. Banerjee, and R. Reddy, “A dual-PSoC based reconfig-
able sensing for fine-grained activity recognition in healthcare,” IEEE
urable wearable computing framework for ECG monitoring,” in Proc.
Internet Comput., vol. 19, no. 5, pp. 26–35, Sep. 2015.
Comput. Cardiol. (CinC), Sep. 2012, pp. 85–88.
[222] C. V. C. Bouten, K. T. M. Koekkoek, M. Verduin, R. Kodde, and
[201] M. Hardegger, L.-V. Nguyen-Dinh, A. Calatroni, D. Roggen, and
J. D. Janssen, “A triaxial accelerometer and portable data processing
G. Tröster, “Enhancing action recognition through simultaneous seman-
unit for the assessment of daily physical activity,” IEEE Trans. Biomed.
tic mapping from body-worn motion sensors,” in Proc. ACM Int. Symp.
Eng., vol. 44, no. 3, pp. 136–147, Mar. 1997.
Wearable Comput., 2014, pp. 99–106.
[223] M. J. Mathie, A. C. F. Coster, N. H. Lovell, and B. G. Celler,
[202] A. Bulling, J. A. Ward, H. Gellersen, and G. Tröster, “Eye move- “Detection of daily physical activities using a triaxial accelerometer,”
ment analysis for activity recognition using electrooculography,” IEEE Med. Biological Eng. Comput., vol. 41, no. 3, pp. 296–301, 2003.
Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 741–753, [224] M. B. D. Rosario et al., “A comparison of activity classification in
Apr. 2011. younger and older cohorts using a smartphone,” Physiological Meas.,
[203] S. C. Mukhopadhyay, “Wearable sensors for human activity moni- vol. 35, no. 11, p. 2269, 2014.
toring: A review,” IEEE Sensors J., vol. 15, no. 3, pp. 1321–1330, [225] B. Xiong and K. Grauman, “Detecting Snap Points in Egocentric
Mar. 2015. Video with a Web Photo Prior,” Comput. Vis., vol. 869, pp. 282–298,
[204] C. Jingyuan, O. Amft, G. Bahle, and P. Lukowicz, “Designing sensitive Sep. 2014.
wearable capacitive sensors for activity recognition,” IEEE Sensors J.,
vol. 13, no. 10, pp. 3935–3947, Oct. 2013.
Maria Cornacchia (S’10) received the M.S. degree in computer and information science from Syracuse University in 2009, where she is currently pursuing the Ph.D. degree in computer information science and engineering. She is also a Computer Scientist with the Air Force Research Laboratory, Information Directorate, Rome, NY, USA. Her research interests include target tracking, information fusion, data mining, and computer vision.

Koray Ozcan (S’11) received the B.S. (Hons.) degree in electrical and electronics engineering from Bilkent University, Ankara, Turkey, in 2011. He is currently pursuing the Ph.D. degree with the Department of Electrical and Computer Engineering, Syracuse University, Syracuse, NY, USA. In 2015, he was a research intern with the Exploratory Computer Vision Group, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. His research interests include video/image processing, computer vision, embedded systems, machine learning, video event detection, and object detection.
Yu Zheng (S’12) received the M.S. degree in electrical engineering from Syracuse University in 2012, where he is currently pursuing the Ph.D. degree in electrical and computer engineering. His research interests include object detection/tracking, multi-camera systems, bio-medical image processing, and computer vision.

Senem Velipasalar (M’04–SM’14) received the B.S. (Hons.) degree in electrical and electronic engineering from Bogazici University, Istanbul, Turkey, in 1999, the M.S. degree in electrical sciences and computer engineering from Brown University, Providence, RI, USA, in 2001, and the M.A. and Ph.D. degrees in electrical engineering from Princeton University, Princeton, NJ, USA, in 2004 and 2007, respectively. From 2001 to 2005, she was with the Exploratory Computer Vision Group, IBM T. J. Watson Research Center, Yorktown Heights, NY, USA. From 2007 to 2011, she was an Assistant Professor with the Department of Electrical Engineering, University of Nebraska–Lincoln, Lincoln, NE, USA. She joined Syracuse University, Syracuse, NY, USA, in 2011, where she is currently an Associate Professor with the Department of Electrical Engineering and Computer Science. The focus of her research has been on mobile camera applications, wireless embedded smart cameras, multicamera tracking and surveillance systems, and automatic event detection from videos. Her current research interests include embedded computer vision, video/image processing, distributed multi-camera systems, pattern recognition, and signal processing.

Dr. Velipasalar is a member of the Editorial Board of the Journal of Signal Processing Systems (Springer). She received a Faculty Early Career Development Award from the National Science Foundation in 2011 and the Best Student Paper Award at the IEEE International Conference on Multimedia and Expo in 2006. She was a recipient of the Excellence in Graduate Education Faculty Recognition Award, the EPSCoR First Award, two Layman Awards, the IBM Patent Application Award, and Princeton and Brown University Graduate Fellowships. She co-authored the paper that received the third place award at the 2011 ACM/IEEE International Conference on Distributed Smart Cameras.