Statistical Database of Human Motion Recognition Using Wearable IoT: A Review

Manuscript received 25 January 2023; revised 21 March 2023 and 28 April 2023; accepted 20 May 2023. Date of publication 6 June 2023; date of current version 14 July 2023. The associate editor coordinating the review of this article and approving it for publication was Prof. Rosario Morello. (Corresponding author: Saeed Ebadollahi.)

Eghbal Foroughi Asl and Saeed Ebadollahi are with the Department of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran 16846-13114, Iran (e-mail: [email protected]; [email protected]).

Reza Vahidnia is with Rogers Communications Inc., Toronto, ON M4Y 2Y5, Canada, also with TELUS, Vancouver, BC V6E 0A7, Canada, and also with the Department of Electrical and Computer Engineering, British Columbia Institute of Technology, Vancouver, BC V5G 3H2, Canada (e-mail: [email protected]).

Aliakbar Jalali was with the School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran. He is now with the Faculty of Cybersecurity, University of Maryland, College Park, MD 20742 USA, and also with the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506 USA (e-mail: [email protected]).

Digital Object Identifier 10.1109/JSEN.2023.3282171

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

15254 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023

I. INTRODUCTION

DEPENDING on the definition of technology, the first wearable technologies were invented in the thirteenth century, in the form of eyeglasses. Later, in the 16th century, the first portable, wearable watches, the Nuremberg eggs, were invented. They were designed to be worn around the neck and remained a popular symbol until the invention of pocket watches and wristwatches. Another early prototype was an abacus ring made in China in the 17th century. The first wearable computer was created by mathematics professor Edward Thorp in the 1960s. In his book Beat the Dealer: A Winning Strategy for the Game of Twenty-One, Thorp reveals that he built a computer small enough to fit in a shoe to cheat at roulette: a timer predicted where the ball would land on the roulette table and, overall, helped Thorp and his colleague Claude Shannon by 44% in the game [1]. Over the following decades, newer products made wearable technologies ever more popular and modern.

Reducing the cost of living, saving time, fast and immediate detection, and many other factors push us toward wearable sensors. The widespread deployment of sensors and the Internet of Things (IoT) facilitates this. Combining wearable sensors, or wearable equipment in general, with the IoT makes the equipment smarter and increases its efficiency [2]. Dian et al. [3] have stated that "IoT-enabled wearables are smart devices that can be worn as external accessories, embedded in clothing and garments, implanted in the body, or even adhered to or tattooed on the skin. These devices are
able to connect to the Internet in order to collect, send, and receive the information that can be used for smart decision-making." Wearable sensors come in different types with different applications. One of the most widely used types of wearable sensor is the inertial sensor. These sensors mainly include a gyroscope, an accelerometer, and a magnetometer and are used either individually or in groups to report the specific force, angular velocity, and orientation of the human body [4]. As mentioned, these sensors have various applications, one of the most important of which is human body motion analysis. Other types of sensors used in human body motion analysis, such as force sensors, pressure sensors, and electrodes, are also introduced in this article. This article, while providing a general definition of human body motion analysis, defines a general and universal chart of it and, while defining each part of this chart, focuses on the classification section. When defining each subsection of the chart, it introduces the steps needed to carry out a project in each subsection of movement classification [human activity recognition (HAR), gesture recognition (GR), and gait analysis (GA)]. This article will give the reader the means to match a subject to one of the subsections of movement classification and then know the steps involved. Finally, the algorithms for classification, feature extraction, feature reduction, feature selection, validation, and segmentation, the preprocessing operations applied by the authors to wearable sensor signals, and the types of sensors are fully identified and reported based on repeated use in the papers. The most widely used software in this field is also identified.

II. MAIN BODY OF THIS ARTICLE

One of the most important applications of wearable sensors is human body motion analysis, which is defined as any method that involves any means to obtain a quantitative or qualitative measurement of human motion [5]. Human body motion analysis has many applications in various fields, including: 1) medical evaluation; 2) monitoring people; and 3) activity recognition (which has subsections of fitness, health care, entertainment, and games) [6]. Human body motion analysis by wearable sensors is divided into two parts: 1) movement measurement and 2) movement classification, which has three subsections: GA, GR, and activity recognition [6] (see Fig. 1).

In the first category, because the body part is specified, only the estimation of movement parameters (e.g., the orientation and position of each joint) is needed; in the second category, in addition to estimating the uncertainties, there is a need to classify features or data related to movement classification, which requires algorithms for classification, feature extraction, feature reduction, validation, and so on. The first category focuses on measurements of specific parts of the body, such as the neck, head, torso, and upper and lower limbs [6], [7], and has been briefly reviewed in several papers. The second part, however, is very extensive and has been studied in many papers in various ways that require complex algorithms. This section estimates spatiotemporal gait parameters, assesses gait abnormalities, recognizes meaningful human expressions, including hand, arm, and face, and also includes activity recognition, fall detection, classification of daily activities, and so on [8], [9], [10], [11], [12], [13]. As mentioned earlier, this section includes three subsections: HAR, GA, and GR. In HAR, various human activities, such as walking, running, sitting, sleeping, standing, showering, cooking, driving, opening doors, and abnormal activities, are detected. Data can be collected from wearable sensors or through video frames or images [14]. An interesting definition of activity recognition subsets is provided in the paper [15]: activity recognition can be referred to as the process of describing and classifying actions, pinpointing specific movements, and extracting unique patterns from the dataset using heterogeneous sensing modalities. The GA subsection is described in [16] as follows: GA is a measure that can be easily translated from animals to humans, especially in the case of motor diseases, such as Parkinson's disease (PD). This study involves quantifying (introducing and analyzing measurable gait parameters) and interpreting [e.g., drawing different conclusions about mobility (health, age, weight, speed, and so on)] using the gait pattern. Wearable sensors play a key role in data collection in this area. GR with wearable sensors is an active research field that seeks to integrate the motor channel into human–computer interaction. GR is a computational process that attempts to detect and interpret human movements using mathematical algorithms. It is also used in virtual environment control, sign language translation, robot remote control, and music creation [17]. GR with wearable sensors has received considerable attention from researchers in recent years. In the following, we extract the steps taken by scientists and researchers to carry out a project in each of the above fields and identify the most widely used algorithms for each stage.

A. Data Collection

The first step for all three subsections is the collection of motion data or, in general, the data related to the movement classification operation by wearable sensors. The wearable sensors are attached either directly to the relevant body parts or via other wearable devices, such as gloves, shoes, smartphones, and glasses, which are attached to the body of the person. In the following, we will see that existing data in this field can also be used; in fact, many companies and consortia have created their own motion datasets for these tasks, and these datasets can also be used. Now, in this part of the article, using the studied papers, we determine the types of sensors used for data collection and introduce some of the available datasets in this field.

1) Wearable Sensors in the Literature: As mentioned, wearable sensors can be attached to various parts of the body, such as arms, legs, thighs, chest, head, and neck, directly, or can be worn on wearable devices, such as glasses, shoes, smartphones, and gloves. The wearable sensors found in the references are categorized and presented in a table (these sensors are sometimes used individually and sometimes as a combination of several sensors, and sensor fusion should not be neglected). In the bar chart, you can see the most commonly used sensors. To avoid increasing the number of pages of the paper, more information is available in the table named sensors. For more information, refer to the
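The movement-classification workflow this section describes (segmenting the wearable-sensor stream, extracting features, and classifying them) can be sketched in a few lines. The window length, the two time-domain features, and the nearest-centroid classifier below are illustrative assumptions made for this sketch, not methods prescribed by the surveyed papers.

```python
import math

def windows(signal, size, step):
    """Sliding-window segmentation of a 1-D sensor stream."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(win):
    """Two time-domain features commonly extracted from inertial data."""
    n = len(win)
    mean = sum(win) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in win) / n)
    return (mean, std)

def nearest_centroid(train, sample):
    """Classify a feature vector by its closest per-class centroid."""
    centroids = {}
    for label, feats in train.items():
        dim = len(feats[0])
        centroids[label] = tuple(sum(f[d] for f in feats) / len(feats)
                                 for d in range(dim))
    return min(centroids, key=lambda lb: math.dist(centroids[lb], sample))

# Toy accelerometer magnitudes: "standing" is flat, "walking" oscillates.
standing = [1.0, 1.01, 0.99, 1.0] * 8
walking = [1.0, 1.8, 0.4, 1.2] * 8
train = {
    "standing": [features(w) for w in windows(standing, 8, 4)],
    "walking": [features(w) for w in windows(walking, 8, 4)],
}
print(nearest_centroid(train, features(walking[:8])))  # -> walking
```

Real pipelines in the surveyed papers replace each stage with richer feature sets, feature reduction and selection, and stronger classifiers, but the staging — segmentation, feature extraction, classification — is the same.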
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15255
TABLE II
PREPREPARED DATASET. ACC = ACCELEROMETER, GYRO = GYROSCOPE, COMP = COMPASS, FSR = FORCE SENSITIVE RESISTOR, UWB TAG = ULTRAWIDEBAND TAG, MFS = MAGNETIC FIELD SENSOR, UST = ULTRASONIC TRANSMITTER (SENSOR), MAG = MAGNETOMETER, ECG = ELECTROCARDIOGRAPHY, PPG = PHOTOPLETHYSMOGRAPHY, TEMP = TEMPERATURE, ORIENT = ORIENTATION SENSOR, HR = HEART RATE MONITOR, TOU = TOUCH SENSORS, CAM = CAMERA, LAC = LINEAR ACCELERATION SENSOR, MIC = MICROPHONE, RTC = REAL-TIME CLOCK, TI-SW = TILT SWITCHES, LIGHT = LIGHT SENSOR, PROXI = PROXIMITY SENSOR, AUDIO = AUDIO SENSORS, IR/V LIGHT = INFRARED/VISIBLE LIGHT SENSORS, HF LIGHT = HIGH-FREQUENCY LIGHT, PRESSURE = PRESSURE SENSORS, HUMID = HUMIDITY SENSORS, AND GPS = GPS SENSORS
name of audio input and output devices, the same condition is established, and this sensor is used in the paper [36].

Positioning and Tracking Sensors: This category includes sensors, such as the global positioning system (GPS), that use different approaches, such as position detection and location tracking [37]. These sensors are mostly used in location context pattern recognition and location-based movement classification. GPS sensors, Bluetooth beacons, UWB tags, and tracking sensors are placed in this category. GPS has been developed to enable accurate positioning and navigation anywhere on or near the surface of the Earth [38]. GPS sensors are often used for positioning and localization, providing geographical longitude, latitude, and height. However, their application in movement classification is more limited due to challenges such as the difference in sampling frequency with other sensors and occlusion in indoor environments, such as buildings and malls. It is expected that, with the increasing growth of smartphones, this sensor will be used more in the fields of activity recognition, GA, and GR. Devices that can communicate with smart devices via Bluetooth are called beacons. Beacons are only signal transmitters; they only send signals to smart devices, such as smartphones. Many applications of Bluetooth beacons have been proposed, including indoor localization, proximity detection, and activity recognition [39]. They have been used for activity recognition according to Table XVII; their signal strength readings are used for activity recognition via smartphones [40]. Ultrawideband (UWB) is an indoor positioning technology with several advantages over other related methods; one important advantage is providing long-term data on movement patterns without the influence of the presence of an observer [41]. Other sensors in this category are position-tracking sensors, or simply trackers, which can calculate the positions and orientations of wearer subjects. These sensors are used in two papers according to the table. The paper [42] used two cyber gloves and two 3SPACE position trackers for each hand to recognize Chinese sign language and perform a kind of GR. Tao et al. [20] have used an electromagnetic tracking system (ETS) that can calculate the positions and orientations of an object in the field of GA.

Optical and Light Sensors: These sensors detect and quantify various properties of light, such as intensity, frequency, wavelength, and polarization [43]. They convert light energy into an electrical signal output and include the light and linear optical gesture sensor rows in Table XVII.

Proximity Sensors: A proximity sensor is a sensor that detects information about an object's presence [44]. Capacitive sensors for measuring height and distance, infrared sensors, and electric field sensors fit in this category.

Bend/Flex Sensors: Bend/flex sensors include bend sensors that are composed of a flexible substrate and a conductive layer, which changes its electrical resistance with bending or angular displacement [45].

Motion Detectors: These sensors include the passive infrared and ultrasonic sensors in Table XVII. A motion detector is a sensor that detects nearby motion. Ogris et al. [46] have demonstrated how ultrasonic hand tracking can be used to improve the performance of a wearable, accelerometer- and gyroscope-based activity recognition system. In the paper [47], this sensor is used along with other sensors to learn the activities of the user with minimal user attention.

If you check the table in the attachment carefully, you will notice that, in two papers [47] and [48], touch and light detection and ranging (LiDAR) sensors are used, respectively; we have assigned these sensors to the "other" category. The first sensor is used to detect the physical touch of an object, and the second is used to detect different ranges. Even though these sensors have been used much less than other sensors, it is still worthwhile to introduce them and get to know their performance in the three mentioned areas.

Now, we present a general quantitative analysis and, with approximate numbers, identify the most used sensors in the field of human motion analysis and the most used category in this field. With these numbers, it becomes easier to understand Fig. 2(a) and (b). In these two figures, the vertical axis represents the frequency of use based on the number of repetitions in the papers, and the horizontal axis shows the sensor names and sensor categories, respectively. This convention also holds for other similar figures. The names of the sensors in this article have been used a total of 358 times in the studied papers (the sensors may have been used in combination with each other or alone, or they may have been mentioned as widely used sensors, and so on). The accelerometer, with 138 repetitions, is the most popular sensor name and, in a way, the most widely used wearable sensor; its share of usage is about 39%. The gyroscope has been mentioned 71 times in all the studied papers, which is about 20% of the total. The next most used sensor is the EMG, with about 7% of the total and 24 repetitions. Force and pressure sensors are ranked next, with 5% and 4% of the total, respectively; the force sensor has been used in 17 papers and the pressure sensor in 14 papers. The next sensor is the temperature sensor, which is examined in nine papers and constitutes 3% of the total. The magnetometer holds the same rank: it is mentioned in nine papers and likewise has about 3% of the total. The light sensor and the microphone are used in seven and six papers, respectively, and have an equal share of about 2%. Other sensors are used fewer than six times and have been excluded from Fig. 2(a).

We now present a numerical analysis of the existing categories. The motion sensors category has been used 225 times and has a share of about 63% of all available categories. Bio and chemical sensors have been used 54 times and have a share of about 15%. The third category is pressure and force sensors, with a 10% share and 37 repetitions. Audio and visual sensors, with nine repetitions, have 3% of the total share. The positioning and tracking sensors category has the same number of repetitions in the papers and a share similar to the previous category. Optical and light sensors, with eight repetitions, and proximity sensors, with seven repetitions, have an almost equal share of 2%. Other categories are used fewer than seven times and have been excluded from Fig. 2(b).

2) Available Datasets: The preprepared datasets in the studied papers are also presented in this article. Of course, these datasets are not necessarily related to the IoT, but they are generally public datasets for use in movement classification
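The approximate sensor-name shares quoted above follow directly from the repetition counts and the 358 total mentions; a short script (with the counts copied from the text) reproduces the rounded percentages:

```python
# Repetition counts of sensor names reported in the text (358 total mentions).
counts = {
    "accelerometer": 138, "gyroscope": 71, "EMG": 24, "force": 17,
    "pressure": 14, "temperature": 9, "magnetometer": 9,
    "light": 7, "microphone": 6,
}
total = 358  # total sensor-name mentions across the studied papers
for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {n} uses, {100 * n / total:.0f}% of all mentions")
```

The same computation applied to the category counts (225 motion, 54 bio and chemical, 37 pressure and force, and so on) yields the 63%, 15%, and 10% shares reported for Fig. 2(b).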
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15259
repository, the Mobile Health (MHEALTH) dataset used in [55] comprises body motion and vital sign recordings for ten volunteers. Activities in this dataset include standing still, sitting and relaxing, lying down, walking, climbing stairs, waist bends forward, frontal elevation of the arms, knee bending (crouching), cycling, jogging, and so on. The Daphnet dataset used in [56] is related to Parkinson's disease and analyzes the gait patterns of Parkinson's patients with freezing of gait (FoG) symptoms. Users performed three kinds of tasks: straight-line walking, walking with numerous turns, and going to different rooms while fetching coffee, opening doors, and so on. This information is fully contained in the UCI machine learning repository. Mahmud et al. [57] proposed a multistage long short-term memory (LSTM)-based deep neural network to integrate multimodal features from numerous sensors for activity recognition. For the training and evaluation of the proposed scheme, they used a publicly available dataset from Physionet. This dataset contains wrist PPGs recorded during walking, running, and bike riding. Simultaneous motion estimates are collected using both accelerometers and gyroscopes to give multiple options for the elimination of motion interference from the PPG traces. A reference chest ECG is also used to allow a gold-standard comparison of heart rate during exercise. The description given for the HAR Using Smartphones dataset in the UCI machine learning repository is that this dataset is an activity recognition database built from the recordings of 30 subjects performing activities of daily living (ADLs) while carrying a waist-mounted smartphone with embedded inertial sensors. Each person performed six activities (walking, walking upstairs, walking downstairs, sitting, standing, and lying). This dataset is one of the most used datasets. Another dataset, publicly available on Physionet and presented in the paper [58], contains measures of gait from 93 patients with idiopathic Parkinson's disease (mean age: 66.3 years; 63% men) and 73 healthy controls (mean age: 66.3 years; 55% men). The PAMAP2 Physical Activity Monitoring dataset contains data on 18 different physical activities (such as walking, cycling, and playing soccer), performed by nine subjects wearing three inertial measurement units (IMUs) and a heart rate monitor. The dataset can be used for activity recognition and intensity estimation while developing and applying algorithms of data processing, segmentation, feature extraction, and classification. This information is provided in the UCI machine learning repository, and the dataset is used in the paper [59]. According to the UCI machine learning repository, the Daily and Sports Activities dataset provided by Bilkent University comprises motion sensor data of 19 daily and sports activities, each performed by eight subjects in their own style for 5 min. This dataset can be used for activity recognition [60]. Kawaguchi et al. [61] have started a project named "HASC Challenge" to collect a large-scale human activity corpus using accelerometers. The HASC dataset is the result of that research and is used for activity recognition, with activities including staying, walking, jogging, skipping, stair-up, and stair-down [62]. Chen et al. [63] have stated that a wearable motion capture device is used to take the kinematics data of the key nodes of the human body and fuse the data with the human skeleton data extracted from the video image by Openpose. In fact, in the mentioned paper, the Openpose dataset is used for GR, and the mentioned activities and gestures are squat, squat down and up, wave left hand, raise left hand, wave both hands, and raise both hands. The USC-HAD dataset is a daily activity dataset for activity recognition using wearable sensors; the 12 different activities in this dataset are walking forward, walking left, walking right, walking upstairs, walking downstairs, running forward, jumping, sitting, standing, sleeping, elevator up, and elevator down [64], [65]. The SHO dataset uses smartphone motion sensors for physical activity recognition. In the data collection experiments, the authors collected data for seven physical activities: walking, running, sitting, standing, jogging, biking, walking upstairs, and walking downstairs [66]. According to the UCI machine learning repository, the Opportunity dataset is designed to benchmark HAR algorithms. It contains the readings of wearable motion sensors recording users' daily activities and is useful for wearable activity recognition [67]. The Opportunity dataset comprises a set of complex naturalistic activities performed by four subjects in a daily living scenario performing morning activities. During the recordings, each subject performed a session five times with ADLs and one drill session. During each ADL session, subjects perform the activities without any restriction; examples of activities are preparing and drinking a coffee, preparing and eating a sandwich, and so on. During the drill sessions, each subject performed a predefined sorted set of 17 activities 20 times [67]. In the Skoda Mini Checkpoint dataset, one person performed activities in a car maintenance scenario using 20 accelerometers placed on the left and right upper and lower arms. This dataset contains ten activity recordings, including writing on a notepad, opening the hood, closing the hood, checking gaps in the front door, opening the left front door, closing the left front door, closing both left doors, checking trunk gaps, opening and closing the trunk, and checking the steering wheel [68]. The Actitracker dataset consists of triaxial accelerometer data samples. Subjects carried an Android phone in their front pants pocket and walked, jogged, ascended or descended stairs, sat, stood, and lay down for specific periods [69]. The Darmstadt Daily Routines dataset contains seven days of continuous data. The routines include dinner, commuting, lunch, and office work. Every routine contains various types of low-level activities; for example, dinner contains preparing food, eating dinner, and washing dishes [67], [70], [71]. The ubicomp08 dataset is recorded in the house of a 26-year-old man. Seven different activities were annotated, namely, leaving the house, toileting, showering, sleeping, preparing breakfast, preparing dinner, and preparing a beverage. Times during which no activity is annotated are referred to as Idle [72]. In the paper [73], to evaluate their activity recognition model, the authors use two datasets called Bookshelf and Mirror. Bookshelf is a realistic dataset in a workshop scenario, in which subjects construct a wooden bookshelf; it consists of a variety of activity events and types. The second dataset, called Mirror, is recorded and used in that article. Similar to Bookshelf, it contains a wide variety of activities, too. The first dataset is similar to the woodshop dataset described before [54], but they are
used in different movement classification models. The Intel Research dataset contains various sensor data, with recordings of various human activities, such as sitting, walking, jogging, riding a bike, and driving a car [35], [74]. The MIT Place Lab dataset is recorded from a single subject wearing five accelerometers and a wireless heart rate monitor while performing a set of household activities. The activities include preparing a recipe, doing dishes, cleaning the kitchen, doing laundry, making a bed, and light cleaning around an apartment. In addition to the activities above, the subject also performs other everyday tasks, such as searching for items and talking on the phone [65]. The UC Berkeley WARD dataset, or simply the Wearable Action Recognition Database (WARD), is developed by the University of California at Berkeley (UC Berkeley). WARD includes 20 human subjects (13 male and seven female) and a set of 13 activities, such as walking, standing, and jumping. The researchers placed sensors at five body locations: two wrists, two ankles, and the waist. Each custom-built multimodal sensor unit contains a three-axis accelerometer and a two-axis gyroscope [65]. The CMU-MMAC database was collected in Carnegie Mellon University's Motion Capture Laboratory and contains multimodal measures of the human activity of subjects. The dataset focuses on cooking and food preparation. Wearable sensors used in data collection include a camera, an accelerometer, and a gyroscope. Five subjects cooked five different recipes: brownies, pizza, sandwiches, salad, and scrambled eggs, and the related data were recorded [75]. Kwapisz et al. [76] presented a system that uses phone-based accelerometers to perform activity recognition and collected a dataset named wireless sensor data mining (WISDM) that contains labeled accelerometer data from 29 users as they performed daily activities, such as walking, jogging, climbing stairs, sitting, and standing. The paper [77] presents the UTD-MHAD dataset, which consists of four different data modalities: RGB videos, depth videos, skeleton positions, and inertial signals from a Kinect camera and a wearable inertial sensor, recording 27 human actions. The 27 actions performed comprise sports actions, such as bowling; hand gestures, such as drawing an x; daily activities, such as knocking on the door; and training exercises, such as the squat. Wearable inertial sensors are placed on the right thigh and the right wrist. Stisen et al. [78] have recorded the HHAR dataset for detecting six different user activities: biking, sitting, standing, walking, stair up, and stair down. They gathered data on nine users using smartphones and smartwatches; smartphones were carried by the users around the waist, while smartwatches were worn on each arm. The Sussex-Huawei Locomotion (SHL) dataset consists of multimodal transportation data, recorded by three individuals in eight different modes of transportation in real environments. Data were recorded using the sensors of four smartphones located at the torso, backpack, hand, and pocket. The eight main activities in the dataset include standing or sitting, walking, running, biking, standing or sitting in a bus, driving and sitting in a car, and standing or sitting on a subway. The SHL dataset can be used in a wide variety of studies, such as transportation recognition, activity recognition, mobility pattern mining, localization, tracking, and sensor fusion [79].

For the collection of the SARD dataset, the authors developed a data collection application for Android devices. This Android app currently collects data from the GPS, an accelerometer, a magnetometer, and a gyroscope at a rate of 50 Hz. They used four smartphones for data collection, recording data for six different physical activities, including walking, running, sitting, standing, and walking upstairs and downstairs. The four smartphones were located in four body positions (right jeans pocket, belt, right arm, and right wrist) [80]. Micucci et al. [81] propose a new dataset named UniMiB SHAR of acceleration samples collected using an Android smartphone, designed for HAR and fall detection. The subjects placed the smartphone in their front trouser pockets: half of the time in the left one and the other half in the right one. The dataset contains samples of nine types of ADLs, including running, sitting down, and so on, and samples of eight types of falls, including falling rightward, falling leftward, syncope, and so on. In [82], a mobile application (app) called ExtraSensory is developed, with versions for both iPhone and Android smartphones, and a companion application for the Pebble smartwatch that integrates with both. The ExtraSensory dataset contains data from 60 users: 34 of the subjects were iPhone users, and 26 were Android users. The dataset contains data from various activities, such as walking, lying down, and bicycling. The phone was located in different places, such as in a bag, in hand, in a pocket, or on the table. Kyritsis et al. [83] propose a method for detecting food intake cycles during a meal using a wristband; their method aims at detecting intake cycles. The FIC dataset contains acceleration data of eight subjects, and their proposed method detects five micromovements related to eating food. The WHARF dataset is presented as a freely available dataset of acceleration data coming from a wrist-worn wearable device, targeting the recognition of 14 different human activities. The activities are mentioned in a table and include brushing teeth, combing hair, getting up from the bed, lying down on the bed, and so on [84]. For the Smartphone-Based Recognition of Human Activities and Postural Transitions (SBRHA) dataset collection, a group of 30 subjects was recruited. They were asked to perform six activities (walking, lying, sitting, climbing up the stairs, climbing down the stairs, and standing). The authors placed a smartphone on the waist and used it for the activity data recording via the built-in triaxial accelerometers and triaxial gyroscopes [85]. Liu et al. [86] present uWave, an efficient GR algorithm using a single triaxial accelerometer. They evaluate uWave with a gesture vocabulary identified by a VTT study, for which they recorded a library of 4480 gestures for eight gesture patterns from eight participants over multiple weeks. They have made the dataset open source. The OU-ISIR and HAPT datasets are presented in the paper [87]; these are datasets related to human activities, such as walking, gathered by accelerometers and gyroscopes attached to the waist. The HAG dataset was collected from 50 subjects performing seven different activities in a controlled laboratory environment using an IMU sensor [88]. The HAG2 dataset is collected from 25 subjects using wearable IMU sensors
15262 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023
for six different walking activities [89]. Raj et al. [90] have collected the robita-gait dataset of different gaits using an accelerometer. For collecting the HAG3 dataset, 25 different subjects' data are collected for the identification of seven different walking activities using accelerometer readings [91]. The CASIA-B dataset contains the human walking patterns of 124 subjects collected using a camera, and CASIA-C contains 153 subjects and considers four variations of walking collected using an infrared camera that captures thermal images [92]. It can easily be seen that many preprepared datasets have set their agenda to identify walking activities because human walking styles, such as walking, running, and jumping, are an important field for activity recognition. Now that we have briefly introduced all the datasets, we want to perform a quantitative analysis of the information in the relevant table columns. ActRecTut, the Car Quality Inspection, the Woodshop, the Drink and Work, the Daily and Sports Activities dataset, HASC, Openpose, USC-HAD, SHO, Opportunity, the Skoda Mini Checkpoint dataset, Actitracker, Ubicomp08, Bookshelf, Mirror, Intel Research, the MIT Place Lab dataset, the UC Berkeley WARD dataset, CMU-MMAC, HHAR, SHL, SARD, UniMiB SHAR, ExtraSensory, FIC, WHARF, SBRHA, HAG, HAG2, HAG3, robita-gait, CASIA-B, CASIA-C, and uWave are each used in only one paper. From the PhysioNet website, two separate datasets have been used once each. The Car Quality Control dataset, OU-ISIR, HAPT, and UTD-MHAD are used twice. The Daphnet dataset and the Darmstadt Daily Routines dataset are used in three papers. PAMAP2 is used four times. WISDM is used five times. MHEALTH has been used six times. The HAR Using Smartphones dataset has been used seven times. In total, 46 separate datasets have been introduced, 12 of which are related to the combined application of HAR and GR, 30 of which are related to HAR, one of which is related to GR, and three of which are related to GA. The combined use of HAR and GR is usually related to activities that are done by the subject's hand and can be interpreted as a gesture. This is clear from the description of each dataset.

By checking the sensors used column, we get interesting results. The definition of sensor categories in Section II-A1 is valid here as well. Only in this section, there are some sensors that were not used in Section II-A1, and we will have a brief overview of them. The new sensors presented in this section are magnetic field sensors, linear acceleration sensors, real-time clocks, tilt switches, and IR/V light sensors. Magnetic field sensors and linear acceleration sensors are motion sensors that act in a way similar to magnetometers and accelerometers, respectively, and, generally, are not considered separate sensors from them. The real-time clock is generally not included in the scope of this article and is not addressed; out of respect for the producers of the datasets, it is only present in the corresponding table and is not included in the statistical analysis. Tilt switches or tilt sensors, sometimes called inclinometers, are used for measuring the angles or tilts of objects. Infrared light sensors are used to detect infrared light emitted by individuals or objects and are not capable of detecting visible light. The visible light sensor does not need a special definition. As in Section II-A1, we perform numerical analysis on the number of sensors used, then place the sensors in the mentioned categories, and analyze the results carefully. The accelerometer has been repeated 67 times and is ranked first again. The second place, as before, belongs to the gyroscope with 43 uses. The magnetometer has been used 21 times and ranks third. The fourth place goes to the temperature sensors with eight uses. The ECG ranks fifth with seven uses. Camera, light, and heart rate sensors are ranked sixth with five uses each. The orientation sensor and compass are ranked seventh with four uses, followed by tilt switches, which are used three times. The FSR force sensor, along with the UWB tag, pressure sensor, microphone, and GPS, is ranked ninth. All these sensors have been used twice. The magnetic field sensor, ultrasonic transmitter (sensor), PPG, touch, linear acceleration sensor, proximity, audio, and humidity sensors, each with only one use, are ranked tenth. The accelerometer and gyroscope sensors have kept their positions, and to some extent, these results confirm the validity of the above results. Now, we specify the sensors that are placed in each category. The sensors placed in the motion sensor category are accelerometers, gyroscopes, compasses, magnetometers, magnetic field sensors, orientation sensors, linear acceleration sensors, and tilt switches. This category has been used 144 times. The sensors that make up the category of biological and chemical sensors are temperature, ECG, HR, PPG, and humidity. This category has been used 22 times. The sensors that make up the category of audio and visual sensors are the camera, microphone, and audio sensor. These sensors have been used a total of eight times. Optical and light sensors, including light sensors, IR/V light sensors, and HF-light sensors, are used five times. The position and tracking sensors' category includes UWB tags and GPS. This category is used four times. The category of pressure and force sensors has also been used four times. Motion detectors, with an ultrasonic sensor as their representative, along with proximity sensors, are used only once. The other categories, which include touch sensors, were repeated one time, too. Bend sensors are not used in preprepared datasets. As can be seen, the first and second places go to the categories of motion sensors and bio and chemical sensors, as in the previous section. Because the total number of sensors used in this section was 193, much less than the number of sensors in Section II-A1, we refrained from providing the percentage share. Also, due to the smaller number of sensors, and unlike Section II-A1, we provided a complete statistical analysis. The similarity of the results of these two sections confirms the validity of the presented results, and in a way, the analysis results of Section II-A1 are confirmed.

B. Preprocessing

Preprocessing actions convert raw data into a suitable and preferred format for data processing and further analysis, and improve the quality of the dataset. These preprocessing actions were identified from the studied papers. Segmentation and feature extraction are considered separate steps in this article, but, out of respect for the authors who consider them part of preprocessing, they have also been identified as preprocessing in this section. Preprocessing actions are fully presented in Table III. For a better understanding of this table, we first briefly define each preprocessing action defined in the
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15263
TABLE III
PREPROCESSING ACTIONS
table, specify its use, and then proceed to provide statistical analysis. The first row presented in the relevant table belongs to interpolation and the papers that used this preprocessing action. The main reason for using this preprocessing action is to fill in unknown or missing data values. Increasing the sampling rate of the sensor leads to the creation of data with unknown values. Of course, this is not the only reason for creating missing or unknown data. It may happen due to sensor signal loss, failure of sensor equipment, and many other reasons. For example, GPS data may be lost when entering a building. The next preprocessing action is filtering. Whenever the discussion of filtering in sensors comes up, it naturally brings up the discussion of noise in the sensor output. Sensor noise in signal processing is a general term for unwanted and unknown modifications that a signal may suffer during capture, storage, transmission, processing, or conversion [93]. This definition is very general, and in this section, we do not intend to discuss the noise in different wearable sensors and the different filtering methods. The third preprocessing action mentioned in the table of this section is data normalization. Data normalization is a method that converts data to the same scale and maps them all to the same range. The main purpose of this preprocessing action is to reduce the redundancy of the data; in fact, it makes the data consistent, that is, the data from different sensors with very different values in the records have the same range. If the sensor output changes independently for the same value of the input, drift has occurred. Physical changes in the long term cause drift, and it must be removed. The next preprocessing action is rectification. By rectifying the output signal of the wearable sensor, its positive or negative part is practically removed. The rectifier has two types: full wave and half wave. The next preprocessing action is calculating the signal magnitude, which is not as well known as the previous preprocessing actions. This preprocessing action is used in three papers, [94], [95], and [96]. In the papers [94] and [95], just before extracting the features, the magnitude of the accelerometer signal was calculated, and then, the features were extracted from this value; in the paper [96], this action was applied to the gyroscope in addition to the accelerometer. Truncating and trimming data preprocessing actions have been used in two papers, [97] and [98]. They are somehow related to the concept of labeling, which is necessary for classifier training. In these papers, the data are limited to the beginning and end of the video clips associated with the labeling. Labeling is associated with the training phase of classifiers and creates labeled data for classifier training, which will be explored in the following. In some papers, as indicated in the table, labeling is considered a preprocessing action. Smoothing is a method to adapt to long-term changes in the output of sensors and, at the same time, smooth out short-term changes in the output. Smoothing makes it easier to follow important data patterns. Signal segmentation, in the papers in the segmentation row of the table, is considered a part of preprocessing, but, as we have already announced, we will fully examine it as a separate step. The next preprocessing action is creating an extra dimension or new dimensions. This preprocessing action has been used in two papers, [99] and [100]. In the paper [99], the authors create a new dimension with a special formula, such as the signal magnitude, and extract the features from four dimensions. Li et al. [100] create four composite axes from the three main axes of each sensor and again extract features. Calibration is a widely used preprocessing action. Sensors must be calibrated to increase accuracy; that is, calibration must be done on the sensor so that the sensor works as accurately as possible. Feature normalization, like data normalization, prepares features with different scales for use in machine learning models. Providing the same scale for raw data, as is clear in the table, has been used in only one paper [101]. Because the sensor data from different participants are different in
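The signal-magnitude and data-normalization actions described above can be sketched in a few lines. The snippet below is an illustrative sketch, not code from any of the surveyed papers; the function names and the (x, y, z) sample layout are our own assumptions.

```python
import math

def signal_magnitude(samples):
    """Collapse triaxial (x, y, z) samples into a single magnitude channel."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def min_max_normalize(values, lo=0.0, hi=1.0):
    """Map a channel onto a common [lo, hi] range so that channels from
    sensors with very different scales become comparable."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:                      # constant channel: map to midpoint
        return [(lo + hi) / 2.0] * len(values)
    scale = (hi - lo) / (vmax - vmin)
    return [lo + (v - vmin) * scale for v in values]

acc = [(0.0, 0.0, 9.81), (1.0, 2.0, 2.0), (3.0, 4.0, 0.0)]
mag = signal_magnitude(acc)    # approximately [9.81, 3.0, 5.0]
norm = min_max_normalize(mag)  # min maps to 0.0, max maps to 1.0
```

Features would then be extracted from the magnitude channel (as in [94] and [95]) or from the normalized channels, depending on the pipeline.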
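Interpolation over missing samples, the first action discussed above, can be sketched as follows. This is a hypothetical illustration (the None gap marker and the function name are our own), assuming a uniformly sampled channel.

```python
def fill_missing(samples):
    """Linearly interpolate over gaps (None entries) in a sampled signal.
    Leading/trailing gaps are filled by holding the nearest known value."""
    known = [i for i, v in enumerate(samples) if v is not None]
    if not known:
        raise ValueError("no known samples to interpolate from")
    out = list(samples)
    for i in range(len(out)):
        if out[i] is not None:
            continue
        left = max((k for k in known if k < i), default=None)
        right = min((k for k in known if k > i), default=None)
        if left is None:          # leading gap: hold first known value
            out[i] = samples[right]
        elif right is None:       # trailing gap: hold last known value
            out[i] = samples[left]
        else:                     # interior gap: straight line between neighbors
            t = (i - left) / (right - left)
            out[i] = samples[left] + t * (samples[right] - samples[left])
    return out

print(fill_missing([1.0, None, None, 4.0, None]))
# -> [1.0, 2.0, 3.0, 4.0, 4.0] (to within floating-point rounding)
```

The same idea covers both cases mentioned in the text: gaps created by raising the sampling rate and gaps caused by sensor signal loss.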
against uncertainties. Stacking algorithms, which are among the most widely used algorithms at this level, are algorithms in which the classification results of several classifiers are provided to a metaclassifier that determines the final classification result. Fuzzy integral algorithms, fuzzy template algorithms, Dempster–Shafer methods, products of experts, and neural networks have performances similar to Bayesian fusion methods. These algorithms operate on classifiers that produce so-called soft outputs. The outputs are real values in the range [0, 1]. These values are referred to as fuzzy measures, which cover all known measures of evidence. Measures of evidence are used to describe different dimensions of information uncertainty. These algorithms try to reduce the level of uncertainty by maximizing suitable measures of evidence [112]. The Intersection of Neighborhoods and Union of Neighborhoods are based on class set reduction, and their objective is to reduce the set of considered classes to as small a number as possible while ensuring that the correct class is still represented in the reduced set. These algorithms try to find the tradeoff between minimizing the class set and maximizing the probability of inclusion of the true class [112]. The highest rank method, the Borda count method, and logistic regression aim at a class set reordering to obtain the true class ranked as close to the top as possible. These algorithms try to improve the overall rank of the true class [112]. It is worth knowing the strengths and weaknesses of these algorithms. An advantage of the highest rank method is that it utilizes the strength of every single classifier, which means that, as long as there is at least one classifier that performs well, the true class should always be near the top of the final ranking. The weakness is that the combined ranking may have many ties, which have to be resolved by additional criteria. The Borda count method is easy to implement. The weak point of this technique is that it treats all classifiers equally and does not take into account individual classifiers' capabilities. This disadvantage can be reduced to a certain degree by applying weights and calculating the Borda count as a weighted sum over a number of classes. The weights can be different for every classifier, which, in turn, requires additional training [112]. The Borda count method does not recognize the quality of individual classifiers' outputs. An improvement can be achieved by assigning weights to each classifier reflecting their importance in a multiple-decision system and performing so-called logistic regression [112]. All these proposed algorithms, i.e., the Intersection of Neighborhoods, Union of Neighborhoods, the highest rank method, the Borda count method, and logistic regression, may be applied to the same problem so that the set of classes is first reduced and then sorted [112]. A bagging algorithm creates a metaclassifier that runs each of the constituent classifiers on random subsets of the target dataset and then aggregates their predictions to form a final decision. Dynamic classifier selection, classifier structuring and grouping, and the hierarchical mixture of experts operate on the classifiers rather than their outputs, trying to improve the classification rate by pushing the classifiers into an optimized structure [112]. The hierarchical mixture of experts does not seem to be applicable to high-dimensional data because high-dimensional data can lead to increased variance and numerical instability [112]. Voting methods, of which majority voting is one of the main algorithms, are similar to the behavior-knowledge space (BKS) method in terms of performance. Classifiers producing crisp, single-class labels provide the least amount of useful information for the fusion process. The fusion process with these classifiers can be upgraded by voting methods [112]. Voting strategies can be applied to multiple classifier systems assuming that each classifier gives a single class label as an output. There are several approaches to the combination of such uncertain information units to obtain the best final decision. However, they all lead to the generalized voting definition [112]. The BKS method can efficiently aggregate the decisions of individual classifiers. This method provides a K-dimensional knowledge space by collecting the records of the decisions of all K classifiers for each learned sample, then combines the decisions generated from the individual classifiers, and enters them into a unit of the mentioned BKS space. A unit of BKS is an intersection of the decisions of every single classifier and makes a final decision by a rule that estimates the balance between the current classifiers' decisions and the recorded behavior information (the knowledge) in the BKS unit [112]. Now that we are familiar with the performance of the famous and widely used fusion algorithms of every level, we intend to specify more precisely the algorithms that can be used at each level. Papers that have used these levels directly with the mentioned names or have mentioned the fusion algorithm precisely are listed in Table IV. Also, the papers that have used data fusion without specifying the level are not present in this table and are not counted in the final statistics. In this table, d, f, and c stand for the data level, the feature level, and the classifier level, respectively. Now, we present a detailed numerical analysis of the algorithms used at each level. First, we start with the data level. Eight papers directly refer to this level of data fusion and present 13 algorithms. A total of seven papers have mentioned the Kalman filter algorithm for data fusion at this level: six papers mentioned the Kalman filter algorithm, and one paper mentioned both the Kalman filter and the EKF algorithms. Therefore, the Kalman filter, with seven uses, has a share of about 54%. The weighted average method, along with concatenation, each has a share equal to 15%, being used only twice. The least-squares method and the particle filter have the fewest uses and have a share of about 8% with one use each. Feature-level fusion has been proposed in 14 papers. In total, these 14 papers have proposed 26 algorithms for data fusion at this level. SVM is one of the leading algorithms at this level, with three uses and a share of about 12%. According to the table, three different variants of ELM have been used, each in only one paper, for feature fusion. These variants are KELM, WELM, and regularized ELM. Therefore, this fusion algorithm is one of the most widely used fusion algorithms at the feature level, with three uses and a 12% share. The k-means, KNN, and LSTM derivatives (bi-LSTM and a stack of LSTM layers) have been used twice and have a share of 8%. All subsequent feature fusion algorithms have been used only once and have a share of 4%; these algorithms include concatenation, cluster analysis, Kohonen feature map, learning vector quantization, artificial neural network (ANN),
decision tree, GMM, PCA, CCA, combining features into a single matrix, conditional random field (CRF), CNN, a score-based sensor fusion scheme, and fuzzy logic. Fusion at the classifier level has been proposed in 27 papers. A total of 72 algorithms have been proposed, which are divided into 35 distinct algorithms. Among them, majority voting is the most widely used classifier-level fusion algorithm, with ten uses and a share of 14%. After that, Bayesian approaches, with seven uses, have a 10% share. The terms that specify these algorithms in the relevant table are Bayesian inference, Bayesian inference such as naïve Bayes, Bayesian fusion methods, Bayesian fusion, naïve Bayes combiners (NBCs), Bayesian framework, and Bayesian inference. The third place goes to boosting, with six uses in papers and a share of about 8%. The closest follower of boosting is the fusion method named stacking. This method is used only once less than the previous method and has a share of about 7%. Fuzzy methods have also been used four times (6% share); in total, three papers have used these methods, and according to the table, one paper has mentioned two different methods. The terms that are referred to as fuzzy methods are fuzzy, fuzzy logic, fuzzy integrals, and fuzzy templates. The Dempster–Shafer method is ranked next, with three uses and a small percentage of 4%. The Borda count method has the same conditions. Neural networks, highest rank, logistic regression-based methods, bagging, hierarchical weighted decision (HWD), and class-based weighted fusion all have a share of about 3% with two uses each. Now, we specify more precisely the terms that are included in some of these methods. Two variants of HWD have been used in only one paper. The terms presented for this method in the general numerical analysis are HWD and a novel HWD algorithm called DC. Class-based weighted fusion has the same conditions as the previous one, and the terms presented for this method in the general numerical analysis are posterior-adapted class-based weighted fusion and class-based weighted fusion. All the following algorithms used at this level are used only once and have a share of 1%. These algorithms include average output, genetic algorithms (GAs), evolution algorithms, topic models, equal weight fusion, recall combiners, body multipositional decision selection, plurality voting, an average of probabilities, dynamic classifier selection, classifier structuring, grouping, a hierarchical mixture of experts, voting methods, the BKS method, Intersection of Neighborhoods, Union of Neighborhoods, the product of experts, summation, the logarithm opinion pool (LOGP) technique, hierarchical decision (HD), model-based fusion, a decision tree, and multistream hidden Markov models (HMMs).

D. Signal Segmentation

Segmentation is used frequently in papers and identifies important information in the preprocessed dataset. To define signal segmentation precisely, we use the definition of [113]. Azami et al. [113] describe signal segmentation as follows: “signal segmentation is the act of splitting a signal into smaller parts that each has the same statistical characteristics, such as amplitude and frequency.” In [51], it is stated that segmentation can be done using the following approaches: 1) sliding window; 2) energy-based segmentation; and 3) additional sensors and contextual sources. Of course, other approaches in addition to these three methods are presented in the papers. In this section, we present the segmentation algorithms used in the papers. Before providing a comprehensive numerical analysis of the number of uses of each algorithm or approach, we first provide a brief description of the performance of each member of the column named segmentation algorithm used in Table V. The first and perhaps the most widely used approach in this field is the sliding window. In the sliding window approach, a window is moved over the time series data to “extract” a portion of the data that can be used in subsequent processing steps [51]. Energy-based segmentation relies on the fact that different activities are performed with different intensities. The intensity difference is directly related to the different energy levels of the sensor signals. The signal energy (E) is calculated through the signal energy formula. By thresholding on E, data segments belonging to the same activity can be found [51]. Additional sensors and contextual sources, which we simply refer to as additional sensors, are the third approach discussed for segmentation. Sensor data recorded with one modality can be segmented through information derived from other modalities [51]. For example, using GPS traces, acceleration data recorded using mobile phone accelerometers can be segmented [51]. The head-based segmentation scheme, which is used in two papers, was proposed by Bulling et al. [114]. They developed a segmentation approach that requires only a single-axis accelerometer on the head. Their segmentation is based on two hypotheses. First, reading happens only when the subject's head is down. Second, the up and down movements of the head can be detected using the mentioned accelerometer. They detect these head movements by thresholding the x-component of the denoised, mean-subtracted head acceleration signal [114]. Blanke et al. [73] use a segmentation technique that replaces the standard sliding window approach. This segmentation technique is based on the human body model. Assuming low-motion moments at the beginning and end of interactions, segments of interest are created using such points [73]. Symbolic aggregate approximation (SAX)- and GA-based approaches are proposed in the paper [115]. The former approximates a given time series by piecewise constants encoded in a discrete alphabet, and the latter uses evolutionary search to find a suitable segmentation. The segmentation approach proposed in the paper [116] is obtained by thresholding the acceleration variance and pairwise combining segments. In the paper [117], a rectangular window function with a window length of 4 s is used for segmentation. The windowing itself is done in steps of 0.5 s. Khairuddin et al. [118] have segmented the EMG signals into two distinct sections: preintention and intention. Their purpose was to identify the intention of the movement. The intention signal is recorded based on the definition of a muscle burst that transpires between 40 and 100 ms prior to any muscle activities. In the paper [119], the autocorrelation function (ACF) is used for the segmentation of accelerometer data by devising a concept of tuning parameters based on the minimum standard deviation. It does not seem necessary to provide numerical analysis in this section because the sliding window is at the
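The sliding-window approach described in this section can be sketched in a few lines. The snippet below is an illustrative sketch of our own (not code from the surveyed papers); the 4 s window and 0.5 s step follow the example of [117], at an assumed 50-Hz sampling rate.

```python
def sliding_windows(signal, width, step):
    """Cut a time series into (possibly overlapping) fixed-width windows.
    `width` and `step` are in samples; e.g., a 4 s window moved in 0.5 s
    steps at an assumed 50 Hz gives width=200 and step=25."""
    return [signal[i:i + width]
            for i in range(0, len(signal) - width + 1, step)]

sig = list(range(10))
wins = sliding_windows(sig, width=4, step=2)
# wins -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each extracted window is then passed to the feature extraction and classification steps discussed later.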
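Energy-based segmentation can be illustrated the same way. In this sketch of ours, E is taken as the usual sum of squared samples, and the window width and threshold are arbitrary assumptions for demonstration only.

```python
def energy(window):
    """Signal energy of one window: E = sum of squared samples."""
    return sum(v * v for v in window)

def active_segments(signal, width, threshold):
    """Scan consecutive non-overlapping windows, keep those whose energy
    exceeds the threshold, and merge adjacent active windows into
    (start, end) sample ranges belonging to the same activity."""
    segments, start = [], None
    for i in range(0, len(signal) - width + 1, width):
        if energy(signal[i:i + width]) >= threshold:
            if start is None:
                start = i
        elif start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(signal) - len(signal) % width))
    return segments

sig = [0.1] * 4 + [2.0] * 8 + [0.1] * 4   # quiet - active - quiet
print(active_segments(sig, width=4, threshold=1.0))  # [(4, 12)]
```

Low-intensity stretches fall below the threshold and are discarded, while the high-intensity stretch is returned as one segment.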
TABLE IV
FUSION STRATEGY AND METHODS USED
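As a minimal illustration of majority voting, the most frequently used classifier-level fusion method collected in Table IV: the sketch below is our own and assumes each base classifier emits one crisp label.

```python
from collections import Counter

def majority_vote(labels):
    """Classifier-level fusion: each base classifier contributes one crisp
    label; the label with the most votes wins. Ties resolve in favor of the
    label seen first, following Counter's insertion-order tie-breaking."""
    return Counter(labels).most_common(1)[0][0]

print(majority_vote(["walk", "run", "walk"]))         # walk
print(majority_vote(["sit", "stand", "sit", "sit"]))  # sit
```

Weighted variants (as in the weighted fusion methods discussed above) would simply add per-classifier weights to the counts instead of counting each vote as 1.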
to Table VIII, a total number of 100 times, papers have finally, the important principal components are selected by
presented the domain of their features. The time domain with examining the eigenvalues. Unlike PCA, which is an unsuper-
51 times of use has a share of 51%. The frequency domain has vised feature extraction method, LDA is a supervised feature
a share of 32% with 32 times of use. The spatial domain has extraction method that is also used as a machine learning
a 7% share with seven uses. The time–frequency domain with classification algorithm. Feature extraction or feature reduction
four times of use has a share of 4%. The other two domains is performed by this algorithm in such a way that the algo-
have a share of 3% with three uses. At the end of the statistical rithm calculates intraclass and interclass variances of data or
analysis, we intend to do a comparative analysis between some features and tries to extract or reduce features by minimizing
of the items in the table related to feature extraction methods. intraclass variance and maximizing interclass variance. ICA is
First, we want to make a comparison between the spectral an unsupervised feature extraction method and the machine
analysis method and the Fourier transform. The main feature learning algorithm that decomposes signals into independent
domain of both of these methods is frequency, but the spectrum subcomponents of non-Gaussian nature. This algorithm can
is the appearance and shape of a signal in the frequency be used for feature reduction, too. FA is also an unsupervised
domain, and the Fourier transforms generally transform a machine learning algorithm that is used for feature extraction
signal in the time domain into a function in the frequency and feature reduction; this algorithm removes the correlation
domain. In general, it can be stated that all feature extraction between a huge set of data or variables and extracts the basic
methods that take the signal to the frequency domain can be a factors that represent the dependents. The factors that are
basis for the spectral analysis method. That is, by using these created show the variance caused by similarity and correlation.
methods, the signal is transferred to the frequency domain, Incremental FA (IFA) is the FA that calculates covariance with
and then, spectrum analysis is done. Therefore, all three an incremental approach; incremental approaches are espe-
proposed feature extraction methods, i.e., Fourier transform, cially used in feature reduction and feature extraction methods
WT, and discrete cosine transform, can be used for this issue. to reduce time complexity and save storage space. It is not bad
We try to have a comparative analysis between these methods. to have a comparison between the performances of the above
The comparison of WT and Fourier transform has also been algorithms. All these algorithms look for linear combinations
discussed in Section V, but, in this section, we are also trying of variables that best describe the data. PCA is defined as
to make a general comparison of these methods. The Fourier an orthogonal linear transformation that aims to create new
transform decomposes a signal into simple sines and cosines. components that capture the maximum input variance. LDA
Unlike the Fourier transform that is limited to a scaled single creates new components that separate classes. The goal of ICA
sinusoidal function, the WT generates a two-parameter family of wavelet functions by scaling and shifting the function [31]. It can be stated that the WT displays the signal in both the time and frequency domains, while the Fourier transform displays the signal only in the frequency domain. Discrete cosine transforms express a signal in terms of a sum of cosine functions. The discrete cosine transform is very similar to the discrete Fourier transform; the obvious difference between the two is that the former uses only cosine functions, while the latter uses both cosines and sines. Therefore, the result of the discrete cosine transform has only real values. A discrete cosine transform is equivalent to a discrete Fourier transform of twice the length. EMD, another feature extraction method proposed in this article, is a well-known method for data analysis that breaks a signal into intrinsic mode functions (IMFs) that describe the behavior of the signal [122]. Each IMF consists of a single frequency or a narrow band of frequencies. This method breaks the time signal into a series of basis functions, just like the Fourier transform and the WT, but, unlike those two methods, it extracts the basis functions from the data itself. PCA is an unsupervised linear transformation that can be used for both feature extraction and feature reduction, so we provide a general definition that covers both uses. The algorithm captures the relationships in the data using the covariance matrix; the eigenvalues and eigenvectors of this matrix are then computed, and the eigenvectors are used to transform the data into principal components. ICA, in contrast, aims to recover the original features that are mixed in a linear combination in the input dataset. FA tries to describe a dataset via a linear combination of variables called factors. We have tried to check and compare the performance of the most widely used and famous feature extraction algorithms; for more familiarity with the other feature extraction methods in the table, refer to the relevant papers.

F. Feature Selection
Feature selection has many applications in various fields, such as machine learning, classification, pattern recognition, data mining, and clustering, for reducing the size of the feature space [128], [129]. Feature selection algorithms and their types are specified in this section. There are three different methods for feature selection: filter methods, wrapper methods, and embedded methods [129], [130]. Some papers have also discussed hybrid methods for feature selection [129], [131], [132]. Jović et al. [129] have described all these methods. Filter methods select features based on a performance measure, regardless of the employed data modeling algorithm; only after the best features are found can the modeling algorithms use them [129]. Filter methods are mostly based on similarities and statistical measurements. In this article, the wrapper method is defined as follows: wrappers evaluate feature subsets by the quality of their performance on a modeling algorithm. Embedded methods perform feature selection during the modeling algorithm's execution; these methods are, thus, embedded in the algorithm either as its normal or extended functionality [129].
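As a minimal illustration of the filter idea (scoring each feature independently of any modeling algorithm and keeping the top scorers), consider a simple class-separation score; the score and the threshold below are illustrative choices of ours, not a method prescribed by the surveyed papers. A wrapper would instead evaluate candidate subsets with the modeling algorithm itself.

```python
import numpy as np

def fisher_score(X, y):
    """Filter score per feature: between-class separation over within-class spread."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2   # between-class variance
        den += len(Xc) * Xc.var(axis=0)                # within-class variance
    return num / den

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
X = rng.normal(size=(300, 5))
X[:, 1] += 3.0 * y                      # feature 1 is informative by construction

scores = fisher_score(X, y)
selected = np.where(scores > 1.0)[0]    # keep features above a chosen threshold
print(selected)                          # [1]
```

Note that the score is computed without training any classifier, which is exactly what makes filter methods fast and model-agnostic.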
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15271
Hybrid methods were proposed to combine the best properties of filters and wrappers: first, a filter method is used to reduce the dimension of the feature space; then, a wrapper is employed to find the best candidate subset [129], [132]. Before dealing with the statistical analysis, we introduce the items in the feature selection table. The minimal-redundancy maximal-relevance (mRMR) method measures the relevance and redundancy of the feature candidates with respect to the target class based on mutual information and selects a promising feature subset that has maximal relevance and minimal redundancy [27]. Generally, it can be said that the mRMR, joint mutual information (JMI), conditional mutual information maximum (CMIM), and double-input symmetrical relevance (DISR) methods are all based on "relevance" and "redundancy," and they can be considered mutual information-based feature selection methods. The JMI method simply calculates the JMI between the target class and each of the features and selects the features with the highest scores. DISR has a structure similar to JMI; the two methods differ only in the objective function. CMIM selects features by maximizing mutual information with the target class, given the preselected features. The information gain-based feature selection method calculates the information gain (entropy) for each feature; features that contribute more information are selected, and those with less information are removed. The correlation-based feature selection method selects the most useful features and is fast and simple [59]. According to the papers we studied, this method selects features that are highly correlated with a certain class but not correlated with each other. Relief is a feature selection algorithm that calculates a score for each feature, uses this score for ranking, and selects the high-scoring features. Many updates have been made to fix the limitations of the original Relief algorithm [97], which include inadequate performance in the presence of missing data, unreliable performance in the presence of noise, and so on. One of the most famous of these updates is the ReliefF algorithm, which removes some of the limitations of the original algorithm, such as poor performance in the presence of missing data, and, unlike the original algorithm, which was designed for binary classification problems, can be used in multiclass classification problems. The t-test, f-test, paired t-test, Wilcoxon rank sum test, and analysis of variance (ANOVA) are statistical methods for feature selection. All of them select the best features by thresholding the p-value, that is, they compare this value across the features and select the best ones. It is worth comparing how these methods work. The t-test is a statistical test used to compare the means of two groups. It can be used as a statistical feature selection method that assigns a p-value to each feature based on its discriminability and then selects the appropriate features based on these p-values. The paired t-test is a special type of t-test that stands opposite the unpaired t-test: the former measures the mean difference between two dependent groups, while the latter measures the mean difference between two independent groups. The f-test compares the variances of the two groups, while the t-test compares their means. The Wilcoxon rank sum test is a nonparametric statistical analysis method that selects the most relevant features [31]; it can be considered the nonparametric version of the t-test. This method calculates the p-value for each feature and removes the features whose p-value exceeds a certain threshold [31]. ANOVA is a statistical test that can be used to analyze the difference between the means of two or more datasets; it can be considered a generalization of the t-test. To select features with this method, a variable called the f-value is calculated for each feature from the variance of the data and then converted into a p-value, which determines the importance of the feature; features are again selected by applying a threshold to this value, usually set to 0.05 [31], [58]. All the algorithms and methods mentioned above are of the filter type; we now introduce the wrapper-type methods. Backward elimination (BE), also called backward selection or sequential backward selection (SBS), is a wrapper-type feature selection method that starts with all the features, eliminates the weaker features by scoring, and retains the resulting feature set. This method is a type of sequential feature selection, which has two variants (forward selection and backward selection, or elimination) and selects features greedily. In the paper [101], the feature selection method is specified as sequential forward selection, so we also explain that variant. Forward selection works exactly opposite to BE: the model starts with no features, and features are then added one by one. The features that best improve the performance of the model are added one at a time until adding further features no longer improves performance. Inoue et al. [62] reduced 27 feature variables to 13 by applying stepwise feature selection using logistic regression. Logistic regression is a classification algorithm that can perform feature selection by using regularization rules and determining penalty variables. In the paper [91], the important features for gait activity recognition are selected using the biogeography-based optimization (BBO) technique. BBO is an evolutionary method derived from the theory of biogeography and inspired by the analysis of the geographical distribution of species. The greedy heuristic feature selection method treats feature selection as an optimization problem and finds locally optimal solutions. In the paper [118], the best features for the classification process are obtained by means of an extremely randomized tree (ERT) technique. The ERT is a tree-based ensemble learning technique that combines the results of multiple decorrelated decision trees; the entropy-based information gain is essentially used as the decision criterion for the significant features [118]. In the paper [122], a floating forward–backward feature selection algorithm was employed to search for the near-optimal subset of features that maximizes classifier performance; the performance of each feature subset was assessed using cross-fold validation. Sequentially, the algorithm selects the best feature from the unselected pool and adds it to the existing selected set, provided that the addition increases the classification accuracy. After the selection of each feature, the removal of a feature from the selected set was also considered.
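The floating forward–backward procedure just described can be sketched as follows. The scoring function here is a stand-in of ours (nearest-centroid accuracy on a held-out split), since the criterion in [122] is the cross-validated accuracy of the actual classifier:

```python
import numpy as np

def score(X, y, subset, split=0.7, seed=0):
    """Stand-in criterion: nearest-centroid accuracy on a held-out split."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(split * len(y))
    tr, te = idx[:cut], idx[cut:]
    Xs = X[:, list(subset)]
    classes = np.unique(y[tr])
    centroids = np.array([Xs[tr][y[tr] == c].mean(axis=0) for c in classes])
    d = ((Xs[te][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return (pred == y[te]).mean()

def floating_selection(X, y):
    selected, best = [], -1.0
    pool = set(range(X.shape[1]))
    while pool:
        # Forward step: add the feature whose inclusion scores highest.
        gains = {f: score(X, y, selected + [f]) for f in pool}
        f_best = max(gains, key=gains.get)
        if gains[f_best] <= best:
            break  # stop: no addition improves performance
        selected.append(f_best); pool.remove(f_best); best = gains[f_best]
        # Backward (floating) step: drop a feature if that strictly helps.
        if len(selected) > 2:
            drops = {f: score(X, y, [g for g in selected if g != f]) for f in selected}
            f_drop = max(drops, key=drops.get)
            if drops[f_drop] > best:
                selected.remove(f_drop); pool.add(f_drop); best = drops[f_drop]
    return selected, best

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 8))
X[:, 3] += 2.0 * y          # make feature 3 informative on purpose
subset, acc = floating_selection(X, y)
```

Because the acceptance tests are strict improvements, the loop cannot oscillate between adding and removing the same feature and is guaranteed to terminate.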
15272 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023
TABLE IX
F EATURE S ELECTION (M ETHODS AND A LGORITHMS )
The selection procedure stopped when no further classification performance improvement was observed through the addition or removal of a feature [122]. Recursive feature elimination (RFE) is a feature selection method that starts with all features and removes weak features until a certain number of important features remain; features are ranked using model coefficients or feature importance attributes. RFE with cross-validation, abbreviated RFECV, performs the same feature ranking process using the cross-validation score of the model. The brute-force feature selection method, also known as exhaustive search feature selection, examines all candidate feature subsets and finally selects the best subset in terms of the performance criteria; if the number of features is large, this method has a very high computation time. The last wrapper-type feature selection algorithm we introduce is the Boruta algorithm, which is based on the random forest classifier and finds the importance of a feature using shadow features. Shadow features are random copies of all features. This method compares the importance of the features with that of their shadow features using a criterion and selects the more important features. In total, 34 feature selection methods are described, 20 of which are filter methods. The mRMR algorithm is the most widely used, with seven uses and a share of 35%. Information gain-based feature selection, correlation-based feature selection, and Relief-based algorithms have each been used twice and have a share of 10%.
Other feature selection algorithms in Table IX are used only once and have a share of 5% each. In total, 14 methods are wrapper methods. Two papers have not given a specific name for the algorithm used for feature selection; the greedy heuristic approach, along with the backward selection method, has been used twice and has a 14% share; and all the other wrapper-type algorithms are used only once and have a share of 7% each.

G. Feature Reduction
Feature reduction is one of the most familiar machine learning terms. Feature reduction is also known as dimension reduction and, according to the deepAI machine learning dictionary, is the process of reducing the number of features without losing important information. To differentiate between feature selection and feature reduction, note that in feature selection, we simply choose from among the features and do not change them, while, in dimension reduction, new features with smaller dimensions are produced. In this section, feature reduction or dimensionality reduction methods are examined. Since we have explained the main feature reduction algorithms (such as PCA, LDA, and ICA) in the feature extraction section, and considering that feature reduction algorithms can be used in the feature extraction step, the performance of the remaining algorithms will be fully investigated in Section V.
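The distinction drawn above can be made concrete: selection keeps a subset of the original columns unchanged, while reduction produces new, lower-dimensional features as combinations of all columns. This is a minimal NumPy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))          # 5 samples, 4 original features

# Feature selection: keep columns 0 and 2 exactly as they are.
X_sel = X[:, [0, 2]]

# Feature reduction: project onto 2 new axes; every output feature
# is a linear combination of ALL original features.
W = rng.normal(size=(4, 2))          # in practice, e.g., PCA eigenvectors
X_red = (X - X.mean(axis=0)) @ W

print(X_sel.shape, X_red.shape)      # both (5, 2), but only X_sel
                                     # still contains original feature values
```

The projection matrix `W` here is random purely for illustration; a real pipeline would obtain it from an algorithm such as PCA or LDA.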
TABLE XI
CLASSIFICATION ALGORITHMS
Statistical analysis of the table in this section is not necessary because the PCA algorithms are the most widely used algorithms in this field.

We use bar charts to identify the most commonly used types of machine learning algorithms; for a better understanding of the bar charts, a full numerical analysis will also be provided. Before that, however, we introduce some commonly used machine learning algorithms; some algorithms have been implicitly introduced in Section II-C and will not be introduced again here. Decision trees are one of the most widely used nonparametric supervised machine learning algorithms; nonparametric means that the data analysis is done without distributional assumptions or specific parameters. This algorithm has a branched, tree-like structure consisting of different nodes. The main node is the root node, which is the starting point of the algorithm, and the leaf nodes are the endpoints of the tree branches and can represent the endpoint of a set of decisions; the leaf with the most records can be taken as a class. A random forest is a metaclassifier consisting of several decision tree classifiers; it usually has better classification accuracy than a single decision tree and prevents overfitting. AdaBoost, an abbreviation for adaptive boosting, is a metaclassifier that combines several weak classifiers, such as decision trees, to improve performance. Logistic regression is an example of a binary supervised machine learning algorithm used for classification; it can be used to calculate or predict the probability of an event with two states (0 and 1). In general, this algorithm is used for binary classification problems, but, by changing the structure and creating the multinomial logistic regression algorithm, it can also be used for multiclass problems [123]. The HMM is a statistical Markov model that models the system as a Markov process with hidden states; it is a generative probabilistic classifier. HMMs have been successfully used to model different types of time-series data, as in speech recognition and gesture tracking [35]. Neural networks, also known as ANNs, form a large class of machine learning classifiers and have different types that are specified separately in the table of classification algorithms; we introduce the famous types here. Neural networks simulate the way the human brain classifies related concepts. A neural network consists of several neurons in a layered structure and forms a mathematical function that takes the input data, transfers it to the output, learns the pattern, and performs the classification. The feedforward neural network is a neural network whose connections between the constituent units do not form a cycle or loop; it is the first and simplest type of neural network, and information is transferred in one direction, from input to output. The multilayer perceptron is a special type of feedforward network, which, in its simplest form, consists of three layers: the input layer, the hidden layer, and the output layer. Information is transferred from the input to the output, and the output layer is responsible for the classification. The backpropagation (BP) neural network (BPNN) is a feedforward neural network trained by the backpropagation method, a mathematical method for increasing classification accuracy. A fully connected net (FC net) is one of the most commonly used neural networks: in an FC net, every neuron in layer i has a connection to every neuron in layer i + 1, while non-fully connected networks have only partial connections [106]. Deep neural networks refer broadly to neural networks that exploit many layers of nonlinear information processing for feature extraction and classification, organized hierarchically, with each layer processing the outputs of the previous layer [67]. A deep belief net (DBN) is a deep neural network model made by stacking several restricted Boltzmann machine (RBM) layers. The output of the RBM at the previous layer is the input of the RBM at the current layer, and there is a soft-max layer on top of the final RBM layer; the purpose of the soft-max layer is to transform the model's scores for each class into a normalized probability distribution [106].
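The pieces described above (fully connected layers feeding forward into a soft-max that normalizes class scores into probabilities) can be sketched as a tiny forward pass; the layer sizes and random weights below are arbitrary placeholders:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: input -> hidden (ReLU) -> scores -> soft-max."""
    h = np.maximum(0.0, x @ W1 + b1)        # fully connected layer + ReLU
    scores = h @ W2 + b2                    # class scores (logits)
    return softmax(scores)                  # normalized class probabilities

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 16)), np.zeros(16)   # 6 input features
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)    # 4 activity classes
x = rng.normal(size=(2, 6))                       # a batch of two samples

probs = forward(x, W1, b1, W2, b2)
print(probs.sum(axis=1))  # each row sums to 1
```

Training such a network (i.e., the BP step) would adjust `W1`, `b1`, `W2`, and `b2` by backpropagating a loss gradient; only the inference pass is shown here.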
A convolutional neural network (CNN) is a feedforward deep neural network that uses the convolution operation in some of its layers. A CNN typically consists of a combination of three different layers: a convolutional layer, a pooling layer, and a fully connected layer. In the convolutional layer, the convolution operation is applied to learn local features automatically. A pooling layer is added to reduce the training time and avoid overfitting by reducing the feature representation; its output provides high-level, distortion-invariant features. Both convolutional and pooling layers can be applied multiple times, depending on the CNN structure [55]. The features automatically extracted by these layers are used to train a fully connected neural network layer, whose output is used to compute the probability distribution over the learned activity classes inside a soft-max layer [55]. One of the main benefits of using a CNN is that it does not require any prior knowledge about the data [55]. Gated recurrent units (GRUs) follow an approach very similar to LSTM units. A GRU has an update gate and a reset gate that are responsible for the flow of information vectors; together, these gates decide what part of the tensor should be remembered in the next step and what may be updated [88]. Dynamic neural networks are the opposite of static neural networks and are created by structural changes in routine neural networks; for example, creating feedback from output to input in the structure of a static neural network can lead to a dynamic neural network. So far, we have introduced numerous neural network algorithms that are used for classification; now, we introduce some other famous or widely used algorithms in this field. To get acquainted with the less-used algorithms in the table, refer to the papers provided for them. As we have previously announced, DA divides the data into two or more classes by increasing the interclass variance and decreasing the intraclass variance. There are two types of DA classifiers, namely, the LDA and quadratic DA (QDA) classifiers. In LDA classification, the decision boundary is linear, while the decision boundary in QDA is nonlinear; the latter is more flexible than the former. Gaussian mixture models (GMMs) are probabilistic machine learning classification models that assume a dataset can be considered a mixture of several Gaussian probability distributions and perform classification on that basis. CRF is a class of statistical modeling methods used for structured learning and prediction; CRFs can support more complex and useful feature sets by modeling the posterior probabilities [40]. We already explained the KNN classification algorithm; if k is set equal to one, the nearest neighbor algorithm results, in which the output is simply given the label of the nearest neighbor. Topic models stem from the text processing community. They regard a document (e.g., a scientific paper) as a collection of words, discarding all positional information; this is called a "bag-of-words" representation. As single words capture a substantial amount of information on their own, this simplification has been shown to produce good results in applications such as text classification [70]. Perhaps the presence of this algorithm among movement classification algorithms is surprising. However, Huynh et al. [70] have introduced a novel approach for modeling and discovering daily routines from on-body sensor data based on this machine learning algorithm. Inspired by machine learning methods from the text processing community, they converted a stream of sensor data into a series of documents consisting of sets of discrete activity labels. These sets are then mined for common topics, i.e., activity patterns, using latent Dirichlet allocation. In an evaluation using seven days of real-world activity data, they showed that the discovered activity patterns correspond to the high-level behavior of the user and are highly correlated with daily routines. String search (or string matching) algorithms find the places where one or more strings occur within a larger string or text. The use of this algorithm in the field of movement classification is also a bit surprising, so, to disambiguate, we present some examples of its use. In the paper [114], eye movements are recorded using an EOG system, and a string matching algorithm is used to explicitly model the characteristic horizontal saccades during reading. In the paper [115], string matching is used to spot occurrences of gestures in a continuous stream of data. Now, we check the statistics of the algorithms in the table. The total number of algorithms presented in Table XI is 402, including both machine learning and classical algorithms. The SVM algorithm, with 67 uses, has a share of 17%. The KNN algorithm, with 35 uses, has a share of 9%. Bayes derivatives include all the classification algorithms that use Bayes' probability law for classification, such as naïve Bayes and Bayes net; these algorithms have a share of about 8% with 31 uses. The decision tree, with 27 uses, has a share of 7%. The HMM, with 24 uses, has a share of 6%. Random forest, with 17 uses and a 4% share, follows the HMM. Neural networks (ANNs) have a share of 4% with 16 uses; the CNN has an approximately equal share with 15 uses. The DA algorithm is next, with 13 uses and a share of about 3%; this algorithm has different types, such as linear and quadratic, which are used in the papers for classification. The multilayer perceptron is also used in 13 papers and has a share of 3%. The LSTM is used ten times and has a 2% share. Fuzzy algorithms are used eight times and have a share of 2%. Other machine learning algorithms have a percentage share of about 2% or less and are used fewer than eight times, so their presence in the related bar chart has been omitted. As can be seen, supervised algorithms, such as SVM and KNN, top the list, which is not far-fetched and is predictable, because these algorithms have proven their usefulness over the years. In total, there are 402 proposed classification algorithms; 21 are classical algorithms, and their presentation in Fig. 7(b) is omitted; 280 algorithms are supervised and have a share of about 70%. The number of probabilistic algorithms is 65, and their share is 16%. Combined algorithms have been used in 21 papers and have a share of about 5%. Rule-based machine learning algorithms are used nine times and have a share of 2%. Six algorithms are unsupervised and have a share of 1%. With these numbers in mind, the reader can have a better understanding of the bar charts and can choose freely from the most used classification algorithms.
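The gesture-spotting use of string matching mentioned above can be illustrated with a naive search over a discretized sensor stream; the symbol alphabet and the gesture template here are invented for illustration:

```python
def find_pattern(stream, pattern):
    """Return the start indices where `pattern` occurs in `stream` (naive search)."""
    hits = []
    n, m = len(stream), len(pattern)
    for i in range(n - m + 1):
        if stream[i:i + m] == pattern:
            hits.append(i)
    return hits

# Sensor windows quantized into symbols, e.g., 'U' = arm up, 'D' = down, 'R' = rest.
stream = list("RRUDURRUDUR")
gesture = list("UDU")                 # template for one hypothetical wave gesture
print(find_pattern(stream, gesture))  # [2, 7]
```

Production systems would typically use a faster algorithm (e.g., Knuth-Morris-Pratt) and approximate matching to tolerate sensor noise; the sketch only shows the spotting idea.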
C and C++ two times each in the same paper, and RapidMiner once. MATLAB has been used once for data fusion.

J. Evaluation
As mentioned earlier, the classifier allows us to identify unknown data, tag it, and specify its motion class. This occurs once the quality of the trained classifier has been assessed in the evaluation step [147]. In some papers, evaluation is divided into two parts, training and testing, and in other papers, this step is divided into three parts: training, validation, and testing. It should be noted that unsupervised classification does not require a specific training step and directly infers activities from sensor data. Ground truth, a concept related to the training phase that leads to the production of labeled data, is not discussed in all papers, so we only mention the concept here. Generally, papers that deal with hyperparameter tuning or optimization parameter tuning need validation [106], [124], while papers that do not need this part or that use default hyperparameters only run the training and testing parts. Because validation is not an essential part, its methods will not be covered in depth; validation methods resemble evaluation methods. Now, we define each part of the evaluation. The definitions of training and test datasets are very general. We use the training dataset to fit the model and, in a general sense, to train and create the desired model and to capture the relationship between the dataset and its corresponding classes. This dataset contains a large part of the entire existing dataset and usually determines the weights of the nodes. Test datasets are unknown, unlabeled datasets that determine how well our model performs the labeling operation and examine the quality of the created model. To get acquainted with validation, we need to know what hyperparameter optimization is and why hyperparameters need to be tuned. The paper [148] states that hyperparameter optimization is a process for finding suitable hyperparameters for predictive models. It typically incurs highly demanding computational costs due to the time-consuming model training needed to determine the effectiveness of each set of candidate hyperparameter values. There is no guarantee that hyperparameter optimization leads to improved performance; however, it can be pursued with well-considered measures. The hyperparameters of the classifiers in the toolboxes of various software packages have default values, and model performance can often be maximized by tuning them. Hyperparameter tuning is very common for SVMs and neural networks, but hyperparameter tuning of classifiers such as decision tree, random forest, KNN, naïve Bayes, linear discriminant analysis, and AdaBoost has also been discussed in the papers [148], [149]. As the last recommendation of this section, we report the typical data split rates for training, testing, and validation: papers usually allocate 70%–75% of the data for training and 20%–25% for testing. If the hyperparameters need to be tuned, 70% of the data are generally intended for training, 20% for validation, and 10% for testing [150]. Evaluation methods, metrics, methods of obtaining metrics, and methods of reporting the results are discussed in Tables XIII–XVI, respectively.

In Table XIII, evaluation methods are presented. First, we group the proposed methods into canonical methods and then define each one. Evaluation methods under the titles tenfold cross-validation, k-fold cross-validation, twofold cross-validation, fivefold cross-validation (5-fold CV), threefold cross-validation, sevenfold cross-validation, fourfold cross-validation, sixfold cross-validation, 20-fold cross-validation, and random split k-fold cross-validation share the same structure and are considered k-fold cross-validation. Evaluation methods with the titles leave-one-subject-out cross-validation, leave-one-out cross-validation, leave-one-participant-out cross-validation, leave-one-out test cross-validation, leave-one-day-out cross-validation, leave-one-person-out cross-validation, leave-one-user-out cross-validation, leave-one-instance-out cross-validation, and leave-one-out cross-comparison also have a similar structure and are considered leave-one-out cross-validation. Biased cross-validation is considered a special method. Titles such as subject-based cross-validation, cross-validation, and individual-based cross-validation are also considered cross-validation methods. Hold-out cross-validation is considered a special method, and user-specific training has the same status. Titles such as repeated leave-one-out random subsampling cross-validation and repeated random subsampling cross-validation are considered repeated random subsampling cross-validation methods. The titles subjectwise leave-one-out, grouped stratified k-fold cross-validation, and stratified k-fold cross-validation are defined separately. First, we define the concept of cross-validation: cross-validation is a method that determines how well the classification results will generalize to an independent, unknown dataset. The most widely used variant is k-fold cross-validation, which is used 52 times among the 106 evaluation methods proposed in the table, has a share of 49%, and is the most used method. In this evaluation method, the dataset is divided into k groups of equal size; one subset is used to test the classification model, and the remaining k − 1 subsets are used to train it, and this process is repeated k times. The next most used method is leave-one-out cross-validation, which is used a total of 37 times in the studied papers and has a share of 35%. In this method, the dataset is divided into several groups; all groups except one are used for training, and the remaining one is used for testing; this is repeated until every group has served as the test group once. Third place goes to generic cross-validation (no variant specified), which has a share of 7% with seven uses. Repeated random subsampling cross-validation, with three uses and a 3% share, is ranked fourth. This method is also known as the Monte Carlo method: the dataset is randomly divided into training and testing sets, the model is evaluated as many times as the user desires, and the overall result is averaged. When this method is combined with the leave-one-out method, the repeated leave-one-out random subsampling cross-validation method results, in which one set is randomly selected for testing and the rest for training, the evaluation is performed, and the result is averaged.
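The k-fold scheme described above (split into k equal groups, train on k − 1 of them, test on the held-out fold, repeat k times) can be sketched as follows; the evaluation of an actual classifier inside the loop is left as a placeholder:

```python
import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate(folds[:i] + folds[i + 1:])
        yield train_idx, test_idx

# With k equal to the number of samples, the same loop becomes
# leave-one-out cross-validation: each fold holds a single sample.
n, k = 10, 5
for train_idx, test_idx in k_fold_indices(n, k):
    assert len(test_idx) == n // k           # each fold tests n/k samples
    assert len(train_idx) == n - n // k      # and trains on the rest
```

A real evaluation would fit the classifier on `train_idx`, score it on `test_idx`, and average the k scores; stratified variants additionally constrain each fold to preserve the class proportions.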
TABLE XII
SOFTWARE USED ALONG WITH THEIR APPLICATION AREA
The next rank is grouped stratified k-fold cross-validation, with two uses and a share of about 2%. However, we first explain stratified k-fold cross-validation, which has a 1% share with one use. This method is a variant of k-fold that provides stratified folds, meaning that each fold has the same percentage of samples with a given label. Grouped stratified k-fold cross-validation benefits from the advantages
TABLE XIV
METRICS
(CAL), and more [152], [154], [155]. Metrics based on how well the model ranks the examples are ranking metrics [152]. The area under the receiver operating characteristic (ROC) and precision/recall curves are the most important ranking metrics. These are the three main types of metrics for evaluating classifiers, but other metrics exist which do not fall into these categories. Therefore, in this article, the main evaluation metrics will be presented along with their formulas, and those who are interested can refer to the relevant paper to get acquainted with other metrics. As a recommendation from us, note that accuracy in this field is introduced either based on the formula [correct/total] [94], [109] or based on [(TP + TN)/(TP + TN + FP + FN)] [156], [157], [158], [159], [160], [161], [162]; this metric alone, especially with imbalanced data, is not a good measure of classification performance. Therefore, the authors of this article strongly recommend that other metrics be used to report classification results too. In Table XIV, we introduce the well-known evaluation metrics, and the following equations provide formulas for the more well-known metrics. Most of the formulas and definitions are derived from [163], [164], and [165]. See [51] for more information on time-based evaluation metrics, such as insertion, overfill, and underfill, and event-based evaluation metrics. These types of metrics are less common in contrast to the other metrics in this field and, thus, are not mentioned in this section. However, in a few of the reference papers of our paper, those metrics have been used along with the metrics in Table XIV.

The most widely used metrics are threshold metrics, which have been repeated a total of 398 times. As you can see, some of the names presented in the table have the same formulas; however, we presented each possible name for a formula separately in the table. Accuracy is at the top of usage with 151 uses and a share of 38%. Precision ranks second with 52 uses and a 13% share. Recall ranks third with 46 uses and a 12% share. The F score ranks next with 34 uses and a 9% share. Specificity ranks fifth with 27 uses and a share of 7%. Sensitivity takes sixth place with 26 uses and about a 7% share. EER is in seventh place with 16 uses and a 4% share. The error metric also has a share of 4% with 14 uses. The support value, with eight uses, has a share of 2%. Other proposed metrics have a share of about 1% or less and are presented in the table only for information. About the probabilistic metrics, we must also announce that they have been used ten times. Root-mean-square error (RMSE) is used three times and has a share of 30%. MAE, MSE, and mean absolute percentage error (MAPE) metrics are used twice each and have a share of 20%. The Brier score and adjusted B are used in one paper and have a share of about 10%. Ranking metrics have been mentioned 14 times in total, 13 of which are assigned to AUC, and the share of this metric is 93%. The c-index and adjusted c also have a share of 7% by being used in only one paper.

In Table XV, we will present the methods for obtaining these metrics or, in fact, the graphical evaluation methods. At this level, we want to introduce you to each of the items in the table. Han et al. [166] have described confusion matrices comprehensively. As they have described, "a confusion matrix is a useful tool for analyzing how well your classifier can recognize tuples of different classes. True positive (TP) and true negative (TN) tell us when the classifier is getting things right, while false positive (FP) and false negative (FN) tell us when the classifier is getting things wrong." Han et al. [166] have described the ROC as a visual tool for comparing two classifiers and have clarified that the ROC shows the tradeoff between the TP rate (TPR) and the FP rate (FPR). Out of respect for the authors of the reviewed scientific papers, other definitions of the ROC are also reviewed in this article. Shaafi et al. [167] described the ROC as showing the variation of TPR with respect to the false alarm rate. Another definition is that the ROC shows the variation of correct acceptance with respect to false acceptance [168]. Zinnen et al. [125] consider the ROC as recall changes in terms of precision. Tahafchi and Judy [169] define the ROC as the ratio of the true accept rate to the false accept rate. In [95], the ROC curve is expressed as sensitivity changes in terms of FPR. Papers [58] and [170] describe the ROC curve as sensitivity changes in terms of specificity. In the paper [171], in the figure that describes the ROC diagram,
the vertical axis is considered to be 1-FMR, and the horizontal axis is considered to be the false rejection rate (FRR). In the papers [144] and [172], the ROC curve is plotted in terms of FRR and false acceptance rate (FAR). In [173], the ROC curve is plotted for the genuine acceptance ratio (GAR) and the FAR. The error division diagram (EDD) shows the ratio of the entire dataset, including error classes and other related items. The event analysis diagrams (EADs) show counts of predefined events as a proportion of the total ground-truth event count. The use of these diagrams is not as extensive as other items in the table. For more information, you can refer to papers [114] and [174]. The precision–recall curves, which are less commonly used, are known as a suitable complement to ROC curves; these curves display precision values on the vertical axis and recall values on the horizontal axis for different thresholds. Saito and Rehmsmeier [175] show the advantages of this curve over the ROC curve for imbalanced datasets for binary classifiers. The specificity/sensitivity curve is introduced in the paper [117] and shows the values of specificity on the vertical axis and the values of sensitivity on the horizontal axis; this curve shows the distribution of sensitivity and specificity for detection accuracy in the paper. To evaluate the performance of the classifier, the FAR can also be plotted against the FRR. This curve is called the detection error tradeoff (DET) curve. The DET curve shows the performance of a biometric system under different decision thresholds [176]. Methods of obtaining metrics are mentioned 102 times in the relevant table, where the confusion matrix is at the top with 67 repetitions and a share of 66%. ROC is in second place with 20 repetitions and a 20% share. Precision/recall curves, with seven repetitions, have a share of 7%. Other methods have a share of 3% or less.

In the last table of this section, you can see the methods of announcing the results and comparing the results of the metrics, for example, for several types of classifiers or for comparing different values of hyperparameters. Announcing and comparing the results by table has been used the most due to its ease of use. However, if we want to go into the statistical analysis of this table in a little more detail, we must state that the methods of announcing and comparing the results have been presented 171 times in the papers, and the table, with 116 repetitions, has a share of 68%. The bar chart with 41 repetitions has a share of 24%, the box plot with six repetitions has a share of 4%, the scatter plot with five repetitions has a share of 3%, and the other two methods have a share of 1% or less. These methods include the cumulative matching score (CMS) curve and the cumulative match curve (CMC), which can be used as metrics or for announcing and comparing recognition results. Now, we are familiar with all the steps, and we can easily do projects related to activity recognition, GR, and GA based on this information. The authors hope that this article would be of great help to engineers, students, and researchers interested in doing a project in the field of movement classification.

III. OVERVIEW OF FINDINGS

First, we must state that data collection for human motion analysis is usually done with three methods: 1) wearable sensors; 2) specialized systems, such as Vicon (Vicon Motion Systems Ltd., Oxford, U.K.) or Optotrak (Northern Digital Inc., Waterloo, ON, Canada); and 3) Kinect systems [6]. The second and third cases usually create image and video data and have limitations [6], [51]. Specialized systems, such as Vicon or Optotrak, have high accuracy when operating in controlled environments [6]. These systems can provide a large amount of redundant data. Also, these systems are very expensive compared to the other two. Ambulatory systems, such as those using a Kinect (Microsoft Corporation, Redmond, WA, USA) to capture human motion, are set in relatively uncontrolled environments and have a restricted field of view. These systems have a restricted margin of maneuverability and are intended mainly for indoor use. In contrast, wearable sensors have the advantage of being portable and suitable for outdoor environments [6]. It is also worth stating the other reasons for favoring wearable sensors. In addition to being portable and cheap, these sensors are more ubiquitous and easy to use. The use of these sensors does not require special knowledge. It is easy to teach the user how to use the wearable system with a little training. By equipping the wearable system with a memory, it is possible to analyze the wearer's behavior at any time. Considering the variety of wearable sensors and the possibility of measuring different parameters with these sensors, it can be said that these sensors provide more diverse information compared to other methods. Wearable systems are easier to update, can adapt to changes in society, and can advance with fashion. For these reasons, we have focused on human motion analysis by wearable sensors, and we have tried to review movement classification by wearable sensors. Of course, there are challenges in this area that will be discussed in the following.

The purpose of this section is only to present a summary of the findings, so that the readers become familiar with the main results of the statistical analysis by reading this section; for a more general understanding, refer to the previous sections. In this article, a wide variety of papers have been studied, each of which is related to three areas of movement classification, namely, activity recognition, GR, and GA; for the first time, all three of these areas have been addressed simultaneously, whereas other review papers in this area have only addressed one area [10], [14], [19], [20], [50], [51]; and in the common concepts associated with the steps, our research is much broader. In identifying the movement classification chain, the number of algorithms proposed for each part of this chain in this article is very large and very diverse. For example, Table XVII alone introduces many different types of sensors available in this field, and the set of sensors introduced in this article is very diverse, which leads the existing review papers in this field, because different sensor categories are presented, and the state-of-the-art papers in human motion recognition fields or the IoT-based wearables area may only mention some of these categories or sensors [3], [19], [20], [51]. A large number of datasets related to movement classification are presented in this article. We identified 18 preprocessing actions, which is a significant number. After identifying the levels of data fusion, we introduced many different algorithms for each level. Different algorithms have
been identified for signal segmentation. Topics related to the feature extraction step have been discussed extensively. Also, a comparative analysis has been done on the performance of some feature extraction methods. It has been tried to introduce all kinds of algorithms related to feature selection. Each of the algorithms is defined separately, and the way they work is specified. Various feature reduction algorithms have been proposed. The functioning of the algorithms has been investigated, and a comparative analysis has been done on the performance of some of them. In the classification step, a very large number of algorithms have been introduced; even if we ignore the classic classification algorithms, a wide range of machine learning algorithms have been identified along with their types; and in this field, papers can be found in which only deep learning algorithms are discussed [19], [50], but these algorithms are only a part of machine learning algorithms. The functioning of widely used or famous classification algorithms has been fully investigated and analyzed. In evaluation, a wide range of metrics has been proposed along with their formulas and types. There are many different evaluation methods, and a standard definition is provided for each one. Also, the most famous and most used of them have been examined in terms of performance. The methods of obtaining the metrics and the methods of announcing and comparing the results have been fully investigated in this article. In fact, the authors have tried to provide the readers with a suitable guide to continue and conduct research in the field of movement classification.

The reason for this effort is that there are major reasons that force the world community to refer to wearable sensors, and also, the need for human motion recognition increases. Due to the aging of the global society, and the loneliness and inability of patients to attend medical centers, by using wearable sensors to detect human movement, patients can be saved from visiting these centers, and costs can be reduced, since the doctor can notice changes in the patient's movement pattern and may call the patient to inquire about their condition. Alternatively, a special alarm may be activated to notify the patient. Even considering that the world society is facing the problem of obesity, it is easy to know the weight, degree of obesity, or the discomfort of a person's organs by analyzing the walking of people or the speed of movement. In general, identifying human activity through walking and examining the pattern of human walking makes people aware of their current state of health, and based on these, decisions can be made about the future state of health. Of course, this issue is not exclusive to human walking. Many movements of the human body can be used to evaluate a person's body condition and health. Thus, GR, GA, and activity recognition by wearable sensors have many applications in the fields of medicine, education, entertainment, sports, and games.

With these explanations, the human movement recognition chain or the movement classification chain has been identified in general, and each part of this chain has been discussed in detail. The numerical analysis has been fully presented in the relevant sections, and now, we only qualitatively repeat the results. After categorizing the human body motion analysis with wearable sensors in a general and global format, and briefly explaining why we have to combine this technology with IoT, the common steps for doing a project in each section of movement classification were examined. All the steps are presented together with the corresponding algorithms in this article. The purpose of this article was to clarify all the common steps of the project in the movement classification of the global chart so that the readers of this article can easily do a project in the field of movement classification. Common steps of project implementation in the studied papers are data collection, data fusion, preprocessing, segmentation, feature extraction, feature selection, feature reduction, classification, and evaluation. Some of these steps may not have been used in all papers. Using the corresponding tables, the most commonly used topics and algorithms can be easily identified. Bar charts are also used for a better understanding of some steps of the operational plan. As can be seen from Table XVII and the bar chart, the most widely used sensors in this field are accelerometers, gyroscopes, EMGs, force sensors, and pressure sensors, respectively. In the following, we have specified the category of sensors, and we have specified the most used sensor categories through a bar chart. There are also preprepared datasets produced by universities, companies, and so on, which are a great help in creating papers in this field, and with these datasets, there is no need for a data collection step. For the preprepared datasets, we specified both the most used sensors and the most used category, and the results were almost similar to Section II-A1. A side result of this section is to specify the importance of human movement in the field of activity recognition because most of the datasets related to HAR contain data on walking activities with different styles. Then, in this article, various sensor fusion strategies were discussed, and the corresponding algorithms were identified. We then introduced the signal preprocessing, specified the preprocessing actions in the table, and then identified the most commonly used actions using a bar chart. The most common preprocessing actions are filtering, data normalization, sensor calibration, amplification, segmentation, smoothing, rectification, interpolation, labeling, and drift removal, respectively. In Section IV, along with the definition of signal segmentation, we present various segmentation algorithms. Next, feature extraction methods, the dominant feature type, and the feature domain are specified. Fourier transforms and WT are the first and second most widely used feature extraction methods, respectively. The signal-based statistical feature is the dominant feature type, and the other feature types are used less than this feature type. The time domain and the frequency domain have the highest number of uses as feature domains. The issue of feature selection in the next step is examined. Feature selection methods have been introduced, and from the reviewed papers, we find that filter and wrapper methods have been two of the authors' favorite methods for feature selection in this field. Dimensional reduction algorithms were also examined, and PCA is a dominant algorithm in this field. For classification, after defining and clarifying the purpose of using it in papers, we recognized it as the last step of the project and completely identified the algorithms used in the papers in Table XI, and then, we exhibited the most commonly used algorithms and their types by bar charts. SVMs are the most widely used classifier, followed by KNN and Bayes derivatives (naïve Bayes, Bayes net, and so on). Decision trees, HMM, and
random forest are also preferred by authors for classification, and they hold the next ranks. Supervised algorithms are at the top. Probabilistic algorithms have taken second place. Combined algorithms are ranked third, and rule-based and unsupervised machine learning algorithms are ranked next. In the section on the software or languages used and their fields of application, we introduced the software or languages that can be utilized in the movement classification steps. For each step or, in some way, each piece of the human motion detection chain, we specified the software or language used. Most of the papers in this field mentioned the software used for classification; for classification, the first rank of the most widely used software goes to MATLAB, and the software that ranks second is Weka. Both well-known brands are widely used to implement classifiers, while they can be active in other steps of the project. Side results can also be obtained about other software or languages used in papers related to the field of movement classification, which have been omitted from the presentation due to their low importance. In the last part of this article, we have announced the methods of evaluating the performance of the model, the relevant metrics and their types, graphical metrics or methods of obtaining the metrics, and methods of presenting and comparing the results. In general, evaluation cannot be identified as a separate step, and it should be considered as a part of the classification, but, because there are different concepts and parts related to it, we dealt with it separately. Different evaluation methods are presented in this article, and in general, k-fold cross-validation is considered a popular evaluation method. The most widely used metric types for evaluating the performance of classification are threshold-based metrics. It is recommended to use several metrics to evaluate the performance of the model to have a more accurate understanding of its performance. In Table XV, we have presented the methods for obtaining these metrics or, in fact, the graphical evaluation methods. The confusion matrix, by a relatively large margin, is the most widely used method to obtain the metrics. ROC and precision/recall curves are ranked next in terms of usage rate, respectively. In the last table, the methods for announcing the results and comparing the results of the metrics are presented. Announcing and comparing the results by tables are the most common methods. Bar charts are also used but not as much as the previous method. Box plots are placed in the next rank in terms of usage. These results are presented quantitatively in more detail in the relevant sections; be careful that the numbers presented are approximated and rounded, and this approximation may cause the sum of the share percentages to not be 100%, but there will be no change in the overall results.

IV. RELATED CONCEPTS ON THE INTERNET OF THINGS IN THIS FIELD

The IoT has many different applications and is not limited to motion recognition. The IoT provides insights into many applications in various sectors of a variety of industries and businesses. It brings efficiency and safety, and can revolutionize the way many businesses and industries operate [2]. In this section, we intend to briefly present the structure of the IoT, its implementation methods, and concepts related to machine learning or artificial intelligence in general. The IoT can have various components, the most important of which are sensors/devices, gateways and connections, cloud and database, analytics, and user interface. The first component is related to collecting and sending information by objects. Sensors can be temperature, accelerometer, compass, proximity, humidity, pressure, light, or any other sensor. As mentioned, these sensors can be used alone or fused with other sensors. All the sensors mentioned in the data collection section can be used in the first component, and when talking about the device, you can easily remember things such as smartphones. The second component is related to how the data reaches the cloud and is related to data flow management. There are various methods for connecting sensors to the cloud, such as Wi-Fi, Bluetooth, and ZigBee, and the choice of each of them depends on the application of the IoT. The third component provides a location to store and access IoT data. In the analytics section, the data of the sensors and the device are examined, and various decisions are considered according to the conditions of the data. The user interface section informs the end user of the results of the analysis and, actually, the decisions made or the conditions, and also gives the user the ability to perform some operations related to the conditions.

IoT can be implemented using many IoT connectivity schemes that connect an IoT device to other devices through the Internet. The Internet connection can be either wired or wireless [2]. Wired and wireless communications have their advantages and disadvantages and should be chosen depending on the application. Understanding the benefits and drawbacks of wired and wireless connectivity schemes enables us to make an informed decision regarding IoT implementation [2]. Wired connections are reliable, fast, and secure. They are more reliable than wireless connections since they are less prone to packet loss as a result of path loss or interference from other electronic devices. However, they suffer from the higher cost of implementation and lack of mobility support. Scalability is also another problem with wired networks. The wired IoT network is only practical if IoT devices not only are close to each other to reduce the cabling cost but also at least one of them is located close enough to a wired Internet access point. For many IoT applications, wired connectivity is not very practical, and wireless IoT implementations are the common solutions [2]. For a wireless connection, there is a need for an IoT gateway, especially for short-range communications [2], [3]. The IoT gateway connects sensors, devices, and so on to the Internet at the network's edge and can perform computing locally [2]. Regardless of whether the implementation is wireless or wired, there are four types of data communication in IoT: device-to-device, device-to-cloud, device-to-gateway, and back-end data sharing. Only wireless protocols related to each model are presented because of practicality; for wired protocols, refer to [2]. In device-to-device communication, two or more devices are connected directly to each other. The Bluetooth protocol is one of the most widely used protocols in this type of communication. In connecting the device to the cloud, a device is directly connected to the Internet cloud. Some of the widely used protocols in this connection are Wi-Fi and low-power wide-area networks (LPWANs). In the
TABLE XV
METHOD OF OBTAINING THE METRIC

TABLE XVI
METHODS FOR ANNOUNCING AND COMPARING RESULTS
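To illustrate how the score-based curves in Table XV (ROC, DET, FAR versus FRR) are obtained in practice, here is a hedged pure-Python sketch that sweeps a decision threshold over match scores and reports FAR and FRR at each point; the score values are synthetic and the function names are illustrative, not from the reviewed papers.

```python
def far_frr(genuine, impostor, threshold):
    """FAR and FRR of a score-based verifier at one decision threshold
    (scores >= threshold are accepted)."""
    fa = sum(s >= threshold for s in impostor)   # impostors wrongly accepted
    fr = sum(s < threshold for s in genuine)     # genuine users wrongly rejected
    return fa / len(impostor), fr / len(genuine)

def det_points(genuine, impostor):
    """Sweep the observed scores as thresholds to trace the DET curve
    (FAR vs. FRR); the EER is where the two error rates are roughly equal."""
    thresholds = sorted(set(genuine) | set(impostor))
    return [(t, *far_frr(genuine, impostor, t)) for t in thresholds]

# synthetic match scores with some overlap between the two classes
genuine = [0.9, 0.8, 0.45]
impostor = [0.5, 0.3, 0.1]
for t, far, frr in det_points(genuine, impostor):
    print(t, far, frr)
```

Plotting TPR (1 − FRR) against FAR over the same sweep yields the ROC curve, which is why the two curves are listed together as graphical evaluation methods.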
model of connecting the device to the gateway, there is an application on the desired gateway that acts as a communication interface. One of the protocols used in this model is Wi-Fi. The back-end data-sharing model is the extension of the device-to-cloud connection. In this connection, the user can use cloud data along with data from other devices and sources. Artificial intelligence continuously improves performance and decision-making capabilities and enhances the true potential of IoT. Artificial intelligence, or specifically machine learning, is an integral part of motion recognition. In general, the wearable IoT has many applications in motion recognition. We have tried to examine some of the algorithms and techniques available in these applications briefly. Motion recognition by IoT-based wearable sensors has applications in health, gaming, sports, safety, and so on. The health wearable IoT device is mainly used for remote patient monitoring, treatment, and, in some cases, rehabilitation purposes. Sensors such as blood pressure, temperature, accelerometer, and heart rate monitors collect health-related data, and the user/patient's health information will be sent to the Internet for further analysis. In many applications, wearable devices are connected to smartphones to analyze the collected data and then transmit it to a cloud computing-based framework, such as Microsoft Azure or Amazon Web Services (AWS), in order to store, process, and analyze the data [3]. Detection and prevention of falls are other applications of wearable sensors based on the IoT. To be able to detect falls, usually, inertial sensors such as a gyroscope or accelerometer are used. The fall detection system must be fast enough to detect a fall quickly to be beneficial. However, in order to detect fall events accurately and minimize FPs, the fall detection system must differentiate between a fall and other daily activities [3]. Machine learning algorithms, such as SVM, along with other motion recognition steps, such as feature extraction and feature selection, can be used to detect falls from raw sensor data, and this is one of the important issues of artificial intelligence related to the IoT. Other applications have similar conditions, but we tried to examine the most used applications. In general, the benefits of IoT, by adding human-like awareness and decision-making using machine learning algorithms, can lead to increased efficiency and improved motion recognition.

V. SCOPE FOR FUTURE RESEARCH

Usually, to be able to provide a scope for future research in any scientific subject, we must fully understand the challenges and opportunities in that field. There are many public challenges in the field of human body motion recognition by wearable devices. Challenges such as power consumption or battery life, ergonomic designs, user safety from wireless transmission radiation, miniaturization, memory capacity, privacy, security issues, training the end user to use these devices and trust them, equipment flexibility, cheap and affordable price, user comfort, wearability issues, and reliability are commonly raised when commercializing products. The discussion of creating standards in user interfaces and related application updates is also somehow included in this category. Dealing
TABLE XVII
SENSORS
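As a toy illustration of the accelerometer-based fall detection discussed in Section IV, the following pure-Python sketch flags windows whose acceleration magnitude exceeds an impact threshold; the threshold value and the samples are illustrative, and real systems combine such cues with learned classifiers such as SVM to reject other high-impact daily activities.

```python
import math

def magnitude(sample):
    """Euclidean norm of a triaxial accelerometer sample (in g)."""
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(window, impact_threshold=2.5):
    """Flag a window of (ax, ay, az) samples as a candidate fall if any
    sample's acceleration magnitude exceeds an impact threshold (in g).
    A simple baseline only; posture and inactivity checks would follow."""
    return any(magnitude(s) >= impact_threshold for s in window)

walking = [(0.1, 0.2, 1.0), (0.0, 0.3, 1.1), (0.2, 0.1, 0.9)]
fall = [(0.1, 0.2, 1.0), (1.5, 2.0, 2.2), (0.0, 0.0, 0.1)]
print(detect_fall(walking))  # False
print(detect_fall(fall))     # True
```

In a learned pipeline, per-window features such as the peak magnitude computed here would feed the feature extraction and classification steps of the movement classification chain instead of a fixed threshold.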
with such commercial or public challenges should be the responsibility of economists, marketers, researchers, managers of famous companies in this field, and even governments. By addressing each of these commercial or public challenges, researchers and students can help solve a societal problem by providing a solution to the challenge. Our goal is not to deal with these types of challenges, and we have another intention in providing the scope for future research; however, we will briefly provide examples that address some of these challenges. You can find solutions for other challenges in different papers. Since information security and privacy are among the most important challenges in this field, governments must consider strict laws against stealing information from wearable devices and implement security policies. As another example, in the field of power consumption or battery life, Bluetooth low energy has been proposed instead of Bluetooth in mobile phones and
has not been very successful, but, with the progress in the semiconductor industry, integrated circuits with lower power consumption can be produced. Also, energy harvesting technology provides additional means to extend battery life [177]. Our goal in this section is to address the technical challenges of identifying the human body's movement. In general, data collection from different people is very time-consuming and challenging, especially since the ground-truth data need to be collected for the training of the supervised classifiers. There are various issues related to the sensors that should be addressed, such as accelerometer bias [178], magnetometer dysfunction in the presence of an intrusive magnetic field, and the loss of GPS signals. The presence of noise in the output of sensors is challenging because the presence of noise generally affects the recognition performance. In the paper [97], it is stated that removing the noise from the corresponding wearable sensors has improved the classification performance of walking on the stairs. It is stated in the paper [144] that the presence of noise in the output of the accelerometer generally causes problems in identifying the phases of gait. In general, it can be concluded from the studied papers that noise disrupts the motion recognition process and weakens performance by reducing recognition accuracy. The motion recognition process should be robust to noise. Generally, a higher signal-to-noise ratio will provide better results. The noise of the sensors and the data noise, in general, deteriorate the classification performance because the machine learning algorithm or any other classification algorithm can identify the noise as a pattern, so misleading generalizations begin and eventually cause the false identification of patterns. Classification accuracy reduction is only one of the problems that noise will cause; complicating classification, overfitting, increasing training time or maybe the whole system's execution time, and so on are other problems caused by the noise. In this section, we want to talk a little more about the concept of noise in data preprocessing because the filtering action, which is generally mixed with the concept of noise, is the most widely used preprocessing

For example, the output of three sensors, an accelerometer, a gyroscope, and a magnetometer, is affected by bias, scale factor, and white noise. As stated, motion artifacts generally change the performance of the sensors and occur when the user's movements affect the placement of the sensors or other factors related to the sensors. The next source, which is perhaps less mentioned than these two, is the environmental factors, examples of which were mentioned a little earlier for GPS and the magnetometer. Although these three cases are the most famous causes of noise in wearable sensors, the main challenge for researchers is to fully understand the causes of noise in the sensors they choose for motion recognition. The next challenge is choosing the right filter, which is somehow related to the recognition of the sensor noise. According to the studied papers, the filters that are used to remove the noise are a notch filter, a linear Kalman filter, an EKF, an infinite impulse response filter, a finite impulse response filter, a high-pass filter, a bandpass filter, a low-pass filter, a median filter, and so on. The need to produce preprepared datasets specific to GR is strongly felt. In the field of GR, it is necessary to collect sign language datasets, publish the datasets, and make them available to researchers for further research. Activity recognition through human walking has led to the production of many datasets, which shows the importance of human walking for activity recognition and GA; this is because human walking is a basic activity of daily living, and human walking or gait is defined as a particular way or manner of moving on foot [179], and it has many applications in health monitoring, sports, rehabilitation, video surveillance, and so on. If we try to provide examples, we must announce that, according to the studied papers, human walking activity recognition has many medical applications in the fields of poststroke rehabilitation, detecting gait abnormality, Parkinson's disease rehabilitation, FoG (freezing of gait) detection, analyzing neuropathy disorders, pathological gait assessment, walking stability detection in older people, postural stability analysis, postinjury rehabilitation, and so on, so there is a need for the data collection to be more
action. Eliminating noise, in general, is challenging, so we application-specific, which means that researchers collect data
try to define the challenges related to the concept of noise in related to specific diseases, sports, and so on. Data fusion for
this field to some extent. Noise is present in all the wearable different sensors from different categories is very challenging.
sensors presented in this article. For example, there is noise Regardless of the specific model, the challenges in this field
in the output of EEG, accelerometer, gyroscope, EMG, EOG, should be identified. Challenges related to data fusion mainly
and other sensors [31], [88], [96], [104], [114], [144]. In the include data association and management, sensor uncertainty,
paper [88], it is stated that, in general, the reading in IMU dynamic system modeling, and system validation [108], [111].
is noisy due to environmental noises, self-occlusions, reduced They arise from the inherent uncertainties in the sensory
accuracy due to fast movements, and so on in data collection. information, which are caused by not only device imprecision
In the paper [104], it is stated that noise should be removed but also noise sources within the system and the sensor itself
from the EMG sensor, and a common problem in sEMG is [108]. One of the examples of the uncertainty of sensor data
motion artifact that produces low-frequency noise. This type can be missing data. Target environments and natural behav-
of noise is caused by the movements of the muscles under ioral conditions can be responsible for these challenges, too,
the skin, and the movement of the electrode relative to the especially in system validation challenges [111]. The strategies
skin is another reason. There are various sources of noise in of data fusion should be capable of dealing with these uncer-
wearable sensors. Therefore, identifying noise sources in the tainties and result in a consistent perception efficiently [108].
output of different sensors is a challenging task, and it is very A proper data fusion mechanism or strategy is expected to
important to deal with it. In the studied papers, the presence of reduce overall sensory and even nonsensory uncertainties and,
intrinsic noises of sensors and motion artifacts, respectively, thus, serve to increase the accuracy of system performance and
has been challenging for researchers. Intrinsic noises are find the optimal structure for the structure of the recognition
the noises that exist in the output model of the sensors. system [106], [108]. Perhaps, one of the most important factors
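To make the filtering step discussed above concrete, the following Python sketch applies a median filter followed by a first-order low-pass (IIR) filter to a synthetic noisy accelerometer trace. This is a minimal illustration, not code from any of the surveyed papers; the sampling rate, window length, and smoothing constant are assumed values chosen for the example.

```python
import math
import random

def median_filter(signal, k=5):
    """Slide a window of length k over the signal and keep the median."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sorted(window)[len(window) // 2])
    return out

def lowpass_iir(signal, alpha=0.15):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    y = [signal[0]]
    for x in signal[1:]:
        y.append(y[-1] + alpha * (x - y[-1]))
    return y

random.seed(0)
t = [i / 100.0 for i in range(500)]                      # 5 s at an assumed 100-Hz rate
clean = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]   # slow 1-Hz "movement" component
noisy = [c + random.gauss(0, 0.3) for c in clean]        # additive wideband sensor noise

filtered = lowpass_iir(median_filter(noisy, k=5), alpha=0.15)

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
print(mse(noisy, clean) > mse(filtered, clean))  # filtering should reduce the error
```

In a real pipeline the cutoff would be matched to the band of the movement of interest (e.g., gait frequencies), and a notch or bandpass filter would be substituted where the noise is narrowband.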
15290 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023
Perhaps one of the most important factors in the optimality of the structure, and in choosing the best strategy, is the recognition delay, which affects real-time performance [106]. The next challenge is choosing between data fusion algorithms that can be used at the same level. Many factors influence this choice, but we will try to answer this challenge with an example. Although we have addressed this issue to some extent in the explanation of the previous challenge, in this example, we evaluate the performance of the Kalman filter and the EKF. These two algorithms have not been used on a common system in the studied papers (see Table IV), but an indirect comparison of their performance is still useful. In general, to compare the performance of estimation algorithms for data-level fusion on a shared system, their accuracy on the system should usually be considered first, followed by issues such as computational load and ease of implementation. These considerations can be generalized to the selection of fusion algorithms at all three levels. In any case, prechecking a series of issues related to the algorithms will be effective in choosing them. The Kalman filter is a recursive estimator and is utilized in many engineering applications. Traditional Kalman filters need an accurate linear model of both the system dynamics and the observation process to be optimal in the least-mean-squared-error sense [108]. The main advantages of the Kalman filter are its computational efficiency and ease of implementation; its main limitations are its restriction to linear and Gaussian assumptions and its low accuracy [108]. EKFs linearize the system model using Taylor series expansions around a stable operating point [108] and thus overcome the linearity limitation of the Kalman filter. The main advantages of the EKF are computational efficiency, intuitiveness, ease of use, and stability in practical estimations; its main limitations are being restricted to Gaussian noise and requiring the system and measurement models to be differentiable [108]. Therefore, when choosing an algorithm for data fusion at any of the three levels, it is necessary to know the advantages and disadvantages of that algorithm, or even its structure.

According to the statistical analysis in this article, the Kalman filter and its nonlinear derivatives are an important part of data-level fusion algorithms; although various algorithms are reported in Table IV for data-level fusion, the Kalman filter is by far the most widely used. To the list of algorithms in that table, one can also add the complementary filter, a data-level fusion method that consists of a low-pass filter and a high-pass filter and is widely used in attitude estimation. We should therefore also say a little more about how these filters are used in movement classification. In general, the Kalman filter is often used to fuse accelerometer and gyroscope information to provide better estimates; one example is the use of the KF to detect postural sway during quiet standing (standing in one spot without performing any other activity or leaning on anything) [111]. For biomechanical modeling, the Kalman filter can be used to estimate the states [111]. Therefore, fusing accelerometer, gyroscope, and magnetometer data to obtain the related orientations and angles can provide comprehensive information for movement classification. By placing the mentioned sensors at different locations on the body and using Kalman filters to obtain orientation-related quantities, such as quaternions and Euler angles, different activities can be recognized. Since the authors of the papers studied in Table IV have not fully described the linear or nonlinear models used for Kalman-filter data fusion, we present two models for fusing the mentioned sensors that are, in general, used in movement classification. The authors of those papers have only mentioned the general names of the algorithms, and we have shown these names in the relevant table to respect them. In the paper [180], a quaternion-based EKF is developed for determining the orientation of a rigid body from the outputs of a sensor configured as the integration of a triaxis gyroscope and an aiding system mechanized using a triaxis accelerometer and a triaxis magnetometer. The suggested applications are studies in the field of human movement. In the proposed EKF, the quaternion associated with the body rotation is included in the state vector together with the bias of the aiding system sensors. Moreover, in addition to the in-line procedure of sensor bias compensation, the measurement noise covariance matrix is adapted to guard against the effects that body motion and temporary magnetic disturbance may have on the reliability of measurements of gravity and the Earth's magnetic field, respectively [180]. Another version of the quaternion-based Kalman filter can be found in the paper [181]. The paper [182] presents a successful design of a wearable device to monitor walking patterns. It offers a low-cost wearable fitness monitoring device utilizing a six-axis IMU embedding a three-axis gyroscope and a three-axis accelerometer. The Kalman filter is employed to provide reliable angle measurements that, in turn, are used to estimate the stride length. In that article, a linear Kalman filter is used to measure foot angles; the system states in the linear Kalman model are the angle and the bias value of the gyroscope, and the measurement model consists of the angle obtained from the accelerometer.

In the process of signal segmentation, factors should be considered when choosing the window length so that feature extraction is done well and the system does not suffer from delays. However, it is better to examine the performance of different segmentation algorithms to identify the effective factors in choosing a better segmentation algorithm. Comparing the performance of the classifier under different segmentation algorithms is one of the factors in choosing the proper segmentation algorithm [109], [114]. In the paper [109], a segmentation algorithm is devised and its performance is compared with sliding windows under different approaches; it performs better than sliding-window segmentation in terms of precision and recall. The paper [114] investigates the performance of two segmentation algorithms, i.e., sliding-window and head-based segmentation, using the performance of an SVM classifier. Examination of the classification results shows that, with the head-based segmentation scheme, the precision and recall percentages increase. It is also reported that this algorithm is computationally lightweight.
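The two-state linear Kalman model described above for [182] (state: angle and gyroscope bias; measurement: accelerometer-derived angle) can be sketched in a few lines of Python. This is a minimal illustration, not the implementation from [182]: the noise covariances (q_angle, q_bias, r_acc) and the simulated sensor signals are assumed values.

```python
import random

def kalman_step(x, P, gyro_rate, z_acc, dt, q_angle=1e-4, q_bias=1e-6, r_acc=0.05):
    """One predict/update cycle of a two-state KF.
    State x = [angle, gyro_bias]; measurement z_acc = angle from the accelerometer."""
    angle, bias = x
    # Predict: integrate the bias-corrected gyroscope rate.
    angle += dt * (gyro_rate - bias)
    # Covariance prediction for F = [[1, -dt], [0, 1]] plus process noise Q.
    P00 = P[0][0] - dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q_angle
    P01 = P[0][1] - dt * P[1][1]
    P10 = P[1][0] - dt * P[1][1]
    P11 = P[1][1] + q_bias
    # Update with the accelerometer angle (H = [1, 0]).
    S = P00 + r_acc
    K0, K1 = P00 / S, P10 / S
    y = z_acc - angle
    x_new = [angle + K0 * y, bias + K1 * y]
    P_new = [[(1 - K0) * P00, (1 - K0) * P01],
             [P10 - K1 * P00, P11 - K1 * P01]]
    return x_new, P_new

# Simulate a stationary foot: true angle 0.5 rad, gyroscope with a 0.02-rad/s bias.
random.seed(1)
dt, true_angle, true_bias = 0.01, 0.5, 0.02
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(3000):
    gyro = true_bias + random.gauss(0, 0.01)   # rate = 0 plus bias plus noise
    z = true_angle + random.gauss(0, 0.05)     # noisy accelerometer angle
    x, P = kalman_step(x, P, gyro, z, dt)
print(x)  # estimates should approach the true angle (0.5) and bias (0.02)
```

The quaternion-based EKF of [180] follows the same predict/update pattern with a larger state vector and a linearized measurement model.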
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15291
However, the lack of adaptation to different head movements while reading (given that the head should be down while reading) is one of the limitations of this method, especially for short reading sequences. In the paper [115], one of the important factors in choosing a signal segmentation algorithm is the computational load of the algorithm. For performance comparison, three algorithms commonly used for segmentation were implemented and applied to the same dataset: 1) SAX; 2) SWAB; and 3) a GA-based approach. A table reports the CPU time required by each of the algorithms, and their suitability for real-time execution is compared. According to the above literature review, computational load and overall recognition performance are the main criteria for choosing the superior segmentation algorithm.

One of the most important technical challenges can be the feature extraction step. This step imposes the need for feature selection/feature reduction steps and is very time-consuming. As mentioned earlier, one option is to adopt deep learning classification algorithms, because they eliminate the need to extract features manually. Perhaps another reason that increases the need to remove this step is that the extracted features may only perform well in a specific application; they are somehow application-specific. The next challenge is choosing the algorithm or method of feature extraction. Although the feature extraction section tries to explain the feature extraction methods fully, it is worth comparing them, because choosing between these methods is also challenging. We address this issue with an example. Handojoseno et al. [31] investigated the EEG features determined by both Fourier and wavelet analysis in the confirmation and prediction of FOG. In this study, they attempted to find discriminating features by investigating the performance of Fourier-based features and their counterparts in the wavelet domain. This article thus compares the Fourier and wavelet feature extraction methods and gives reasons for the superiority of the WT. Over the past few decades, wavelet analysis has been developed as an alternative and improvement to Fourier analysis. Its main advantage in analyzing physiological systems is its capability to detect and analyze nonstationarity in signals and aspects such as trends, breakdown points, and discontinuities, since wavelets are localized in both the time and frequency domains [31]. They even declared that the continuous WT has a better frequency (scale) representation than the discrete WT. The sensitivity, specificity, accuracy, and area under the ROC curve of the classification system were calculated by the authors to measure the performance of the features and feature extraction methods, and the methods were compared by reporting the classification results. In this article, computational time was also discussed as a comparative measure of the performance of the two feature extraction methods; examining this criterion, the authors declared that the continuous WT has limitations for practical use. Thus, general classification performance, computational cost and time, and suitability for the nature of the data are the main criteria in choosing feature extraction methods.

It can be said that feature selection is also very challenging, and the main challenge of feature selection is choosing the optimal feature subset, which is very difficult and tiring. To avoid this complicated search operation, three types of feature selection methods were generally introduced. To choose the type of feature selection method, many factors should be considered, which will be discussed in general. First, however, it should be noted that, if we already have a feature selection algorithm in mind, regardless of its method type, we must first know whether that algorithm is useful for classification, because some feature selection algorithms are useful exclusively for regression or clustering and are not useful for classification [129]. For example, algorithms such as Relief or mRMR are used in both classification and regression, whereas the information gain algorithm is only used in classification. This issue should be considered especially in the selection of filter methods. Although a feature selection algorithm specific to regression or clustering may also be usable for classification, caution must be observed; this caution can act as a catalyst that speeds up the work. For comparing the performance of the methods, several datasets should be employed, aiming at reviewing the performance of the three method types in the presence of a growing number of irrelevant features, noise in the data, redundancy, interaction between attributes, and a small ratio between the number of samples and the number of features [130]. Finally, one should report which algorithm's classification accuracy or, in general, which type of algorithm's performance is better. Because there is no silver-bullet method [129], it is possible to state the advantages and disadvantages of all three types in general. The advantages of filter methods are independence from the classifier, lower computational cost than wrappers, speed, and good generalization ability; their main disadvantage is having no interaction with the classifier. An embedded method interacts with the classifier, also has a lower computational cost than wrappers, and captures feature dependencies, but its feature selection is classifier-dependent. Wrapper methods are like embedded methods in terms of interaction with the classifier and capturing feature dependencies, but they have high computational costs, they carry an overfitting risk in classification, and their feature selection is classifier-dependent, too [130]. Future research should focus on optimizing the efficiency and accuracy of the feature subset search strategy by combining the best filter and wrapper methods earlier to produce hybrid methods. Most research tends to focus on a few datasets on which its methodology works; larger comparative studies should be pursued in order to obtain more reliable results [129].

In the feature reduction step, it is still necessary to apply different feature reduction algorithms to the preprepared dataset and compare their classification performance. According to the table related to the feature reduction step, this issue has been addressed in only a few papers. These papers express the classification results for different feature reduction algorithms in terms of accuracy, sensitivity, specificity, recall, precision, and so on, and compare the performance of these algorithms. From reviewing all of these papers, it can be understood that the feature reduction algorithm should increase the recognition accuracy and reduce the computational complexity [156], [159].
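To illustrate the filter-type feature selection methods discussed above (a minimal sketch, unrelated to any specific surveyed paper), the following Python snippet ranks features by the absolute Pearson correlation between each feature column and the class label on a small synthetic dataset; the data and the top-k choice are assumptions made for the example.

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def filter_select(X, y, k):
    """Filter method: score each feature independently of any classifier
    and keep the indices of the k highest-scoring features."""
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)[:k]

# Synthetic data: feature 0 tracks the label, features 1-2 are pure noise.
random.seed(2)
y = [i % 2 for i in range(100)]
X = [[yi + random.gauss(0, 0.3),   # informative feature
      random.gauss(0, 1.0),        # irrelevant feature
      random.gauss(0, 1.0)]        # irrelevant feature
     for yi in y]

print(filter_select(X, y, k=1))  # the informative feature (index 0) should win
```

A wrapper method would instead retrain a classifier for each candidate subset, which is exactly the computational cost the filter approach avoids.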
Thus, the best feature reduction algorithm is one that reduces the computational load while increasing the classification accuracy. The algorithm must also be compatible with the type and complexity of the data; these factors are especially useful in choosing between linear and nonlinear algorithms. These are the main factors for comparing the performance of feature reduction algorithms. It is recommended to become familiar with the dos and don'ts of your movement classification problem and then choose the best algorithm by examining the advantages and disadvantages of the candidates. The paper [156] states that the feature reduction methods most often combined with machine learning classifiers are unsupervised feature reduction methods (e.g., PCA) and DA feature reduction methods (e.g., LDA). However, there are limitations in using these methods for classification problems; for example, the eigenvectors extracted by PCA are not robust to variations in the durations of subjects' activities, and at most C - 1 (number of classes minus one) features can be produced by DA feature reduction methods. That is, DA feature reduction performs poorly on high-dimensional classification problems. However, PCA is a suitable algorithm from the point of view of computational load, and LDA is generally considered an easy algorithm. To become more familiar with the feature reduction algorithms, we analyze the remaining algorithms listed in the feature reduction section (the main ones are explained in the feature extraction section). CPCA stands for common PCA [156]; it is a generalization of ordinary PCA: the latter works on only one group or dataset, whereas CPCA applies to several datasets or groups. Nonparametric weighted feature extraction (NWFE) is a feature extraction or feature reduction method that assigns every sample a different weight and defines nonparametric between-class and within-class scatter matrices to find a linear transformation that maximizes the nonparametric between-class scatter and minimizes the nonparametric within-class scatter [156]. As said before, the main disadvantage of DA feature extraction is that at most C - 1 features can be extracted; to solve this problem, NWFE was developed to obtain more than C - 1 features and to deal with high-dimensional classification problems [156]. Kernel PCA (KPCA) and KDA are nonlinear counterparts of PCA and LDA, respectively; these algorithms extend the mentioned algorithms with kernel techniques. In the paper [156], combined feature extraction methods are used, namely PCA + LDA, NWFE + PCA, and NWFE + LDA. The authors compared the recognition performance of the six feature reduction methods, i.e., PCA, LDA, NWFE, PCA + LDA, NWFE + PCA, and NWFE + LDA, once the optimal dimensions of each of the feature reduction schemes were estimated; the algorithms were also examined from the point of view of computational time. In the paper [60], 1-D local binary patterns (1-D-LBPs) were employed in order to extract relevant features. 1-D-LBP is based on LBPs: all values in the 1-D signal are compared with their neighbors, and the histograms of the comparison results, or statistical features extracted from those histograms, are used as features. Locality-preserving projections (LPPs) are linear projective maps that arise from solving a variational problem that optimally preserves the neighborhood structure of the dataset. LPP should be seen as an alternative to PCA [183]; since LPP is derived by preserving local information, it is less sensitive to outliers than PCA [183]. Canonical correlation analysis (CCA) summarizes the data correlation into fewer statistics while preserving the main aspects of the relationships. The motivation for CCA is very similar to that of PCA; however, in the latter, each new variable represents the maximum variance in the individual dataset, whereas, in CCA, the new variable is constructed for both sets of data such that the correlation between the two resulting new variables is maximized [159]. MRMI-SIG is an optimal data-class separator that can be used as a linear feature reduction algorithm; the method uses a nonparametric estimation of Renyi's entropy for feature reduction by maximizing an approximation of the mutual information between the class labels and the reduced features [184].

In the classification step, there are many challenges, such as the null class problem (the presence of various activities that do not belong to the set of desired activities) [185], class imbalance (an unequal distribution of classes in the training data), interclass similarity, intraclass variability, overfitting, underfitting, and computational complexity, that must be addressed seriously. Interclass similarity is a challenge caused by classes that are fundamentally different but show very similar characteristics in the sensor data [51]. Intraclass variability occurs when the same activity is performed differently by different individuals [51]. One of the main challenges in classification is analyzing and comparing the performance of classification algorithms. Classical classification algorithms are generally simpler than machine learning algorithms but perform worse and, according to the statistical results, are not comparable to machine learning algorithms in terms of usage. Specific disadvantages can also be found for these algorithms; we present some of the disadvantages of the most used classical algorithms. When threshold-based algorithms are used for multiclass problems, it is very difficult to find the threshold values. For correlation-based algorithms, it should be stated that, in general, a correlation-based search cannot provide information about why a relationship is found. Thus, to investigate this challenge, it is better to first compare the different types of machine learning algorithms; finally, we state a general rule for comparing the overall performance of all classification algorithms. We try to express the main advantages and disadvantages of each type of machine learning algorithm. As we stated, the most widely used type of classification algorithm is the supervised algorithm, so it is useful to know the advantages and disadvantages of these algorithms. One advantage is that we can choose the labels carefully and, as a result, easily determine the number of classes. Considering that we know the data well along with their labels, these algorithms are usually more accurate, especially compared with unsupervised algorithms. These algorithms also have disadvantages; the main one is ground-truth annotation [54].
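As an illustration of the 1-D-LBP scheme described earlier in this section (a minimal sketch; the window size and the toy signal are assumptions, not the configuration used in [60]), each sample is compared with its p nearest neighbors, the comparison bits form a code, and the normalized histogram of codes serves as the feature vector:

```python
def lbp_1d(signal, p=4):
    """1-D LBP: compare each sample's p neighbors (p/2 on each side) with the
    sample itself; neighbors >= center contribute a 1-bit, building a p-bit code."""
    half = p // 2
    codes = []
    for i in range(half, len(signal) - half):
        center = signal[i]
        neighbors = signal[i - half:i] + signal[i + 1:i + half + 1]
        code = 0
        for bit, value in enumerate(neighbors):
            if value >= center:
                code |= 1 << bit
        codes.append(code)
    return codes

def histogram(codes, p=4):
    """Normalized histogram over the 2**p possible codes: the feature vector."""
    h = [0] * (2 ** p)
    for c in codes:
        h[c] += 1
    total = len(codes)
    return [count / total for count in h]

signal = [0, 1, 0, 1, 0, 1, 0, 1]        # a fast alternation as a toy input
features = histogram(lbp_1d(signal, p=4))
print(features)
```

Statistical summaries of this histogram (entropy, energy, and so on) can replace the raw histogram when a lower-dimensional feature vector is needed.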
Ground-truth annotation is an expensive and tedious task, as the annotator has to do the annotation in real time or skim through the raw sensor data and manually label all instances. Motion data recorded from motion sensors, such as an accelerometer or gyroscope, are often more difficult to interpret than data from other sensors, such as cameras [54], and in daily life settings, ground-truth annotation can be much more difficult [54]. In addition, more computation time is needed for training. Unsupervised methods generally do not have the labeling problem and can be used when labeled data are scarce or unavailable; however, these algorithms are less accurate than supervised algorithms. With these algorithms, it is not possible to comment accurately on the relationship between input and output, and the number of classes is not known in advance, which creates some confusion. Probabilistic machine learning algorithms can be supervised or unsupervised; the naïve Bayes algorithm is a supervised probabilistic algorithm, and the GMM is an unsupervised one. The main advantage of these algorithms is that they express uncertainty, which other algorithms are unable to do; their main disadvantage is that, due to their probabilistic nature, they require many assumptions that may not always hold. The main advantage of rule-based classification algorithms is that they are easily interpreted because they are close to human logic; however, providing a list of relevant rules is very difficult and requires experience and skill. Reinforcement learning algorithms do not require labeled data, which is one of their advantages over supervised algorithms [54]. These algorithms are used to solve more complex classification problems, for example, finding the best structure for a neural network; however, they often have high computational complexity. Combined algorithms are, in a sense, the future of machine learning algorithms because, by adding the capabilities of one classifier to another, many of the flaws and disadvantages of the other types of classification algorithms can be avoided. For example, combining different algorithms leads to semisupervised algorithms, which relieve the labeling burden of supervised algorithms and classify well with only a few labeled data [54]. The difficulty here is that we must know the structure of the algorithms we want to combine, choose compatible algorithms, and know which defect of each algorithm we want to solve by combining them; this requires expertise, time, and possibly trial and error. So far, we have listed some factors that must be considered when choosing a classification algorithm and when comparing the performance of different classification algorithms. Now, we state a rule that makes it possible to analyze classification performance in general: for good classification performance, a classification algorithm must be robust to the factors that affect classification. These factors are numerous and include class imbalance [153], noise in the data, and different distributions of training and test data [26]; robustness increases the generalization ability of the classifier, and a robust algorithm can achieve higher accuracy.

Now, we discuss the challenges related to the concepts of the evaluation section. First, we examine which criteria are important for choosing the evaluation method. In the paper [103], fivefold cross-validation (5-fold CV) and leave-one-participant-out cross-validation (LOPOCV) are considered the two evaluation methods widely used for recognition. The authors announced that they focused on the LOPOCV evaluation results because it is usually more difficult to obtain good recognition results when the subject's signals are not involved in the training set. The recognition accuracy of these two methods was considered, and the authors announced that, according to the results, LOPOCV is the more suitable evaluation method for their application. The paper [120] also mentions the problem of overfitting the classifier, which should be considered when choosing the evaluation method; a good evaluation method can help avoid this problem. Tahafchi and Judy [124] stated that all the algorithms used in the motion recognition steps, including cross-validation algorithms, must be accurate and reliable. They also stated that one of the criteria for choosing cross-validation algorithms is that they should work well with imbalanced data and that stratified K-fold cross-validation is helpful for imbalanced datasets. In the paper [151], classifiers were trained and tested using two protocols (a user-specific training protocol and leave-one-subject-out validation). Recognition accuracy was significantly higher for all algorithms under the leave-one-subject-out validation process; because of the larger training sets, this protocol may have resulted in more generalized and robust activity classifiers, whereas the markedly smaller training sets used for the user-specific training protocol may have limited the accuracy of the classifiers. Another important issue is the computational load; for example, if two algorithms have almost the same performance in terms of accuracy, recall, precision, and so on, then the algorithm with the lower computational load should be selected. From the overview of the papers presented above and the papers in Table XIII, we can now select the criteria for choosing the best evaluation method: low computational load, robust and generalizable recognition performance, dealing with classifier overfitting, suitable performance on imbalanced data, and so on.

The next challenge is choosing the desired metric to evaluate the performance of the classifier. As we mentioned, the evaluation metrics are categorized into three different types: threshold, probabilistic, and ranking metrics. Graphical metrics, such as the confusion matrix and the ROC curve, have also been examined in the evaluation section, but they can be considered methods for obtaining the metrics. An overall comparison of metrics makes it easy to choose between them, so it is worth first making a practical comparison between the three types of evaluation metrics and introducing useful metrics for each application. All these types of metrics are scalar and present the performance using a single score value [153]. They are mostly used in three different evaluation applications [153]. First, the evaluation metrics are used to evaluate the generalization ability of the trained classifier. Second, they are used to select the best classifier among different types of classifiers. Third, they are employed to discriminate and select the best solution among all generated solutions during training [153].
15294 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023
three types of metrics can be used [153]. However, only a few types of metrics can be employed in the third one [153]. The third application is less common in motion recognition. For more familiarity with the metrics that can be used in the third application, refer to [153], which has offered factors for the construction of new metrics for use in that application. These factors, more or less, can be used even to compare and select metrics in all three applications.

Before dealing with these factors, we intend to analyze the performance of some of the most frequently used metrics. Before starting, we should point out that most of the evaluation metrics are made for binary classification and, through modifications, can be extended to the multiclass mode as well. Accuracy evaluates the classification quality as the percentage of correct predictions over all instances. The error rate is the complement of accuracy and evaluates the classification by its percentage of incorrect predictions [153]. Sensitivity measures the fraction of positive patterns that are correctly classified [153]. Specificity does the same for negative patterns. Precision measures the ratio of correctly predicted positive patterns to all patterns predicted as positive [153]. Recall measures the rate of correctly predicted positive patterns. The F-score is the harmonic mean of the recall and precision values [153]. These are the best-known threshold metrics. We also introduce some probabilistic metrics. The MSE is a measure of the difference between the predicted and actual solutions; the smaller the MSE value, the better the classification results [153]. The RMSE is the square root of the MSE. The area under the ROC curve, known as the AUC, is one of the most famous examples of the ranking type of evaluation metrics. Unlike the other two types of metrics, this value shows the overall ranking performance of a classifier [153].

Now that we are familiar with a few evaluation metrics of classification performance, we cite the advantages and disadvantages of some metrics and then provide the general reasons that can be used to choose the desired metric. According to [153], accuracy and error rate are easy to compute, applicable to multiclass and multilabel problems, and easy for humans to understand. That article also states that accuracy has many weaknesses: for example, accuracy is not a good metric when dealing with an imbalanced class distribution and is biased toward the majority class. Another disadvantage of accuracy is that it produces less distinctive and less discriminable values. The MSE is also not suitable for working with imbalanced class data. The AUC is proven to be better than the accuracy metric for evaluating classifier performance, but it has a very high computational cost [153]. Factors can now be introduced for the comparison of existing metrics. It is better to choose a metric that can be used in multiclass problems and is not limited to binary classification. It is better to choose a metric that has a lower computational load. A good metric should not be biased toward the majority class and must work well on imbalanced data. Of course, the factors raised are largely general factors, and it is worth looking at the matter a little more technically. Another challenge is that learning methods that perform well on one metric may not perform well on other metrics; for example, SVM classifiers optimize accuracy, while neural networks optimize probabilistic metrics, such as RMSE and cross-entropy [154]. Therefore, after choosing one or more classifiers for our work according to the stated criteria, in addition to considering the above factors, it is better to know which metric each classifier optimizes best and to choose that metric to evaluate the classifier. Of course, a general answer can also be given to the question of which metric we should use by default when the correct evaluation metric is unknown: the paper [154] generally stated that RMSE might serve as a good general-purpose metric when a more specific optimization criterion is not known. With these factors in mind, one can choose a suitable metric for the classification problem.

Now, we want to discuss the data communication model, power efficiency, and propagation delay, since these concepts are very relevant for movement classification using wearable IoT. Familiarity with these concepts can also solve many existing challenges. Before dealing with these topics, we will talk about body sensor networks (BSNs), which are an inseparable part of wearable IoT. A BSN is a set of sensors connected to the body that together form a network and collect the necessary information. The BSN used in this field is usually wireless and can be considered a type of wireless sensor network (WSN). The BSN is an important component of the IoT [186]. First, we want to specify the data communication model in the BSN. The general architecture of a BSN consists of sensor nodes that are placed in the body to collect data and perform preliminary processing. The data are gathered by a sink node and then transmitted to a base station to be shared over the Internet [186]. This method of data communication, with a slight modification, is also presented in [187]; however, the entire structure has the same skeleton. Sensors are the key components of a BSN, as they connect the physical world with electronic systems. They are mainly used to collect information about the human body. Sensor nodes, which have a sensor as their main part, are responsible for processing information by format conversion, logical computing, data storage, and transmission. One sensor node generally comprises a sensor module, a processor module, a wireless communication module, and a power supply module. The sensor module is responsible for collecting the measurements and converting the data to electrical signals. The processor module is responsible for controlling the sensor node. The wireless communication module, consisting of the network layer, the MAC layer, and the wireless transceiver in the physical layer, is responsible for communication among sensors and computers. The power supply module is responsible for providing energy for the entire sensor node [186].

Nowadays, BSN research still faces many key technical challenges, such as energy consumption and service quality [186], [187]. Energy consumption and power efficiency are among the most important challenges in these networks; this was briefly mentioned at the beginning of this section, but now we intend to look at the issue a little more generally. BSNs can be battery-powered; they can also be powered by kinetic energy and heat [186].
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION 15295
Our energy resources are limited, so we try to explain the methods of reducing energy consumption and, thus,
improving power efficiency. First, we start with energy consumption reduction in BSN sensors. Low-power design is one of the main challenges for sensors in this network, and familiarity with the classification of sensors can be useful in reducing their energy consumption. According to the types of measured signals, sensors in BSNs can be divided into two categories. The first category, which collects signals continuously, includes accelerometers, gyroscopes, ECG sensors, EEG sensors, EMG sensors, and so on. The second category, including temperature sensors, humidity sensors, and so on, collects discrete-time signals. Usually, the first category's power consumption is higher than the second's. Therefore, among sensors that may measure similar signals for movement classification, it is better to choose the second type (of course, if energy consumption is the priority). Another possible way to reduce energy consumption is using the sleeping mode [186], [187]. The most commonly used sensors in BSNs can also be divided into the following three categories according to the types of data transmission media. Wireless sensors employ wireless communication technologies, such as Bluetooth or Zigbee, and radio frequency identification devices (RFIDs), to communicate with other sensors or devices. Wired sensors, employing wired communication technologies, can replace wireless sensors if wearability is not seriously affected; their transmission mode is more stable than the wireless one, but their installation and deployment are complicated. The third category is human-body communication (HBC) sensors, which use the human body as the transmission medium [186]. The latter can have lower power consumption and sensor node size than the first two, but it has been introduced in recent years and needs more time to settle [186].

In the design of sensor nodes, issues related to reducing power consumption can also be considered. In the sensor node design process, energy control and the reduction of sensor nodes can be considered to meet the demands of low power consumption [186]. Energy control has been one of the hot topics in the field of BSN sensors for the implementation of long-term monitoring functions. The low-power architecture design, the low-power processor design, the low-power transceiver design, and the energy acquisition design are preliminary research topics in energy control at present [186]. Reducing the number of sensor nodes, for example, the inertial sensors used for activity recognition, not only improves the wearability of the mentioned systems but also lowers the cost, saves energy, and so on. Principal methods to solve this problem are node placement optimization and the improvement of activity recognition algorithms [186]. In [186], it is also stated that data fusion techniques can reduce data redundancy and, thus, reduce the load and energy consumption of the BSN, with the advantage of extending the network lifetime. In the BSN communication section, the factors that can be addressed to improve energy efficiency and reduce power consumption include proper network topology, energy-efficient MAC and routing protocols, and so on [186], [187].

At the beginning of the discussions, we announced that service quality is one of the most challenging topics in BSNs. In BSNs, this concept is known as Quality of Service (QoS). QoS can generally be considered a description of overall network performance and can be characterized by packet loss possibility, available bandwidth, end-to-end delay, jitter, and so on. Examining the other QoS factors, such as jitter, available bandwidth, and packet loss, is not on the agenda of this article; addressing them would take us away from our main goal, and their specialized investigation belongs to the field of telecommunication engineering. Therefore, we briefly discuss only the end-to-end delay. In general, the end-to-end delay in the BSN is divided into four types: propagation delay, transmission delay, queuing delay, and processing delay. The time that it takes for the data to travel from the source to the destination is called propagation delay. The time that it takes for the data to be completely transmitted is called transmission delay. The time that data must wait in the buffer until the busy destination can check it is called queuing delay. The time that it takes a processor to process the data is called processing delay. These four delays make up the end-to-end delay. It is better to know the causes of each of these delays. General factors that cause propagation delay include the characteristics of the medium and environmental characteristics, such as humidity, pressure, temperature, signal disturbances, and so on. However, we will try to provide some more specialized examples. Propagation delay in electronic circuits or logic gates is one of the most obvious examples of this delay in BSNs [186]. Addressing the problem of propagation delay in the electrical circuits involved in a BSN, such as logic circuits and SRAM in microcontrollers, is one of the main concerns of electrical circuit designers [186]. In [188], the end-to-end delay was considered to include four types of delays: transmission delay, queuing delay, processing delay, and channel capture delay; here, channel capture delay corresponds to the propagation delay. This phenomenon occurs when a device on a shared medium takes possession of the medium for a significant period. In the mentioned paper, the authors have presented a relay-based routing protocol for in vivo BSNs. The proposed protocol is provided with linear programming-based mathematical models for network lifetime maximization and end-to-end delay minimization [188]. Therefore, we must know the various sources of propagation delay in the BSN and find a suitable solution for each of them. Now, we also announce the general factors causing the other delays. Factors such as transmission speed and bandwidth are effective in causing transmission delay. Factors such as bandwidth, data volume, and the type of queuing method are effective in causing queuing delay. The features of the processing device, the volume of data, and the complexity of the processing algorithms are the factors that cause processing delay.

Now that we are familiar with the technical challenges in movement classification, we must mention that recognizing complex activities (such as cooking and doing the dishes) is also a technical challenge that is beyond the scope of this article. Finding a solution for this challenge will also be a very suitable topic for future papers.

VI. CONCLUSION
In this article, we announced that wearable IoTs will be widely used in the future. Human motion recognition by
wearable sensors is investigated in this article. Since classification is an integral part of human body motion recognition, it can be claimed that movement classification is closely related to human motion recognition. Movement classification includes three subsections: GA, GR, and HAR. The goal is to first introduce the reader to the important steps of human body movement classification by wearable sensors and then determine the algorithms and methods used for each step using tables. To better understand the results of the tables, approximate numbers and percentages have been used. In some cases, bar charts have been used to visualize numerical results. By reading this article, the readers will be fully acquainted with the concepts in movement classification, will know the steps of conducting research along with commonly used algorithms, wearable sensors, IoT concepts, and future directions, and can carry out projects in the human motion recognition area.

APPENDIX
See Table XVII.

REFERENCES
[1] E. O. Thorp, Beat the Dealer: A Winning Strategy for the Game of Twenty-One. New York, NY, USA: Vintage Books, 1966.
[2] F. J. Dian and R. Vahidnia, "IoT use cases," in IoT Use Cases and Technologies. Vancouver, BC, Canada: BCIT, 2020, pp. 1–23.
[3] F. John Dian, R. Vahidnia, and A. Rahmati, "Wearables and the Internet of Things (IoT), applications, opportunities, and challenges: A survey," IEEE Access, vol. 8, pp. 69200–69211, 2020, doi: 10.1109/ACCESS.2020.2986329.
[4] N. Ahmad, R. A. R. Ghazilla, N. M. Khairi, and V. Kasi, "Reviews on various inertial measurement unit (IMU) sensor applications," Int. J. Signal Process. Syst., vol. 10, pp. 256–262, Jan. 2013, doi: 10.12720/ijsps.1.2.256-262.
[5] S. T. Pheasant, "A review of: 'Human walking'. By V. T. INMAN, H. J. RALSTON and F. TODD. (Baltimore, London: Williams & Wilkins, 1981.) [Pp. 154.]," Ergonomics, vol. 24, no. 12, pp. 969–976, Dec. 1981, doi: 10.1080/00140138108924919.
[6] I. H. López-Nava and A. Muñoz-Meléndez, "Wearable inertial sensors for human motion analysis: A review," IEEE Sensors J., vol. 16, no. 22, pp. 7821–7834, Nov. 2016, doi: 10.1109/JSEN.2016.2609392.
[7] K. L. Moore, A. F. Dalley, and A. M. Agur, Clinically Oriented Anatomy. Philadelphia, PA, USA: Wolters Kluwer Health, 2011.
[8] N. Alshurafa et al., "Designing a robust activity recognition framework for health and exergaming using wearable sensors," IEEE J. Biomed. Health Informat., vol. 18, no. 5, pp. 1636–1646, Sep. 2014, doi: 10.1109/JBHI.2013.2287504.
[9] A. Jordao, A. C. Nazare Jr., J. Sena, and W. R. Schwartz, "Human activity recognition based on wearable sensor data: A standardization of the state-of-the-art," 2018, arXiv:1806.05226.
[10] S. Mitra and T. Acharya, "Gesture recognition: A survey," IEEE Trans. Syst., Man Cybern., C, Appl. Rev., vol. 37, no. 3, pp. 311–324, May 2007, doi: 10.1109/TSMCC.2007.893280.
[11] A. N. Belbachir et al., "CARE: Dynamic stereo vision sensor system for fall detection," presented at the IEEE Int. Symp. Circuits Syst. (ISCAS), Seoul, South Korea, May 2012.
[12] P. Pierleoni et al., "A wearable fall detector for elderly people based on AHRS and barometric sensor," IEEE Sensors J., vol. 16, no. 17, pp. 6733–6744, Sep. 2016, doi: 10.1109/JSEN.2016.2585667.
[13] M. P. Lawton and M. E. Brody, "Assessment of older people: Self-maintaining and instrumental activities of daily living," Gerontologist, vol. 9, no. 3, pp. 179–186, 1969.
[14] C. Jobanputra, J. Bavishi, and N. Doshi, "Human activity recognition: A survey," Proc. Comput. Sci., vol. 155, pp. 698–703, Jan. 2019, doi: 10.1016/j.procs.2019.08.100.
[15] A. Das and M. B. Kjærgaard, "Activity recognition using multi-class classification inside an educational building," in Proc. IEEE Int. Conf. Pervasive Comput. Commun. Workshops, Austin, TX, USA, Mar. 2020, pp. 1–6.
[16] A. Mann and M.-F. Chesselet, "Techniques for motor assessment in rodents," in Movement Disorders, 2nd ed. Los Angeles, CA, USA: AP, 2015, ch. 8, pp. 139–157.
[17] T. Allevard, E. Benoit, and L. Foulloy, "Hand posture recognition with the fuzzy glove," presented at the Mod. Inf. Process., Theory Appl., 2006.
[18] B. H. Dobkin, "Wearable motion sensors to continuously measure real-world physical activities," Current Opinion Neurol., vol. 26, no. 6, pp. 602–608, Dec. 2013, doi: 10.1097/WCO.0000000000000026.
[19] S. Zhang et al., "Deep learning in human activity recognition with wearable sensors: A review on advances," Sensors, vol. 22, no. 4, p. 1476, Feb. 2022, doi: 10.3390/s22041476.
[20] W. Tao, T. Liu, R. Zheng, and H. Feng, "Gait analysis using wearable sensors," Sensors, vol. 12, no. 2, pp. 2255–2283, Feb. 2012, doi: 10.3390/s120202255.
[21] L. Dai, "C60 and carbon nanotube sensors," in Carbon Nanotechnology. Amsterdam, The Netherlands: Elsevier, 2006, ch. 15, pp. 525–575.
[22] N. Bhalla, P. Jolly, N. Formisano, and P. Estrela, "Introduction to biosensors," Essays Biochem., vol. 60, no. 1, pp. 1–8, Jun. 2016, doi: 10.1042/EBC20150001.
[23] A. Nag et al., "Graphene-based wearable temperature sensors: A review," Mater. Des., vol. 221, pp. 1–17, Jan. 2022, doi: 10.1016/j.matdes.2022.110971.
[24] H.-S. Kim, J.-H. Kang, J.-Y. Hwang, and U. S. Shin, "Wearable CNTs-based humidity sensors with high sensitivity and flexibility for real-time multiple respiratory monitoring," Nano Converg., vol. 9, no. 1, pp. 1–10, Aug. 2022, doi: 10.1186/s40580-022-00326-6.
[25] S. K. Ameri et al., "Imperceptible electrooculography graphene sensor system for human–robot interface," NPJ 2D Mater. Appl., vol. 2, no. 1, pp. 1–7, Jul. 2018, doi: 10.1038/s41699-018-0064-4.
[26] H. Wang, W. Yan, and S. Liu, "Physical activity recognition using multi-sensor fusion and extreme learning machines," in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Budapest, Hungary, Jul. 2019, pp. 1–7.
[27] S. Liu, R. X. Gao, D. John, J. W. Staudenmayer, and P. S. Freedson, "Multisensor data fusion for physical activity assessment," IEEE Trans. Biomed. Eng., vol. 59, no. 3, pp. 687–696, Mar. 2012, doi: 10.1109/TBME.2011.2178070.
[28] L. Utari et al., "Wearable carbon monoxide sensors based on hybrid graphene/ZnO nanocomposites," IEEE Access, vol. 8, pp. 49169–49179, 2020, doi: 10.1109/ACCESS.2020.2976841.
[29] M. Sharma, S. Kacker, and M. Sharma, "A brief introduction and review on galvanic skin response," Int. J. Med. Res. Professionals, vol. 2, no. 6, pp. 13–17, Dec. 2016, doi: 10.21276/ijmrp.2016.2.6.003.
[30] P. K. Baheti and H. Garudadri, "An ultra low power pulse oximeter sensor based on compressed sensing," in Proc. 6th Int. Workshop Wearable Implant. Body Sensor Netw., Berkeley, CA, USA, Jun. 2009, pp. 1–12.
[31] A. M. A. Handojoseno, J. M. Shine, T. N. Nguyen, Y. Tran, S. J. G. Lewis, and H. T. Nguyen, "Analysis and prediction of the freezing of gait using EEG brain dynamics," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 23, no. 5, pp. 887–896, Sep. 2015, doi: 10.1109/TNSRE.2014.2381254.
[32] R. E. Morley, E. J. Richter, J. W. Klaesner, K. S. Maluf, and M. J. Mueller, "In-shoe multisensory data acquisition system," IEEE Trans. Biomed. Eng., vol. 48, no. 7, pp. 815–820, Jul. 2001, doi: 10.1109/10.930906.
[33] R. H. Roberts and Y. L. Mo, "Development of carbon nanofiber aggregate for concrete strain monitoring," in Innovative Developments of Advanced Multifunctional Nanocomposites in Civil and Structural Engineering, 1st ed. Swaston, U.K.: Woodhead Publishing, 2016, ch. 2, pp. 9–45.
[34] J. Cong, J. Jing, and C. Chen, "Development of a PVDF sensor array for measurement of the dynamic pressure field of the blade tip in an axial flow compressor," Sensors, vol. 19, no. 6, p. 1404, Mar. 2019, doi: 10.3390/s19061404.
[35] J. Lester, T. Choudhury, N. Kern, and G. Borriello, "A hybrid discriminative/generative approach for modeling human activities," presented at the 19th Int. Joint Conf. Artif. Intell., Jul. 2005.
[36] A. Pentland, "Healthwear: Medical technology becomes wearable," Computer, vol. 37, no. 5, pp. 42–49, May 2004, doi: 10.1109/MC.2004.1297238.
[37] A. Farooq and S. Kamal, "Indoor positioning and tracking using sensors of a smart device," in Proc. Int. Conf. Appl. Eng. Math., Taxila, Pakistan, Aug. 2019, pp. 99–104.
[38] Q. P. Chu and P. L. M. T. Van Woerkom, "GPS for low-cost attitude determination: A review of concepts, in-flight experiences, and current developments," Acta Astronaut., vol. 41, nos. 4–10, pp. 421–433, Aug. 1997, doi: 10.1016/S0094-5765(98)00046-0.
[39] K. E. Jeon, J. She, P. Soonsawad, and P. C. Ng, "BLE beacons for Internet of Things applications: Survey, challenges, and opportunities," IEEE Internet Things J., vol. 5, no. 2, pp. 811–828, Apr. 2018, doi: 10.1109/JIOT.2017.2788449.
[40] D. De, P. Bharti, S. K. Das, and S. Chellappan, "Multimodal wearable sensing for fine-grained activity recognition in healthcare," IEEE Internet Comput., vol. 19, no. 5, pp. 26–35, Sep. 2015, doi: 10.1109/MIC.2015.72.
[41] M. Baxter and N. E. O'Connell, "Testing ultra-wideband technology as a method of tracking fast-growing broilers under commercial conditions," Appl. Animal Behav. Sci., vol. 233, Dec. 2020, Art. no. 105150, doi: 10.1016/j.applanim.2020.105150.
[42] J. Ma, W. Gao, J. Wu, and C. Wang, "A continuous Chinese sign language recognition system," in Proc. 4th IEEE Int. Conf. Autom. Face Gesture Recognit., Grenoble, France, Mar. 2000, pp. 1–3.
[43] L. Rodrigues and M. Mota, "Smart devices: Micro- and nano sensors," in Bioinspired Materials for Medical Applications, 1st ed. Swaston, U.K.: Woodhead Publishing, 2017, ch. 11, pp. 297–329.
[44] P. Kosky, R. Balmer, W. Keat, and G. Wise, "Mechatronics and physical computing," in Exploring Engineering, 5th ed. Cambridge, MA, USA: AP, 2021, ch. 20, pp. 453–477.
[45] M. Borghetti, E. Sardini, and M. Serpelloni, "Evaluation of bend sensors for limb motion monitoring," in Proc. IEEE Int. Symp. Med. Meas. Appl. (MeMeA), Lisboa, Portugal, Jun. 2014, pp. 1–5.
[46] G. Ogris, T. Stiefmeier, H. Junker, P. Lukowicz, and G. Troster, "Using ultrasonic hand tracking to augment motion analysis based recognition of manipulative gestures," in Proc. 9th IEEE Int. Symp. Wearable Comput., Osaka, Japan, Mar. 2005, pp. 1–5.
[47] K. Van Laerhoven and O. Cakmakci, "What shall we teach our pants?" in Dig. Papers, 4th Int. Symp. Wearable Comput., Atlanta, GA, USA, 2000, pp. 1–14.
[48] S. Hackett, Y. Cai, and M. Siegel, "Activity recognition from sensor fusion on fireman's helmet," presented at the 12th Int. Congr. Image Signal Process., BioMed. Eng. Inform. (CISP-BMEI), Suzhou, China, Oct. 2019.
[49] S. Ruffieux, D. Lalanne, E. Mugellini, and O. A. Khaled, "A survey of datasets for human gesture recognition," presented at the 16th Int. Conf., HCI Int., Heraklion, Greece, Jun. 2014.
[50] K. Chen, D. Zhang, L. Yao, B. Guo, Z. Yu, and Y. Liu, "Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities," ACM Comput. Surv., vol. 54, no. 4, pp. 1–40, May 2022, doi: 10.1145/3447744.
[51] A. Bulling, U. Blanke, and B. Schiele, "A tutorial on human activity recognition using body-worn inertial sensors," ACM Comput. Surv., vol. 46, no. 3, pp. 1–33, Jan. 2014, doi: 10.1145/2499621.
[52] T. Stiefmeier, D. Roggen, G. Ogris, P. Lukowicz, and G. Troster, "Wearable activity tracking in car manufacturing," IEEE Pervasive Comput., vol. 7, no. 2, pp. 42–50, Apr. 2008, doi: 10.1109/MPRV.2008.40.
[53] T. Stiefmeier, G. Ogris, H. Junker, P. Lukowicz, and G. Troster, "Combining motion sensors and ultrasonic hands tracking for continuous activity recognition in a maintenance scenario," in Proc. 10th IEEE Int. Symp. Wearable Comput., Montreux, Switzerland, Oct. 2006, pp. 1–10.
[54] U. Blanke, B. Schiele, M. Kreil, P. Lukowicz, B. Sick, and T. Gruber, "All for one or one for all? Combining heterogeneous features for activity spotting," in Proc. 8th IEEE Int. Conf. Pervasive Comput. Commun. Workshops, Mannheim, Germany, Mar. 2010, pp. 1–4.
[55] H. Nematallah and S. Rajan, "Comparative study of time series-based human activity recognition using convolutional neural networks," in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), Dubrovnik, Croatia, May 2020, pp. 1–6.
[56] A. Arami, A. Poulakakis-Daktylidis, Y. F. Tai, and E. Burdet, "Prediction of gait freezing in parkinsonian patients: A binary classification augmented with time series prediction," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 9, pp. 1909–1919, Sep. 2019, doi: 10.1109/TNSRE.2019.2933626.
[57] T. Mahmud, S. S. Akash, S. A. Fattah, W.-P. Zhu, and M. O. Ahmad, "Human activity recognition from multi-modal wearable sensor data using deep multi-stage LSTM architecture based on temporal feature aggregation," in Proc. IEEE 63rd Int. Midwest Symp. Circuits Syst. (MWSCAS), Springfield, MA, USA, Aug. 2020, pp. 1–3.
[58] S. V. Perumal and R. Sankar, "Gait monitoring system for patients with Parkinson's disease using wearable sensors," in Proc. IEEE Healthcare Innov. Point-Care Technol. Conf. (HI-POCT), Cancun, Mexico, Nov. 2016, pp. 21–24.
[59] A. K. Chowdhury, D. Tjondronegoro, V. Chandran, and S. G. Trost, "Physical activity recognition using posterior-adapted class-based fusion of multiaccelerometer data," IEEE J. Biomed. Health Informat., vol. 22, no. 3, pp. 678–685, May 2018, doi: 10.1109/JBHI.2017.2705036.
[60] N. Inanç, M. Kayri, and Ö. F. Ertugrul, "Recognition of daily and sports activities," in Proc. IEEE Int. Conf. Big Data (Big Data), Seattle, WA, USA, Dec. 2018, pp. 2216–2220.
[61] N. Kawaguchi et al., "HASC challenge: Gathering large scale human activity corpus for the real-world activity understandings," in Proc. 2nd Augmented Human Int. Conf., Tokyo, Japan, Mar. 2011, pp. 1–5.
[62] M. Inoue, S. Inoue, and T. Nishida, "Deep recurrent neural network for mobile human activity recognition with high throughput," Artif. Life Robot., vol. 23, no. 2, pp. 173–185, Jun. 2018, doi: 10.1007/s10015-017-0422-x.
[63] L. Chen, Y. Li, and Y. Liu, "Human body gesture recognition method based on deep learning," in Proc. Chin. Control Decis. Conf. (CCDC), Hefei, China, Aug. 2020, pp. 587–591.
[64] W. Jiang and Z. Yin, "Human activity recognition using wearable sensors by deep convolutional neural networks," in Proc. 23rd ACM Int. Conf. Multimedia, Oct. 2015, pp. 1307–1310.
[65] M. Zhang and A. A. Sawchuk, "USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors," in Proc. ACM Conf. Ubiquitous Comput., Sep. 2012, pp. 1036–1043.
[66] M. Shoaib, S. Bosch, O. Incel, H. Scholten, and P. Havinga, "Fusion of smartphone motion sensors for physical activity recognition," Sensors, vol. 14, no. 6, pp. 10146–10176, Jun. 2014, doi: 10.3390/s140610146.
[67] F. J. Ordóñez and D. Roggen, "Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition," Sensors, vol. 16, no. 1, pp. 1–25, Jan. 2016, doi: 10.3390/s16010115.
[68] P. Zappi et al., "Activity recognition from on-body sensors: Accuracy-power trade-off by dynamic sensor selection," presented at the 5th Eur. Conf. Wireless Sensor Netw., Bologna, Italy, Jan. 2008.
[69] J. W. Lockhart, G. M. Weiss, J. C. Xue, S. T. Gallagher, A. B. Grosner, and T. T. Pulickal, "Design considerations for the WISDM smart phone-based sensor mining architecture," in Proc. 5th Int. Workshop Knowl. Discovery Sensor Data, San Diego, CA, USA, Aug. 2011, pp. 1–4.
[70] T. Huynh, M. Fritz, and B. Schiele, "Discovery of activity patterns using topic models," in Proc. 10th Int. Conf. Ubiquitous Comput., Seoul, South Korea, Sep. 2008, pp. 10–19.
[71] U. Blanke and B. Schiele, "Daily routine recognition through activity spotting," presented at the Location- and Context-Awareness, 3rd Int. Symp., Oberpfaffenhofen, Germany, Sep. 2007.
[72] T. van Kasteren, A. Noulas, G. Englebienne, and B. Kröse, "Accurate activity recognition in a home setting," in Proc. 10th Int. Conf. Ubiquitous Comput., Sep. 2008, pp. 1–9.
[73] U. Blanke and B. Schiele, "Remember and transfer what you have learned–recognizing composite activities based on activity spotting," in Proc. Int. Symp. Wearable Comput. (ISWC), Oct. 2010, pp. 1–8.
[74] T. Huynh and B. Schiele, "Analyzing features for activity recognition," in Proc. Joint Conf. Smart Objects Ambient Intell., Innov. Context-Aware Service Usages Technol., Grenoble, France, Oct. 2005, pp. 159–163.
[75] F. Torre, J. Hodgins, J. Montano, S. Valcarcel, R. Forcada, and J. Macey, "Guide to the Carnegie Mellon University multimodal activity (CMU-MMAC) database," CMU, Pittsburgh, PA, USA, Tech. Rep. CMU-RI-TR-08-22, 2008.
[76] J. R. Kwapisz, G. M. Weiss, and S. A. Moore, "Activity recognition using cell phone accelerometers," ACM SIGKDD Explor. Newslett., vol. 12, no. 2, pp. 74–82, Mar. 2011, doi: 10.1145/1964897.1964918.
[77] C. Chen, R. Jafari, and N. Kehtarnavaz, "UTD-MHAD: A multimodal dataset for human action recognition utilizing a depth camera and a wearable inertial sensor," in Proc. IEEE Int. Conf. Image Process. (ICIP), Quebec City, QC, Canada, Sep. 2015, pp. 168–172.
[78] A. Stisen et al., "Smart devices are different: Assessing and mitigating mobile sensing heterogeneities for activity recognition," in Proc. 13th ACM Conf. Embedded Networked Sensor Syst., New York, NY, USA, Nov. 2015, pp. 127–140.
[79] H. Gjoreski et al., “The university of Sussex-Huawei locomotion and transportation dataset for multimodal analytics with mobile devices,” IEEE Access, vol. 6, pp. 42592–42604, 2018, doi: 10.1109/ACCESS.2018.2858933.
[80] M. Shoaib, H. Scholten, and P. J. M. Havinga, “Towards physical activity recognition using smartphone sensors,” in Proc. IEEE 10th Int. Conf. Ubiquitous Intell. Comput. IEEE 10th Int. Conf. Autonomic Trusted Comput., Dec. 2013, pp. 80–87.
[81] D. Micucci, M. Mobilio, and P. Napoletano, “UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones,” Appl. Sci., vol. 7, no. 10, pp. 1–19, 2017, doi: 10.3390/app7101101.
[82] Y. Vaizman, K. Ellis, and G. Lanckriet, “Recognizing detailed human context in the wild from smartphones and smartwatches,” IEEE Pervasive Comput., vol. 16, no. 4, pp. 62–74, Oct. 2017, doi: 10.1109/MPRV.2017.3971131.
[83] K. Kyritsis, C. L. Tatli, C. Diou, and A. Delopoulos, “Automated analysis of in meal eating behavior using a commercial wristband IMU sensor,” in Proc. 39th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Jul. 2017, pp. 2843–2846.
[84] B. Bruno, F. Mastrogiovanni, and A. Sgorbissa, “A public domain dataset for ADL recognition using wrist-placed accelerometers,” in Proc. 23rd IEEE Int. Symp. Robot Human Interact. Commun., Aug. 2014, pp. 738–743.
[85] M. Bennasar, B. A. Price, D. Gooch, A. K. Bandara, and B. Nuseibeh, “Significant features for human activity recognition using tri-axial accelerometers,” Sensors, vol. 22, no. 19, p. 7482, Oct. 2022, doi: 10.3390/s22197482.
[86] J. Liu, Z. Wang, L. Zhong, J. Wickramasuriya, and V. Vasudevan, “UWave: Accelerometer-based personalized gesture recognition and its applications,” in Proc. IEEE Int. Conf. Pervasive Comput. Commun., Galveston, TX, USA, Mar. 2009, pp. 1–6.
[87] R. Jain, V. B. Semwal, and P. Kaushik, “Deep ensemble learning approach for lower extremity activities recognition using wearable sensors,” Expert Syst., vol. 39, no. 6, pp. 1–12, Jul. 2022, doi: 10.1111/exsy.12743.
[88] V. B. Semwal, A. Gupta, and P. Lalwani, “An optimized hybrid deep learning model using ensemble learning approach for human walking activities recognition,” J. Supercomput., vol. 77, no. 11, pp. 12256–12279, Nov. 2021, doi: 10.1007/s11227-021-03768-7.
[89] V. B. Semwal, N. Gaud, P. Lalwani, V. Bijalwan, and A. K. Alok, “Pattern identification of different human joints for different human walking styles using inertial measurement unit (IMU) sensor,” Artif. Intell. Rev., vol. 55, no. 2, pp. 1149–1169, Feb. 2022, doi: 10.1007/s10462-021-09979-x.
[90] M. Raj, V. B. Semwal, and G. C. Nandi, “Bidirectional association of joint angle trajectories for humanoid locomotion: The restricted Boltzmann machine approach,” Neural Comput. Appl., vol. 30, no. 6, pp. 1747–1755, Sep. 2018, doi: 10.1007/s00521-016-2744-3.
[91] V. B. Semwal, P. Lalwani, M. K. Mishra, V. Bijalwan, and J. S. Chadha, “An optimized feature selection using bio-geography optimization technique for human walking activities recognition,” Computing, vol. 103, no. 12, pp. 2893–2914, Dec. 2021, doi: 10.1007/s00607-021-01008-7.
[92] V. B. Semwal, A. Mazumdar, A. Jha, N. Gaud, and V. Bijalwan, “Speed, cloth and pose invariant gait recognition-based person identification,” in Machine Learning: Theoretical Foundations and Practical Applications. Berlin, Germany: Springer, 2021, pp. 39–56.
[93] V. Tuzlukov, Signal Processing Noise (Electrical Engineering & Applied Signal Processing Series), 1st ed. Boca Raton, FL, USA: CRC Press, 2002.
[94] I. Khokhlov, L. Reznik, J. Cappos, and R. Bhaskar, “Design of activity recognition systems with wearable sensors,” in Proc. IEEE Sensors Appl. Symp. (SAS), Mar. 2018, pp. 1–6.
[95] G. Cola, M. Avvenuti, A. Vecchio, G. Yang, and B. Lo, “An on-node processing approach for anomaly detection in gait,” IEEE Sensors J., vol. 15, no. 11, pp. 6640–6649, Nov. 2015, doi: 10.1109/JSEN.2015.2464774.
[96] T. Le, T. Tran, and C. Pham, “The Internet-of-Things based hand gestures using wearable sensors for human machine interaction,” in Proc. Int. Conf. Multimedia Anal. Pattern Recognit. (MAPR), Ho Chi Minh City, Vietnam, May 2019, pp. 1–6.
[97] K. Leuenberger, R. Gonzenbach, E. Wiedmer, A. Luft, and R. Gassert, “Classification of stair ascent and descent in stroke patients,” in Proc. 11th Int. Conf. Wearable Implant. Body Sensor Netw. Workshops, Zurich, Switzerland, Jun. 2014, pp. 11–16.
[98] Y. Li et al., “Hand gesture recognition and real-time game control based on a wearable band with 6-axis sensors,” in Proc. Int. Joint Conf. Neural Netw. (IJCNN), Rio de Janeiro, Brazil, Jul. 2018, pp. 1–6.
[99] A. Nandy, J. Saha, C. Chowdhury, and K. P. D. Singh, “Detailed human activity recognition using wearable sensor and smartphones,” in Proc. Int. Conf. Opto-Electron. Appl. Opt. (Optronix), Kolkata, India, Mar. 2019, pp. 1–6.
[100] Y. Li, N. Yang, L. Li, L. Liu, and Y. Yang, “Finger gesture recognition using a smartwatch with integrated motion sensors,” Web Intell., vol. 16, no. 2, pp. 123–129, Jun. 2018, doi: 10.3233/WEB-180378.
[101] H. Li et al., “Hierarchical sensor fusion for micro-gesture recognition with pressure sensor array and radar,” IEEE J. Electromagn., RF Microw. Med. Biol., vol. 4, no. 3, pp. 225–232, Sep. 2020, doi: 10.1109/JERM.2019.2949456.
[102] R. Patil, “Noise reduction using wavelet transform and singular vector decomposition,” Proc. Comput. Sci., vol. 54, pp. 849–853, Jan. 2015, doi: 10.1016/j.procs.2015.06.099.
[103] T. Pan, W. Tsai, C. Chang, C. Yeh, and M. Hu, “A hierarchical hand gesture recognition framework for sports referee training-based EMG and accelerometer sensors,” IEEE Trans. Cybern., vol. 52, no. 5, pp. 3172–3183, May 2022, doi: 10.1109/TCYB.2020.3007173.
[104] C. Derr and F. Sahin, “Signer-independent classification of American sign language word signs using surface EMG,” in Proc. IEEE Int. Conf. Syst., Man, Cybern. (SMC), Banff, AB, Canada, Oct. 2017, pp. 665–670.
[105] N. Hegde et al., “The pediatric SmartShoe: Wearable sensor system for ambulatory monitoring of physical activity and gait,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 2, pp. 477–486, Feb. 2018, doi: 10.1109/TNSRE.2017.2786269.
[106] Y. Yu, X. Chen, S. Cao, X. Zhang, and X. Chen, “Exploration of Chinese sign language recognition using wearable sensors based on deep belief net,” IEEE J. Biomed. Health Informat., vol. 24, no. 5, pp. 1310–1320, May 2020, doi: 10.1109/JBHI.2019.2941535.
[107] P. Arnon, “Classification model for multi-sensor data fusion apply for human activity recognition,” in Proc. Int. Conf. Comput., Commun., Control Technol. (I4CT), Langkawi, Malaysia, Sep. 2014, pp. 415–419.
[108] R. C. Luo, C. C. Chang, and C. C. Lai, “Multisensor fusion and integration: Theories, applications, and its perspectives,” IEEE Sensors J., vol. 11, no. 12, pp. 3122–3138, Dec. 2011, doi: 10.1109/JSEN.2011.2166383.
[109] J. A. Ward, P. Lukowicz, G. Troster, and T. E. Starner, “Activity recognition of assembly tasks using body-worn microphones and accelerometers,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 10, pp. 1553–1567, Oct. 2006, doi: 10.1109/TPAMI.2006.197.
[110] J. Wannenburg and R. Malekian, “Physical activity recognition from smartphone accelerometer data for user context awareness sensing,” IEEE Trans. Syst., Man, Cybern., Syst., vol. 47, no. 12, pp. 3142–3149, Dec. 2017, doi: 10.1109/TSMC.2016.2562509.
[111] R. C. King, E. Villeneuve, R. J. White, R. S. Sherratt, W. Holderbaum, and W. S. Harwin, “Application of data fusion techniques and technologies for wearable health monitoring,” Med. Eng. Phys., vol. 42, pp. 1–12, Apr. 2017, doi: 10.1016/j.medengphy.2016.12.011.
[112] D. Ruta and B. Gabrys, “An overview of classifier fusion methods,” Comput. Inf. Syst., vol. 7, no. 1, pp. 1–10, 2000.
[113] H. Azami, B. Bozorgtabar, and M. Shiroie, “Automatic signal segmentation using the fractal dimension and weighted moving average filter,” J. Elect. Comput. Sci., vol. 11, no. 6, pp. 8–15, Oct. 2011.
[114] A. Bulling, J. A. Ward, and H. Gellersen, “Multimodal recognition of reading activity in transit using body-worn sensors,” ACM Trans. Appl. Perception, vol. 9, no. 1, pp. 1–21, Mar. 2012, doi: 10.1145/2134203.2134205.
[115] T. Stiefmeier, D. Roggen, and G. Tröster, “Gestures are strings: Efficient online gesture spotting and classification using string matching,” in Proc. 2nd Int. Conf. Body Area Netw. (BodyNets), Florence, Italy, 2007, pp. 16:1–16:8.
[116] U. Blanke, R. Rehner, and B. Schiele, “South by south-east or sitting at the design: Can orientation be a place?” in Proc. 15th Annu. Int. Symp. Wearable Comput., Jun. 2011, pp. 43–46.
[117] M. Bachlin et al., “Wearable assistant for Parkinson’s disease patients with the freezing of gait symptom,” IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 436–446, Mar. 2010, doi: 10.1109/TITB.2009.2036165.
FOROUGHI ASL et al.: STATISTICAL DATABASE OF HUMAN MOTION RECOGNITION
[118] I. Khairuddin, S. N. Sidek, A. P. A. Majeed, M. Razman, A. A. Puzi, and H. Yusof, “The classification of movement intention through machine learning models: The identification of significant time-domain EMG features,” PeerJ Comput. Sci., vol. 7, pp. 1–15, Feb. 2021, doi: 10.7717/peerj-cs.379.
[119] R. Jain, V. B. Semwal, and P. Kaushik, “Stride segmentation of inertial sensor data using statistical methods for different walking activities,” Robotica, vol. 40, no. 8, pp. 2567–2580, Dec. 2021, doi: 10.1017/s026357472100179x.
[120] D. Jung et al., “Deep neural network-based gait classification using wearable inertial sensor data,” in Proc. 41st Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Berlin, Germany, Jul. 2019, pp. 3624–3628.
[121] C. Oz and M. C. Leu, “American sign language word recognition with a sensory glove using artificial neural networks,” Eng. Appl. Artif. Intell., vol. 24, no. 7, pp. 1204–1213, Oct. 2011, doi: 10.1016/j.engappai.2011.06.015.
[122] N. Wang, S. J. Redmond, E. Ambikairajah, B. G. Celler, and N. H. Lovell, “Can triaxial accelerometry accurately recognize inclined walking terrains?” IEEE Trans. Biomed. Eng., vol. 57, no. 10, pp. 2506–2516, Oct. 2010, doi: 10.1109/TBME.2010.2049357.
[123] L. Peng, L. Chen, X. Wu, H. Guo, and G. Chen, “Hierarchical complex activity representation and recognition using topic model and classifier level fusion,” IEEE Trans. Biomed. Eng., vol. 64, no. 6, pp. 1369–1379, Jun. 2017, doi: 10.1109/TBME.2016.2604856.
[124] P. Tahafchi and J. W. Judy, “Freezing-of-gait detection using wearable-sensor technology and neural-network classifier,” in Proc. IEEE Sensors, Montreal, QC, Canada, Oct. 2019, pp. 1–4.
[125] A. Zinnen, C. Wojek, and B. Schiele, “Multi activity recognition based on body model derived primitives,” Presented at the 4th Int. Symp. Location Context Awareness, Tokyo, Japan, May 2009.
[126] A. Zinnen, U. Blanke, and B. Schiele, “An analysis of sensor-oriented vs. model-based activity recognition,” in Proc. Int. Symp. Wearable Comput., Linz, Austria, Sep. 2009, pp. 1–4.
[127] R. Z. U. Rehman, S. Del Din, Y. Guan, A. J. Yarnall, J. Q. Shi, and L. Rochester, “Selecting clinically relevant gait characteristics for classification of early Parkinson’s disease: A comprehensive machine learning approach,” Sci. Rep., vol. 9, no. 1, pp. 1–12, Nov. 2019.
[128] A. Bommert, X. Sun, B. Bischl, J. Rahnenführer, and M. Lang, “Benchmark for filter methods for feature selection in high-dimensional classification data,” Comput. Statist. Data Anal., vol. 143, pp. 1–22, Mar. 2020, doi: 10.1016/j.csda.2019.106839.
[129] A. Jovic, K. Brkic, and N. Bogunovic, “A review of feature selection methods with applications,” in Proc. 38th Int. Conv. Inf. Commun. Technol., Electron. Microelectron. (MIPRO), Opatija, Croatia, May 2015, pp. 1200–1205.
[130] V. Bolón-Canedo, N. Sánchez-Maroño, and A. Alonso-Betanzos, “A review of feature selection methods on synthetic data,” KAIS, vol. 8, no. 1, pp. 1–37, Jul. 2005, doi: 10.1007/s10115-012-0487-8.
[131] A. Ben Brahim and M. Limam, “A hybrid feature selection method based on instance learning and cooperative subset search,” Pattern Recognit. Lett., vol. 69, pp. 28–34, Jan. 2016, doi: 10.1016/j.patrec.2015.10.005.
[132] M. Naseriparsa, A.-M. Bidgoli, and T. Varaee, “A hybrid feature selection method to improve performance of a group of classification algorithms,” Int. J. Comput. Appl., vol. 69, no. 17, pp. 28–35, May 2013, doi: 10.5120/12065-8172.
[133] A. Subasi, “Classification examples for healthcare,” in Practical Machine Learning for Data Analysis Using Python. Cambridge, MA, USA: AP, 2020, ch. 4, pp. 203–322.
[134] T. I. Netoff, “The ability to predict seizure onset,” in Engineering in Medicine: Advances and Challenges. Cambridge, MA, USA: AP, 2019, ch. 14, pp. 365–378.
[135] M. Landgraf et al., “Gesture recognition with sensor data fusion of two complementary sensing methods,” in Proc. 7th IEEE Int. Conf. Biomed. Robot. Biomechatronics (Biorob), Aug. 2018, pp. 795–800.
[136] R. Su, X. Chen, S. Cao, and X. Zhang, “Random forest-based recognition of isolated sign language subwords using data from accelerometers and surface electromyographic sensors,” Sensors, vol. 16, no. 1, pp. 1–15, Jan. 2016, doi: 10.3390/s16010100.
[137] N. Dawar and N. Kehtarnavaz, “Data flow synchronization of a real-time fusion system to detect and recognize smart TV gestures,” in Proc. IEEE Int. Conf. Consum. Electron. (ICCE), Jan. 2018, pp. 1–4.
[138] H. Lu, W. Pan, N. D. Lane, T. Choudhury, and A. T. Campbell, “SoundSense: Scalable sound sensing for people-centric applications on mobile phones,” in Proc. 7th Int. Conf. Mobile Syst., Appl., Services, Jun. 2009, pp. 165–178.
[139] U. Maurer, A. Smailagic, D. P. Siewiorek, and M. Deisher, “Activity recognition and monitoring using multiple sensors on different body positions,” in Proc. Int. Workshop Wearable Implant. Body Sensor Netw., Cambridge, MA, USA, 2006, pp. 113–116.
[140] A. Bulling, J. A. Ward, H. Gellersen, and G. Tröster, “Eye movement analysis for activity recognition using electrooculography,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 4, pp. 741–753, Apr. 2011, doi: 10.1109/TPAMI.2010.86.
[141] B. T. Nukala et al., “A real-time robust fall detection system using a wireless gait analysis sensor and an artificial neural network,” in Proc. IEEE Healthcare Innov. Conf. (HIC), Seattle, WA, USA, Oct. 2014, pp. 219–222.
[142] B. T. Nukala et al., “An efficient and robust fall detection system using wireless gait analysis sensor with artificial neural network (ANN) and support vector machine (SVM) algorithms,” Open J. Appl. Biosensor, vol. 3, no. 4, pp. 29–39, 2014, doi: 10.4236/OJAB.2014.34004.
[143] J. Taborri, S. Rossi, E. Palermo, F. Patanè, and P. Cappa, “A novel HMM distributed classifier for the detection of gait phases by means of a wearable inertial sensor network,” Sensors, vol. 14, no. 9, pp. 16212–16234, Sep. 2014, doi: 10.3390/s140916212.
[144] L. Rong, D. Zhiguo, Z. Jianzhong, and L. Ming, “Identification of individual walking patterns using gait acceleration,” in Proc. 1st Int. Conf. Bioinf. Biomed. Eng., Wuhan, China, 2007, pp. 543–546.
[145] H. Li, A. Shrestha, H. Heidari, J. Le Kernec, and F. Fioranelli, “Bi-LSTM network for multimodal continuous human activity recognition and fall detection,” IEEE Sensors J., vol. 20, no. 3, pp. 1191–1201, Feb. 2020, doi: 10.1109/JSEN.2019.2946095.
[146] J. A. Paradiso, S. J. Morris, A. Y. Benbasat, and E. Asmussen, “Interactive therapy with instrumented footwear,” in Proc. CHI Extended Abstr. Hum. Factors Comput. Syst., Vienna, Austria, Apr. 2004, pp. 1341–1343.
[147] K. Stapor, “Evaluation of classifiers: Current methods and future research directions,” in Proc. Ann. Comput. Sci. Inf. Syst., Prague, Czech Republic, Sep. 2017, pp. 37–40.
[148] N. Tran, J. G. Schneider, I. Weber, and A. K. Qin, “Hyper-parameter optimization in classification: To-do or not-to-do,” Pattern Recognit., vol. 103, pp. 1–32, Jul. 2020, doi: 10.1016/j.patcog.2020.107245.
[149] F. Khan, S. Kanwal, S. Alamri, and B. Mumtaz, “Hyper-parameter optimization of classifiers, using an artificial immune network and its application to software bug prediction,” IEEE Access, vol. 8, pp. 20954–20964, 2020, doi: 10.1109/ACCESS.2020.2968362.
[150] J. P. Mueller and L. Massaron, “Validating machine learning,” in Machine Learning for Dummies, 2nd ed. Hoboken, NJ, USA: JWS, 2021, ch. 9.
[151] L. Bao and S. S. Intille, “Activity recognition from user-annotated acceleration data,” Presented at the Pervasive Comput., 2nd Int. Conf., Vienna, Austria, Apr. 2004.
[152] C. Ferri, J. Hernández-Orallo, and R. Modroiu, “An experimental comparison of performance measures for classification,” Pattern Recognit. Lett., vol. 30, no. 1, pp. 27–38, Jan. 2009, doi: 10.1016/j.patrec.2008.08.010.
[153] M. Hossin and M. N. Sulaima, “A review on evaluation metrics for data classification evaluations,” Int. J. Data Mining Knowl. Manage. Process, vol. 5, no. 2, pp. 1–11, Mar. 2015, doi: 10.5121/ijdkp.2015.5201.
[154] R. Caruana and A. Niculescu-Mizil, “Data mining in metric space: An empirical analysis of supervised learning performance criteria,” in Proc. 10th ACM SIGKDD Int. Conf. Knowl. Discovery Data Mining, Aug. 2004.
[155] A. Gupta and V. B. Semwal, “Occluded gait reconstruction in multi person gait environment using different numerical methods,” Multimedia Tools Appl., vol. 81, no. 16, pp. 23421–23448, Jul. 2022, doi: 10.1007/s11042-022-12218-2.
[156] Y. Hsu, S. Yang, H. Chang, and H. Lai, “Human daily and sport activity recognition using a wearable inertial sensor network,” IEEE Access, vol. 6, pp. 31715–31728, 2018, doi: 10.1109/ACCESS.2018.2839766.
[157] S. Kumar, T. Yogesh, M. Prithiv, S. Alam, M. Hashim, and R. Amutha, “Data mining technique based ambient assisted living for elderly people,” in Proc. 4th Int. Conf. Comput. Methodologies Commun. (ICCMC), Erode, India, Mar. 2020, pp. 505–508.
[158] J. Wu, L. Sun, and R. Jafari, “A wearable system for recognizing American sign language in real-time using IMU and surface EMG sensors,” IEEE J. Biomed. Health Informat., vol. 20, no. 5, pp. 1281–1290, Sep. 2016, doi: 10.1109/JBHI.2016.2598302.
[159] S. U. Yunas, A. Alharthi, and K. B. Ozanyan, “Multi-modality fusion of floor and ambulatory sensors for gait classification,” in Proc. IEEE 28th Int. Symp. Ind. Electron. (ISIE), Vancouver, BC, Canada, Jun. 2019, pp. 1467–1472.
[160] S. Mekruksavanich and A. Jitpattanakul, “Classification of gait pattern with wearable sensing data,” in Proc. Joint Int. Conf. Digit. Arts, Media Technol. ECTI Northern Sect. Conf. Electr., Electron., Comput. Telecommun. Eng. (ECTI DAMT-NCON), Jan. 2019, pp. 137–141.
[161] N. Hnoohom, S. Mekruksavanich, and A. Jitpattanakul, “Human activity recognition using triaxial acceleration data from smartphone and ensemble learning,” in Proc. 13th Int. Conf. Signal-Image Technol. Internet-Based Syst. (SITIS), Jaipur, India, Dec. 2017, pp. 408–412.
[162] N. Hnoohom, A. Jitpattanakul, P. Inluergsri, P. Wongbudsri, and W. Ployput, “Multi-sensor-based fall detection and activity daily living classification by using ensemble learning,” in Proc. Int. ECTI Northern Sect. Conf. Electr., Electron., Comput. Telecommun. Eng., Chiang Rai, Thailand, Feb. 2018, pp. 111–115.
[163] A. Tharwat, “Classification assessment methods,” Appl. Comput. Informat., vol. 17, no. 1, pp. 168–192, Jan. 2021, doi: 10.1016/j.aci.2018.08.003.
[164] M. Zhang and Z. Zhou, “A review on multi-label learning algorithms,” IEEE Trans. Knowl. Data Eng., vol. 26, no. 8, pp. 1819–1837, Aug. 2014, doi: 10.1109/TKDE.2013.39.
[165] R. F. De Vellis, “Inter-rater reliability,” in Encyclopedia of Social Measurement. Cambridge, MA, USA: AP, 2005, pp. 317–322.
[166] J. Han, J. Pei, and M. Kamber, “Classification: Basic concepts,” in Data Mining: Concepts and Techniques, 3rd ed. Amsterdam, The Netherlands: Elsevier, 2011, ch. 8, pp. 327–391.
[167] A. Shaafi, O. Salem, and A. Mehaoua, “Improving human activity recognition algorithms using wireless body sensors,” Presented at the Int. Wireless Commun. Mobile Comput. (IWCMC), Limassol, Cyprus, Apr. 2020.
[168] B. Clarkson, K. Mase, and A. Pentland, “Recognizing user’s context from wearable sensors: Baseline system,” J. Neurol. Sci., vol. 248, pp. 1–10, Jan. 2000.
[169] P. Tahafchi and J. W. Judy, “Freezing-of-gait detection using wearable sensor technology and possibilistic K-nearest-neighbor algorithm,” in Proc. 41st Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Berlin, Germany, Jul. 2019, pp. 4246–4249.
[170] S. Chen et al., “Discriminative information added by wearable sensors for early screening—A case study on diabetic peripheral neuropathy,” in Proc. IEEE 16th Int. Conf. Wearable Implant. Body Sensor Netw. (BSN), Chicago, IL, USA, May 2019, pp. 1–4.
[171] G. Cola, M. Avvenuti, A. Vecchio, G. Yang, and B. Lo, “An unsupervised approach for gait-based authentication,” in Proc. IEEE 12th Int. Conf. Wearable Implant. Body Sensor Netw. (BSN), Cambridge, MA, USA, Jun. 2015, pp. 1–6.
[172] L. Rong, Z. Jianzhong, L. Ming, and H. Xiangfeng, “A wearable acceleration sensor system for gait recognition,” in Proc. 2nd IEEE Conf. Ind. Electron. Appl., Harbin, China, May 2007, pp. 1–4.
[173] J. Mantyjarvi, M. Lindholm, E. Vildjiounaite, S. Makela, and H. Ailisto, “Identifying users of portable devices from gait pattern with accelerometers,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Philadelphia, PA, USA, Mar. 2005, pp. 1–12.
[174] A. Bulling, J. A. Ward, H. Gellersen, and G. Tröster, “Robust recognition of reading activity in transit using wearable electrooculography,” Presented at the 6th Int. Conf., Pervasive, Sydney, NSW, Australia, May 2008.
[175] T. Saito and M. Rehmsmeier, “The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets,” PLoS ONE, vol. 10, no. 3, pp. 1–12, Mar. 2015, doi: 10.1371/journal.pone.0118432.
[176] D. Gafurov, E. Snekkenes, and P. Bours, “Improved gait recognition performance using cycle matching,” in Proc. IEEE 24th Int. Conf. Adv. Inf. Netw. Appl. Workshops, Perth, WA, Australia, Apr. 2010, pp. 1–6.
[177] R. R. Fletcher, M.-Z. Poh, and H. Eydgahi, “Wearable sensors: Opportunities and challenges for low-cost health care,” in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol., Buenos Aires, Argentina, Aug. 2010, pp. 1–14.
[178] VN-100 Introduction. Accessed: Jun. 8, 2023. [Online]. Available: https://www.vectornav.com/products/vn-100
[179] J. P. Gupta, N. Singh, P. Dixit, V. B. Semwal, and S. R. Dubey, “Human activity recognition using gait pattern,” Int. J. Comput. Vis. Image Process., vol. 3, no. 3, pp. 31–53, Jul. 2013, doi: 10.4018/ijcvip.2013070103.
[180] A. M. Sabatini, “Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing,” IEEE Trans. Biomed. Eng., vol. 53, no. 7, pp. 1346–1356, Jul. 2006, doi: 10.1109/TBME.2006.875664.
[181] S. Sabatelli, M. Galgani, L. Fanucci, and A. Rocchi, “A double-stage Kalman filter for orientation tracking with an integrated processor in 9-D IMU,” IEEE Trans. Instrum. Meas., vol. 62, no. 3, pp. 590–598, Mar. 2013, doi: 10.1109/TIM.2012.2218692.
[182] K. H. Tran and M. T. Chew, “Kalman filtering for wearable fitness monitoring,” in Proc. IEEE Int. Instrum. Meas. Technol. Conf., Pisa, Italy, May 2015, pp. 2020–2025.
[183] X. F. He and P. Niyogi, “Locality preserving projections,” in Advances in Neural Information Processing Systems. Cambridge, MA, USA: MIT Press, 2004, pp. 153–160.
[184] K. E. Hild, D. Erdogmus, K. Torkkola, and J. C. Principe, “Feature extraction using information-theoretic learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 9, pp. 1385–1392, Sep. 2006, doi: 10.1109/TPAMI.2006.186.
[185] A. Akbari and R. Jafari, “An autoencoder-based approach for recognizing null class in activities of daily living in-the-wild via wearable motion sensors,” in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Brighton, U.K., May 2019, pp. 3392–3396.
[186] X. Lai, Q. Liu, X. Wei, W. Wang, G. Zhou, and G. Han, “A survey of body sensor networks,” Sensors, vol. 13, no. 5, pp. 5406–5447, Apr. 2013, doi: 10.3390/s130505406.
[187] M. Yaghoubi, K. Ahmed, and Y. Miao, “Wireless body area network (WBAN): A survey on architecture, technologies, energy consumption, and security challenges,” J. Sensor Actuator Netw., vol. 11, no. 4, p. 67, Oct. 2022, doi: 10.3390/jsan11040067.
[188] N. Javaid, A. Ahmad, Y. Khan, Z. A. Khan, and T. A. Alghamdi, “A relay based routing protocol for wireless in-body sensor networks,” Wireless Pers. Commun., vol. 80, no. 3, pp. 1063–1078, Feb. 2015, doi: 10.1007/s11277-014-2071-x.
[189] Y. Wang, S. Cang, and H. Yu, “A data fusion-based hybrid sensory system for older people’s daily activity and daily routine recognition,” IEEE Sensors J., vol. 18, no. 16, pp. 6874–6888, Aug. 2018, doi: 10.1109/JSEN.2018.2833745.
[190] C. Zhu, Q. Cheng, and W. Sheng, “Human activity recognition via motion and vision data fusion,” in Proc. Conf. Rec. 44th Asilomar Conf. Signals, Syst. Comput., Pacific Grove, CA, USA, Nov. 2010, pp. 332–336.
[191] Y. Zhang et al., “Static and dynamic human arm/hand gesture capturing and recognition via multiinformation fusion of flexible strain sensors,” IEEE Sensors J., vol. 20, no. 12, pp. 6450–6459, Jun. 2020, doi: 10.1109/JSEN.2020.2965580.
[192] Y. Bao, F. Sun, X. Hua, B. Wang, and J. Yin, “Operation action recognition using wearable devices with inertial sensors,” in Proc. IEEE Int. Conf. Multisensor Fusion Integr. Intell. Syst. (MFI), Nov. 2017, pp. 536–541.
[193] J. G. Colli-Alfaro, A. Ibrahim, and A. L. Trejos, “Design of user-independent hand gesture recognition using multilayer perceptron networks and sensor fusion techniques,” in Proc. IEEE 16th Int. Conf. Rehabil. Robot. (ICORR), Toronto, ON, Canada, Jun. 2019, pp. 1103–1108.
[194] K. Murao, T. Terada, A. Yano, and R. Matsukura, “Evaluating gesture recognition by multiple-sensor-containing mobile devices,” in Proc. 15th Annu. Int. Symp. Wearable Comput., San Francisco, CA, USA, Jun. 2011, pp. 55–58.
[195] C. Zhu and W. Sheng, “Wearable sensor-based hand gesture and daily activity recognition for robot-assisted living,” IEEE Trans. Syst., Man, Cybern., A, Syst. Hum., vol. 41, no. 3, pp. 569–573, May 2011, doi: 10.1109/TSMCA.2010.2093883.
[196] J. Liu, J. Sohn, and S. Kim, “Classification of daily activities for the elderly using wearable sensors,” J. Healthcare Eng., vol. 2017, pp. 1–7, Jan. 2017, doi: 10.1155/2017/8934816.
[197] H. Huang, X. Li, and Y. Sun, “A triboelectric motion sensor in wearable body sensor network for human activity recognition,” in Proc. 38th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Orlando, FL, USA, Aug. 2016, pp. 4889–4892.
[198] G. Bhat, J. Park, H. G. Lee, and U. Y. Ogras, “Sensor-classifier co-optimization for wearable human activity recognition applications,” in Proc. IEEE Int. Conf. Embedded Softw. Syst. (ICESS), Las Vegas, NV, USA, Jun. 2019, pp. 1–4.
[199] H. Kim et al., “Collaborative classification for daily activity recognition with a smartwatch,” in Proc. IEEE Int. Conf. Syst., Man, Cybern. (SMC), Budapest, Hungary, Oct. 2016, pp. 3707–3712.
[200] A. Moschetti, L. Fiorini, D. Esposito, P. Dario, and F. Cavallo, “Daily activity recognition with inertial ring and bracelet: An unsupervised approach,” in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Singapore, May 2017, pp. 3250–3255.
[201] X. Zhou, W. Liang, K. I. Wang, H. Wang, L. T. Yang, and Q. Jin, “Deep-learning-enhanced human activity recognition for Internet of Healthcare Things,” IEEE Internet Things J., vol. 7, no. 7, pp. 6429–6438, Jul. 2020, doi: 10.1109/JIOT.2020.2985082.
[202] Z. Chen, C. Jiang, S. Xiang, J. Ding, M. Wu, and X. Li, “Smartphone sensor-based human activity recognition using feature fusion and maximum full a posteriori,” IEEE Trans. Instrum. Meas., vol. 69, no. 7, pp. 3992–4001, Jul. 2020, doi: 10.1109/TIM.2019.2945467.
[203] T. Zebin, P. J. Scully, and K. B. Ozanyan, “Evaluation of supervised classification algorithms for human activity recognition with inertial sensors,” in Proc. IEEE Sensors, Glasgow, U.K., Oct. 2017, pp. 1–3.
[204] S. Benatti et al., “A versatile embedded platform for EMG acquisition and gesture recognition,” IEEE Trans. Biomed. Circuits Syst., vol. 9, no. 5, pp. 620–630, Oct. 2015, doi: 10.1109/TBCAS.2015.2476555.
[205] X. Zhang, X. Chen, Y. Li, V. Lantz, K. Wang, and J. Yang, “A framework for hand gesture recognition based on accelerometer and EMG sensors,” IEEE Trans. Syst., Man, Cybern., A, Syst. Hum., vol. 41, no. 6, pp. 1046–1076, Nov. 2011, doi: 10.1109/TSMCA.2011.2116004.
[206] Y. Li, X. Chen, X. Zhang, K. Wang, and Z. J. Wang, “A sign-component-based framework for Chinese sign language recognition using accelerometer and sEMG data,” IEEE Trans. Biomed. Eng., vol. 59, no. 10, pp. 2695–2704, Oct. 2012, doi: 10.1109/TBME.2012.2190734.
[207] A. Bashir, F. Malik, F. Haider, M. Ehatisham-ul-Haq, A. Raheel, and A. Arsalan, “A smart sensor-based gesture recognition system for media player control,” in Proc. 3rd Int. Conf. Comput., Math. Eng. Technol. (iCoMET), Sukkur, Pakistan, Jan. 2020, pp. 1–6.
[208] G. Yuan, X. Liu, Q. Yan, S. Qiao, Z. Wang, and L. Yuan, “Hand gesture recognition using deep feature fusion network based on wearable sensors,” IEEE Sensors J., vol. 21, no. 1, pp. 539–547, Jan. 2021, doi: 10.1109/JSEN.2020.3014276.
[209] S. Shin and W. Sung, “Dynamic hand gesture recognition for wearable devices with low complexity recurrent neural networks,” in Proc. IEEE Int. Symp. Circuits Syst. (ISCAS), Montreal, QC, Canada, May 2016, pp. 2274–2277.
[210] K. Czuszynski, J. Ruminski, and A. Kwasniewska, “Gesture recognition with the linear optical sensor and recurrent neural networks,” IEEE Sensors J., vol. 18, no. 13, pp. 5429–5438, Jul. 2018, doi: 10.1109/JSEN.2018.2834968.
[211] Z. Lu, X. Chen, Q. Li, X. Zhang, and P. Zhou, “A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices,” IEEE Trans. Hum.-Mach. Syst., vol. 44, no. 2, pp. 293–299, Apr. 2014, doi: 10.1109/THMS.2014.2302794.
[219] M. Bachlin et al., “Potentials of enhanced context awareness in wearable assistants for Parkinson’s disease patients with the freezing of gait syndrome,” in Proc. Int. Symp. Wearable Comput., Linz, Austria, Sep. 2009, pp. 1–10.
[220] S.-W. Lee and K. Mase, “Activity and location recognition using wearable sensors,” IEEE Pervasive Comput., vol. 1, no. 3, pp. 24–32, Jul. 2002, doi: 10.1109/MPRV.2002.1037719.
[221] G. Pirkl, K. Stockinger, K. Kunze, and P. Lukowicz, “Adapting magnetic resonant coupling based relative positioning technology for wearable activity recognition,” in Proc. 12th IEEE Int. Symp. Wearable Comput., Pittsburgh, PA, USA, Oct. 2008, pp. 1–3.
[222] J. Cheng, O. Amft, and P. Lukowicz, “Active capacitive sensing: Exploring a new wearable sensing modality for activity recognition,” Presented at the 8th Int. Conf., Pervasive, Helsinki, Finland, May 2010.
[223] U. Anliker et al., “AMON: A wearable multiparameter medical monitoring and alert system,” IEEE Trans. Inf. Technol. Biomed., vol. 8, no. 4, pp. 415–427, Dec. 2004, doi: 10.1109/TITB.2004.837888.
[224] N. Oliver and F. Flores-Mangas, “HealthGear: Automatic sleep apnea detection and monitoring with a mobile phone,” J. Commun., vol. 2, no. 2, pp. 1–12, Mar. 2007, doi: 10.4304/jcm.2.2.1-9.
[225] W.-J. Kang, J.-R. Shiu, C.-K. Cheng, J.-S. Lai, H.-W. Tsao, and T.-S. Kuo, “The application of cepstral coefficients and maximum likelihood method in EMG pattern recognition [movements classification],” IEEE Trans. Biomed. Eng., vol. 42, no. 8, pp. 777–785, Aug. 1995, doi: 10.1109/10.398638.
[226] P. Lukowicz, F. Hanser, C. Szubski, and W. Schobersberger, “Detecting and interpreting muscle activity with wearable force sensors,” in Proc. 4th Int. Conf., Pervasive, Dublin, Ireland, May 2006, pp. 101–116.
[227] C. Mattmann, O. Amft, H. Harms, G. Troster, and F. Clemens, “Recognizing upper body postures using textile strain sensors,” in Proc. 11th IEEE Int. Symp. Wearable Comput., Boston, MA, USA, Oct. 2007, pp. 1–3.
[228] S. J. Morris and J. A. Paradiso, “Shoe-integrated sensor system for wireless gait analysis and real-time feedback,” in Proc. 2nd Joint 24th Annu. Conf. Annu. Fall Meeting Biomed. Eng. Soc., Eng. Med. Biol., Houston, TX, USA, Oct. 2002, pp. 2468–2469.
[229] L. Liao, D. Fox, and H. Kautz, “Location-based activity recognition using relational Markov networks,” Presented at the 19th Int. Joint Conf. Artif. Intell., Jul. 2005, pp. 773–778.
[230] J. Krumm and E. Horvitz, “Predestination: Inferring destinations from partial trajectories,” Presented at the 8th Int. Conf. Ubiquitous Comput., Orange County, CA, USA, Sep. 2006.
[231] G. Schindler, C. Metzger, and T. Starner, “A wearable interface for topological mapping and localization in indoor environments,” Presented at the 2nd Int. Conf. Location- Context-Awareness, Dublin, Ireland, May 2006.
[232] A. Cassinelli, C. Reynolds, and M. Ishikawa, “Augmenting spatial awareness with haptic radar,” in Proc. 10th IEEE Int. Symp. Wearable
Comput., Montreux, Switzerland, Oct. 2006, pp. 61–64.
[212] O. Amft, H. Junker, and G. Troster, “Detection of eating and drinking
arm gestures using inertial body-worn sensors,” in Proc. 9th IEEE Int. [233] V. E. Kosmidou and L. J. Hadjileontiadis, “Sign language recognition
Symp. Wearable Comput., Osaka, Japan, Oct. 2005, pp. 160–163. using intrinsic-mode sample entropy on sEMG and accelerometer data,”
IEEE Trans. Biomed. Eng., vol. 56, no. 12, pp. 2879–2890, Dec. 2009,
[213] A. Bulling and D. Roggen, “Recognition of visual memory recall
doi: 10.1109/TBME.2009.2013200.
processes using eye movement analysis,” in Proc. 13th Int. Conf.
[234] V. E. Kosmidou, L. J. Hadjileontiadis, and S. M. Panas, “Evaluation of
Ubiquitous Comput., Beijing, China, Sep. 2011, pp. 1–4.
surface EMG features for the recognition of American sign language
[214] J. Lester, T. Choudhury, and G. Borriello, “A practical approach to gestures,” in Proc. Int. Conf. IEEE Eng. Med. Biol. Soc., New York,
recognizing physical activities,” Presented at the Int. Conf. Pervasive NY, USA, Aug. 2006, pp. 6197–200.
Comput., Dublin, Ireland, May 2006. [235] Q. Chen, A. El-Sawah, C. Joslin, and N. D. Georganas, “A dynamic
[215] T. Starner, J. Weaver, and A. Pentland, “A wearable computer based gesture interface for virtual environments based on hidden Markov
American sign language recognizer,” in Dig. Papers, 1st Int. Symp. models,” in Proc. IREE Int. Worksho Haptic Audio Vis. Environments
Wearable Comput., Cambridge, MA, USA, Oct. 1997, pp. 241–250. Appl., Ottawa, ON, Canada, 2005, pp. 1–3.
[216] T. Westeyn, K. Vadas, X. Bian, T. Starner, and G. D. Abowd, “Recog- [236] S. T. Moore, H. G. MacDougall, and W. G. Ondo, “Ambu-
nizing mimicked autistic self-stimulatory behaviors using HMMs,” in latory monitoring of freezing of gait in Parkinson’s disease,”
Proc. 9th IEEE Int. Symp. Wearable Comput., Osaka, Japan, Oct. 2005, J. Neurosci. Methods, vol. 167, no. 2, pp. 340–348, Jan. 2008, doi:
pp. 164–167. 10.1016/j.jneumeth.2007.08.023.
[217] G. Ogris, T. Stiefmeier, P. Lukowicz, and G. Troster, “Using a complex [237] J. Camps et al., “Deep learning for detecting freezing of gait
multi-modal on-body sensor system for activity spotting,” in Proc. 12th episodes in Parkinson’s disease based on accelerometers,” Presented
IEEE Int. Symp. Wearable Comput., Pittsburgh, PA, USA, Oct. 2008, at the 14th Int. Work-Conf. Artif. Neural Netw., Cadiz, Spain,
pp. 1–4. Jun. 2017.
[218] A. Godfrey, R. Conway, D. Meagher, and G. Ó. Laighin, [238] C. Azevedo Coste, B. Sijobert, R. Pissard-Gibollet, M. Pasquier,
“Direct measurement of human movement by accelerometry,” Med. B. Espiau, and C. Geny, “Detection of freezing of gait in Parkinson
Eng. Phys., vol. 30, no. 10, pp. 1364–1386, Dec. 2008, doi: disease: Preliminary results,” Sensors, vol. 14, no. 4, pp. 6819–6827,
10.1016/j.medengphy.2008.09.005. Apr. 2014, doi: 10.3390/s140406819.
15302 IEEE SENSORS JOURNAL, VOL. 23, NO. 14, 15 JULY 2023
Eghbal Foroughi Asl received the B.S. degree in electrical engineering from the Imam Khomeini International University (IKIU), Qazvin, Iran, in 2015, and the M.S. degree in electrical engineering from Shahid Beheshti University, Tehran, Iran, in 2018. He is currently pursuing the Ph.D. degree in electrical engineering with the Iran University of Science and Technology, Tehran.
His research interests include artificial intelligence, the development of wearable sensor networks, machine learning, data fusion, and Internet-of-Things (IoT)-based applications.
Mr. Foroughi Asl's awards and honors include studying at the National Organization for Development of Exceptional Talents (NODET) School for seven consecutive years and earning the title of Top Student in three consecutive courses at that school.
Saeed Ebadollahi was born in Tabriz, Iran, in 1981. He received the B.Sc. and M.Sc. degrees in electrical engineering from the School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran, in 2005 and 2008, respectively, and the Ph.D. degree in control systems from the K. N. Toosi University of Technology, Tehran, in 2012.
He joined the Department of Electrical Engineering, Iran University of Science and Technology, Tehran, in 2012, where he is currently an Assistant Professor of Control Systems Engineering. His fields of research include positioning, a broad spread of applications of sensor/data fusion, and information fusion concepts in mechatronics, robotics, estimation theory, and optimal filtering.
Prof. Ebadollahi's honors and awards include winning a silver medal in the Mathematical Olympiad and obtaining the first rank in the doctoral exam at Khajeh Nasir al-Din Tusi University.
Reza Vahidnia (Member, IEEE) was born in Tehran, Iran. He received the B.Sc. degree in electrical engineering from Tehran University, Tehran, in 2006, the M.Sc. degree in electrical engineering from Tarbiat Modares University, Tehran, in 2009, and the Ph.D. degree from Ontario Tech University, Oshawa, ON, Canada, in April 2014.
He has been a Senior Internet of Things (IoT) Specialist with Rogers Communications Inc., Toronto, ON, Canada, and TELUS, Vancouver, BC, Canada, since 2016. He is currently a Faculty Member with the Department of Electrical and Computer Engineering, British Columbia Institute of Technology (BCIT), Vancouver. His research interests include wireless communications, IoT, and signal processing.
Aliakbar Jalali (Member, IEEE) received the bachelor's degree in electronic engineering from Khajeh Nasir al-Din Tusi University, Tehran, Iran, in 1985, the master's degree in electrical control engineering from the University of Oklahoma, Norman, OK, USA, in 1989, and the Ph.D. degree and postdoctoral fellowships from West Virginia University, Morgantown, WV, USA, in 1993 and 1994, respectively.
He has been a Professor with the School of Electrical Engineering, Iran University of Science and Technology, Tehran, since 1994. Throughout his work at the Iran University of Science and Technology (IUST), Tehran, he has defined, led, and managed research and development teams in the areas of information and communication technology, the Internet of Things (IoT), 3-D printers, and control systems design and their applications. He is currently a Visiting Professor with the University of Maryland, College Park, MD, USA. He has been appointed as an Adjunct Professor at the Lane Department of Computer Science and Electrical Engineering, West Virginia University.
Prof. Jalali is a member of the Iranian Society of Instrument and Control Engineering (ISCI).