Human Activity Recognition System For Moderate Performance Microcontrollers
Abstract
There has been increasing interest in the application of artificial intelligence technologies to improve the
quality of support services in healthcare. Some constraints, such as space, infrastructure, and environmental
conditions, present challenges for human assistive devices. This paper proposed a wearable-based real-
time human activity recognition system to monitor daily activities. The classification was done directly on the
device, and the results could be checked over the internet. The accelerometer data collection application was
developed on the device with a sampling frequency of 20Hz, and the random forest algorithm was embedded
in the hardware. To improve the accuracy of the recognition system, a feature vector of 31 dimensions was
calculated and used as an input per time window. Besides, the dynamic window method applied by the
proposed model allowed us to change the data sampling time (1-3 seconds) and increase the performance
of activity classification. The experimental results showed that the proposed system could classify 13 activities
with a high accuracy of 99.4%, and the rate of correctly classified activities in the real-time experiment was 96.1%. This work is promising for
healthcare because of the convenience and simplicity of wearables.
data obtained from inertial sensors to monitor daily activities [7–12].

Internet of things (IoT) applications [13–15] in the field of predicting human activities were growing strongly. Human activity recognition (HAR) [16, 17] has attracted increasing interest in many works due to its applicability in practical applications such as sports tracking systems [18] and systems for monitoring and preventing sedentary conditions at work in the office [19]. Moreover, sensors integrated with wearable devices have many advantages, such as cost savings, fast response time, and low energy consumption, making them suitable for wearable applications [7, 12, 20–22].

Sensor placement (Fig 1) greatly affected the performance of the recognition model [6, 11, 12, 15, 19, 20, 23, 24]. Sensors must be attached at an appropriate location on the body, which is determined by the activities that require monitoring. Smart watches are usually worn on the left wrist, so their data is often less related to activities that do not use the hand, such as sitting, standing, and lying down. Besides, sensors attached to the waist help with monitoring overall and major body movements, but data collected at the waist is often less sensitive to activities involving the use of the hands, such as writing, holding, and grasping. Therefore, recent works [25–28] have combined many sensors in different locations to increase the amount of information collected about activities. However, carrying many wearable devices on the body could cause discomfort. In addition, wearable devices needed to be compact in size and light in weight to create a comfortable feeling for the users, especially for the elderly or those recovering from an accident. These devices were worn on the body during treatment to monitor the progress of daily activities and detect abnormal activities, for example, a sudden change in posture such as lying down while walking. Not only that, healthy people could use wearable devices to monitor sports cycles, monitor living habits, and determine energy consumption for activities.

The performance of the recognition model improved substantially when machine learning was utilized to identify human activities [3, 6, 7, 10–12, 15, 19, 24, 29]. While the devices' integrated processors were low-power microcontrollers, they had memory and processing speed limitations. Besides, machine learning recognition models must be optimised and compiled into the C/C++ language before being embedded in the microcontroller. Thus, choosing a machine learning algorithm that can be embedded on microcontrollers was a big challenge. Embedded devices typically have limited memory, processing power, power capacity, and more. Since embedded systems were designed for specific uses, there were limited resources left for machine learning models. To solve this problem, we built a human activity recognition model based on a machine learning algorithm with low complexity, small size, and the ability to be embedded in microcontrollers.

Machine learning algorithms (Fig 2) rely on past experience, observations, and data to predict future outcomes instead of just following pre-programmed rules. In general, there were four main approaches in the field of machine learning: 1) Supervised learning [30, 31]; 2) Unsupervised learning [32]; 3) Reinforcement learning [33]; 4) Semi-supervised learning [34]. With HAR, supervised learning algorithms would predict the label (activity) of a new descriptive dataset based on the correlation between that label and the previously known descriptive data. In mathematics, a set of input data had the form χ = {x1, x2, ..., xN} and a set of labels had the form γ = {y1, y2, ..., yN}, where xi was a data descriptor vector for the label yi with i = 1, 2, ..., N. Data pairs of the form (xi, yi) ∈ χ × γ were divided into training and testing datasets. On the training dataset, a correlation function was calculated to map the elements of the set χ to a corresponding element of the set γ.
From there, when new data x was available, this function f would predict the corresponding label y = f(x). However, this function did not exist in practice, so the function f needed to be built well enough for the best classification performance, so that y ≈ f(x).
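This requirement can be stated compactly as an empirical-risk minimisation problem (a clarifying sketch, not taken verbatim from the paper; the loss function L is an assumption, since the text only asks that y ≈ f(x)):

    % illustrative formulation of supervised HAR
    \mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N} \subset \chi \times \gamma,
        \qquad \mathcal{D} = \mathcal{D}_{\text{train}} \cup \mathcal{D}_{\text{test}},
    f^{*} = \arg\min_{f} \frac{1}{|\mathcal{D}_{\text{train}}|}
        \sum_{(x_i, y_i) \in \mathcal{D}_{\text{train}}} L\bigl(f(x_i), y_i\bigr),
        \qquad \hat{y} = f^{*}(x) \ \text{for a new feature vector } x.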
Building a good HAR model was influenced by many factors, including the quality of the data describing the action, the size of the data window, the features used, the machine learning algorithm applied, and the complexity of the activities.

In addition, limited memory on wearables made it difficult to process large amounts of data and an increased number of activities. By using the dynamic windowing approach, our study has concentrated on enhancing data quality to capture the timeframe of state-change operations. In this study, a device was designed like a belt, which can be worn on the waist in order to classify many activities and cause less discomfort for users. First, we collected accelerometer data for each activity and extracted a set of 31 time-domain features per data window. Next, the sets of features were divided into training and testing sets at a rate of 75/25. The random forest algorithm was applied to classify 13 routine activities. Finally, the appropriate model would be installed on a microcontroller with moderate performance (ESP32). Volunteers supported us in completing the research to assess the usefulness of the dynamic window approach. Each device was able to transmit activity data to the data server via wifi or the internet. Through an application called "Human Activity Classification" that we provide on the Google Play store, users can track the activities of volunteers. The activity information was logged on the data server and backed up on the integrated memory card of each device. Each volunteer would wear a belt-mounted device. In this way, an internet-based remote activity monitoring system was developed, and we could evaluate the classification accuracy of wearable devices in a real-time environment. As a result, the experimental results were evaluated and compared with a number of related works on the classification of real-time HAR [12, 22, 35].
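As an illustration of this pipeline, the following Python sketch trains a random forest on pre-computed 31-dimensional feature vectors with a 75/25 split. The file names and the forest hyper-parameters (n_estimators, max_depth) are placeholders, not values taken from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # One 31-dimensional time-domain feature vector per data window (hypothetical files).
    X = np.load("features_31d.npy")
    y = np.load("activity_labels.npy")   # one of the 13 activity labels per window

    # 75/25 training/testing split, as described above.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=50, max_depth=10, random_state=42)
    clf.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))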
2. Related works

Various machine learning algorithms have been applied to build activity recognition models in many recent studies and achieved impressive results. Research by Mannini et al. [25] suggested using support vector machine (SVM) classifiers and activity data collected from sensors located at the ankle and wrist. The obtained results showed that the data collected at the ankle was better (by 10%) than the data collected at the wrist. Unlike them, Balli and co-authors [29] combined information including footsteps, gyroscope acceleration, and heart rate to recognize human activities. The features were extracted using the principal component analysis (PCA) method, and the classification algorithms used were C4.5, random forest (RF), K-nearest neighbours (KNN), and SVM. Both the k-nearest neighbors and Naive Bayes classifiers were compared on accelerometers and gyroscopes used separately by Aiguo et al. [36]. Their experiments showed that using both gyroscopes and accelerometers together improves categorization accuracy. Naive Bayes achieved a better overall accuracy (90.1% and 87.8%) than KNN. A large number of sensors could improve accuracy, but it would be impractical to have to wear them all the time. Adding more sensors would also make the system more expensive. Another study by Biagetti et al. [8] proposed a human activity recognition system consisting of wireless sensor network nodes (biological and accelerometer) whose data was transmitted to a computer for analysis. The results when applying the KNN classifier achieved an overall accuracy of 85.7%. In the study [35], Yang and Zhang propose a wearable activity classification system that resembles a wristwatch and is worn on the hand. The time and frequency domain characteristics of the accelerometer data are extracted, followed by the use of the decision tree method. On the STM32L low-power microcontroller, their model can be executed in real time. Despite the small number of activities (walking, sitting, jumping, bicycling, and jogging), the accuracy is below 90%. Five biaxial accelerometers were worn in a variety of positions by the subjects of Bao et al. [37], including the hip, wrist, arm, ankle, and thigh. 20 activities were categorized with an accuracy of 84% by the decision tree classifier. The increased number of sensors, the moving data, and the accelerometer set to a designated orientation were the restrictions, though. Many scientists in the last few years have looked into the possibility of employing a single accelerometer to collect the signal necessary for activity recognition [38]. Piyush Gupta et al. [38] used a belt-worn 3-axis accelerometer to construct an activity identification and feature selection system. Using Naive Bayes and KNN, they observed that wrapper-based feature selection was superior to filter-based selection. Data collection was limited to seven volunteers, all young (22–28). Thu and co-authors [12] built a real-time recognition system for six activities on low-performance microcontrollers. Their system used two features (mean and standard deviation) in a combined decision tree algorithm. The result achieved, above 92%, was quite good, but this result was lower than the result achieved on their collected data (99%). The difference came from the data itself and the limited number of activities, for example, when switching from a static state (lying) to a dynamic state (sitting up) or from a dynamic state (sitting down) to a stationary state
3. System model
3.1. Wearable device
The posture and velocity of human movements change frequently. Different activities produced different
3-axis acceleration values according to where the
accelerometer was placed on the body [12, 23, 35]. In
this research, accelerometer signals from the MPU6050
sensor were collected. Fig 3 shows the block diagram
of the proposed system. The inertial sensor used was
the MPU6050, capable of measuring 6 axes, including 3 axes of acceleration and 3 axes of gyroscope.

Fig 3. Structure of the wirelessly connected wearable device.

The
ESP32 was a power-saving microcontroller circuit
that integrated Wi-Fi and Bluetooth. Additionally, the
ESP32 was equipped with a 16MB flash memory, which
was widely used in IoT applications. Besides, the ESP32 also supported embedding compact machine learning models through tools such as TensorFlow and MicroPython. Therefore, this
was the ideal choice when integrating machine learning
algorithms such as random forest, support vector
machine, et cetera. In this work, the central processor
(ESP32) used I2C communication (inter-integrated
circuit) with the MPU6050 and the DS1307 (a real-time clock IC) to collect acceleration data over time at a
sampling rate of 20 Hz (20 samples per second).
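For illustration, a minimal MicroPython-style sketch of this acquisition loop is given below. The I2C pin numbers, the ±2g full-scale setting, and the buffer handling are assumptions; the register addresses (0x6B, 0x3B) follow the MPU6050 datasheet.

    from machine import I2C, Pin
    import struct
    import time

    MPU_ADDR = 0x68                              # default MPU6050 I2C address
    i2c = I2C(0, scl=Pin(22), sda=Pin(21), freq=400000)
    i2c.writeto_mem(MPU_ADDR, 0x6B, b'\x00')     # PWR_MGMT_1: wake the sensor

    def read_accel_g():
        # ACCEL_XOUT_H starts at register 0x3B; six bytes hold X, Y, Z as big-endian int16
        raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 6)
        ax, ay, az = struct.unpack('>hhh', raw)
        return ax / 16384, ay / 16384, az / 16384   # 16384 LSB/g at the +/-2g range

    PERIOD_MS = 50                               # 20 Hz sampling, as in the paper
    while True:
        t0 = time.ticks_ms()
        sample = read_accel_g()
        # append the sample to the current data window / write it to the SD card here
        time.sleep_ms(max(0, PERIOD_MS - time.ticks_diff(time.ticks_ms(), t0)))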
Fig 4. A voltage divider circuit diagram for battery power calculation.

A human activity usually lasts from 2 seconds to 3 seconds, and this time is longer in the elderly or people who have just recovered from an accident.
Besides, a sampling frequency of 50 Hz or more did not give a better result [36], and the performance of the device depended on the accuracy achieved when combining the classification algorithm with the data features. Acceleration values were calculated according to formula (1). The power source of the wearable device was a 3.7 V, 2000 mAh lithium battery, so the energy it could provide was 3.7 V × 2000 mAh = 7400 mWh.

3.2. Energy consumption

The average power consumption of the device in each working hour was about 60 mWh. The device could therefore work for up to 7400 / 60 ≈ 123.33 hours, which is equivalent to about 5.14 days.

In order to limit power depletion during operation, the device needed to be charged after a period of continuous operation. The alarm information was sent to users via the mobile application (Fig 5 left), and the alarm sounds from the device. Because the battery voltage (VBAT) was between 3.7 V and 4.2 V but the microcontroller's maximum withstand voltage was 3.3 V, the battery voltage signal was passed through the voltage divider circuit (Fig 4). Next, the capacity of the battery was calculated by formula (2) as a percentage. As a result, the battery reached 100% when the battery voltage was 4.2 V. The device stopped working when the battery voltage dropped to 3.7 V, or 0%. On the phone application, the capacity warning icon indicated that the device needed to be charged when the battery capacity was below 10%.
A_i = ((Sam_i / 1024) × R − O_i) / S_i    (1)

Per_pin = ((adc × V_ref / (2^re − 1)) × (R_4 + R_5) / R_5 − 3.7) × 100% / 0.5    (2)
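A hedged Python rendering of formula (2) is shown below; the ADC resolution and the divider resistor values (r4, r5) are illustrative assumptions, since this excerpt does not list them.

    def battery_percent(adc_raw: int, adc_bits: int = 12, v_ref: float = 3.3,
                        r4: float = 100_000.0, r5: float = 100_000.0) -> float:
        # Illustrative values only; the paper's resistor and reference values are not given here.
        v_adc = adc_raw * v_ref / (2 ** adc_bits - 1)   # voltage seen at the ADC pin
        v_bat = v_adc * (r4 + r5) / r5                  # undo the divider to recover VBAT
        percent = (v_bat - 3.7) * 100.0 / 0.5           # map the 3.7-4.2 V range onto 0-100 %
        return min(100.0, max(0.0, percent))            # clamp to a sane range

    # Example: a 12-bit reading of 2600 with equal divider resistors gives
    # VBAT of roughly 4.19 V, i.e. about 98 % charge.
    print(round(battery_percent(2600), 1))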
Fig 7. Rising after pre-processing.

Tab 1. The 13 activities and the duration of their data windows.
Activity | Description | Duration time (s)
Walking | Walking normally, moving slowly, walking forward and backward. | 2
Jogging | Jogging and running. | 2
Squatting | Squatting down then standing up. | 5
Bending | Bending about 90 degrees. | 1
Bendp | Bending to pick up an object on the floor. | 4
Limp | Walking with a limp. | 2
Tripover | Tripping or stumbling but then standing up and walking normally. | 2
Sit-down | Sitting down onto a chair, a sofa, or a bed. | 3
Lie-down | Sitting to lying on the bed. | 3
Rising | From lying to sitting. | 4
Lying | Lying horizontally, can turn around horizontally. | 1
Standing | Standing up and standing straight, coughing or sneezing while standing. | 1
Sitting | Sitting motionless, back against the chair and body slightly tilted back. | 1

The public dataset contained the data for activities that follow the same process (static-dynamic-static) but including both repetitive and non-repeating activities. For example, lying down (lie-down) was a non-repetitive activity: volunteers had to rise and then do it again, similar to activities like sit-down, rising, tripover, and squatting. In contrast, activities such as walking, jogging, and limp could be performed continuously.
Therefore, in order to improve the classification
performance, we classified three activities in a static
state, including sitting, standing, and lying. The data
for these activities was the data areas separated from
the data processing of non-repeating activities. For
example, the descriptive data for two activities sitting
and lying were the data areas in the static state
before and after the lie-down activity was performed.
Similarly, the data of standing was the signal area
before the sit-down activity took place. However, the
timing of the activities was not exactly the same. For
example, the squatting activity had a duration of up
to 5 seconds, while the transitions such as lie-down,
rising, and sit-down had a duration of 3 seconds. Unlike
them, activities that take place when a person moves, such as jogging, walking, tripover, and limp, only need 2 seconds to be classified. Therefore, we proposed using different-sized windows for each activity in order to provide better activity information. The 13 activities are presented in Tab 1.
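The sketch below illustrates this dynamic-window idea in Python: the window length is looked up per activity (Tab 1) and a vector of time-domain features is computed per window. The particular statistics listed are placeholders; the paper's full 31-feature set is not reproduced in this excerpt.

    import numpy as np

    WINDOW_SECONDS = {"walking": 2, "jogging": 2, "squatting": 5, "bending": 1,
                      "bendp": 4, "limp": 2, "tripover": 2, "sit-down": 3,
                      "lie-down": 3, "rising": 4, "lying": 1, "standing": 1, "sitting": 1}
    FS = 20  # sampling frequency in Hz

    def extract_features(window: np.ndarray) -> np.ndarray:
        """window: (n_samples, 3) acceleration in g; returns a subset of time-domain features."""
        feats = []
        for axis in range(3):
            a = window[:, axis]
            feats += [a.mean(), a.std(), a.min(), a.max(),
                      np.median(a), np.percentile(a, 75) - np.percentile(a, 25)]
        mag = np.linalg.norm(window, axis=1)          # signal magnitude per sample
        feats += [mag.mean(), mag.std(), mag.max() - mag.min()]
        return np.asarray(feats)                      # (the paper uses 31 such features in total)

    def windows_to_features(samples: np.ndarray, activity: str) -> list:
        """Split a recording into activity-specific windows and featurise each one."""
        size = WINDOW_SECONDS[activity] * FS
        return [extract_features(samples[i:i + size])
                for i in range(0, len(samples) - size + 1, size)]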
Fig 8. The wearable device (red) was attached to the waist and secured by an elasticated strap. The x-axis of the accelerometer was parallel to the earth's gravity.

Private dataset. The private dataset was built based on a group of volunteers wearing waist data collection devices (Fig 8) and performing the following activities: walking, jogging, squatting, bending, bendp, limp, tripover, sit-down, lie-down, rising, lying, standing, and sitting. This group consisted of 20 students, including 12 males and 8 females. The recorded data had the following format: activity, timestamp, sensor, values of x, y, and z, and frequency. Activity logger files of type TXT were named with the activity and the time when it started to create the file. When performing data collection, activities such as sit-down, lie-down, bendp, squatting, and rising were performed following an unwritten rule: nothing before starting for a short time, doing the activity, nothing after finishing. For example, rising
Fig 12. Activities might be easily recognized by the use of features. There was some confusion between the various activities in the moving state, though.

Fig 13. The decision tree would build relationships between features and activities.

Fig 14. Many single decision trees are built that will work with random data from a dataset. Thus, the results of object prediction were aggregated based on the majority rule from these trees.
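Fig 14 describes the forest as a set of single decision trees whose predictions are combined by majority vote. Because the introduction notes that the model must be compiled into C/C++ before being embedded in the microcontroller, one possible approach (an illustrative sketch, not the paper's published exporter) is to emit each trained scikit-learn tree as nested if/else statements:

    from sklearn.tree import DecisionTreeClassifier

    def tree_to_c(tree: DecisionTreeClassifier, fn_name: str = "predict_tree") -> str:
        """Emit one tree as a C function over a feature array f; names are hypothetical."""
        t = tree.tree_
        lines = [f"int {fn_name}(const float *f) {{"]

        def emit(node: int, depth: int) -> None:
            pad = "  " * depth
            if t.children_left[node] == -1:                 # leaf node: return the class index
                lines.append(f"{pad}return {int(t.value[node].argmax())};")
                return
            lines.append(f"{pad}if (f[{t.feature[node]}] <= {t.threshold[node]:.6f}f) {{")
            emit(t.children_left[node], depth + 1)
            lines.append(f"{pad}}} else {{")
            emit(t.children_right[node], depth + 1)
            lines.append(f"{pad}}}")

        emit(0, 1)
        lines.append("}")
        return "\n".join(lines)

    # For a forest, one such function is emitted per tree and the firmware returns
    # the label that receives the most votes across all trees (majority rule, Fig 14).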
                 Human activity
                 Sitting   Standing   Bending
Predicted
  Sitting          100        2          1
  Standing           4       50          2
  Bending            3        0         70
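As an illustration, the per-activity indexes reported in Tabs 4-7 (ACC, SPE, PPV, NPV) can be derived from such a confusion matrix as sketched below. Treating rows as predictions and columns as actual activities follows the small example above, and computing SPE as the per-class sensitivity (recall) is an assumption based on how the text later describes SPE; the exact definitions are not given in this excerpt.

    import numpy as np

    labels = ["Sitting", "Standing", "Bending"]
    cm = np.array([[100, 2, 1],     # predicted Sitting
                   [4, 50, 2],      # predicted Standing
                   [3, 0, 70]])     # predicted Bending

    total = cm.sum()
    for k, name in enumerate(labels):
        tp = cm[k, k]
        fp = cm[k, :].sum() - tp      # predicted as k but actually another class
        fn = cm[:, k].sum() - tp      # actually k but predicted as another class
        tn = total - tp - fp - fn
        acc = (tp + tn) / total
        spe = tp / (tp + fn)          # sensitivity/recall of class k (SPE in the paper's wording)
        ppv = tp / (tp + fp)          # precision
        npv = tn / (tn + fn)
        print(f"{name}: ACC={acc:.3f} SPE={spe:.3f} PPV={ppv:.3f} NPV={npv:.3f}")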
Tab 4. Evaluation of applying the RF algorithm on the public dataset.

Tab 5. Evaluation of recognition models with 5-fold cross-validation.

Tab 6. Result of applying the RF on the private dataset.
Activity | ACC(%) | SPE(%) | PPV(%) | NPV(%)
Walking | 100 | 100 | 100 | 100
Jogging | 100 | 100 | 100 | 100
Squatting | 99.7 | 96.0 | 100 | 99.7
Bending | 100 | 100 | 100 | 100
Bendp | 100 | 100 | 100 | 100
Limp | 100 | 100 | 100 | 100
Tripover | 100 | 100 | 100 | 100
Sit-down | 100 | 100 | 100 | 100
Lie-down | 100 | 100 | 100 | 100
Rising | 100 | 100 | 100 | 100
Standing | 100 | 100 | 100 | 100
Sitting | 100 | 100 | 100 | 100
Lying | 99.7 | 100 | 96.6 | 100
All | 100 | 99.7 | 99.7 | 100

Tab 7. Evaluation of experimental result.
Activity | ACC(%) | SPE(%) | PPV(%) | NPV(%)
Walking | 98.8 | 98.4 | 89.6 | 99.8
Jogging | 99.3 | 95.1 | 96.7 | 99.5
Squatting | 99.4 | 93.4 | 100 | 99.4
Bending | 99.6 | 100 | 94.3 | 100
Bendp | 99.3 | 93.0 | 98.1 | 99.4
Limp | 99.4 | 96.6 | 96.6 | 99.7
Tripover | 99.0 | 90.6 | 96.0 | 99.2
Sit-down | 99.8 | 100 | 98.0 | 100
Lie-down | 99.7 | 96.2 | 100 | 99.7
Rising | 99.9 | 100 | 97.9 | 100
Standing | 99.3 | 94.5 | 96.3 | 99.5
Sitting | 99.1 | 96.5 | 93.2 | 99.7
Lying | 99.6 | 100 | 94.4 | 100
All | 99.4 | 96.5 | 96.2 | 99.7
Performance evaluation on the private dataset. The private dataset gave a result of 99.7%. True to our previous analysis, the limited-time (3 seconds) method of collecting activity data helped our data to be highly reliable. The negative impacts on classification results were minimised. These negative impacts were caused by three reasons: 1) another activity interfered when an activity took place over a long period of time; 2) the time difference of an activity between volunteers; 3) the data sampling process had not been well controlled, leading to an excess or lack of activity data. These issues especially affect transitions such as rising, lie-down, sit-down, tripover, bending to pick up (bendp), and squatting. If the timing of these activities could not be known, the data describing them threatened to degrade the performance of recognition models. Fig 18 shows that most of the actions were distinguished with a 100% accuracy rate, except for squatting (24/25). However, the classification results obtained were impressive. The problems encountered when classifying on the public dataset had almost been solved.

The classification results on the private dataset showed the strong performance of the proposed model. The evaluation indexes of ACC, SPE, PPV, and NPV all reached over 96% (Tab 6). Except for squatting and lying, all activities had indicators reaching 100%. These were higher than the results for squatting, with ACC = 99.7%, SPE = 96%, PPV = 100%, NPV = 99.7%, and lying, with ACC = 99.7%, SPE = 100%, PPV = 96.6%, NPV = 100%. Overall, the results of the recognition model evaluation on this dataset were impressive, with ACC = 100% and NPV = 100%.

The results when applying 5-fold cross-validation on this dataset were also somewhat different from those obtained on the public dataset. The proposed model got the best results with 99.1%, but the standard deviation increased to 0.6%. Other recognition models such as GBDT, SVM, KNN, and DT gave results of 98%, 98.4%, 97.4%, and 97%, respectively. Similar to the proposed model, the recognition models applying algorithms such as DT, GBDT, and KNN had their standard deviation increase by 0.8%. In contrast, the model applying the SVM algorithm had its standard deviation reduced to 0.5%. However, the resulting difference of 0.5%-0.8% was not significant. Overall, the results of the proposed model evaluation on both datasets were good.

5.2. Experimental evaluation

The experimental process was conducted on volunteers for a period of 30 seconds to 60 seconds, and the sampling frequency was 20Hz. Volunteers wore the suggested wearables and performed all 13 activities according to a predefined scenario. Transitions such as sit-down, lie-down, rising, tripover, and bendp were limited to a maximum execution time of 3 seconds. The remaining activities took place in 10-15 seconds. A portion of the experimental procedure for finding mixed activity sequences is shown in Fig 19. Since we concentrated on recognizing when activities were started in a dynamic state, activities could be discriminated more quickly. When tested with the real sequence of activities, the device was still able to detect the activity with high accuracy even though the sampling method for the activities was uniform and no additional activities were present.

Overall, the proposed model reached 96.1%. The results of classification versus actual observation were presented as a confusion matrix as shown in Fig 20. Besides, transition activities were mistaken for each other when they were performed very slowly. For example, 2/52 lie-down was mistaken for lying, and
1/49 rising was mistaken for lying or sitting. This was similar to squatting, where 1/61 squatting was mistaken for rising or lie-down. However, the rate of occurrence of these errors was not significant. In general, the proposed model gave good classification results. The experimental result helped to evaluate the proposed model's performance when the input was real-time data (Tab 7). The classification accuracy with activities (ACC) and the correct prediction rate of non-occurrence actions (NPV) were both above 99%. The lowest correct prediction rate for actions (PPV) was 89.6%, for the walking activity, while this reached 100% for the squatting and lie-down activities. A sensitivity (SPE) of over 90% showed that it was feasible to apply dynamic windowing methods to detect activities, especially state transition activities. For example, the bendp and lie-down activities had corresponding sensitivity indexes of SPE = 93% and SPE = 96.2%. Moreover, the rising and sit-down activities were both SPE = 100%. In general, the proposed model had good evaluation indicators in reality, with ACC = 99.4%, SPE = 96.5%, PPV = 96.2%, NPV = 99.7%. These overall evaluation indicators had a negligible difference from those calculated when evaluating our model on the public and private datasets. In addition, when conducting experiments, the proposed model achieved relatively uniform indexes of over 90%.

The classification performance of our model has significantly improved between the public dataset and real-time data. For example, squatting and tripover increased from 65% and 80.9% to 93.4% and 90.6%, respectively. This result was possible thanks to the sampling process having been improved to increase the quality of the activities and to applying the dynamic window method to capture the activity process.

6. Discussion

This work attempted to develop a recognition model for a real-time application that would recognize human activity. The dynamic window technique was merged and optimized with the applicable algorithm (random forest). As a result, the human activity recognition system is much more accurate. The result showed that the ability to classify real-time activities on wearables was good at 96.1%, although it was slightly lower than the evaluation results on public and private data. The first reason was the dispersion among feature vectors in real-time data. Meanwhile, with the public and private datasets, activities were closely monitored and feature quality was enhanced. Additionally, because activities occur in sequential order in daily life, the acquired data may be more homogeneous than real-time data. In reality, training and testing datasets derived from real-world scenarios may differ significantly. The existence of so many emergent scenarios made it hard to gather an exhaustive set of training data from all types of activity. As a result, the samples of test data would be different from those of training data.

The recognition model was unable to perform well if the training dataset was insufficient. Therefore, we tried to collect activity data for a limited time and surveyed many volunteers. In addition, it was a significant task to classify 13 activities. Previous studies investigated some repetitive activities such as sitting, lying, standing, walking, walking upstairs and walking downstairs as in [43]; walking, jogging, upstairs, downstairs, sitting, standing in [20, 44]. With these activities, the application of static windows gave good results [41, 45, 46]. However, with state transition activities, the static window had a major drawback: it was not able to determine the activity time because the time of these activities was not the same.

Many related works in the field of HAR have been interested in real-time classification capabilities and the application of different classification algorithms (Tab 8). Thu et al. [12] applied 2 features (mean and standard deviation) to a 3-axis accelerometer on each time window of size 6 seconds and combined them with a decision tree algorithm. The result when applied experimentally was 92%, and the accuracy was 95.2%. Their device classified 6 activities, including sitting, standing, lying, walking, and jogging, in real-time. A three-level decision tree algorithm (DT) was built as a recognition model suitable for low-performance microcontrollers, but the classified activities were repetitive over time and had low complexity. Besides, the time of 6 seconds for each classification was too long, leading to a large delay when changing activities. As the number of activities increased and more complex ones were added, it was difficult for their model to achieve high accuracy because of memory limitations and the features used. Similarly, in Yang's study [35], an STM32 microcontroller was used to embed a real-time recognition model using the decision tree algorithm (C4.5). In their study, they used up to 16 features from 6 measures, including mean, magnitude of the acceleration of the three axes, variance, cumulative, skewness, and coefficients. The classification accuracy for five activities, including sitting, walking, jumping, jogging, and cycling, was 90% on average. This established that the DT algorithm could not provide perfect accuracy, as their idea was that data acquired from several people would be trained and analyzed jointly. Consequently, it is possible that their algorithm is not optimal for the individual.

Embedding a recognition model on low-performance microcontrollers required optimization of the machine learning algorithms, the number of features and the time window size, so the results were not good [12, 35]. In particular, the static window method in
Tab 8. Comparison with several related studies.
Comparison | Accuracy (%) | Experimental result (%) | Model | Features
Thu [12] | 95.2 | 92 | DT | 6
Yang [35] | 90.1 | 90.0 | DT | 16
Suto [22] | - | 88.8 | ANN | 15
Ours | 99.4 | 96.1 | RF | 31

model had high performance. Research results were shared at "https://github.com/daohieuictu/HAR-realtime-random-forest". To increase the recognition model's accuracy, we will combine different algorithms when developing it in the future. Besides, we intend to study a support system for firefighters with complex behaviors (rolling, crawling) and survival states (falling, unconscious).
Low Performance Microcontrollers," Research and Development on Information and Communication Technology, vol. 12/2021, no. 2, pp. 69–76, 2021.

[8] G. Biagetti, P. Crippa, L. Falaschetti, S. Orcioni, and C. Turchetti, "Human activity monitoring system based on wearable sEMG and accelerometer wireless sensor nodes," BioMedical Engineering Online, vol. 17, no. S1, pp. 1–18, 2018. [Online]. Available: https://doi.org/10.1186/s12938-018-0567-4

[9] S. Chung, J. Lim, K. J. Noh, G. Kim, and H. Jeong, "Sensor data acquisition and multimodal sensor fusion for human activity recognition using deep learning," in Sensors (Switzerland), vol. 19, no. 7, 2019.

[10] G. Şengül, M. Karakaya, S. Misra, O. O. Abayomi-Alli, and R. Damaševičius, "Deep learning based fall detection using smartwatches for healthcare applications," Biomedical Signal Processing and Control, vol. 71, p. 103242, 2022.

[11] Y. Zhao, R. Yang, G. Chevalier, X. Xu, and Z. Zhang, "Deep Residual Bidir-LSTM for Human Activity Recognition Using Wearable Sensors," Mathematical Problems in Engineering, vol. 2018, 2018.

[12] N. T. Thu, T.-H. Dao, B. Q. Bao, D.-N. Tran, P. V. Thanh, and D.-T. Tran, "Real-Time Wearable-Device Based Activity Recognition Using Machine Learning Methods," International Journal of Computing and Digital Systems, vol. 12, no. 1, pp. 321–333, 2022. [Online]. Available: https://dx.doi.org/10.12785/ijcds/120126

[13] D. N. Tran, T. N. Nguyen, P. C. P. Khanh, and D. T. Tran, "An IoT-based Design Using Accelerometers in Animal Behavior Recognition Systems," IEEE Sensors Journal, vol. 12, no. 18, pp. 17515–17528, 2021.

[14] P. C. P. Khanh, D.-T. Tran, V. T. Duong, N. H. Thinh, and D.-N. Tran, "The new design of cows' behavior classifier based on acceleration data and proposed feature set," Mathematical Biosciences and Engineering, vol. 17, no. 4, pp. 2760–2780, 2020. [Online]. Available: https://www.aimspress.com/article/doi/10.3934/mbe.2020151

[15] V. Bianchi, M. Bassoli, G. Lombardo, P. Fornacciari, M. Mordonini, and I. De Munari, "IoT Wearable Sensor and Deep Learning: An Integrated Approach for Personalized Human Activity Recognition in a Smart Home Environment," IEEE Internet of Things Journal, vol. 6, no. 5, pp. 8553–8562, 2019.

[16] N. Damodaran, E. Haruni, M. Kokhkharova, and J. Schäfer, "Device free human activity and fall recognition using WiFi channel state information (CSI)," CCF Transactions on Pervasive Computing and Interaction, vol. 2, no. 1, pp. 1–17, 2020. [Online]. Available: https://doi.org/10.1007/s42486-020-00027-1

[17] P. Kumar and S. Chauhan, "RETRACTED ARTICLE: Human activity recognition with deep learning: overview, challenges and possibilities," CCF Transactions on Pervasive Computing and Interaction, vol. 3, no. 3, p. 339, 2021. [Online]. Available: https://doi.org/10.1007/s42486-021-00063-5

[18] J. Qi, P. Yang, M. Hanneghan, S. Tang, and B. Zhou, "A hybrid hierarchical framework for gym physical activity recognition and measurement using wearable sensors," IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1384–1393, 2019.

[19] P. Casale, O. Pujol, and P. Radeva, "Human activity recognition from accelerometer data using a wearable device," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 6669 LNCS, 2011, pp. 289–296.

[20] M. Milenkoski, K. Trivodaliev, S. Kalajdziski, M. Jovanov, and B. R. Stojkoska, "Real time human activity recognition on smartphones using LSTM networks," 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO 2018 - Proceedings, pp. 1126–1131, 2018.

[21] P. Van Thanh, D. T. Tran, D. C. Nguyen, N. Duc Anh, D. Nhu Dinh, S. El-Rabaie, and K. Sandrasegaran, "Development of a Real-Time, Simple and High-Accuracy Fall Detection System for Elderly Using 3-DOF Accelerometers," Arabian Journal for Science and Engineering, vol. 44, no. 4, pp. 3329–3342, 2019. [Online]. Available: https://doi.org/10.1007/s13369-018-3496-4

[22] J. Suto, S. Oniga, C. Lung, and I. Orha, "Comparison of offline and real-time human activity recognition results using machine learning techniques," Neural Computing and Applications, vol. 32, no. 20, pp. 15673–15686, 2020. [Online]. Available: https://doi.org/10.1007/s00521-018-3437-x

[23] A. T. Özdemir and B. Barshan, "Detecting Falls with Wearable Sensors Using Machine Learning Techniques," Sensors, vol. 14, pp. 10691–10708, 2014.

[24] T. H. Dao, M. H. Le, D. N. Tran, and D. T. Tran, "Building a behavior monitoring network in buildings using WiFi technology (in Vietnamese)," in REV-ECIT 2021, ISBN 978-604-80-5958-3, 2021, pp. 48–53.

[25] A. Mannini, S. S. Intille, M. Rosenberger, A. M. Sabatini, and W. Haskell, "Activity recognition using a single accelerometer placed at the wrist or ankle," Medicine and Science in Sports and Exercise, vol. 45, no. 11, pp. 2193–2203, 2013.

[26] C. Torres-Huitzil and M. Nuno-Maganda, "Robust smartphone-based human activity recognition using a tri-axial accelerometer," in 2015 IEEE 6th Latin American Symposium on Circuits and Systems, LASCAS 2015 - Conference Proceedings, 2015, pp. 2–5.

[27] D. Rodriguez-Martin, A. Samà, C. Perez-Lopez, A. Català, J. Cabestany, and A. Rodriguez-Molinero, "SVM-based posture identification with a single waist-located triaxial accelerometer," Expert Systems with Applications, vol. 40, no. 18, pp. 7203–7211, 2013. [Online]. Available: http://dx.doi.org/10.1016/j.eswa.2013.07.028

[28] D. Naranjo-Hernández, L. M. Roa, J. Reina-Tosina, and M. Á. Estudillo-Valderrama, "SoM: A smart sensor for human activity monitoring and assisted healthy ageing," IEEE Transactions on Biomedical Engineering, vol. 59, no. 12 PART2, pp. 3177–3184, 2012.

[29] S. Balli, E. A. Sağbaş, and M. Peker, "Human activity recognition from smart watch sensor data using a hybrid of principal component analysis and random forest algorithm," Measurement and Control (United Kingdom), vol. 52, no. 1-2, pp. 37–45, 2019.