Technologies 08 00072 v2
Article
Deep Learning Based Fall Detection Algorithms for
Embedded Systems, Smartwatches, and IoT Devices
Using Accelerometers
Dimitri Kraft 1, * , Karthik Srinivasan 2 and Gerald Bieber 3, *
1 Faculty of Computer Science and Electrical Engineering, University of Rostock, 18059 Rostock, Germany
2 Next Step Dynamics AB, 211 19 Malmö, Sweden; [email protected]
3 Fraunhofer-Institut fuer Graphische Datenverarbeitung IGD, 18057 Rostock, Germany
* Correspondence: [email protected] (D.K.); [email protected] (G.B.)
Received: 30 October 2020; Accepted: 29 November 2020; Published: 2 December 2020
Abstract: A fall of an elderly person often leads to serious injuries or even death. Many falls occur in the home environment and remain unrecognized. Reliable fall detection is therefore essential to ensure fast help. Wrist-worn, accelerometer-based fall detection systems have been developed, but their accuracy and precision are not standardized, comparable, or sometimes even known. In this work, we present an overview of existing public databases with sensor-based fall datasets and harmonize existing wrist-worn datasets for a broader and more robust evaluation. Furthermore, we analyze the recognition rate currently achievable for fall detection using deep learning algorithms on mobile and embedded systems. The presented results and databases can be used for further research and optimization in order to increase the recognition rate and thereby enhance the independent life of the elderly. Finally, we give an outlook on a convenient application and wrist device.
Keywords: fall detection; accelerometer; datasets; deep learning; neural networks; wrist; smart bands;
watches; IoT devices; edge computing
1. Introduction
The independent life of an elderly person can change drastically after a fall. Depending on their health condition, almost 10 percent of the people who fall suffer serious injuries, or may even die shortly after the fall if no immediate help is available [1]. To prevent such severe consequences, reliable fall detection is needed. One common approach is the use of wrist-worn detection systems that measure acceleration forces. These wrist devices are gaining acceptance across the population and are becoming so powerful in terms of computational performance that the use of artificial intelligence is reasonable. In general, older adults appear to be interested in using such devices, although they express concerns over privacy and over understanding exactly what the device is doing at specific times [2]. Evaluating mobile fall detection systems is difficult because live data from falls of elderly people are rare. Boyle et al. tried to collect real-world data with 15 adults over the course of 300 days and were only able to record four falls during that time [3]. Even simulated data are barely available, and the existing datasets differ widely in their characteristics. Therefore, the aim of this paper is
• to invite and motivate researchers to compete and contribute to fall detection by using the existing databases, and thereby to improve the future life of the elderly.
2. Related Work
Falls can be detected through several approaches and technologies. Some approaches are infrastructure-based, using external cameras [4] or floor sensors [5,6], while others analyze mobile, body-worn sensors, e.g., accelerometers, gyroscopes, and air pressure sensors. Each system records a sensor data stream that can be analyzed to recognize a fall [2]. Various strategies have been employed to distinguish between fall and non-fall events. Often, datasets are used to train machine learning algorithms. Other approaches do not rely on specific datasets, as they construct rules to distinguish between these events. The quality of such datasets is an important constraint on the quality of the trained fall detection algorithm.
2.1. Datasets
The availability of fall data is mandatory for the development of fall recognition systems. At first, datasets were scattered throughout the scientific community, until Casilari et al. [7] provided a comprehensive overview of publicly available fall detection datasets. Since then, new datasets have been published. We augmented the overview of Casilari et al. by adding recently published datasets to our overview table (see Table 1) with their corresponding characteristics (see Table 2).
Table 1. Overview of fall detection datasets recorded with a body worn sensor.
| Dataset | Reference | Sensors | Sensor Location | Sampling Rate | Year |
|---|---|---|---|---|---|
| DLR | [8] | A, G, M | Waist | 100 Hz | 2010 |
| MobiFall | [9] | A, G, O | Thigh | 87 Hz (A), 100 Hz (G, O) | 2013 |
| TST Fall Detection | [10] | A | Waist, Right Wrist | 100 Hz | 2014 |
| tFall | [11] | A | Thigh, Hand bag | 45 Hz | 2014 |
| UR Fall Detection | [12] | A | Waist | 256 Hz | 2014 |
| Simulated Falls and ADL | [13] | A, G, M, O | Head, Chest, Waist, Right Ankle, Right Wrist, Right Thigh | 25 Hz | 2014 |
| Cogent Labs | [14] | A, G | Chest, Thigh | 100 Hz | 2015 |
| Project Gravity | [15] | A | Thigh, Wrist* | 50 Hz | 2015 |
| Graz | [16] | A, O | Waist | 5 Hz | 2015 |
| MUMAFall | [7] | A, G, M | Ankle, Chest, Thigh, Waist, Wrist | 100 Hz / 20 Hz* | 2016 |
| Source | Dataset | Falls/ADLs [n/n] | Features | Classifier | Validation Method | Performance |
|---|---|---|---|---|---|---|
| [31] | MobiFall | 342/288 | Time domain | Naive Bayes, LS, ANN, SVM | 75–25 split, mixed user | 87.5% accuracy |
| [38] | SisFall | Not specified | Raw time series window | LSTM | Not specified | 97.16% accuracy |
C_t = f(W ∗ X_{t−k/2 : t+k/2} + b)

where W, k, t, and b denote the weights of the kernel, the length of the kernel, the timestamp, and the bias, respectively.
After applying n convolutions to the input accelerometer data of length l, we obtain n channels, where each channel represents a new filtered time series. These n channels with shape n × l are then convolved with m different filters (a filter bank of shape n × m × k), where each filter is slid across all n channels, resulting in m additional time series; each output channel is the sum of the convolutions of its filter across all n input channels.
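The channel arithmetic above can be illustrated with PyTorch's Conv1d; the sizes l, n, m, and k below are arbitrary example values, not the paper's architecture:

```python
import torch
import torch.nn as nn

l, n, m, k = 500, 16, 32, 8   # input length, channel counts, kernel length

# n filters applied to the single-channel accelerometer signal ...
conv1 = nn.Conv1d(in_channels=1, out_channels=n, kernel_size=k)
# ... then m filters, each slid across all n channels and summed.
conv2 = nn.Conv1d(in_channels=n, out_channels=m, kernel_size=k)

x = torch.randn(1, 1, l)      # (batch, channels, time)
h = conv1(x)                  # n filtered time series
y = conv2(h)                  # m channels, each a sum over the n inputs
print(h.shape, y.shape)       # without padding, each conv shortens the series by k - 1
```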
3. Experimental Setup
as a text file, with the label being part of the file's name. For each recording (containing either falls or activities of daily living) in a dataset, we segment the recording into non-overlapping 10 s windows and down- or upsample each window to 50 Hz. The range and quantization of the raw data remain unchanged. The variability in the coupling between the human body and the sensor, i.e., the attachment of the sensor at the wrist, as well as different sensor weights, are not considered. Each fall type is labeled as fall and every other activity is labeled as not fall.
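The segmentation and resampling step can be sketched as follows (a minimal NumPy/SciPy illustration; the function name and Fourier-based resampling method are our assumptions, not taken from the paper):

```python
import numpy as np
from scipy.signal import resample

def make_windows(signal, fs, win_s=10, target_fs=50):
    """Cut a recording into non-overlapping 10 s windows and resample each to 50 Hz.
    `signal` has shape (samples, axes); range and quantization are left untouched."""
    win_len = int(win_s * fs)
    n_win = len(signal) // win_len              # drop the incomplete tail window
    windows = []
    for i in range(n_win):
        w = signal[i * win_len:(i + 1) * win_len]
        # Fourier-based down-/upsampling to a fixed 50 Hz grid.
        windows.append(resample(w, win_s * target_fs, axis=0))
    return np.stack(windows)

rec = np.random.randn(2500, 3)                  # e.g. 25 s at 100 Hz, 3 axes
wins = make_windows(rec, fs=100)
print(wins.shape)                               # (2, 500, 3)
```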
After applying this strategy, SmartFall, Smartwatch, and Notch consist solely of fall samples. We do not use samples in which a fall is mixed with an activity of daily living labeled as non-fall; this would be ambiguous, as we already use samples containing falls together with activities of daily living labeled as fall. To enrich the Notch, Smartwatch, and SmartFall datasets with additional activities of daily living, we added 500 random 10 s segments from the RealWorld Human Activity Recognition dataset [47] to each of them.
3.4. Preprocessing
Contrary to other approaches, e.g., [32], we do not think that scaling or standardization should be applied to the input data, as the magnitude information is crucial for distinguishing between falls and activities of daily living. To evaluate the impact of preprocessing on performance, we nevertheless apply min-max scaling to the training and testing datasets. The Signal Magnitude Vector transformation greatly reduces the input size for neural networks and thus the required memory and computational load of an IoT device. Note that running deep learning models on IoT devices requires a trade-off between classification performance and computational complexity.
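Both preprocessing steps are simple to state in code. Below is a minimal sketch (function names are ours): the Signal Magnitude Vector collapses the three axes into one series, and min-max scaling uses parameters that should come from the training set only:

```python
import numpy as np

def smv(window):
    """Signal Magnitude Vector: collapse the three axes into one magnitude
    series, shrinking the network input by a factor of three."""
    return np.sqrt((window ** 2).sum(axis=-1))

def min_max_scale(x, lo, hi):
    """Min-max scaling with parameters (lo, hi) taken from the training set."""
    return (x - lo) / (hi - lo)

w = np.array([[0.0, 3.0, 4.0],
              [1.0, 2.0, 2.0]])    # (samples, axes)
m = smv(w)
print(m)                           # [5. 3.]
```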
• Shifting with a probability of p = 0.75 and a random shift of [−150 … 150] samples along the time axis (only used during signal magnitude vector training)
• Rotation with a probability of p = 0.75 around the x-, y-, and z-acceleration axes with a random angle between [−180° … 180°] (only used during three-axis training)
We suspect that shifting in the time dimension does not contribute to the performance of a one-dimensional CNN, as the learned filters are translation-invariant by definition. However, when shifting (rolling) the vector in the time dimension, elements that roll beyond the last position are re-introduced at the first position. This induces additional variance in the data and may affect the performance of the neural networks.
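The two augmentations above can be sketched in NumPy as follows; the helper names and the composition of the three axis rotations into one matrix are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def axis_rotation(i, angle):
    """3x3 rotation matrix about coordinate axis i (0 = x, 1 = y, 2 = z)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    j, k = (i + 1) % 3, (i + 2) % 3
    R[j, j], R[j, k], R[k, j], R[k, k] = c, -s, s, c
    return R

def augment(window, p=0.75):
    """Random time shift (roll) and random rotation about the x, y, z axes."""
    w = window.copy()
    if rng.random() < p:                          # shift in [-150, 150] samples;
        w = np.roll(w, rng.integers(-150, 151), axis=0)  # wrapped at the ends
    if rng.random() < p:                          # angles in [-180°, 180°]
        R = np.eye(3)
        for i in range(3):
            R = axis_rotation(i, rng.uniform(-np.pi, np.pi)) @ R
        w = w @ R.T
    return w

x = rng.normal(size=(400, 3))
y = augment(x, p=1.0)
# Rolling only permutes samples and rotation preserves each sample's magnitude,
# so the sorted per-sample magnitudes are unchanged.
```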
3.6.2. 1D ResNet
We use a small-scale 1D ResNet model consisting of two basic residual blocks: a convolutional layer followed by two blocks with two convolutional layers and a shortcut connection each. The kernel size is 8 for the first convolutional layer and 4 for all subsequent convolutional layers. The classification head consists of a fully connected layer mapping 64 neurons to two neurons.
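A hedged PyTorch sketch of such a model follows; the channel count of 64 matches the head described above, while the batch-norm placement, pooling, and `padding='same'` (which requires a newer PyTorch than the paper's 1.4.0) are our assumptions:

```python
import torch
import torch.nn as nn

class BasicBlock1d(nn.Module):
    """Two 1D convolutions (kernel size 4) with a shortcut connection."""
    def __init__(self, channels, kernel_size=4):
        super().__init__()
        # 'same' padding keeps the length so the shortcut can be added.
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding='same')
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding='same')
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)              # shortcut connection

class ResNet1d(nn.Module):
    """Stem convolution (kernel 8), two residual blocks, 64 -> 2 head."""
    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Conv1d(1, channels, kernel_size=8, padding='same')
        self.block1 = BasicBlock1d(channels)
        self.block2 = BasicBlock1d(channels)
        self.head = nn.Linear(channels, 2)

    def forward(self, x):                      # x: (batch, 1, time)
        h = self.block2(self.block1(self.stem(x)))
        return self.head(h.mean(dim=-1))       # global average pooling over time

model = ResNet1d()
logits = model(torch.randn(4, 1, 500))
print(logits.shape)                            # torch.Size([4, 2])
```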
3.6.3. LSTM
The LSTM Classifier consists of a single unidirectional LSTM layer with 128 hidden neurons
followed by a dropout layer with a dropout probability of p = 0.25. A fully connected layer is used to
map the 128 hidden neurons to two neurons for classification.
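This classifier maps directly to a few lines of PyTorch (a sketch; feeding the last hidden state into the fully connected layer is our reading of the description):

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Single unidirectional LSTM (128 hidden units), dropout p = 0.25,
    and a fully connected 128 -> 2 classification layer."""
    def __init__(self, n_features=1, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p=0.25)
        self.fc = nn.Linear(hidden, 2)

    def forward(self, x):                   # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)
        return self.fc(self.drop(h_n[-1]))  # classify from the last hidden state

model = LSTMClassifier()
logits = model(torch.randn(4, 500, 1))
print(logits.shape)                         # torch.Size([4, 2])
```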
which reduce the footprint of the model drastically. After an extensive grid search, we settled on the following structure:
Each convolutional layer C(N), with N channels, uses a stride of 4 and a kernel size of 8 with no padding, except for the last convolutional layer, which uses a stride and kernel size of 1 with no padding.
3.7.4. Quantization
Quantization greatly reduces the footprint of a model by converting the weights of the neural network from 4-byte floats to 1-byte unsigned integers. Wu et al. [55] showed that quantization incurs only a minor loss in accuracy while greatly reducing computation time. We use the quantization method provided by the PyTorch 1.4.0 framework, in particular using the minimum and maximum values to compute the necessary quantization parameters. To conserve space, we do not evaluate different quantization strategies.
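The min/max scheme can be sketched framework-independently in NumPy (an illustration of affine quantization, not PyTorch's internal implementation; function names are ours):

```python
import numpy as np

def quantize_min_max(w):
    """Affine min-max quantization of float weights to uint8.
    Returns (q, scale, zero_point) with w ≈ (q - zero_point) * scale."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(64, 32).astype(np.float32)     # 4-byte weights
q, s, z = quantize_min_max(w)                      # 1-byte weights
w_hat = dequantize(q, s, z)
# The round-off error is bounded by half a quantization step (s / 2).
```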
4. Evaluation
precision = TP / (TP + FP),  recall = TP / (TP + FN),  F1 = 2 · (precision · recall) / (precision + recall)
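These metrics follow directly from the raw detection counts, as a short sketch shows (the example counts are arbitrary):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 score from raw detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 90 detected falls, 10 false alarms, 30 missed falls:
print(prf1(tp=90, fp=10, fn=30))   # precision 0.9, recall 0.75, F1 ≈ 0.818
```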
Figure 4. Class Activation Map examples for fall data sampled from each dataset. Each panel plots amplitude [dB] over frequency [Hz] (0–25 Hz).
where A_m(t) is the univariate time series for the variable m ∈ [1, M], i.e., the result of applying the m-th filter of the last convolutional layer, and w_m^c is the weight between the m-th filter and the output neuron of class label c [54]. Examples of class activation maps with a sample from each dataset are depicted in Figure 4. In most cases, the impact region (the region with the highest acceleration magnitude) and the free-fall phase (the region right before the impact) contribute most to the decision-making of our convolutional neural network.
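For a 1D CNN, the class activation map is just the weighted sum CAM_c(t) = Σ_m w_m^c · A_m(t); a minimal sketch with placeholder shapes:

```python
import numpy as np

def class_activation_map(A, w_c):
    """1D CAM: CAM_c(t) = sum_m w_c[m] * A[m, t], where A holds the M filter
    outputs of the last convolutional layer and w_c the output weights of
    class c (e.g. the "fall" neuron)."""
    return np.tensordot(w_c, A, axes=1)      # shape (T,)

M, T = 64, 500                               # illustrative sizes
A = np.random.randn(M, T)                    # last-layer feature maps
w_c = np.random.randn(M)                     # weights into the class-c neuron
cam = class_activation_map(A, w_c)
print(cam.shape)                             # (500,)
```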
Figure 5. AUC Results with data augmentation for single-axis (left) and three-axis (right). Bar charts
showing the AUC performance for each dataset and network.
Technologies 2020, 8, 72 12 of 17
4.5. Results
Referring to Tables 5 and 6, our proposed algorithm performs very accurately, with F1 = 0.96 without and F1 = 0.97 with data augmentation (both on the SMV). The results in Figure 5 suggest that a larger number of parameters does not lead to increased performance in terms of AUC. Furthermore, the three-axis models do not perform better than their single-axis equivalents. The LSTM neural network in particular shows improved performance when using the Signal Magnitude Vector as input. The three-axis version of our IoT-CNN performs poorly on the MUMA and SimFall datasets if no augmentation is applied; we suspect that this is due to its relatively small number of parameters compared to the other models. All CNNs perform better on the SmartWatch, SmartFall, Notch, and UP Fall datasets, which indicates that certain datasets are easier to handle than others and may contain less variety. Note that the SmartWatch, SmartFall, and Notch datasets were published by the same researchers. We further augmented these three datasets with samples from a different dataset because they only contain fall events shorter than 2 s. Regarding the LSTM-based neural network, the single-axis version shows performance comparable to its CNN counterparts. As expected, and in line with other work, quantization reduces the classification performance (see Table 7). While the performance on most datasets remains comparable to the results without quantization, the performance on the Notch dataset decreases considerably. This may be due to the gravitational offset of the sensor used.
Table 7. Results (Precision, Recall, and Weighted F1 Score) using our CNN with Data Augmentation
and Weight Quantization.
| Dataset | Precision µ (σ) | Recall µ (σ) | F1 µ (σ) |
|---|---|---|---|
| Notch | 0.854 (0.092) | 0.853 (0.078) | 0.826 (0.110) |
| MUMA | 0.926 (0.017) | 0.910 (0.024) | 0.912 (0.023) |
| Sim Fall | 0.908 (0.016) | 0.902 (0.023) | 0.902 (0.023) |
| SmartWatch | 0.982 (0.004) | 0.982 (0.004) | 0.982 (0.004) |
| SmartFall | 0.976 (0.009) | 0.975 (0.010) | 0.975 (0.010) |
| UP Fall | 0.952 (0.013) | 0.940 (0.035) | 0.942 (0.029) |
| Average | 0.933 (0.025) | 0.927 (0.029) | 0.923 (0.033) |
5. Discussion
Historically, fall detection algorithms were discrete, and engineers developed mobile systems especially for this purpose. Today, neural networks can be deployed on embedded systems, so a flexible structure can be used. Because of the connectivity of IoT devices, a broad database is, or will be, available in the future; therefore, the realization of reliable fall detection becomes feasible. Performance comparisons of fall detection systems are ambiguous as long as varying datasets are used. Therefore, we identified available datasets and evaluated the capabilities of a small-scale neural network. Large networks that run on servers or high-end machines might outperform small-scale neural networks, but the latter can be integrated on edge or small devices and are more relevant for real-life scenarios, as the computation is done on the device. This further ensures (a) that sensitive data are processed on the device and no network connection is needed besides the alarm mechanism, and (b) that privacy is protected. Regarding the existing fall datasets, we assume that resampling, range, resolution, and sensor type (internal filtering) have only a minor effect; however, this has to be confirmed by further research.
Figure 6. UMA Device with an integrated neural net for fall detection.
Elderly persons are usually very calm; even a normal adult performs moderate to vigorous leisure-time physical activity for less than one hour a day [57]. A simple gatekeeper algorithm clears inactive periods, so the deep learning algorithm is used only for active periods.
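Such a gatekeeper can be as cheap as one statistic per window. The sketch below gates on the standard deviation of the magnitude signal; the function name and the threshold value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

np.random.seed(0)

def gatekeeper(window, threshold=0.1):
    """Cheap pre-filter: skip the neural network during inactive periods.
    Activity is measured as the standard deviation of the acceleration
    magnitude; 0.1 (in g) is an illustrative threshold."""
    magnitude = np.linalg.norm(window, axis=1)
    return magnitude.std() > threshold          # True -> run the deep model

still = np.tile([0.0, 0.0, 1.0], (500, 1))          # resting: gravity only
active = still + np.random.randn(500, 3) * 0.5      # vigorous movement
print(gatekeeper(still), gatekeeper(active))        # False True
```

This keeps the energy budget of a wrist device low: the constant-time check runs on every window, while the comparatively expensive neural network runs only on the small fraction of active windows.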
7. Conclusions
In this paper, we presented 19 datasets with raw fall data recorded with accelerometers at the wrist (7), waist (10), and/or other positions (2). We illustrated that our optimized neural network can be applied on embedded systems such as IoT devices, smartwatches, or activity trackers. In former research, the accuracy of fall detection was assessed within a single focus group and its dataset; by identifying multiple datasets, a broader evaluation became feasible. We showed that neural networks perform well on our harmonized dataset. Furthermore, the increasing computational power of mobile devices enables the use of deep learning algorithms for fall detection on wrist-based embedded systems. We demonstrated that small-scale convolutional neural networks achieve a reasonable accuracy of 97% on our harmonized fall detection dataset. With quantization applied, our neural network performs less accurately, which may be addressed in future work. We suspect that calibrating the neural network to the activity of the user may enhance the performance significantly by lowering the false positive rate. For future work, we see that the sensitivity and specificity of fall detection are highly relevant in everyday usage. A waterproof, wrist-based sensing device can be worn 24/7 and should produce no false detections. This is a high demand and requires knowledge about the general condition of the user: very active people move differently from passive and calm people. Such a low false positive rate can be achieved by individualized algorithms or calibration. We assume that even energy-efficient mobile wrist devices allow a reliable fall detection system to assist the elderly in everyday life.
Author Contributions: Conceptualization, D.K. and G.B.; methodology, G.B. and K.S.; software, D.K.;
data curation, D.K.; writing—original draft preparation, D.K., K.S. and G.B.; writing—review and editing,
D.K., K.S. and G.B.; supervision, G.B. All authors have read and agreed to the published version of the manuscript.
Funding: The research was funded by Next Step Dynamics AB.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Stevens, J.A.; Corso, P.S.; Finkelstein, E.A.; Miller, T.R. The costs of fatal and non-fatal falls among older
adults. Inj. Prev. 2006, 12, 290–295. [CrossRef]
2. Chaudhuri, S.; Thompson, H.; Demiris, G. Fall detection devices and their use with older adults:
A systematic review. J. Geriatr. Phys. Ther. 2014, 37, 178–196. [CrossRef]
3. Boyle, J.; Karunanithi, M. Simulated fall detection via accelerometers. In Proceedings of the 2008 30th
Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC,
Canada, 20–25 August 2008; pp. 1274–1277.
4. Nunez-Marcos, A.; Azkune, G.; Arganda-Carreras, I. Vision-based fall detection with convolutional neural
networks. Wirel. Commun. Mob. Comput. 2017, 2017, 9474806. [CrossRef]
5. Alwan, M.; Rajendran, P.J.; Kell, S.; Mack, D.; Dalal, S.; Wolfe, M.; Felder, R. A smart and passive
floor-vibration based fall detector for elderly. In Proceedings of the 2006 2nd International Conference on
Information & Communication Technologies, Damascus, Syria, 24–28 April 2006; Volume 1, pp. 1003–1007.
6. Daher, M.; Diab, A.; El Najjar, M.E.B.; Khalil, M.A.; Charpillet, F. Elder tracking and fall detection system
using smart tiles. IEEE Sens. J. 2016, 17, 469–479. [CrossRef]
7. Casilari, E.; Santoyo-Ramón, J.A.; Cano-García, J.M. UMAFall: A Multisensor Dataset for the Research on
Automatic Fall Detection. Procedia Comput. Sci. 2017, 110, 32–39. [CrossRef]
8. Frank, K.; Vera Nadales, M.J.; Robertson, P.; Pfeifer, T. Bayesian recognition of motion related activities with
inertial sensors. In Proceedings of the 12th ACM International Conference Adjunct Papers on Ubiquitous
Computing—Adjunct, Copenhagen, Denmark, 26–29 September 2010; Bardram, J.E., Ed.; ACM: New York, NY,
USA, 2010; p. 445. [CrossRef]
9. Vavoulas, G.; Pediaditis, M.; Spanakis, E.G.; Tsiknakis, M. The MobiFall dataset: An initial evaluation of
fall detection algorithms using smartphones. In Proceedings of the IEEE 13th International Conference on
Bioinformatics and Bioengineering (BIBE), Chania, Greece, 10–13 November 2013; pp. 1–4. [CrossRef]
10. Gasparrini, S.; Cippitelli, E.; Gambi, E.; Spinsante, S.; Wåhslén, J.; Orhan, I.; Lindh, T. Proposal and
Experimental Evaluation of Fall Detection Solution Based on Wearable and Depth Data Fusion. In ICT
Innovations 2015; Advances in Intelligent Systems and Computing; Loshkovska, S.; Koceski, S., Eds.;
Springer: Cham, Switzerland, 2016; Volume 399, pp. 99–108. [CrossRef]
11. Medrano, C.; Igual, R.; Plaza, I.; Castro, M. Detecting falls as novelties in acceleration patterns acquired
with smartphones. PLoS ONE 2014, 9, e94811. [CrossRef]
12. Kwolek, B.; Kepski, M. Human fall detection on embedded platform using depth maps and wireless
accelerometer. Comput. Methods Programs Biomed. 2014, 117, 489–501. [CrossRef]
13. Özdemir, A.T.; Barshan, B. Detecting falls with wearable sensors using machine learning techniques. Sensors
2014, 14, 10691–10708. [CrossRef]
14. Ojetola, O.; Gaura, E.; Brusey, J. Data set for fall events and daily activities from inertial sensors.
In Proceedings of the 6th ACM Multimedia Systems Conference, Portland, OR, USA, 18–20 March 2015;
Ooi, W.T., Feng, W., Liu, F., Eds.; ACM: New York, NY, USA, 2015; pp. 243–248. [CrossRef]
15. Vilarinho, T.; Farshchian, B.; Bajer, D.G.; Dahl, O.H.; Egge, I.; Hegdal, S.S.; Lones, A.; Slettevold, J.N.;
Weggersen, S.M. A Combined Smartphone and Smartwatch Fall Detection System. In Proceedings of
the CIT/IUCC/DASC/PICom 2015, Liverpool, UK, 26–28 October 2015; pp. 1443–1448. [CrossRef]
16. Wertner, A.; Czech, P.; Pammer-Schindler, V. An Open Labelled Dataset for Mobile Phone Sensing Based
Fall Detection. In Proceedings of the 12th EAI International Conference on Mobile and Ubiquitous Systems:
Computing, Networking and Services, Coimbra, Portugal, 22–24 July 2015; Zhang, P., Silva, J.S., Lane, N.,
Boavida, F., Rodrigues, A., Eds. MobiQuitous: Coimbra, Portugal, 2015. [CrossRef]
17. Aziz, O.; Musngi, M.; Park, E.J.; Mori, G.; Robinovitch, S.N. A comparison of accuracy of fall detection
algorithms (threshold-based vs. machine learning) using waist-mounted tri-axial accelerometer signals
from a comprehensive set of falls and non-fall trials. Med. Biol. Eng. Comput. 2017, 55, 45–55. [CrossRef]
18. Sucerquia, A.; López, J.D.; Vargas-Bonilla, J.F. SisFall: A Fall and Movement Dataset. Sensors 2017, 17, 198,
[CrossRef]
19. Micucci, D.; Mobilio, M.; Napoletano, P. UniMiB SHAR: A Dataset for Human Activity Recognition Using
Acceleration Data from Smartphones. Appl. Sci. 2017, 7, 1101. [CrossRef]
20. Mauldin, T.R.; Canby, M.E.; Metsis, V.; Ngu, A.H.H.; Rivera, C.C. SmartFall: A Smartwatch-Based Fall
Detection System Using Deep Learning. Sensors 2018, 18, 3363. [CrossRef]
21. Chan, H.L. CGU-BES Dataset for Fall and Activity of Daily Life. Figshare 2018. [CrossRef]
22. Martínez-Villaseñor, L.; Ponce, H.; Brieva, J.; Moya-Albor, E.; Núñez-Martínez, J.; Peñafort-Asturiano, C.
UP-Fall Detection Dataset: A Multimodal Approach. Sensors 2019, 19, 1988, [CrossRef]
23. Cotechini, V.; Belli, A.; Palma, L.; Morettini, M.; Burattini, L.; Pierleoni, P. A dataset for the development and
optimization of fall detection algorithms based on wearable sensors. Data Brief 2019, 23, 103839. [CrossRef]
24. Mauldin, T.; Ngu, A.H.; Metsis, V.; Canby, M.E.; Tesic, J. Experimentation and analysis of ensemble deep
learning in iot applications. Open J. Internet Things 2019, 5, 133–149.
25. Pannurat, N.; Thiemjarus, S.; Nantajeewarawat, E. Automatic Fall Monitoring: A Review. Sensors 2014,
14, 12900–12936. [CrossRef]
26. Krupitzer, C.; Sztyler, T.; Edinger, J.; Breitbach, M.; Stuckenschmidt, H.; Becker, C. Hips Do Lie!
A Position-Aware Mobile Fall Detection System. In Proceedings of the 2018 IEEE International Conference
on Pervasive Computing and Communications (PerCom); Communications, Athens, Greece, 19–23 March
2018; pp. 1–10. [CrossRef]
27. Liao, M.; Guo, Y.; Qin, Y.; Wang, Y. The application of EMD in activity recognition based on a single triaxial
accelerometer. Bio-Med. Mater. Eng. 2015, 26 ( Suppl. 1), S1533–S1539. [CrossRef]
28. Bourke, A.K.; O’Brien, J.V.; Lyons, G.M. Evaluation of a threshold-based tri-axial accelerometer fall detection
algorithm. Gait Posture 2007, 26, 194–199. [CrossRef]
29. Kangas, M.; Konttila, A.; Winblad, I.; Jämsä, T. Determination of simple thresholds for accelerometry-based
parameters for fall detection. In Proceedings of the Annual International Conference of the IEEE Engineering
in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 1367–1370. [CrossRef]
30. Salomon, R.; Lueder, M.; Bieber, G. iFall—A New Embedded System for the Detection of Unexpected
Falls. In Proceedings of the Eighth Annual IEEE International Conference on Pervasive Computing
and Communications Workshops (PerCom Workshops), Mannheim, Germany, 29 March–2 April 2010.
[CrossRef]
31. Vallabh, P.; Malekian, R.; Ye, N.; Bogatinoska, D.C. Fall detection using machine learning algorithms.
In Proceedings of the 2016 24th International Conference on Software, Telecommunications and Computer
Networks (SoftCOM), Split, Croatia, 22–24 September 2016; pp. 1–9. [CrossRef]
32. Santos, G.L.; Endo, P.T.; Monteiro, K.H.C.; Rocha, E.S.; Silva, I.; Lynn, T. Accelerometer-Based Human Fall
Detection Using Convolutional Neural Networks. Sensors 2019, 19, 1644. [CrossRef]
33. Wang, G.; Li, Q.; Wang, L.; Zhang, Y.; Liu, Z. Elderly Fall Detection with an Accelerometer Using
Lightweight Neural Networks. Electronics 2019, 8, 1354. [CrossRef]
34. Jahanjoo, A.; Tahan, M.N.; Rashti, M.J. Accurate fall detection using three-axis accelerometer sensor and
MLF algorithm. In Proceedings of the 2017 3rd International Conference on Pattern Recognition and Image
Analysis (IPRIA), Shahrekord, Iran, 19–20 April 2017; pp. 90–95. [CrossRef]
35. Ramachandran, A.; Adarsh, R.; Pahwa, P.; Anupama, K.R. Machine Learning-based Fall Detection
in Geriatric Healthcare Systems. In Proceedings of the 2018 IEEE International Conference on Advanced
Networks and Telecommunications Systems (ANTS), Indore, India, 16–19 December 2018; pp. 1–6.
[CrossRef]
36. Wisesa, I.W.W.; Mahardika, G. Fall detection algorithm based on accelerometer and gyroscope sensor data
using Recurrent Neural Networks. IOP Conf. Ser. Earth Environ. Sci. 2019, 258, 012035. [CrossRef]
37. Hussain, F.; Hussain, F.; Ehatisham-ul Haq, M.; Azam, M.A. Activity-Aware Fall Detection and Recognition
Based on Wearable Sensors. IEEE Sens. J. 2019, 19, 4528–4536. [CrossRef]
38. Musci, M.; De Martini, D.; Blago, N.; Facchinetti, T.; Piastra, M. Online fall detection using recurrent neural
networks. arXiv 2018, arXiv:1804.04976.
39. Liu, K.C.; Hsieh, C.Y.; Hsu, S.J.P.; Chan, C.T. Impact of Sampling Rate on Wearable-Based Fall Detection
Systems Based on Machine Learning Models. IEEE Sens. J. 2018, 18, 9882–9890. [CrossRef]
40. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks.
In Advances in Neural Information Processing Systems; ACM: New York, NY, USA, 2012; pp. 1097–1105.
41. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image
Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami,
FL, USA, 20–25 June 2009.
42. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
43. Wang, Z.; Yan, W.; Oates, T. Time Series Classification from Scratch with Deep Neural Networks: A Strong
Baseline. CoRR 2016, abs/1611.06455. Available online: https://arxiv.org/abs/1611.06455 (accessed on
30 October 2020).
44. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature
1986, 323, 533–536. [CrossRef]
45. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef]
46. Virtanen, P.; Gommers, R.; Oliphant, T.E.; Haberland, M.; Reddy, T.; Cournapeau, D.; Burovski, E.;
Peterson, P.; Weckesser, W.; et al. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python.
Nat. Methods 2020, 17, 261–272. [CrossRef]
47. Sztyler, T.; Stuckenschmidt, H. On-body Localization of Wearable Devices: An Investigation of
Position-Aware Activity Recognition. In Proceedings of the 2016 IEEE International Conference on
Pervasive Computing and Communications (PerCom), IEEE Computer Society, Sydney, Australia, 14–19
March 2016; pp. 1–9. [CrossRef]
48. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA. 2016.
49. Um, T.T.; Pfister, F.M.J.; Pichler, D.; Endo, S.; Lang, M.; Hirche, S.; Fietzek, U.; Kulić, D. Data
augmentation of wearable sensor data for parkinson’s disease monitoring using convolutional neural
networks. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, Glasgow,
Scotland, 13–17 November, 2017; Lank, E., Ed.; ACM: New York, NY, 2017; pp. 216–220. [CrossRef]
50. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.;
Lerer, A. Automatic differentiation in PyTorch. In Proceedings of the NIPS 2017 Workshop Autodiff
Submission, Long Beach, CA, USA, 9 December, 2017.
51. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
52. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on
imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago,
Chile, 7–13 December 2015; pp. 1026–1034.
53. Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net.
arXiv 2014, arXiv:1412.6806.
54. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P. Deep learning for time series classification:
A review. CoRR 2018, abs/1809.04356. Available online: https://arxiv.org/abs/1809.04356 (accessed on 30
October 2020).
55. Wu, J.; Leng, C.; Wang, Y.; Hu, Q.; Cheng, J. Quantized convolutional neural networks for mobile devices.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA,
27–30 June 2016; pp. 4820–4828.
56. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative
localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas,
NV, USA, 27–30 June 2016; pp. 2921–2929.
57. Aadahl, M.; Andreasen, A.H.; Hammer-Helmich, L.; Buhelt, L.; Jørgensen, T.; Glümer, C. Recent temporal
trends in sleep duration, domain-specific sedentary behaviour and physical activity. A survey among
25–79-year-old Danish adults. Scand. J. Public Health 2013, 41, 706–711. [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional
affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).