
Received December 13, 2021, accepted December 30, 2021, date of publication January 4, 2022, date of current version January 13, 2022.

Digital Object Identifier 10.1109/ACCESS.2021.3140164

Fall Detection System With Artificial Intelligence-Based Edge Computing

BOR-SHING LIN 1, (Senior Member, IEEE), TIKU YU 2, CHIH-WEI PENG 3, CHUEH-HO LIN 4,5, HUNG-KAI HSU 1, I-JUNG LEE 1,6, AND ZHAO ZHANG 1,6,7, (Graduate Student Member, IEEE)
1 Department of Computer Science and Information Engineering, National Taipei University, New Taipei City 237303, Taiwan
2 Department of Communication Engineering, National Taipei University, New Taipei City 237303, Taiwan
3 School of Biomedical Engineering, College of Biomedical Engineering, Taipei Medical University, Taipei 11031, Taiwan
4 Master Program in Long-Term Care, College of Nursing, Taipei Medical University, Taipei 11031, Taiwan
5 Center for Nursing and Healthcare Research in Clinical Practice Application, Wan Fang Hospital, Taipei Medical University, Taipei 11031, Taiwan
6 College of Electrical Engineering and Computer Science, National Taipei University, New Taipei City 237303, Taiwan
7 School of Mechanical and Electrical Engineering, Wuyi University, Wuyishan, Fujian 354300, China

Corresponding author: Chueh-Ho Lin ([email protected])


This work was supported in part by the Ministry of Science and Technology in Taiwan under Grant MOST 110-2314-B-305-001 and Grant
MOST 109-2221-E-305-001-MY2; in part by the University System of Taipei Joint Research Program under Grant
USTP-NTPU-TMU-107-02, Grant USTP-NTPU-TMU-108-01, Grant USTP-NTPU-TMU-109-03, and Grant
USTP-NTPU-NTOU-110-01; in part by the Faculty Group Research Funding Sponsorship by National Taipei University under Grant
2021-NTPU-ORDA-02; and in part by the National Taipei University, Taiwan, through the Academic Top-Notch and Features Field Project
under Grant 110-NTPU_ORDA-F-003.

ABSTRACT Falls are the second leading cause of death from unintentional injuries in older adults. Although many systems have been used to detect falls, they are limited by the computational complexity of their algorithms: the images taken by the camera must be transmitted through a network to a back-end server for processing. As demand for the Internet of Things increases, this architecture faces problems such as high bandwidth costs and server computing overload. Emerging methods reduce the workload of servers by transferring certain computing tasks from cloud servers to edge computing platforms. To this end, this study developed a fall detection system based on neuromorphic computing hardware, in which the neural network model of the back-end computer is streamlined and transplanted to the edge computing platform. Through the neural network model with 8-bit integer precision deployed on the edge computing platform, the photos obtained by the camera are converted into human motion features, and a support vector machine is then used for classification. In experimental evaluation, an accuracy of 96% was reached, the detection speed of the overall system was 11.5 frames per second, and the power consumption was 0.3 W. This system can monitor fall events of older adults in real time and over long periods. All data are processed on the edge computing platform; the system only reports fall events via Wi-Fi, thereby protecting the privacy of the user.

INDEX TERMS Deep learning, edge computing, fall detection, neuromorphic computing hardware, IoT.

I. INTRODUCTION
The older adult population is expected to reach 1.4 billion by 2030 and 2.1 billion by 2050 [1]. With age, older adults experience more impairment in vision, balance, and cognition, all of which increase the chances of a fall. Thirty percent of elderly people over 65 years fall at least once every year, causing severe or even fatal damage. However, only one-third of people receive medical assistance following a fall. The medical cost of fatal older adult falls was an estimated US $754 million in 2015 [2].

In traditional fall detection systems for older adults, sensors and cameras are used to track the motion of individuals, and the sensor data and image data are sent to servers for analysis [3]–[9]. When a fall event is detected by the system, the server immediately notifies medical staff of the emergency. In research that employs cloud analysis, data can be analyzed using relatively powerful back-end equipment to achieve higher accuracy. However, the main disadvantage of uploading a large amount of data to

(The associate editor coordinating the review of this manuscript and approving it for publication was György Eigner.)

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
4328 VOLUME 10, 2022
B.-S. Lin et al.: Fall Detection System With Artificial Intelligence-Based Edge Computing

the cloud server is the resultant high cost in network bandwidth, high latency, and privacy concerns [10]. With too many users, the network bandwidth and the load on the cloud computing may become unfeasible. The rapid advancement of chip technology has allowed algorithms that could not previously be calculated in real time on the front end to be transferred from the cloud to the front end, reducing the burden on the server. An edge computing-based system can also achieve better real-time performance than a cloud-based system [11], which is highly valuable for fall detection systems. Therefore, some fall detection systems based on edge computing have been proposed, including architectures using general-purpose processors [12]–[17] and neuromorphic computing hardware [18]. The general-purpose processor architecture used in many studies is relatively easy to implement but consumes considerable time and computing resources to complete the neural network algorithm. These studies have therefore been limited to computationally lightweight methods such as statistics and thresholds. Although [17] used deep learning methods in which continuous frames were input to the neural network to train it to dynamically recognize falls, the system was forced to reduce the image resolution to 32 × 32 to achieve real-time computation on the central processing unit (CPU). Low-resolution images can only provide close-range information, which limits their potential applicability. Other studies have demonstrated that a fall detection system based on deep learning can effectively improve detection accuracy [18]. However, if a fall detection system based on deep learning is implemented on an edge computing platform with a general-purpose processor architecture, it may be too time-consuming to achieve real-time fall detection.

To address the aforementioned problems, this study proposed a fall detection system based on edge computing, which combined a camera and neuromorphic computing hardware based on an application-specific integrated circuit. The You Only Look Once lightweight (YOLO-LW) deep neural network was implemented on the neuromorphic computing hardware. Experiments validated that the YOLO-LW algorithm combined with a support vector machine (SVM) can run smoothly on the edge computing platform and can accurately detect fall events in real time. In this study, the captured images are not uploaded to the cloud server, so when a large number of cameras are installed in practical applications, the fall detection system does not occupy a large additional amount of bandwidth, and the server is not blocked by processing images from all cameras at the same time. The edge computing platform sends a warning to the server only if a fall event is detected, so the transmission delay is negligible, and user privacy is protected to a certain extent.

II. RELATED WORKS
Several studies on fall detection have been proposed to reduce older adult fall injuries or provide emergency assistance after falls [7]–[9], [13], [14], [19], [20]. This section presents three categories of fall detection technologies: backend computing fall detection, edge computing fall detection, and cloud-edge computing fall detection.

Harrou et al. [7] proposed an integrated vision-based fall detection approach implemented on a backend computer. The approach involves image processing (background subtraction), morphological processing (erosion and dilation operators), centroid calculation, generalized likelihood ratio (GLR) calculation, and an SVM. Image processing is used to segment the human body from the pictures of the University of Rzeszow (UR) fall detection dataset. The human body contour obtained through image processing is divided into five areas, which are passed to GLR-SVM classifiers to distinguish between real falls and certain fall-like activities. The approach was designed to detect fall events with fewer false positives.

Wang et al. [8] proposed a novel vision-based fall detection approach using dual-channel feature integration. The research combines traditional signal processing with a deep learning model. YOLO and OpenPose were used to obtain the position and key points of a human body. A dual-channel sliding window was designed to extract features (centroid speed, upper limb velocity, and the human external ellipse) from the results of the deep learning model. A multilayer perceptron (MLP) and a random forest were used to classify the feature data, and the results of the classifiers were combined to detect fall events. The proposed approach achieved accuracies of 97.33% and 96.91% on the UR and Le2i datasets, respectively.

Lotfi et al. [9] also proposed a novel vision-based fall detection approach. Background subtraction is used for preprocessing. Ten features are extracted from the human silhouette, including the motion information, orientation, ratio, the major and minor semi-axes of the fitted ellipse, the projection histogram, the y-coordinate of the head point, the standard deviation of the y-coordinate, the absolute difference of the y-coordinate, and the standard deviation of the absolute difference of the y-coordinate. These features are fed into an MLP neural network for fall classification. The proposed algorithm produces a high recognition rate of 99.60% on the UR dataset.

Computationally intensive technologies, including image processing and deep learning, are used in the aforementioned studies. These technologies burden the backend computer when the number of cameras is large. Because the image data need to be transmitted to the backend computer, they may also cause security and privacy issues. In addition, some algorithms require continuous high-resolution images, which occupy considerable bandwidth and may result in packet blockages and losses when bandwidth is insufficient.

ShahZad and Kim [13] proposed a two-step algorithm to monitor and detect fall events using embedded accelerometer signals. The fall detection system was developed on a smartphone placed on the waist or leg. The accelerometer signals on the smartphone are passed to the two-step algorithm, a combination of a threshold-based method and a multiple kernel learning SVM, for fall classification. Experimental results reveal that the system


detects falls with accuracies of 97.8% and 91.7% on the waist and the leg, respectively.

Saleh and Jeannès [14] proposed a machine learning-based fall detection algorithm designed to be deployed on a wearable sensor. A 12-component statistical feature vector is extracted from a triaxial acceleration signal, and an SVM-based algorithm is used to classify fall events. The experimental results show that the proposed algorithm can reach 99.9% on the SisFall dataset. In addition, the system implements the algorithm with a computational cost of less than 500 floating point operations per second.

Yu et al. proposed a fall detection system based on neuromorphic computing hardware [19] in which the user wore a device with an embedded inertial measurement unit (IMU) to measure human movement and capture five types of activities, including falls.

The studies [13], [14], [19] implement fall detection algorithms on embedded systems, and the experimental results show high accuracy. However, these studies are limited to methods with lower computational cost, such as SVMs or thresholds. In addition, wearable systems are inconvenient.

Rajavel et al. [20] proposed an IoT-based healthcare smart system. The system detects fall events with a cloud server, edge computing devices, and cameras. The images captured by the cameras are transmitted to the edge computing devices via Wi-Fi. The edge computing devices filter out the non-sensitive data to reduce the communication bandwidth between the edge layer and the cloud layer, transmitting only the needed data to the cloud layer. A deep convolutional neural network classifier is used to detect fall events on the cloud server. The system uses only 150 kbps of network bandwidth with an 80 s response time, compared with past research. In addition, the system spends 72.76 s on prediction, and its accuracy reaches 94.5%. The research in [20] uses a cloud-edge computing framework to implement fall prediction, and its performance exceeds previous research. However, the long prediction and response times mean that a fall event may not be handled immediately in an actual situation.

To address the aforementioned problems, a fall detection method based on artificial intelligence (AI) edge computing with a vision sensor was proposed in this study. The proposed algorithms are deployed on edge computing platforms with AI chips, and all operations are performed on the edge computing platform.

III. METHOD
A. SYSTEM OVERVIEW
The main components of the edge computing platform used in this study were a camera (OV2640, OmniVision Technologies, Shanghai, China) and an AI development board (Sipeed MAix GO, Sipeed Technology, Shenzhen, China), as depicted in Fig. 1. The core chip Sipeed M1W of the AI development board is composed of the Kendryte K210 (Canaan, Beijing, China) and the Wi-Fi chip ESP8285. The K210 has two RISC-V CPUs and a neural network processor (KPU) that can perform AI operations, offering 0.25 tera operations per second (TOPS) at 0.3 W and 400 MHz, and 0.5 TOPS when overclocked to 800 MHz. The K210 can therefore perform object recognition at 60 frames per second (FPS) at video graphics array resolution. When the system is in operation, the edge computing platform continuously takes pictures through the camera, which are stored in the dedicated memory of the AI chip. The trained neural network deployed on the AI chip reads the data in the memory and calculates the bounding box of the human body in the image. If no human body is captured in the image, no bounding box is computed. From the bounding box, the shape aspect ratio (SAR) and the difference between width and height are obtained and used in the SVM classifier, running on the CPU, to classify actions, including standing, bending, and falling. When the state of the human body changes from an upright state to a fallen state and remains in that state for a period, a fall event is determined. The system then transmits the fall event to the cloud server via Wi-Fi. The cloud server is only for monitoring and calling for emergency treatment. The entire process is illustrated in Fig. 2. The system reports the detection results and does not transmit images, to protect the privacy of users.

FIGURE 1. Prototype of the edge computing platform with an area of 60 × 88 mm.

FIGURE 2. Flowchart of the proposed approach.

This study differs from those that have relied on cloud server computing [3], [4] or nondeep learning methods

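The per-frame decision flow just described (bounding box → SAR and width–height difference → posture class → temporal check) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `classify_posture` is a rule-based stand-in for the trained SVM, and its thresholds are hypothetical values chosen only to make the example run.

```python
from collections import deque

def extract_features(box):
    """Compute the two features the system derives from a bounding box:
    shape aspect ratio (SAR) and width-height difference (D)."""
    x, y, w, h = box
    return w / h, w - h

def classify_posture(features):
    """Stand-in for the SVM classifier (hypothetical thresholds):
    wide boxes suggest a fallen body, tall boxes a standing one."""
    sar, diff = features
    if sar > 1.2 and diff > 0:
        return "falling"
    if sar > 0.8:
        return "bending"
    return "standing"

def fall_monitor(boxes, window_size=4, min_falling=4):
    """Yield True once the 'falling' state fills a sliding window of
    recent frames, i.e., a fall event is confirmed."""
    window = deque(maxlen=window_size)
    for box in boxes:
        window.append(classify_posture(extract_features(box)))
        yield window.count("falling") >= min_falling

# A standing person (tall boxes) who ends up lying down (wide boxes):
frames = [(10, 20, 40, 120)] * 3 + [(10, 100, 130, 45)] * 5
events = list(fall_monitor(frames))
```

In this toy trace, the event flag only turns on after four consecutive wide-box frames, mirroring the temporal persistence check described above.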

FIGURE 3. Process of training and deploying neural networks.

FIGURE 4. (a) Labeled bounding box; (b) Coordinate tuple of the ground truth: left, top, width, and height.

[12], [13] to detect falls. In the AI model training phase, the model is first trained using collected posture photos on a PC, and the trained neural network is then deployed on the edge computing platform. The PC specifications comprised a CPU (i7-10700F, Intel, Santa Clara, CA, USA), a graphics processing unit (GPU) card (RTX 3080 10G, ASUSTeK, Taipei, Taiwan), and 64 GB of dynamic random-access memory. The neural network training framework Darknet [21] was used for training the AI model. The trained neural network is input to the conversion program and converted into a TensorFlow-based model that the KPU can infer. The KPU only partially supports TensorFlow operators, and rebuilding neural networks using the supported operators to develop new edge computing neural networks is a gradual process.

Fig. 3 depicts the process of training and deploying neural networks. In preparation for AI model training, the collected images were first labeled using free image labeling software (LabelImg) to manually generate the coordinate tuple data of the bounding box, including the width and height of the bounding box and the x- and y-axis distances from the upper left corner of the bounding box to the origin (the upper left corner of the image). The coordinate tuple of the bounding box is used as the ground truth to train the neural network. Details of the labeled bounding box and coordinate tuple are presented in Fig. 4. During training, the neural network makes inferences on the input images and generates estimated coordinate tuples. The loss function is then calculated from the estimated coordinate tuple and the ground-truth coordinate tuple, and the neurons of the neural network are updated through backpropagation. When training is complete, low-precision quantization of the neural network is performed to increase the speed of the model and reduce the power consumption of model inference [22]–[24]. Low-precision formats include the 16-bit float, 8-bit integer, and 4-bit integer formats. The neural network on the computer side is converted from the 32-bit float format to the 8-bit integer format through low-precision quantization, during which the offset value in the neuron is ignored, and a threshold T is selected between the maximum and minimum weight values of all neurons. Weight values between −T and T are remapped to −127 to 127, and other weight values are discarded. To minimize the influence of the missing model information, the model is calibrated using a calibration dataset: the model uses different values of T to infer the calibration dataset to determine the threshold T that least affects the accuracy of the model. This study employed the training dataset as the calibration dataset, and the converted model was approximately one-fourth the size of the original model.

B. DEVELOPMENT OF THE AI ALGORITHM AND CLASSIFIER ON THE EDGE AI BOARD
The most representative neural networks in object detection include the single shot multibox detector [25], faster R-CNN [26], You Only Look Once (YOLO) v2 [27], and YOLO v3 [28]. Compared with traditional two-stage detection algorithms, YOLO v2 directly converts the bounding box positioning problem into an end-to-end regression solution and uses anchor boxes to detect objects of different sizes. The anchor boxes are determined by preset anchor points that represent the various proportions of bounding boxes that may appear in the image. Because YOLO v2 avoids the process of generating hundreds of candidate boxes, the execution speed of the algorithm is markedly improved, ensuring the practical applicability of the network. The YOLO v3 model was constructed in 2018 with the addition of multiscale prediction and a better basic classification network (i.e., Darknet-53). With faster speed and better accuracy in small-target detection, the detection distance can be extended in actual scenes. This study tested the lightweight models YOLO v2-tiny and YOLO v3-tiny on the PC, applying k-means clustering to group the training dataset. The center of each group represents the proportions of the different bounding boxes in the training dataset. These values, which contain the proportional information of the human body, were set as the anchor points, and the width and height of the neural network input layer were set to the maximum size that the AI chip can process (width, 320 pixels; height, 224 pixels). The architectures of YOLO v2-tiny and YOLO v3-tiny are presented in Fig. 5 and Fig. 6, respectively. Although these methods achieve high accuracy on the PC, the model size is too large to be executed on the edge computing platform, which has limited memory. Therefore, a depthwise separable convolutional layer [29] is used to modify the neural network

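The threshold-based 8-bit quantization described above can be sketched in NumPy. This is an illustrative simplification, not the KPU toolchain: the calibration step here minimizes weight reconstruction error (mean squared error) as a stand-in for the paper's criterion of model accuracy on the calibration dataset.

```python
import numpy as np

def quantize_int8(weights, t):
    """Symmetric quantization: values in [-t, t] are remapped to
    [-127, 127]; values beyond the threshold are clipped, i.e., the
    information outside [-t, t] is discarded."""
    scale = 127.0 / t
    q = np.clip(np.round(weights * scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float weights."""
    return q.astype(np.float32) / scale

def calibrate_threshold(weights, candidates):
    """Try several thresholds T and keep the one that distorts the
    weights the least (MSE stands in for accuracy on calibration data)."""
    best_t, best_err = None, np.inf
    for t in candidates:
        q, scale = quantize_int8(weights, t)
        err = float(np.mean((weights - dequantize(q, scale)) ** 2))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

w = np.random.default_rng(0).normal(0.0, 1.0, 10_000).astype(np.float32)
t = calibrate_threshold(w, candidates=np.linspace(0.5, 4.0, 8))
```

A small T preserves resolution for the many near-zero weights but clips the tails; a large T keeps the tails but wastes codes, which is why T is swept over a calibration set rather than fixed at the absolute maximum weight.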

and simplify the neurons. According to [29], the ratio of the computational cost of a depthwise separable convolutional layer to that of an ordinary convolutional layer can be expressed as (1):

(Cost_depth_conv + Cost_point_conv) / Cost_standard_conv = 1/N + 1/K²   (1)

where Cost_depth_conv represents the computational cost of the depthwise convolutional layer; Cost_point_conv represents that of the pointwise convolutional layer; Cost_standard_conv represents that of the ordinary convolutional layer; N represents the number of channels; and K denotes the size of the kernel.

Figs. 7 and 8 depict the modified YOLO v2-tiny and YOLO v3-tiny. According to Eq. (1), the reduction in computational cost is greater when the replaced ordinary convolutional layer has more filters (i.e., channels) and a larger kernel size. Considering the memory limitation, we chose to replace the convolutional layers with 512 or more filters with depthwise separable convolutional layers. The output layer of YOLO v3-tiny has fewer filters than that of YOLO v2-tiny, and the calculation cost of its output layer is therefore approximately two-thirds that of YOLO v2-tiny. To further shrink YOLO v3-tiny, some convolutional layers with 256 filters were also replaced with depthwise separable convolutional layers.

However, after low-precision quantization, tests indicated that the accuracy of the neural network executed on the edge computing platform decreased. To improve the accuracy, some convolutional layers were added to increase the depth of the neural network. Because the number of categories was changed from the original 85 of YOLO v3-tiny to 1 (only one category, humans), the number of filters in the output layer must be changed from the original 255 of YOLO v3-tiny to 18. The addition of convolutional layers between the output layer and the previous layer, together with the decrease in filters, better condenses the features, and the increased amount of calculation is sustainable. Eq. (2) expresses the relationship between the number of categories and the number of filters in the output layer. Fig. 9 presents the architecture of the modified YOLO v3-tiny following the addition of the convolutional layers. In this study, this lightweight modified YOLO v3-tiny was named YOLO-LW.

F = (C + 5) × A   (2)

where F is the number of filters in the output layer; C is the number of classes; and A is the number of anchor points, which is 3 for YOLO v3-tiny. Therefore, the number of filters in the modified YOLO v3-tiny output layer is (1 + 5) × 3 = 18.

C. EXPERIMENTAL PROCEDURES
In this study, we collected data from 19 participants, including 11 men and 8 women. The ratio of young to old participants was 9:10, and their average age, height, and weight were 46.3 ± 16.1 years, 166.1 ± 9.9 cm, and 67.3 ± 12.8 kg, respectively. The collected data were captured in five different indoor environments. The camera was installed at a height of 1.7 m from the ground and a distance of 2–3.5 m from the subject. The optical axis of the camera was tilted downwards at an angle of 22.5° to the horizontal. The camera recorded the fall process from eight different angles relative to the direction in which the participant fell. The dataset contained a total of 2030 photos, consisting of 1077 falling and 953 nonfalling photos. It contained information on 152 falls and other actions such as walking, bending, squatting, sitting, and kneeling, and horizontal flips were used to double the data.

As shown in Fig. 2, the system obtains images of the person through the camera. Next, the YOLO model is used to capture the person's silhouette, and the features extracted from the silhouette are passed to the SVM classifier. The SVM results are classified into three classes: standing, bending, and falling. Finally, a sliding window is used to detect fall events.

This study employed intersection over union (IoU) to evaluate the bounding performance of the neural network, as calculated in (3):

IoU = Overlap of Ground Truth and Predicted Bounding Box / Union of Ground Truth and Predicted Bounding Box   (3)

The shape aspect ratio (SAR) and the difference (D) between width and height are extracted from the bounding box; the formulas of these two features are listed in (4) and (5):

SAR = Width of Bounding Box / Height of Bounding Box   (4)
D = Width of Bounding Box − Height of Bounding Box   (5)

A sliding window is designed to detect fall events: when the falling state appears more than three times in a sliding window, the result is considered a fall event.

To evaluate the performance of the classifier and the overall system with indicators such as accuracy, precision, recall, specificity, and F1-score, 10-fold cross-validation was used. The indicator definitions are expressed in (6)–(10):

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (6)
Precision = TP / (TP + FP)   (7)
Recall = TP / (TP + FN)   (8)
Specificity = TN / (TN + FP)   (9)
F1-score = (2 × Precision × Recall) / (Precision + Recall)   (10)

where true positive (TP) refers to the number of falls correctly detected; true negative (TN) refers to the number of normal activities that are correctly detected; false positive (FP) refers to the number of normal activities that are mistaken for

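The IoU and evaluation metrics defined in Eqs. (3) and (6)–(10) are straightforward to implement; a minimal sketch (using the same left-top-width-height box convention as the labeled ground truth) is:

```python
def iou(box_a, box_b):
    """Intersection over union of two (left, top, width, height)
    boxes, as in Eq. (3)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis (zero if the boxes are disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, specificity, and F1-score from
    Eqs. (6)-(10)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }
```

For example, two 10 × 10 boxes offset horizontally by 5 pixels overlap in a 5 × 10 region, giving an IoU of 50 / 150 = 1/3.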

FIGURE 5. Architecture of YOLO v2-tiny.

FIGURE 6. Architecture of YOLO v3-tiny.

FIGURE 7. Architecture of the modified YOLO v2-tiny.

falls; and false negative (FN) is the number of falls that are mistaken for normal activities.

In the performance evaluation of YOLO and the overall system (YOLO and SVM), the PC and the edge computing platform were used to evaluate and compare the performance of the different YOLO models. The overall experimental process is detailed in Fig. 10.

IV. RESULTS
A. EXPERIMENTAL RESULTS FOR THE ORIGINAL AND MODIFIED YOLO MODELS
The key part of this system is the design of the AI model for bounding the human body. If an AI model with effective performance is selected, the overall fall detection function is greatly improved. The AI model used for


FIGURE 8. Architecture of the modified YOLO v3-tiny.

FIGURE 9. Architecture of the modified YOLO v3-tiny following the addition of the convolutional layers.

TABLE 1. Comparison of YOLO v2-tiny and YOLO v3-tiny IoU.

the bounding of the human body in this system was scheduled to be implemented with YOLO v2-tiny, YOLO v3-tiny, or an improved version, because the YOLO algorithm is effective for human body detection and its small size facilitates transplantation to the edge computing platform. To evaluate the AI model, 5-fold cross-validation was employed. During training, k-means clustering was used to re-find the anchor points of the training dataset. The learning rate was set to 0.001, and the loss of the neural network after training was lower than 0.05.

The effectiveness of YOLO v2-tiny and YOLO v3-tiny for bounding the human body area was tested on the PC, and the comparison is described in Table 1. Significant differences were noted between the two models: YOLO v3-tiny outperformed YOLO v2-tiny. YOLO v3-tiny could detect all objects within 2 to 3.5 m, and its IoU was 98.16%, but YOLO v2-tiny could not detect some curled bodies and small objects and had a lower IoU.

The performance of the modified YOLO v2-tiny and the modified YOLO v3-tiny was tested and compared on the PC, and the comparison results are presented in Table 2. The experimental results revealed that the modified YOLO v3-tiny still detected all objects, but its IoU decreased to 95.51%. However, the modified YOLO v2-tiny exhibited poorer performance in human body detection, and its IoU decreased to 74.86%.

B. EXPERIMENTAL RESULTS FOR DIFFERENT PRECISION FORMAT MODELS
By comparing the performance of YOLO v2-tiny and YOLO v3-tiny and that of the modified YOLO v2-tiny and modified YOLO v3-tiny, we determined that the modified YOLO v3-tiny or its simplified version was more suitable for use. We then tested the performance of the modified YOLO v3-tiny with the float 16 precision format, the modified YOLO v3-tiny with the integer 8 precision format, and the modified

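The anchor-point search mentioned above (k-means over the training boxes) can be illustrated with a small sketch. Note the simplifications: the original YOLO recipe clusters with a 1 − IoU distance, whereas plain Euclidean distance on (width, height) pairs is used here, and the box sizes below are synthetic, not from the paper's dataset.

```python
import numpy as np

def kmeans_anchors(wh, k, iters=20):
    """Plain k-means on (width, height) pairs; the cluster centers
    serve as anchor boxes."""
    # Spread-out deterministic initialization: one seed every len/k rows.
    centers = wh[:: max(1, len(wh) // k)][:k].copy()
    for _ in range(iters):
        # Distance from every box to every center, then hard assignment.
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers

# Synthetic silhouette sizes: standing (tall), bending (square-ish),
# and fallen (wide) bounding boxes.
rng = np.random.default_rng(1)
wh = np.vstack([
    rng.normal([40.0, 120.0], 5.0, (100, 2)),   # tall
    rng.normal([80.0, 80.0], 5.0, (100, 2)),    # square-ish
    rng.normal([130.0, 45.0], 5.0, (100, 2)),   # wide
])
anchors = kmeans_anchors(wh, k=3)
```

The resulting centers capture the three characteristic body proportions (tall, square-ish, wide), which is exactly the proportional information the paper encodes as anchor points.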

TABLE 3. Comparison of different precision format models on edge


computing platform.

is the smallest at approximately half the size of the other two


models.

C. SYSTEM EVALUATION
The first step of the proposed system is human bounding.
Table 3 presents the performances of bounding models. The
final result shows that model on edge computing platform can
reach 94.5% average IoU with 11.5 FPS.
The second step of the proposed system is SVM classifica-
tion. Table 4 presents the performance of bounding model and
SVM classifier. System spends less than 0.001 s on feature
extraction and SVM classification. The system using YOLO
v3-tiny achieved an accuracy of 92.5% on the PC, which is
FIGURE 10. Overall experimental process for the performance evaluation
almost the same as the classification result using the ground
of different AI models, stages, and platforms. truth. After part of the convolutional layer was replaced with
a deep separated convolutional layer, the modified YOLO
v3-tiny achieved an accuracy of 91.6%, a decrease of 0.9%.
TABLE 2. Comparison between the modified Yolo v2-tiny and modified
Yolo v3-tiny IoU. The modified ability of YOLO v3-tiny to detect objects with
curved bodies was slightly reduced, but the classification
was largely correct. When the modified YOLO v3-tiny was
converted to the integer 8 precision format, the information
of the neural network was lost, resulting in a decreased IoU.
The addition of a convolutional layer can effectively improve
the performance of the neural network, which forms a new
model, YOLO-LW. In comparison with the system that used
the modified YOLO v3-tiny, the system that used YOLO-LW
exhibited a slight decrease in accuracy (0.5%).
The final step of the proposed system is fall event detection.
YOLO v3-tiny with integer 8 precision format and a con- A sliding window is used to detect fall event. When the falling
volutional layer (Fig. 9) on the edge computing platform. state appears more than three times in a sliding window, the
The comparison of the experimental results is detailed in result is considered as a fall. Fig. 11 shows performance
Table 3. The float 16 precision format model used the CPU under different sizes of sliding windows. According to the
on the edge computing platform to infer the neural network. result, system reaches the highest performance when size of
The IoU was 94.6%, which is 0.9% lower than that of the sliding window is four. A sliding window takes four images to
computer version of the float 32 precision format model at consider a fall event. Therefore, entire system spends 0.344 s
95.51%. The integer 8 precision format model used the KPU to process a fall event.
on the edge computing platform to infer the neural network;
the resultant IoU of 91.2% was 4.3% lower than that of the V. DISCUSSION
float 32 precision format model. Although the IoU of the float In this study, a fall detection system that combined a cam-
16 precision format model was higher than that of the integer era and neuromorphic computing hardware was investigated.
8 model, the FPS of the integer 8 model was 14.7 times Although research demonstrated that the implementation of
that of the float 16 model. Following the addition of the deep learning in automatic fall detection systems can enhance
convolutional layer to the integer 8 precision format model, fall detection performance [18], the use of deep learning
the IoU increased to 94.5% and FPS decreased by 0.3%. in embedded systems greatly increases the computing time.
Among the three models, the integer 8 precision format model This study is the attempt to use neuromorphic computing
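The sliding-window rule above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the names `make_fall_detector` and `update` are assumed. The window size of four and the "more than three falling states" threshold follow the text.

```python
from collections import deque

def make_fall_detector(window_size=4, min_fall_states=4):
    """Return a per-frame aggregator for fall decisions.

    The text reports the best performance with a window of four
    frames; a fall is declared when the falling state appears more
    than three times in the window (i.e., all four frames here).
    """
    window = deque(maxlen=window_size)

    def update(is_falling: bool) -> bool:
        window.append(is_falling)
        # Only decide once the window is full.
        return len(window) == window.maxlen and sum(window) >= min_fall_states

    return update

# Per-frame classifier outputs: True = "falling" state for that frame.
frames = [False, True, True, True, True, False]
detector = make_fall_detector()
events = [detector(f) for f in frames]  # a fall fires on the fifth frame
```

With a window of four frames at 11.5 FPS, one decision spans roughly 0.35 s, in line with the 0.344 s per fall event reported above.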


TABLE 4. Comparison of SVM and different models.

TABLE 5. Performance of whole system.

FIGURE 11. Results under different sizes of sliding windows.

V. DISCUSSION
In this study, a fall detection system that combines a camera with neuromorphic computing hardware was investigated. Although research has demonstrated that implementing deep learning in automatic fall detection systems can enhance fall detection performance [18], the use of deep learning in embedded systems greatly increases the computing time. This study is an attempt to use neuromorphic computing hardware to replace software-based deep learning in embedded systems. We executed our self-developed YOLO-LW on neuromorphic computing hardware to maintain the running time of the neural network with the integer 8 precision format, without losing the region of interest (ROI) changes related to possible fall events.

To evaluate the fall detection system involving the ROI, various action photos collected from five indoor scenes were fed to five YOLO models with different neural layer structures, and their performances were compared through 5-fold cross-validation. The experimental results in Table 1 show that, at a resolution of 320 × 224, YOLO v3-tiny outperformed YOLO v2-tiny. This is attributable to YOLO v3-tiny's use of upsampling to obtain high-scale feature maps and retain the information of small targets. As described in Table 2, the modified YOLO v3-tiny exhibited greater accuracy than the modified YOLO v2-tiny but 2.65% less accuracy than the original YOLO v3-tiny. The modified YOLO v3-tiny uses depthwise separable convolutional layers to reduce the complexity of the model; the accuracy was thus slightly reduced, but the model size was also reduced by one-fifth.

In the fall detection system, the bounding of the human body is a critical technology. Generally, an improved IoU increases the hit rate of the human body area, and a reduction in IoU often occurs when the detected person is self-occluded while bending the body, such as when facing away from the camera or curling the limbs. Additionally, when the PC-side programs are ported to the edge computing platform, some components may be abandoned, causing the IoU to decline. Some studies have indicated that remapping a neural network with a high precision format to one with a low precision format can effectively reduce the model size while maintaining accuracy [23], [24]. The experimental results outlined in Table 3 demonstrate that the modified YOLO v3-tiny with float 16 format exhibited only a 0.91% decrease in IoU compared with the model before conversion, but its model size is half that of the original. However, the modified YOLO v3-tiny with float 16 precision format could not make inferences on the neuromorphic computing hardware and reached only 0.8 FPS with CPU inference. The modified YOLO v3-tiny with integer 8 precision format used the neuromorphic computing hardware for inference and reached 11.8 FPS, but the IoU decreased by 4.3%. Despite the robustness of the modified YOLO v3-tiny with integer 8 precision format and its ability to detect all objects in different scenes and with different actions, the IoU decreased markedly. If the complexity of the model is increased through the addition of a convolutional layer, the IoU can reach 94.5%; the model size is then smaller than that of the modified YOLO v3-tiny with the float 16 precision format, and the speed is only 0.3 FPS slower than that of the modified YOLO v3-tiny with integer 8 precision format.
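The high-to-low precision remapping discussed above [23], [24] can be illustrated with a minimal symmetric per-tensor scheme. This sketch is generic and does not reproduce the quantizer of the actual toolchain used for the KPU; the function names are assumptions for illustration.

```python
def quantize_int8(weights):
    """Map float weights symmetrically onto the int8 range [-127, 127].

    Each float w becomes round(w / scale); the single float `scale`
    must be kept so the layer can be dequantized (or fused) at
    inference time. Information is lost to rounding and clipping,
    which is one source of the IoU drop discussed above.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 values."""
    return [q * scale for q in quantized]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, but not bit-exact in general
```

Storing one byte per weight instead of four is what roughly halves the model size relative to float 16 and quarters it relative to float 32.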
The experimental results in Table 4 revealed that the fall detection system composed of YOLO v3-tiny and SVM and implemented on the PC side, with a GPU used for inference, achieved an accuracy rate of 92.5%. However, the high cost of GPUs and desktop computers makes it impossible to deploy the fall detection system flexibly in actual scenarios. Even with the PC acting as a server, the system still faces problems with network delays and processing large amounts of data. The system implemented on the PC and composed of the modified YOLO v3-tiny and SVM exhibited a decreased IoU, but the overall accuracy decreased by only 0.9%. However, after the modified YOLO v3-tiny was converted to the integer 8 precision format model, the accuracy of the overall system was considerably reduced. The system composed of YOLO-LW and SVM on the edge computing platform achieved an accuracy rate of 91.1% at a speed of 11.5 FPS. Among the various indicators, recall was the only one lower than 90%. False alarms mostly occurred when most of the skin area of the subject was occluded, causing the system to confuse the clothing of the subject with the background and partially deform the bounding box.
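For reference, the indicators quoted above (accuracy, recall) follow the standard confusion-matrix definitions, with "fall" as the positive class. The counts below are made up for illustration and are not the study's data.

```python
def detection_metrics(tp, fp, fn, tn):
    """Binary classification indicators for fall detection.

    tp: falls correctly detected      fn: falls missed
    fp: non-falls flagged as falls    tn: non-falls correctly ignored
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # the share of true falls that are caught
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts: a recall just under 90% can coexist with an
# overall accuracy above 92%, as in the result described above.
acc, prec, rec, f1 = detection_metrics(tp=85, fp=5, fn=10, tn=100)
```

Because missed falls (fn) only lower recall, a system can look strong on accuracy while still under-reporting the events that matter most, which is why recall is tracked separately here.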


TABLE 6. Comparison of the proposed method and other fall detection systems.

Table 6 presents a comparison between our proposed method and those of other studies based on edge computing. Yu et al. [19] proposed a wearable fall detection system based on neuromorphic computing hardware. The Hopfield neural network was simulated using PSpice as a circuit of the neuromorphic computing hardware. The system analyzed IMU data to determine falls with an accuracy of 88.9%, though the authors performed a circuit simulation rather than constructing the wearable device. Yang et al. [30] used field programmable gate array (FPGA)-based ZYNQ-7020 hardware to implement a CNN model with an accuracy rate of 92%, but the power consumption of 2.5 W is too costly for a fall monitoring system that must operate for a long time. The detection time was also 0.43 s, with an FPS of only 2.42. Mauldin et al. [31] employed a three-layer open system architecture to transmit the sensor data from a smart watch to a smartphone for edge computing. They implemented a recurrent neural network on a smartphone based on an ARM processor, but achieved an accuracy of only 70%. Alaoui et al. [32] first calculate the key points of the human skeleton and then use principal component analysis (PCA) and SVM to detect whether someone in the image has fallen. Their algorithm achieved an experimental accuracy of 98.5%. The overall performance is good, but the entire study is based on the analysis of a ready-made dataset, so the performance in a real environment cannot be confirmed. Moreover, because the system transmits the video to a server, there is a risk of personal privacy leakage. Chang et al. [33] constructed OpenPose-light and SVM algorithms on an edge computing platform (Jetson TX2, NVIDIA Corp., Santa Clara, CA, USA) to detect falls for elderly people with an accuracy of 98.1%. Its overall performance is good, and the use of edge computing avoids personal privacy issues, but its processing time is somewhat long and its power consumption somewhat high.

Overall, the method proposed in this study was more accurate than the aforementioned methods. In terms of use, we used a camera as the input device, which is more convenient than a wearable fall detection system using an IMU. Among the vision-based solutions, compared with the 2.5 W and 15 W power consumption reported in [30] and [33], respectively, the architecture proposed in this study required 0.3 W, providing a low-cost, low-computation, feasible resource allocation solution. In terms of detection speed, this system reached 11.5 FPS, which provides effective real-time performance. Additionally, the FPGA debugging process is difficult and extends the development time.

VI. CONCLUSION
In this study, a fall detection system with neuromorphic computing hardware for AI-based edge computing was proposed. Images of individuals were captured by the camera and transmitted to the neural network model on the edge computing platform. After detection of the object characteristics, the SVM was used for classification, and the detection result was communicated to the manager via Wi-Fi. This study successfully deployed an improved neural network model, YOLO-LW, on the edge computing platform. YOLO-LW uses depthwise separable convolutional layers to improve computational efficiency, differing from the model with float 32 precision format on the computer side. YOLO-LW is converted to the integer 8 precision format to increase the FPS and reduce the model size, with an additional convolutional layer added to maintain the accuracy of the model. In the experiment, we collected normal and falling photos of people of all ages under five different indoor backgrounds through the camera of the platform and fed these images to the model for training and verification to validate the robustness of the proposed method. After experimental evaluation, an average IoU of 94.5% was obtained on the edge computing platform; the accuracy of the overall system reached 96%, and the FPS reached 11.5. The system exhibited excellent real-time performance and a power consumption of only 0.3 W. Power consumption is a crucial factor for fall monitoring systems that must be operated for a long time. All data were processed on the edge computing platform, thereby protecting the privacy of users. Despite occlusion problems, the proposed neural network has good generalizability. At a distance of 2–3.5 m, the object can still be captured even if one-third of it is occluded, which presents a potential edge computing solution. The proposed framework is a client-server-based, single-tier architecture [34], which assists with cost savings and safety. The proposed framework also satisfies network bandwidth saving and real-time data processing [35]. However, complex occlusion situations and variations in lighting may affect the performance of the fall detection system.
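The computational saving behind YOLO-LW's depthwise separable layers can be verified by counting weights; the layer shape below is illustrative, not taken from the actual network.

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """A k x k depthwise filter per input channel, followed by a
    1 x 1 pointwise convolution that mixes the channels."""
    return k * k * c_in + c_in * c_out

# Hypothetical 3 x 3 layer mapping 128 channels to 256 channels.
std = standard_conv_params(3, 128, 256)
sep = depthwise_separable_params(3, 128, 256)
saving = sep / std  # well under 1/8 of the standard layer's weights
```

For a 3 × 3 kernel most of the remaining cost is the 1 × 1 pointwise projection, which is the mechanism behind the size and computation reductions reported for the modified YOLO v3-tiny and YOLO-LW.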


In future work, the fall detection system can incorporate different sensors, such as a thermal camera to monitor the activities of older adults in dimly lit environments and at night, or a fisheye lens to expand the detection range. In addition, different shooting angles and complex occlusion situations can be further evaluated.

REFERENCES
[1] S. J. Naja, M. M. Makhlouf, and M. Chehab, "An ageing world of the 21st century: A literature review," Int. J. Community Med. Public Health, vol. 4, no. 12, pp. 4364–4369, 2017.
[2] C. S. Florence, G. Bergen, A. Atherly, E. Burns, J. Stevens, and C. Drake, "Medical costs of fatal and nonfatal falls in older adults," Ann. Long-Term Care, vol. 66, no. 4, pp. 693–698, Jul. 2018.
[3] E. Auvinet, F. Multon, A. Saint-Arnaud, J. Rousseau, and J. Meunier, "Fall detection with multiple cameras: An occlusion-resistant method based on 3-D silhouette vertical distribution," IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 2, pp. 290–300, Mar. 2011.
[4] N. Lu, Y. Wu, L. Feng, and J. Song, "Deep learning for fall detection: Three-dimensional CNN combined with LSTM on video kinematic data," IEEE J. Biomed. Health Inform., vol. 23, no. 1, pp. 314–323, Jan. 2019.
[5] E. Akagunduz, M. Aslan, A. Sengu, H. Wang, and M. C. Ince, "Silhouette orientation volumes for efficient fall detection in depth videos," IEEE J. Biomed. Health Inform., vol. 21, no. 3, pp. 756–763, May 2017.
[6] Z.-P. Bian, J. Hou, L.-P. Chau, and N. Magnenat-Thalmann, "Fall detection based on body part tracking using a depth camera," IEEE J. Biomed. Health Inform., vol. 19, no. 2, pp. 430–439, Mar. 2015.
[7] F. Harrou, N. Zerrouki, Y. Sun, and A. Houacine, "An integrated vision-based approach for efficient human fall detection in a home environment," IEEE Access, vol. 7, pp. 114966–114974, 2019.
[8] B.-H. Wang, J. Yu, K. Wang, X.-Y. Bao, and K.-M. Mao, "Fall detection based on dual-channel feature integration," IEEE Access, vol. 8, pp. 103443–103453, 2020.
[9] A. Lotfi, S. Albawendi, H. Powell, K. Appiah, and C. Langensiepen, "Supporting independent living for older adults: Employing a visual based fall detection through analyzing the motion and shape of the human body," IEEE Access, vol. 6, pp. 70272–70282, 2018.
[10] N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, "Mobile edge computing: A survey," IEEE Internet Things J., vol. 5, no. 1, pp. 450–465, May 2018.
[11] Y. Liu, M. Peng, G. Shou, Y. Chen, and S. Chen, "Toward edge intelligence: Multiaccess edge computing for 5G and Internet of Things," IEEE Internet Things J., vol. 7, no. 8, pp. 6722–6747, Aug. 2020.
[12] K. Ozcan and S. Velipasalar, "Wearable camera and accelerometer-based fall detection on portable devices," IEEE Embed. Syst. Lett., vol. 8, no. 1, pp. 6–9, Mar. 2015.
[13] A. Shahzad and K. Kim, "FallDroid: An automated smart-phone-based fall detection system using multiple kernel learning," IEEE Trans. Ind. Informat., vol. 15, no. 1, pp. 35–44, Jan. 2018.
[14] M. Saleh and R. L. B. Jeannès, "Elderly fall detection using wearable sensors: A low cost highly accurate algorithm," IEEE Sensors J., vol. 19, no. 8, pp. 3156–3164, Apr. 2019.
[15] G. Shi, C. S. Chan, W. J. Li, K.-S. Leung, Y. Zou, and Y. Jin, "Mobile human airbag system for fall protection using MEMS sensors and embedded SVM classifier," IEEE Sensors J., vol. 9, no. 5, pp. 495–503, May 2009.
[16] K. Ozcan, S. Velipasalar, and P. K. Varshney, "Autonomous fall detection with wearable cameras by using relative entropy distance measure," IEEE Trans. Human-Mach. Syst., vol. 47, no. 1, pp. 31–39, Feb. 2017.
[17] J. H. Kim, "Embedded real-time fall detection using deep learning for elderly care," in Proc. 31st Int. Conf. Inf. Process. Syst. (NIPS), Long Beach, CA, USA, Dec. 2017, pp. 2110–2118.
[18] A. El Kaid, K. Baïna, and J. Baïna, "Reduce false positive alerts for elderly person fall video-detection algorithm by convolutional neural network model," Proc. Comput. Sci., vol. 148, pp. 2–11, May 2019.
[19] Z. Yu, A. Zahid, S. Ansari, H. Abbas, and A. Abdulghani, "Hardware-based Hopfield neuromorphic computing for fall detection," Sensors, vol. 20, no. 24, 2020, Art. no. 7226.
[20] R. Rajavel, S. K. Ravichandran, K. Harimoorthy, P. Nagappan, and K. R. Gobichettipalayam, "IoT-based smart healthcare video surveillance system using edge computing," J. Ambient Intell. Hum. Comput., pp. 1–13, Mar. 2021, doi: 10.1007/s12652-021-03157-1.
[21] S. Carata, R. Mihaescu, E. Barnoviciu, M. Chindea, M. Ghenescu, and V. Ghenescu, "Complete visualisation, network modeling and training, web based tool, for the Yolo deep neural network model in the darknet framework," in Proc. IEEE 15th Int. Conf. Intell. Comput. Commun. Process. (ICCP), Sep. 2019, pp. 517–523.
[22] V. Leon, S. Mouselinos, K. Koliogeorgi, S. Xydis, D. Soudris, and K. Pekmestzi, "A tensorflow extension framework for optimized generation of hardware CNN inference engines," Technology, vol. 8, no. 1, Jan. 2020, Art. no. 6.
[23] P. E. Novac, G. B. Hacene, A. Pegatoquet, B. Miramond, and V. Gripon, "Quantization and deployment of deep neural networks on microcontrollers," Sensors, vol. 21, no. 9, 2021, Art. no. 2984.
[24] P. Gysel, J. Pimentel, M. Motamedi, and S. Ghiasi, "Ristretto: A framework for empirical study of resource-efficient inference in convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 11, pp. 5784–5789, Nov. 2018.
[25] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C. Y. Fu, and A. C. Berg, "SSD: Single shot multibox detector," in Proc. Eur. Conf. Comput. Vis. (ECCV), Amsterdam, The Netherlands, Oct. 2016, pp. 21–37.
[26] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, Jun. 2017.
[27] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 6517–6525.
[28] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," 2018, arXiv:1804.02767.
[29] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 1251–1258.
[30] X. Yang, S. Tang, T. Guo, K. Huang, and J. Xu, "Design of indoor fall detection system for the elderly based on ZYNQ," in Proc. IEEE 9th Joint Int. Inf. Technol. Artif. Intell. Conf. (ITAIC), Chongqing, China, Dec. 2020, pp. 1174–1178.
[31] T. R. Mauldin, M. E. Canby, V. Metsis, A. H. H. Ngu, and C. C. Rivera, "SmartFall: A smartwatch-based fall detection system using deep learning," Sensors, vol. 18, no. 10, 2018, Art. no. 3363.
[32] A. Y. Alaoui, S. E. Fkihi, and R. O. H. Thami, "Fall detection for elderly people using the variation of key points of human skeleton," IEEE Access, vol. 7, pp. 154786–154795, 2019.
[33] W.-J. Chang, C.-H. Hsu, and L.-B. Chen, "A pose estimation-based fall detection methodology using artificial intelligence edge computing," IEEE Access, vol. 9, pp. 129965–129976, 2021.
[34] R. M. A. Haseeb-Ur-Rehman, M. Liaqat, A. H. M. Aman, S. H. A. Hamid, R. L. Ali, J. Shuja, and M. K. Khan, "Sensor cloud frameworks: State-of-the-art, taxonomy, and research issues," IEEE Sensors J., vol. 21, no. 20, pp. 22347–22370, Oct. 2021.
[35] E. Ahmed, A. Ahmed, I. Yaqoob, and J. Shuja, "Bringing computation closer toward the user network: Is edge computing the solution?" IEEE Commun. Mag., vol. 55, no. 11, pp. 138–144, Nov. 2017.

BOR-SHING LIN (Senior Member, IEEE) received the B.S. degree in electrical engineering from National Cheng Kung University, Taiwan, in 1997, and the M.S. and Ph.D. degrees in electrical engineering from National Taiwan University, Taiwan, in 1999 and 2006, respectively. Since 2009, he has been with the Department of Computer Science and Information Engineering, National Taipei University, Taiwan, where he is currently a Professor. He is also the Director of the Computer and Information Center, National Taipei University. His research interests include smart medicine, embedded systems, wearable systems, biomedical signal processing, biomedical image processing, and portable biomedical electronic system design.


TIKU YU received the B.S. and M.S. degrees in electrical engineering from National Taiwan University, Taipei, Taiwan, in 2001 and 2003, respectively, and the Ph.D. degree in electrical engineering from the University of California at San Diego, La Jolla, CA, USA, in 2009. From 2009 to 2010, he was a Postdoctoral Researcher with the IBM T. J. Watson Research Center, Yorktown Heights, NY, USA. From 2010 to 2013, he was a technical staff member with Mediatek USA in San Jose, California. Since 2013, he has been with the Department of Communication Engineering, National Taipei University, Taiwan, where he is currently an Associate Professor. His research interests include the design and development of RFIC, MMIC, and microwave circuits.

CHIH-WEI PENG received the B.S. degree in physical therapy from Chang Gung University, Taoyuan, Taiwan, in 2000, and the M.S. and Ph.D. degrees in biomedical engineering from National Cheng Kung University, Tainan, Taiwan, in 2002 and 2007, respectively. He is currently a Full Professor and the Chairperson with the School of Biomedical Engineering, Taipei Medical University. His research interests include neural engineering, biomaterials, biosignal processing, biosensors, rehabilitation engineering, neural prostheses, urological electrophysiology, and functional electrical stimulation.

CHUEH-HO LIN received the B.S. degree from the Department of Physical Therapy, Hung Kung University, Taiwan, in 2005, and the M.S. and Ph.D. degrees from the Department of Physical Therapy and Assistive Technology, National Yang-Ming University, Taiwan, in 2007 and 2014, respectively. Since 2015, he has been with the Master Program in Long-Term Care, Taipei Medical University, Taiwan, where he is currently the Director and an Associate Professor. His research interests include developing novel assistive devices with multiple sensors to investigate age-related changes in muscle strength generation, bilateral limb coordination, motor control, and brain-behavior relationships during specific tasks in both non-disabled individuals (healthy young and elderly adults) and individuals with neurologic disorders.

HUNG-KAI HSU is currently pursuing the M.S. degree with the Department of Computer Science and Information Engineering, National Taipei University, Sanxia, New Taipei City, Taiwan. His research interests include embedded systems and artificial neural networks applied to biomedical engineering.

I-JUNG LEE received the B.S. degree in computer science and information engineering from National Taipei University, Sanxia, New Taipei City, Taiwan, in 2014, where she is currently pursuing the Ph.D. degree with the College of Electrical Engineering and Computer Science. Her research interests include embedded systems and wearable devices applied to biomedical engineering.

ZHAO ZHANG (Graduate Student Member, IEEE) received the B.S. and M.S. degrees in electrical engineering from the Hefei University of Technology, China, in 2005 and 2008, respectively. He is currently pursuing the Ph.D. degree with the College of Electrical Engineering and Computer Science, National Taipei University, Taiwan. After graduating, he was a Principal Engineer at EDAN Instruments from 2008 to 2009. Since 2011, he has been a Lecturer with the College of Mechanical and Electrical Engineering, Wuyi University, China. His research interests include smart medicine, embedded systems, wearable systems, and portable biomedical electronic system design.
