
International Journal of Innovative Science and Research Technology
Volume 6, Issue 5, May 2021
ISSN No: 2456-2165

Fall Detection using OpenPose


Divya R
Asst. Professor, Dept. of Computer Science and Engineering, SCET, Thrissur

Riya T B, Rona Johns P, Sreelakshmi T J, Theres Davies
UG Students, Dept. of Computer Science and Engineering, SCET, Thrissur

Abstract:- Falls are a serious threat to the health of elderly people and a notable cause of morbidity and mortality in elders. Falls can lead to severe injury and even death if they are not given proper attention. Over 30% of persons aged 65 years or above fall each year, and these falls frequently recur. The severity of such falls increases with age, cognitive impairment and sensory deficits, so a multidisciplinary approach should be developed to prevent future falls. This paper emphasizes the need for, and the development of, an advanced fall detection system using Machine Learning and Artificial Intelligence technologies. The fall detection systems currently on the market are categorized into wearable and non-wearable devices. Wearable devices use sensors that are not always accurate, and it is difficult for an elderly person to wear them on the body all the time. The architecture proposed in this paper uses open source libraries such as OpenPose to build a better, non-wearable detection and alert system. The system retrieves the locations of 18 joint points of the human body and detects human movement by tracking changes in those locations. It identifies the various joints of the human body effectively while suppressing environmental noise such as blurriness, poor lighting and shadows, which improves accuracy and reduces effective training time. The developed approach falls within the scope of computer vision-based human activity recognition, an area that has attracted a lot of interest.

Keywords:- OpenPose; OpenCV; Fall Detection; Artificial Intelligence; Human Action Recognition; Convolutional Neural Networks; LSTM; Image Preprocessing; Recurrent Neural Network.
because they rely on a central base station for outside
I. INTRODUCTION communication.

The ageing of the population is caused by a decrease in the birth rate and a longer life span. According to studies, the senior population will grow substantially in the future, with the share of senior people in the global population reaching 28% in 2050. Human function declines as people age, increasing the chance of falling. Falls result in fractures and other long-term conditions, and can lead to disability, loss of independence, and a psychological fear of falling again. Falls not only hurt the elderly; they also put a mental and financial strain on the person and their family. As a result, a reliable fall detection system and emergency aid are required.

Human Activity Recognition (HAR) is a broad area of research that tries to categorise activities statistically. These activities include normal bodily movements such as standing up, sitting down, jumping, and walking. When a person performs an action, it normally takes only a few seconds, and plain image classification techniques do not assign such frames to an activity class. To recognise movements, existing systems rely on sensor data acquired by accelerometers, smartphones, or similar body-worn devices. Collecting such data is difficult and expensive, requiring several sensors, specialised hardware, and software.

A. PROBLEM DOMAIN
Existing systems use image processing techniques for human action and pose detection. Earlier methods capture human images and process them with techniques such as Artificial Neural Networks to obtain the output. Since human actions happen within fractions of a second, these methods are time consuming and less effective. Other existing systems use hardware devices such as movement sensors. These sensors are fitted onto the human body and look for shakes; when a particular level of shake is detected, a fall is assumed. This is not accurate, because not every shake is a fall.

In the marketplace, there are a variety of products to choose from. The main drawback of such products is that they require bulky hardware components and are prohibitively expensive. Some systems use base stations and radio frequency links that are hardwired to a phone line and then call a call centre for assistance. The problem is that they all require an intermediary call service, which costs a significant amount each month, and they are confined to the range of a house because they rely on a central base station for outside communication.

A Human Activity Recognition system continuously monitors human actions, which is useful for senior surveillance. It aids health-care monitoring, abnormal behaviour detection, identification, mental state awareness, and geriatric care. Despite the progress made in this field, substantial dependencies remain, such as data intake from wearable sensors, their installation, accuracy, and the annoyance they cause. Sensors are placed on the body to collect input data, and smartphones and smart watches read accelerometers, gyroscopes, and other sensors.
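To make the limitation of the threshold-style wearable approach described in the problem domain concrete, the sketch below shows the kind of logic such devices typically implement: a fall is declared whenever the acceleration magnitude exceeds a fixed level. The threshold value and the sample stream are illustrative assumptions, not taken from this paper.

```python
import math

SHAKE_THRESHOLD_G = 2.5  # illustrative threshold in g; real devices tune this empirically

def detect_shake(samples):
    """Naive wearable-style detector: flag a 'fall' whenever the acceleration
    magnitude exceeds a fixed threshold.

    samples: iterable of (ax, ay, az) accelerometer readings in g.
    Returns True on the first sample that crosses the threshold.
    """
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > SHAKE_THRESHOLD_G:
            return True  # a hard sit-down or a dropped phone also triggers this
    return False

# A vigorous hand wave can exceed the threshold even though nobody fell.
print(detect_shake([(0.1, 0.0, 1.0), (1.8, 1.2, 1.5), (0.2, 0.1, 1.0)]))
```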

Patterns are examined over a period of time; sleeping activities, for example, are discovered by interpreting minute motions. The video surveillance system identifies the person, creates a unique identity, and checks the activity. Due to background noise, subject occlusions, low-light conditions, and the difficulty of tracking a person, developing a low-error system is hard. The suggested classification system for human activities looks promising and addresses the issues raised by previous systems. The data is collected in real time with the help of connected cameras, which eliminates the need for wearable sensors. OpenPose derives keypoint properties from the skeleton form and handles situations such as background noise and distorted light, so the suggested algorithm is robust. For these reasons, a new fall detection method is necessary.

B. PROPOSED SYSTEM
Human Pose Estimation is a fascinating area of Artificial Intelligence. Pose estimation is the practice of locating human joints in photographs or videos; equivalently, it is the process of searching for a particular posture among all articulated poses. The method considers human joints, frequently referred to as keypoints, and estimates the pose from a live video.

OpenPose is utilised in the proposed method to determine the exact human position. OpenPose is a popular bottom-up methodology for estimating multi-person human poses. It is an open source, real-time project that recognises the human body, including the hands, face, facial expressions, and legs, from a single photograph. Like many other bottom-up techniques, OpenPose first discovers the keypoints belonging to every person in the image and then assigns the keypoints to separate individuals. The suggested system takes the user's video and sends it to the OpenPose estimator for analysis. The OpenPose estimator is a pre-trained model built on the COCO dataset, from which 18 keypoints of the human body are extracted. When the user falls and tries to get up, the OpenPose estimator running in the background captures these behaviours and generates an alarm.
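To ground the frame-by-frame extraction step, the sketch below reads frames with OpenCV and hands each one to a pose-estimation call. The `extract_keypoints` function is a placeholder for whichever OpenPose binding is used (the official Python API or a TensorFlow port); its exact signature is an assumption made for illustration, not the paper's code.

```python
import cv2  # OpenCV, used here only for video capture and preprocessing

def extract_keypoints(frame):
    """Placeholder for the OpenPose call.

    Expected to return a list of 18 (x, y, confidence) tuples for the detected
    person, or None when nobody is visible. Wire this to an actual OpenPose
    implementation (e.g. pyopenpose or a TensorFlow re-implementation).
    """
    raise NotImplementedError("plug in an OpenPose binding here")

def keypoint_stream(camera_index=0, size=(250, 250)):
    """Yield per-frame keypoints from the system web camera."""
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frame = cv2.resize(frame, size)                 # 250x250, as in Section III
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV delivers BGR frames
            yield extract_keypoints(frame)
    finally:
        cap.release()
```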

The Human Pose Skeleton can be represented graphically as a set of coordinates linked together to describe a person's stance. Each coordinate in the skeleton is called a joint (also known as a keypoint), and a pair is a legitimate connection between two joints, that is, a limb; not every combination of parts forms a valid pair. Applications include action recognition, animation, and gaming. The earliest methods approximated a single person's stance in an image; they work by first identifying the individual keypoints and then connecting them to construct the pose.
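As an illustration of how the joints and pairs can be represented in code, the snippet below lists the 18 keypoint names commonly used by the COCO-trained OpenPose model and a few limb pairs expressed as index tuples. The exact ordering and choice of pairs should be checked against the OpenPose build actually used; treat this as an assumed convention rather than the paper's definition.

```python
# 18 keypoints of the COCO-trained OpenPose model (commonly used ordering).
COCO_KEYPOINTS = [
    "Nose", "Neck",
    "RShoulder", "RElbow", "RWrist",
    "LShoulder", "LElbow", "LWrist",
    "RHip", "RKnee", "RAnkle",
    "LHip", "LKnee", "LAnkle",
    "REye", "LEye", "REar", "LEar",
]

# A pair is a legitimate connection between two joints, i.e. a limb,
# expressed here as (index, index) into COCO_KEYPOINTS.
EXAMPLE_PAIRS = [
    (1, 2), (2, 3), (3, 4),       # neck -> right shoulder -> elbow -> wrist
    (1, 5), (5, 6), (6, 7),       # neck -> left shoulder -> elbow -> wrist
    (1, 8), (8, 9), (9, 10),      # neck -> right hip -> knee -> ankle
    (1, 11), (11, 12), (12, 13),  # neck -> left hip -> knee -> ankle
]
```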

Using OpenPose and Long Short-Term Memory networks, this research describes a systematic way of recognising human behaviours in real time from images collected by the camera. Using a sliding-window technique, the stream of body features is separated into sub-sequences called windows. The LSTM (Long Short-Term Memory) network is a type of recurrent neural network (RNN); unlike feed-forward neural networks, it has feedback connections, allowing it to handle both single data points and sequences of data. The LSTM is appropriate because it quickly learns the essential keypoint attributes and returns an activity class. Running, sitting, standing, waving one hand, waving two hands, jumping, and clapping are all detected in real time by this system.

This approach examines each frame acquired by a camera, extracts skeletal data from the people in it using the OpenPose skeleton-extraction algorithm, and displays it on the screen. Each node is represented by its horizontal and vertical coordinates in the plane coordinate system. The velocity of descent at the centre of the hip joints, the angle of the human body's centre line with the ground, and the width-to-height ratio of the human body's exterior rectangle are used to identify falling behaviour. The paper is structured as follows: Section 2 examines the system's flow and architecture, Section 3 details the working explanation, Sections 4 and 5 list the hardware and software requirements, and Section 6 presents the conclusion.

II. SYSTEM ARCHITECTURE

Fig 1: Flow diagram
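The flow in Fig 1 ends in a classification stage. As a rough illustration of the window-then-classify step described in the Introduction, the sketch below groups per-frame keypoint vectors into fixed-length windows and feeds them to a small LSTM. The window length, layer sizes, and activity classes are illustrative assumptions, not the authors' reported configuration.

```python
import numpy as np
import tensorflow as tf

NUM_KEYPOINTS = 18             # OpenPose COCO keypoints
FEATURES = NUM_KEYPOINTS * 2   # (x, y) per keypoint
WINDOW = 32                    # frames per window (illustrative)
CLASSES = ["run", "sit", "stand", "wave_one_hand",
           "wave_two_hands", "jump", "clap"]

def make_windows(frames, window=WINDOW, stride=8):
    """Slide a fixed-length window over the per-frame feature vectors."""
    frames = np.asarray(frames, dtype=np.float32)
    return np.stack([frames[i:i + window]
                     for i in range(0, len(frames) - window + 1, stride)])

def build_model():
    """Small LSTM classifier over windows of keypoint coordinates."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(len(CLASSES), activation="softmax"),
    ])

model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(make_windows(training_frames), window_labels, ...) once data is prepared
```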

III. WORKING EXPLANATION

A. PSEUDOCODE
1. Identify two distinct activities: the fall action (FA) and the current action (CA).
2. Begin recording video using the system camera.
3. Launch the OpenPose library.
4. From the captured video, draw the essential points of the human body.
5. Link the keypoints together to obtain CA:
   if CA == FA, a fall has been detected and an email alert is issued;
   else, video recording continues.
6. End.
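A hedged Python rendering of the loop above is shown below. It takes a stream of per-frame keypoints (for example, the `keypoint_stream` sketched earlier), reduces the comparison between the current action and the fall action to a placeholder classifier, and sends the alert with `smtplib`. The function names, the classifier, and the mail settings are illustrative assumptions, not the authors' code.

```python
import smtplib
from email.message import EmailMessage

def classify_action(keypoints):
    """Placeholder: map one frame's keypoints to an action label (CA).
    In the proposed system this is where the pose features are compared
    against the learned fall action (FA)."""
    raise NotImplementedError

def send_alert_email(to_addr):
    """Send the fall alert; SMTP host and credentials are illustrative."""
    msg = EmailMessage()
    msg["Subject"] = "Fall detected"
    msg["From"] = "fall-detector@example.com"
    msg["To"] = to_addr
    msg.set_content("A fall was detected by the monitoring camera.")
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("fall-detector@example.com", "app-password")
        server.send_message(msg)

def monitor(keypoint_frames, caregiver_email):
    """Steps 2-6 of the pseudocode. keypoint_frames is an iterable of
    per-frame keypoint lists (e.g. the keypoint_stream sketched earlier)."""
    for keypoints in keypoint_frames:
        if keypoints is None:
            continue                              # nobody in the frame
        current_action = classify_action(keypoints)  # CA
        if current_action == "fall":                 # step 5: CA == FA
            send_alert_email(caregiver_email)
            break
        # else: keep recording (step 5, else branch)
```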
Python programmes can use MySQL connector/Python
• Object detection and extraction from live video: The live video is collected using the system web camera, and the object (the human body) is detected and extracted from the background using OpenCV algorithms.
• Image pre-processing: Frames are collected from the live video at 16 frames per second, converted to RGB format, and resized to 250x250 pixels for processing.
• Pose estimation: OpenPose determines the exact human posture. OpenPose is a bottom-up method for estimating multi-person human poses; it finds the keypoints, chiefly the joints of the human body skeleton, belonging to each individual in the retrieved image. Crucial features, including the thigh deflection angles, calf deflection angles, spine deflection angle, and spine ratio, are obtained using OpenPose, and the body position is determined from these features (a sketch of the related computation is given below, after Fig 2).
• Fall decision and alert: As demonstrated in Fig 2, the OpenPose estimator, a pre-trained model built on the COCO dataset, extracts 18 essential points of the human anatomy from a 2D image. The attributes collected from the live video are then compared with the specified fall action to determine whether or not the current action is a fall. If the system detects a fall, it emits a buzzer sound and sends an alert notification to the configured email address; otherwise, it repeats the capture procedure.

Fig 2: Keypoints
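The quantities this paper uses for the falling decision are the descent velocity of the hip centre, the angle of the body's centre line with the ground, and the width-to-height ratio of the body's bounding rectangle. The sketch below computes these three indicators from two consecutive sets of 18 keypoints given as (x, y) pairs; the index constants follow the COCO ordering assumed earlier, and the thresholds are illustrative values rather than those tuned by the authors.

```python
import math

NECK, R_HIP, L_HIP = 1, 8, 11   # indices in the assumed COCO-18 ordering

def hip_centre(kps):
    """Mid-point between the two hip keypoints; kps is a list of (x, y) pairs."""
    return ((kps[R_HIP][0] + kps[L_HIP][0]) / 2.0,
            (kps[R_HIP][1] + kps[L_HIP][1]) / 2.0)

def fall_indicators(prev_kps, curr_kps, dt):
    """Return (descent velocity, centre-line angle with ground, width/height ratio)."""
    # (a) descent velocity of the hip centre (image y grows downwards)
    (_, y_prev), (_, y_curr) = hip_centre(prev_kps), hip_centre(curr_kps)
    descent_velocity = (y_curr - y_prev) / dt

    # (b) angle between the neck-to-hip-centre line and the ground (horizontal)
    nx, ny = curr_kps[NECK]
    hx, hy = hip_centre(curr_kps)
    angle_deg = math.degrees(math.atan2(abs(hy - ny), abs(hx - nx) + 1e-6))

    # (c) width-to-height ratio of the external bounding rectangle
    xs = [x for x, _ in curr_kps]
    ys = [y for _, y in curr_kps]
    ratio = (max(xs) - min(xs)) / (max(ys) - min(ys) + 1e-6)

    return descent_velocity, angle_deg, ratio

def is_fall(prev_kps, curr_kps, dt):
    """Illustrative thresholds: fast descent, near-horizontal trunk, wide flat body."""
    v, angle, ratio = fall_indicators(prev_kps, curr_kps, dt)
    return v > 120 and angle < 45 and ratio > 1.0   # pixels/s, degrees, unitless
```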
IV. HARDWARE REQUIREMENTS

• Processor: Intel Core i5
• RAM: 4 GB
• Hard disk: 500 GB

V. SOFTWARE REQUIREMENTS

• Front end: Python. Python is a general-purpose programming language that may be used for a variety of tasks, including back-end development, software development, and writing system scripts. Python 3.7 was used in this project.
• Back end: MySQL. Python programs can use MySQL Connector/Python to connect to a MySQL database (a connection sketch is given after this list).
• IDE: Spyder. Spyder is a cross-platform open source IDE written entirely in Python. It is best used together with scientific computing tools such as Anaconda or Jupyter notebooks rather than as a general Python development environment; it was created by and for scientists, data analysts, and engineers.
• Tools: OpenPose. OpenPose is a technique for determining a person's exact position and a popular bottom-up methodology for estimating multi-person human poses. It is an open source, real-time project that recognises the human body, including the hands, face, facial expressions, and legs, from a single photograph. The COCO dataset was used to train the OpenPose model to extract 18 body keypoints.
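As an illustration of the back end mentioned above, the snippet below opens a connection with MySQL Connector/Python and records a detected fall. The database name, table, and credentials are assumptions made for the example; the paper does not specify a schema.

```python
import mysql.connector  # pip install mysql-connector-python

def log_fall(timestamp, camera_id):
    """Insert one detected-fall row into an assumed fall_events table."""
    conn = mysql.connector.connect(
        host="localhost",
        user="fall_app",
        password="change-me",
        database="fall_detection",   # hypothetical schema
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO fall_events (detected_at, camera_id) VALUES (%s, %s)",
            (timestamp, camera_id),
        )
        conn.commit()
    finally:
        conn.close()
```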

VI. CONCLUSION

Falling and being seriously injured are among the most important public health issues in an ageing society. To detect a fall, the characteristic features of the fall movement must be determined. This study presents a method for detecting human body keypoints in the captured frames using the TensorFlow OpenPose module. The approach extracts the human skeleton and detects falls from the skeleton data, i.e. the changing trajectory of the joint points as the person moves. The suggested system employs computer vision techniques in conjunction with artificial intelligence libraries such as OpenPose to detect human falls and provide timely alerts to users. Because OpenPose extracts the body keypoint features in skeleton form, which copes with background noise and low-light, distorted conditions, the method has been shown to operate well in a complex environment at a low equipment cost.

In some postures and motions the joint points can be lost, causing the model to stall during training. To circumvent this, the relative positions of the joint points are fixed and a portion of the data is interpolated, reducing excessive noise and abnormal events. Recurrent neural networks help learn the changes in the human joint points over time while also shortening training time.

The fall detection model proposed in this paper learns the changes in the human joint points over time. The falling motion is then recognised by setting three conditions: (a) the speed of descent at the centre of the hip joint, (b) the angle between the human body's centre line and the ground, and (c) the width-to-height ratio of the exterior rectangle. Based on this awareness of falls, and considering the condition of a person rising after a fall, the process of standing up after a fall is treated as the inverse of falling. As a result, the suggested algorithm is robust. Experiments have shown that the procedure works and produces the desired result, together with a proper alert.
ACKNOWLEDGMENT

We offer our heartfelt gratitude and thanks to everyone who helped make this project a success. We thank the almighty God for all the blessings bestowed upon us. Our guide, Prof. Divya R, deserves special thanks. We also thank our Principal, Dr. Nixon Kuruvila, and our Head of Department, Dr. M Rajeswari, for their unwavering support throughout the project.