
International Journal of Industrial Ergonomics 86 (2021) 103218

Contents lists available at ScienceDirect. Journal homepage: www.elsevier.com/locate/ergon

Development of a fully automated RULA assessment system based on computer vision

Gourav Kumar Nayak, Eunsik Kim *

Mechanical, Automotive, and Materials Engineering, University of Windsor, 401 Sunset Ave, Windsor, ON, N9B 3P4, Canada

* Corresponding author. E-mail addresses: [email protected] (G.K. Nayak), [email protected] (E. Kim).

https://doi.org/10.1016/j.ergon.2021.103218
Received 4 March 2021; Received in revised form 15 September 2021; Accepted 16 September 2021; Available online 1 October 2021.
0169-8141/© 2021 Elsevier B.V. All rights reserved.

Keywords: RULA; Deep learning algorithm; Musculoskeletal injuries; Automated posture assessment system

Abstract: The purpose of this study was to develop an automated, RULA-based posture assessment system using a deep learning algorithm to estimate RULA scores, including scores for wrist posture, based on images of workplace postures. The proposed posture estimation system reported a mean absolute error (MAE) of 2.86 on the validation dataset, obtained by randomly splitting off 20% of the original training dataset before data augmentation. The results of the proposed system were compared with those of two experts' manual evaluation by computing the intraclass correlation coefficient (ICC), which yielded index values greater than 0.75, thereby confirming good agreement between the manual raters and the proposed system. This system will reduce the time required for postural evaluation while producing highly reliable RULA scores that are consistent with those generated by a manual approach. Thus, we expect that this study will aid ergonomics experts in conducting RULA-based surveys of occupational postures in workplace conditions.

1. Introduction

Work-related musculoskeletal disorders (WMSDs) are injuries to the limbs of workers induced or aggravated by working conditions (Schneider et al., 2010). The working conditions that may lead to WMSDs include routine lifting of heavy objects, daily exposure to whole-body vibration, routine overhead work, work with the neck in a chronic flexion position, or performing repetitive forceful tasks (Bernard, 1997). According to researchers from the National Reference Center for Rare Autoimmune Diseases at University Hospitals of Strasbourg, WMSDs are ranked second worldwide in shortening people's working years, preceded only by mental illness and substance abuse (Sebbag et al., 2019). WMSDs also accounted for 4.1 million early deaths in 2015, an increase of 46% since 2000 (Sebbag et al., 2019). WMSDs have contributed to almost 400,000 injuries, costing industries over $20 billion per year (Middlesworth, 2020). Previous studies have identified strong associations between WMSDs and awkward postures in the workplace (Anderson et al., 1997; Simoneau et al., 1996; Van Wely, 1970).

Various postural evaluation tools, such as the Ovako Working Posture Analysis System (OWAS) (Karhu et al., 1977), the Novel Ergonomic Postural Assessment Method (NERPA) (Sanchez-Lite et al., 2013), the Rapid Upper Limb Assessment (RULA) (McAtamney and Corlett, 1993) and the Rapid Entire Body Assessment (REBA) (Hignett and McAtamney, 2000), have been developed to evaluate postural risks associated with workplace tasks. These tools are used for initial screenings of postures that can lead to WMSDs and that may require further analysis with more comprehensive tools. RULA is an observational survey method developed to assess postures of the neck, trunk, upper arm, lower arm, and wrists (McAtamney and Corlett, 1993). It needs no special equipment, which makes it easy for investigators to use. The RULA survey tool has proven useful in postural assessments of such occupational fields and settings as supermarkets (Ryan, 1989), agriculture (Tuure, 1992), ship maintenance (Van Wendel de Joode et al., 1997), soft drink distribution (Wright and Haslam, 1999), metalworking (González et al., 2003), transport driving (Massaccesi et al., 2003), and carpet mending (Choobineh et al., 2004). Furthermore, RULA allows for quick assessment of the upper body, making it popular and reliable in industry (Kee, 2020; Kong et al., 2018). Present methods of evaluating postural risk are based on observational techniques that require an ergonomic analyst to observe the work in real time or from recorded video, manually segment the relevant body parts, and evaluate the risk associated with the posture (Andrews et al., 2012). Due to human error, however, these techniques produce results with low consistency and repeatability, both of which can be reduced or eliminated by using advanced technologies (Peppoloni et al., 2015; Plantard et al., 2017; Sasikumar and Binoosh, 2020; Clark et al., 2012).


Human pose estimation has progressed significantly with the advancement of convolutional neural networks (CNNs) and popular keypoint datasets such as Microsoft Common Objects in Context (COCO) (Lin et al., 2014), the MPII Human Pose Dataset (Andriluka et al., 2015) and Human3.6M (Ionescu et al., 2014). The use of objective measurements in postural assessment has become very popular because it eliminates the need for experts to manually segment body parts and evaluate movements, and because technological advancements in the field of computing have made it easier than ever to access these tools (Lowe et al., 2019). Earlier attempts to achieve this were based on wearable devices (Peppoloni et al., 2015) and Kinect-based systems (Plantard et al., 2017) for online assessment of WMSD risks.

In order to track upper limb movements for calculating work cycles, Peppoloni et al. (2015) used a wireless, wearable device system that relies on EMG signals to determine muscle effort intensity and on inertial measurement units (IMUs) to reconstruct the posture of the human upper limb. This method requires IMU and EMG sensors to be mounted on a subject's body, but sensor application in real work conditions can be difficult due to signal interference, and trained professionals are required to conduct the study and run calibration procedures, which can also be a challenging process, as threshold parameters of the system vary with a subject's motion during calibration (Peppoloni et al., 2015).

Another method uses marker-less motion capture systems such as Microsoft Kinect, which are easy-to-use motion capture devices that can provide real-time anatomical landmark position data in three dimensions (Clark et al., 2012). A significant drawback to this method is the occlusion of a body's joints, which can lead to insufficient information to accurately predict posture and, hence, unrealistic results (Clark et al., 2012). Another disadvantage of this method is that it requires the wrist, wrist twist and neck twist RULA scores to be disregarded, because the industrial environment produces too much noise and can lead to error in the RULA computation.

A third approach to evaluating postural risk with respect to RULA is to use a computer vision algorithm that predicts the RULA grand score from images. Sasikumar and Binoosh (2020) compared the performance of popular supervised machine learning classifiers, such as the Random Forest algorithm, the Naïve Bayes classifier, the Decision Tree algorithm, the k-Nearest Neighbors algorithm, Neural Networks and Support Vector Machines, when predicting the risk of WMSDs in computer professionals considering postural, physiological, and work-related factors. The attributes for evaluating postural risks were selected based on data collected from the Nordic musculoskeletal questionnaire survey (Crawford, 2007). Because this study was conducted among computer professionals, it is task-specific and therefore applicable only to IT industries. Li et al. (2020) conducted a study using a convolutional neural network (CNN) to predict kinematic joint information from images and another dense network to classify the output as RULA grand scores in real time. This method uses the MS COCO dataset as training data for a pose detector model to predict key joint locations for workers. One major drawback of this technique, however, is that it assumes a uniform posture score for the wrist. The pose detector model is also sensitive to visual noise in images, such as poor lighting conditions and dust, which makes the application difficult in on-site environments.

In general, although direct methods such as motion capture systems or IMU sensors are more accurate than observation-based approaches, they also have the disadvantages of being expensive, requiring training, and being difficult for practitioners to implement in the field. In addition, all the previous studies related to automating the task of posture assessment either require additional mounted equipment or consider the wrist score uniform for Group A postural evaluation, and they cannot justifiably be applied broadly to general occupational and industrial tasks (Peppoloni et al., 2015; Plantard et al., 2017; Clark et al., 2012; Li et al., 2020). Thus, the proposed study aims to automate the observation-based process of employing RULA in a workplace by developing a deep learning algorithm that can predict full-body posture, including wrist posture, from images and evaluate the postural risks associated with occupational tasks. The present study applied this method to common occupational postures and compares the resulting RULA scores with those calculated by two ergonomics experts. This method reduces the time required for RULA evaluation by eliminating the need for investigators to spend time sampling and evaluating posture from video camera recordings of workplace tasks. This study aims to determine whether it is possible to develop an automated system, based on a deep learning algorithm, that will reduce evaluation time and produce RULA score results that are sufficiently similar to those generated by manual evaluators in observational postural assessment.

2. Method

2.1. Data preparation

This study used Whole-Body Human Pose Estimation in the Wild, which extends the MS COCO 2017 dataset with manual annotations for 68 facial points, 42 hand points and 23 body and feet points, for training and validation of the proposed neural network (Jin et al., 2020). For each key point, the annotations indicate the absolute horizontal and vertical distance from the top-left corner of the given image, as well as a numerical visibility flag assigned a value of 0, 1, or 2, where 0 is not visible and not available, 1 is available but not visible, and 2 is available and visible. The dataset can be found on the official repository, available at https://github.com/jin-s13/COCO-WholeBody (Jas and Fang, 2020). Based on RULA notation for whole-body postural analysis, this study used images with a visibility flag of 1 or 2 for the following 17 key points: nose (N), right eye (RI), left eye (LI), right shoulder (RS), left shoulder (LS), right elbow (RE), left elbow (LE), right wrist (RW), left wrist (LW), right trunk (RT), left trunk (LT), right knee (RK), left knee (LK), right ankle (RA), left ankle (LA), right knuckle (RN) and left knuckle (LN).

This study also used a pre-trained neural network to detect and crop a human figure from the original image in order to reduce the reception area for the postural detection network, thereby improving the accuracy of the postural estimation network. The model weights and architecture for this pre-trained network can be found at https://github.com/experiencor/keras-yolo3, available under an open-source license. The class labels in the script were modified to detect only the 'person' class. Data augmentation was applied to generate the vast amount of data required for training CNNs, by rotating images at random angles and flipping each along its mid-vertical axis, as shown in Fig. 1. The images in the COCO dataset are of people in different postures engaged in common, real-life activities, which ensured that the model was trained on real data.

2.2. Network architecture

A regression-based human pose estimation network was introduced to predict posture from images. The deep learning model in this study was drawn from the study by Toshev and Szegedy (2013) and was trained using 17 key points from the recently published Whole-Body Human Pose Estimation in the Wild dataset (Jin et al., 2020). The input layer of the network accepts grayscale images of 128 × 128 pixels. Since the joint coordinates are in the absolute image coordinate system and the input layer accepts 128 × 128 images, the images were resized and the label coordinates were transformed relative to the new image size. The images were normalized by subtracting the mean from each pixel across the channel and then dividing the result by the standard deviation, thereby speeding up convergence during network training:

$X_S = \dfrac{X - \mu}{\sigma}$  (1)

where X is the pixel intensity, and μ and σ are the mean and standard deviation of the pixel intensity across the entire channel.


Fig. 1. (a) Sample training images from MS COCO dataset (b) Results of person detector model (c) Horizontal flip augmentation (d) Random rotation augmentation.
Images used under CC by 2.0 license.
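As a concrete illustration of the preprocessing and augmentation steps described in Sections 2.1 and 2.2, the sketch below crops the detected person, converts the crop to a grayscale 128 × 128 input, standardizes it as in Eq. (1), rescales the keypoint labels, and applies a horizontal flip. This is a minimal reconstruction rather than the authors' code: `person_box` is assumed to come from the person detector, and `keypoints_xy` is a hypothetical (17, 2) array of annotated joint coordinates.

```python
import cv2
import numpy as np

def preprocess(image_bgr, keypoints_xy, person_box, size=128):
    """Crop the detected person, convert to grayscale, resize and standardize (Eq. 1),
    and rescale the keypoint labels to the resized crop."""
    x, y, w, h = person_box                       # (x, y, width, height) from the person detector
    crop = image_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (size, size)).astype(np.float32)
    standardized = (resized - resized.mean()) / (resized.std() + 1e-8)
    # Shift keypoints into the crop, then rescale them to the 128 x 128 input.
    kp = (np.asarray(keypoints_xy, dtype=np.float32) - [x, y]) * [size / w, size / h]
    return standardized[..., np.newaxis], kp.reshape(-1)   # (128, 128, 1) image, (34,) label

def horizontal_flip(image, label_flat, size=128):
    """Mirror the image and the x coordinates of the label; left/right keypoint
    indices must also be swapped after mirroring."""
    kp = label_flat.reshape(-1, 2).copy()
    kp[:, 0] = (size - 1) - kp[:, 0]
    return image[:, ::-1, :], kp.reshape(-1)
```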

Table 1 presents the architecture of the posture estimation network used in this study. The network consists of 5 convolutional layers for feature extraction, where the first and second convolutional layers are each stacked with a MaxPooling layer that reduces the dimensionality of the feature maps and decreases the number of subsequent trainable parameters (Yamashita et al., 2018). Inputs to the convolutional layers were linearly transformed to have zero mean and unit variance by employing batch normalization as part of the architecture (Ioffe and Szegedy, 2015). A Rectified Linear Unit was used as the activation function between convolutional layers, mapping all negative inputs to zero (Maas et al., 2013). The output feature maps of the final convolutional layer were flattened and connected to 3 fully connected layers with 4096, 4096 and 1000 neurons, respectively, which were connected to an output layer of dimension 34 × 1. A dropout layer with a dropout regularization factor of 0.4 was implemented between each pair of fully connected layers (Huang et al., 2017). The additional dropout layers reduced overfitting of the network and improved generalization by randomly dropping out neurons (Srivastava et al., 2014).

The performance of the model during forward propagation was calculated using the mean squared error (Sammut and Webb, 2011) as the loss function, and learnable parameters were updated during backpropagation using Adam optimization (Kingma and Ba, 2015). The network was trained with a batch size of 64 and an initial learning rate of 1e-2, which was reduced by a factor of 2 (down to a minimum of 1e-8) whenever the validation loss did not decrease for 6 consecutive epochs; training was stopped when this condition persisted, to prevent the network from overfitting (Caruana et al., 2001). The capability of the network to generalize was assessed by the mean absolute error (MAE) of the model's performance on a validation set during training. The MAE is the mean of the absolute difference between the predicted and annotated joint coordinates. To validate the model and to ensure that the training and testing data are representative of the same sample, 20% of the dataset was reserved from the overall training set without any augmentation. The network was trained from scratch using a Google Cloud-based service, Google Colab, with 25 GiB of GPU memory (Bisong, 2019).
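A Keras reconstruction of the network and training configuration described above is sketched below. The layer widths, dropout rate, optimizer, batch size, learning-rate schedule and early stopping follow the text and Table 1; the kernel sizes and strides (11 × 11 with stride 4, then 5 × 5 and 3 × 3 convolutions) are inferred from the parameter counts listed in Table 1, and the final batch-normalization layer of Table 1 is omitted, so this is a plausible sketch rather than the authors' exact implementation. `x_train`, `y_train`, `x_val` and `y_val` are placeholder names.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def conv_block(x, filters, kernel, strides=1):
    x = layers.Conv2D(filters, kernel, strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_pose_net(num_outputs=34):
    inp = layers.Input(shape=(128, 128, 1))        # grayscale 128 x 128 input
    x = conv_block(inp, 96, 11, strides=4)         # -> (32, 32, 96)
    x = layers.MaxPooling2D(2)(x)                  # -> (16, 16, 96)
    x = conv_block(x, 256, 5)                      # -> (16, 16, 256)
    x = layers.MaxPooling2D(2)(x)                  # -> (8, 8, 256)
    x = conv_block(x, 384, 3)
    x = conv_block(x, 384, 3)
    x = conv_block(x, 256, 3)
    x = layers.MaxPooling2D(2)(x)                  # -> (4, 4, 256)
    x = layers.Flatten()(x)                        # -> 4096
    for units in (4096, 4096, 1000):               # three fully connected layers
        x = layers.Dense(units, activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.4)(x)
    out = layers.Dense(num_outputs)(x)             # 17 joints x (x, y) = 34 regression outputs
    return models.Model(inp, out)

model = build_pose_net()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
              loss="mse", metrics=["mae"])         # MSE loss, MAE monitored as the metric
cbs = [
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=6, min_lr=1e-8),
    callbacks.EarlyStopping(monitor="val_loss", patience=6, restore_best_weights=True),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=64, epochs=300, callbacks=cbs)
```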


Table 1
Architecture of the postural estimation network.

Layer Type | Output Shape | # Parameters
Conv_2D_1 (Convolution) | (None, 32, 32, 96) | 11712
Bn_1 (BatchNormalization) | (None, 32, 32, 96) | 384
Max_Pooling_2D_1 | (None, 16, 16, 96) | 0
Conv_2D_2 (Convolution) | (None, 16, 16, 256) | 614656
Bn_2 (BatchNormalization) | (None, 16, 16, 256) | 1024
Max_Pooling_2D_2 | (None, 8, 8, 256) | 0
Conv_2D_3 (Convolution) | (None, 8, 8, 384) | 885120
Bn_3 (BatchNormalization) | (None, 8, 8, 384) | 1536
Conv_2D_4 (Convolution) | (None, 8, 8, 384) | 1327488
Bn_4 (BatchNormalization) | (None, 8, 8, 384) | 1536
Conv_2D_5 (Convolution) | (None, 8, 8, 256) | 884992
Bn_5 (BatchNormalization) | (None, 8, 8, 256) | 1024
Max_Pooling_2D_3 | (None, 4, 4, 256) | 0
Flatten | (None, 4096) | 0
Dense_1 | (None, 4096) | 16781312
Bn_6 (BatchNormalization) | (None, 4096) | 16384
Dropout_1 | (None, 4096) | 0
Dense_2 | (None, 4096) | 16781312
Bn_7 (BatchNormalization) | (None, 4096) | 16384
Dropout_2 | (None, 4096) | 0
Dense_3 | (None, 1000) | 4097000
Bn_8 (BatchNormalization) | (None, 1000) | 4000
Dropout_3 | (None, 1000) | 0
Dense_4 | (None, 34) | 34034
Bn_9 (BatchNormalization) | (None, 34) | 136

2.3. RULA posture score estimation

RULA body posture scores were estimated from the 2D kinematic joint locations obtained from the proposed deep learning model, using the Euclidean distance and the cosine of the angle between 2D vectors. The Euclidean distance was used to calculate the distance between two joints, or the length of a limb if the two joints belong to the same body part, and the inverse cosine was used to calculate the angle between two limbs:

$\theta_{ab} = \cos^{-1}\!\left(\dfrac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|}\right)$  (2)

where $\vec{a}$ and $\vec{b}$ are vectors whose heads point towards a joint and whose tails lie on the connecting limb.

Angles are denoted by θ, with subscripts indicating the specific limbs on either side of θ, denoted by the first letter of the relevant limb (e.g., U for upper arm and L for lower arm), followed by a number that distinguishes the movement of a single body part: flexion/extension movement is assigned the number 1, twist movement 2, side-bending 3, and abduction/adduction movement 5. S represents the score of a single body part, and its subscripts denote a specific limb and the movement of that limb in accordance with the numerical labels defined above. Fig. 2 and Fig. 3 represent the RULA posture score calculations for each group from the 2D locations of the body joints. This study also incorporated thresholds for some parameters that are necessary to compute RULA scores, as given by Vignais et al. (2017).
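The geometric core of Eq. (2) can be expressed as a small helper that computes the angle between two limbs sharing a joint; a minimal sketch follows (function and argument names are illustrative, not taken from the paper):

```python
import numpy as np

def limb_length(joint_a, joint_b):
    """Euclidean distance between two joints, i.e., the length of a limb."""
    return float(np.linalg.norm(np.asarray(joint_a, float) - np.asarray(joint_b, float)))

def limb_angle(joint_a, vertex, joint_b):
    """Angle in degrees at `vertex` between the limbs vertex->joint_a and vertex->joint_b,
    following Eq. (2): theta = arccos(a.b / (|a||b|))."""
    a = np.asarray(joint_a, float) - np.asarray(vertex, float)
    b = np.asarray(joint_b, float) - np.asarray(vertex, float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# e.g., an elbow angle from 2D keypoints: limb_angle(shoulder_xy, elbow_xy, wrist_xy)
```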
Upper Arm Flexion/Extension (θU1): Upper body joints (trunk and shoulder) and upper arm joints (shoulder and elbow) in the sagittal plane were used to calculate the upper arm flexion/extension angle.

Upper Arm Abduction (θU5): The upper arm was considered elevated if the angle between the upper body and the upper arm was greater than 45° in the frontal plane.

Lower Arm Flexion/Extension (θL1): Upper arm joints and lower arm joints (wrist and elbow) in the sagittal plane were used to determine the lower arm flexion/extension angle.

Lower Arm Midline Posture (SL6): The left and right wrist joints were tracked in the frontal plane to estimate whether the lower arms were working across the midline.

Wrist Flexion/Extension (θW1): Lower arm joints (elbow and wrist) and hand joints (wrist and knuckle) in the sagittal plane were used to calculate the wrist flexion/extension angle.

Upper Body Leaning (θU4): The upper arm score was adjusted by -1 if gravity assisted the posture; this adjustment needs to be initialized by the investigator depending on the nature of the task.

Wrist Midline Posture (θW6): Lower arm joints (elbow and wrist) and hand joints (wrist and knuckle) in the frontal plane were used to calculate wrist bending away from the midline posture. A score of +1 was added to the wrist score when this angle was less than -10° (radial deviation) or greater than 10° (ulnar deviation).

Neck Flexion/Extension (θN1): The neck key point is located at the center of the left and right shoulder key points, considering the symmetry of the human body. Upper body joints (trunk and shoulder) and head joints (neck and head) in the sagittal plane were used to calculate the neck flexion/extension angle.

Neck Twist (SN2): The Euclidean distances between the nose joint and the left and right shoulder joints were used in the frontal plane to estimate neck twist.

Neck Side-Bending (θN3): We assumed that the head follows the same orientation as the line connecting the center of the left and right eyes to the mid-shoulder joint. Head joints (neck and head) were used to determine neck side-bending with respect to the vertical axis in the frontal plane.

Trunk Flexion/Extension (θT1): The mid-point of the right and left trunk key points was used to compute the mid-body trunk joint. The neck and mid-body trunk key point positions were used to calculate trunk flexion/extension with respect to the vertical axis in the sagittal plane.

Trunk Twist (ST2): Estimated from the Euclidean distances of the shoulder joints from the trunk joints in the frontal plane.

Trunk Side-Bending (θT3): Mid-upper body joint (trunk and neck) positions were used to calculate trunk side-bending with respect to the vertical axis in the frontal plane.

Leg Posture (SL7): The legs were assumed to be evenly balanced if the operator was in a sitting posture or if the difference between the right and left ankle y-coordinates was less than half of the torso length in the frontal plane. Although Li and Xu (2019) used a threshold of 5 cm, using the torso dimension is preferable because it accounts for the scale of the person in the image.

Shoulder Raise: As the worker did not raise his arm upward during the task, the shoulder raising score was fixed at 0.

Force/Muscle Use Score: The force and muscle use scores need to be initialized in advance by the investigator according to the nature of the task. For this study, the muscle use score and load score were set to +1 because the posture was static and the load intermittent.
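Two of the rules above translate directly into code; the sketch below encodes the ±10° wrist-deviation adjustment and the scale-aware leg-balance test (half the torso length) exactly as stated, with illustrative function names:

```python
def wrist_deviation_score(deviation_deg, threshold_deg=10.0):
    """+1 when the wrist is bent away from the midline by more than 10 degrees
    (radial or ulnar deviation), otherwise 0."""
    return 1 if abs(deviation_deg) > threshold_deg else 0

def legs_evenly_balanced(left_ankle_xy, right_ankle_xy, torso_length, sitting=False):
    """Legs are treated as evenly balanced when the operator is seated, or when the
    vertical (y) difference between the ankles is below half the torso length."""
    if sitting:
        return True
    return abs(left_ankle_xy[1] - right_ankle_xy[1]) < 0.5 * torso_length
```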


Fig. 2. Group A postural evaluation technique.

2.4. RULA grand score estimation

The Group A score, determined from the upper arm, lower arm and wrist scores, and the Group B score, determined from the neck, trunk, and leg scores, are combined with the muscle use score and force/load score to provide an overall posture score for each group. Muscle use and load on the body parts were assumed to be constant throughout the work cycle. The scores from both groups were summarized into one action level, the grand score, which depicts the level of musculoskeletal risk associated with a given body posture and the corresponding priority of action associated with that risk, as indicated in Table 2.

2.5. Validation of the RULA posture score estimation

The posture angles obtained from the algorithm were validated on 11 common occupational workplace posture images, shown in Fig. 4, that were manually evaluated in accordance with RULA by two ergonomics experts. One of the experts computed the angles between body parts for each posture manually by placing a digital goniometer directly on the subject's body during the task for each side, whereas the other expert computed the angles from the test images using an online web angle measurement tool (Ginifab Online Protractor). The images were taken from two different perspectives, one from the side to capture information about joint locations in the sagittal plane, and the other from the front to capture information about joint locations in the frontal plane. Lowe et al. (2014) discuss guidelines for recording postures in a workplace for better quality and accuracy of analysis, and their guidelines have been followed in the current study for the test postures in Fig. 4. The images were captured using a general-purpose camera (Panasonic Lumix DC-ZS70) and contain full-body human postures in indoor and outdoor environments, performing different activities such as pushing, lifting, machining, and handling equipment.

The intraclass correlation coefficient (ICC) and one-way ANOVA were applied to the collected data using SPSS software version 25.0 (IBM Corp., Armonk, NY). The ICC was calculated for the Group A score, Group B score and grand score for both the left and right sides to assess the inter-rater reliability of the postural evaluation between the two rating methods, i.e., manual evaluation and the proposed algorithm. The ICC serves as a reliability index that reflects both the degree of correlation and the agreement between the results of two evaluation methods, comparing specifically the ratings from the two ergonomics experts and those from the proposed machine learning algorithm in this study. The ICC values were computed with a single-rating, absolute-agreement, two-way mixed-effects model for the postures shown in Fig. 4. This index is used when subjects are chosen at random, raters are fixed, and differences among the ratings are considered relevant. A one-way ANOVA test was performed on the RULA grand scores obtained by the manual technique from the two experts (E1 and E2) and by the proposed algorithm to determine whether any significant difference exists between the new method and the traditional manual approach.

3. Results

Fig. 5 plots the MAE loss for the training and validation data computed at every epoch during training of the postural estimation network. The model trained for 187 epochs before stopping early to avoid overfitting. The MAE loss computed at the end of the final epoch on the training data is 1.72, whereas the MAE loss on the validation data is 2.86. The trained model weights can be found at https://tinyurl.com/model-weights-fbp.

Table 3 reports the RULA evaluations for the Group A, Group B and grand scores, along with the inference time of the algorithm for evaluating the postures. The table compares the evaluations from the two experts, denoted E1 and E2, and the results of the proposed automated technique, denoted A. Among the computed grand score results for both sides of the body posture, the proposed algorithm assigned the same evaluation scores as those of the ergonomics experts to 40.91% of postures, whereas 36.36% of postures were assigned higher evaluation scores and the remaining 22.73% were assigned lower scores. The average inference time for evaluating the postures using the algorithm was 14.64 s.

As shown in Table 4, the ICC index for the Group A score is 0.776 for the left-body posture and 0.867 for the right-body posture, whereas the Group B score is 0.851 for both sides of the body. Similarly, the ICC index of the computed grand scores is 0.819 and 0.797 for the left- and right-body posture, respectively.
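The inter-rater reliability analysis reported in Table 4 can be reproduced with any ICC implementation; the sketch below uses the pingouin package on long-format ratings (an assumption for illustration only, since the original analysis was run in SPSS 25.0), where `e1_grand`, `e2_grand` and `algo_grand` are hypothetical lists holding the 11 grand scores from E1, E2 and the algorithm:

```python
import pandas as pd
import pingouin as pg  # assumed available; the paper's analysis used SPSS 25.0

ratings = pd.DataFrame({
    "posture": list(range(1, 12)) * 3,                    # 11 test postures, 3 raters
    "rater":   ["E1"] * 11 + ["E2"] * 11 + ["A"] * 11,
    "score":   e1_grand + e2_grand + algo_grand,          # RULA grand scores
})
icc = pg.intraclass_corr(data=ratings, targets="posture", raters="rater", ratings="score")
print(icc[icc["Type"] == "ICC2"])   # single-rating, absolute-agreement ICC
```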


Fig. 3. Group B postural evaluation technique.
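The one-way ANOVA comparison of the grand scores reported below can be run with SciPy, again using the hypothetical `e1_grand`, `e2_grand` and `algo_grand` lists; a non-significant result (p > 0.05) indicates no rater effect:

```python
from scipy.stats import f_oneway

f_stat, p_value = f_oneway(e1_grand, e2_grand, algo_grand)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
```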

As shown in Table 5, one-way ANOVA results revealed that the RULA grand scores from the experts' respective postural evaluations were not significantly different (p > 0.05) from those of the algorithm with respect to the left-body posture (F = 0.000, p = 1.000) and the right-body posture (F = 0.392, p = 0.679).

Table 2
RULA grand score summary.

Grand Score | Level of WMSD Risk
1–2 | Acceptable working posture if not maintained or repeated for long periods
3–4 | Further investigation is needed; posture change may be required
5–6 | Investigate and implement posture changes soon to avoid further exposure to WMSD risk
7 | Requires immediate attention and changes in posture

4. Discussion

This study introduces a new method to estimate a RULA score from two-view (sagittal and frontal) body posture images with the help of a CNN-based neural model that is invariant to scale, visual noise and color and can be used to assess postures in workplaces. The subject was recorded using two video cameras throughout the work cycle. One video camera was positioned to capture information from the side, while the other recorded the posture from the front. The performance of the proposed postural estimation network was evaluated by monitoring the validation loss on a non-augmented split of the training data. The results of the proposed method were compared with those of the traditional method of assessing postural risks based on RULA by computing the ICC as an index of agreement between the two techniques.

The model scored an MAE of 2.86 between the predicted and annotated joint coordinates on the validation data, as represented in Fig. 5, which suggests little error in prediction. Overfitting is a condition in which the network learns the training data too well and subsequently becomes unable to generalize, which results in poor performance on new data. The proposed algorithm showed a difference in MAE of 1.14 between the training and validation data, which suggests very little likelihood of neural network overfitting. Table 3 displays the results of the RULA evaluation of left- and right-side body posture using the manual methods and the proposed algorithm. The proposed algorithm assigned the same evaluation scores as the manual evaluators to 40.91% of postures. Although the proportion of identical evaluation scores is relatively low, the algorithm still assigned 77.27% of the postures scores the same as or higher than the corresponding scores achieved with the manual approach. Since postures with higher evaluation scores require more attention from the evaluators, the results indicate that evaluation by the proposed method is more conservative than that of the manual method and will thus have a lower omission error than the manual method. This will lead to potentially risky postures being subjected to further investigation in order to minimize the risk of WMSDs in workplaces.

Fig. 4. Example of test images of postures performing different kinds of occupational tasks from front and side perspective.
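The action levels that drive this conservative behaviour are those of Table 2; a minimal helper that maps a grand score to its action level (wording condensed from Table 2) might look like:

```python
def rula_action_level(grand_score: int) -> str:
    """Map a RULA grand score to the action level summarized in Table 2."""
    if grand_score <= 2:
        return "Acceptable if not maintained or repeated for long periods"
    if grand_score <= 4:
        return "Further investigation needed; posture change may be required"
    if grand_score <= 6:
        return "Investigate and implement posture changes soon"
    return "Requires immediate attention and changes in posture"
```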

As shown in Table 4, the statistical analysis results demonstrate high levels of agreement between the automated and manual evaluation techniques. ICC values between 0.75 and 0.90 indicate good reliability, and values greater than 0.90 indicate excellent reliability (Koo and Li, 2016). The value of the ICC index is between 0.776 and 0.867 for both sides of the body posture in this study, indicating that the algorithm's estimated scores are in good agreement with the investigators' manual evaluation scores. For a few postures, the algorithm's RULA grand score differed from the manual estimate for one side of the body posture. A possible explanation for this finding is that the upper body posture is asymmetrical between sides in the side-view image and/or that a given joint was occluded by other body parts. As mentioned in the literature on RULA, the selection of the left- or right-side view can be made based on the posture held for the greatest proportion of the work cycle, the location of the highest loads, or a subject's initially reported discomfort (McAtamney and Corlett, 1993). To offset the uncertainty caused by asymmetrical posture between the left- and right-side views, another camera can be mounted to record both side views.

A one-way ANOVA test was conducted on the RULA grand scores obtained from the two experts' manual evaluation and from the proposed algorithm's postural evaluation on both sides of the body. As per the results of the ANOVA test reported in Table 5, there was no significant effect of the evaluation technique on the final RULA grand score assessment results (p > 0.05). This suggests that the proposed technique can be used as an alternative to the traditional, manual approach to evaluating posture based on RULA.

Fig. 5. MAE plot of the posture estimation network during training.

The time taken to evaluate a posture manually depends on factors such as posture complexity and investigator experience, whereas the machine learning technique is independent of such factors. As depicted in Table 3, the range of inference times for the postural estimation algorithm is 13.68–16.45 s. The proposed algorithm thus drastically reduces the time taken for postural evaluation and helps ergonomic investigators to make quick decisions. The reported inference time depends on the size of the input image and the GPU of the machine on which the model is running, which explains the slight variation.


Table 3
Group A, Group B and grand scores from the RULA postural evaluation for the left and right sides of the body posture by Expert 1 (E1), Expert 2 (E2) and the postural estimation algorithm (A), with the algorithm's inference time. Each cell lists the scores as E1/E2/A.

Posture | Group A (Left) | Group A (Right) | Group B (Left) | Group B (Right) | Grand (Left) | Grand (Right) | Time (s)
1 | 2/2/2 | 2/2/2 | 3/3/2 | 3/3/2 | 5/5/4 | 5/5/4 | 14.01
2 | 3/3/2 | 3/3/3 | 2/2/5 | 2/2/5 | 5/5/6 | 5/5/7 | 13.83
3 | 3/3/2 | 4/4/3 | 5/5/5 | 5/5/5 | 7/7/6 | 7/7/7 | 13.68
4 | 4/4/3 | 4/4/4 | 3/3/3 | 3/3/3 | 6/6/6 | 6/6/6 | 13.77
5 | 3/3/2 | 3/3/3 | 6/6/6 | 6/6/6 | 7/7/6 | 7/7/6 | 14.76
6 | 2/2/2 | 2/2/2 | 2/2/2 | 2/2/2 | 4/4/4 | 4/4/4 | 14.22
7 | 3/3/3 | 3/3/3 | 3/3/3 | 3/3/3 | 5/5/6 | 5/5/6 | 14.26
8 | 3/3/2 | 3/3/2 | 3/3/4 | 3/3/4 | 5/5/6 | 5/5/6 | 14.29
9 | 4/4/4 | 4/4/5 | 2/2/3 | 2/2/3 | 6/6/6 | 6/6/7 | 16.45
10 | 4/4/4 | 2/2/2 | 2/2/2 | 2/2/2 | 6/6/6 | 4/4/4 | 16.01
11 | 4/4/3 | 2/2/3 | 5/5/6 | 5/5/6 | 7/7/7 | 6/6/7 | 15.76

Table 4
ICC indices for the Group A score, Group B score and grand score, with a 95% confidence interval, for the left- and right-side body postures.

Measure | Left-body posture | Right-body posture
RULA Group A Score | 0.776 | 0.867
RULA Group B Score | 0.851 | 0.851
RULA Grand Score | 0.819 | 0.797

Table 5
Results from the comparison of RULA grand scores (traditional vs. proposed method evaluation).

Variable | Manual Evaluation (Mean, SD) | Proposed Method Evaluation (Mean, SD) | t
RULA Grand Score - Left | 5.73, 1.009 | 5.73, 0.905 | 0.00
RULA Grand Score - Right | 5.45, 1.036 | 5.82, 1.250 | -0.743

This study makes several contributions to workplace safety and the literature that studies it. The present study is the first to implement a machine learning algorithm for observation-based RULA posture assessment that estimates all relevant body posture scores, including the wrist score, from images of postures in a real workplace (Peppoloni et al., 2015; Plantard et al., 2017; Clark et al., 2012; Li et al., 2020). Additionally, the output of the algorithm can be directly interpreted to assess the risks associated with postures, reducing the time and cost typically required to train an ergonomic analyst in conducting a RULA assessment survey. Further, with this method, investigators will no longer need to segment body parts or evaluate posture manually, which will reduce evaluation time and eliminate human error. In addition, the proposed system is easy to use, allowing an investigator to assess postures quickly at the workplace without conducting preliminary surveys to select only the postures reported for discomfort. Based on our results, we conclude that the proposed system will aid ergonomics experts in carrying out investigations at the workplace and will encourage the use of advanced tools for ergonomic assessments.

There are several features and limitations to this study that should be noted. The first limitation is that the reliability of the algorithm depends on the recordings of the postures. Video images represent posture two-dimensionally and may cause perspective errors if the camera view is not perpendicular to the motion of interest; thus, the postures must be captured such that the images depict the true locations of the body joints (Lowe et al., 2014). The proposed computer vision model can estimate individual group scores from a single-view image, but it requires two images in perpendicular planes to avoid parallax error in estimating true joint positions and to compute RULA grand scores for postures. Two ergonomic analysts can record postures for tasks that involve movement in more than one direction of translation, or a single-view image can be used to obtain the corresponding joint movements, such as upper arm flexion/extension from the side-view image or neck side-bending/twist from the front-view image; the image view required for computing each joint movement is explained in detail in Section 2.3. In the future, more advanced neural networks can be incorporated to predict 3D joint locations from the raw image pixels, thereby eliminating the need for images taken from two different orientations (Martinez et al., 2017; Zhou et al., 2017; Li and Chan, 2015). The second limitation of the study is that the proposed machine learning algorithm depends on the person detector model to perform body joint predictions. The person detection model fails on images in which body postures are heavily obstructed by objects; Fig. 6 shows an example of such an image. To overcome this limitation, the posture must be recorded from a better position, which will reduce occlusion and help the person detector model detect the subject. Although it is recommended to keep the camera view perpendicular to the plane of interest for accurate assessment of the angles, the investigator can still alter the position of the recording cameras according to their own judgment in case the workplace occludes a subject's posture. For instance, the posture in Fig. 6 is occluded by the vehicle frame to such an extent that the person detector model is unable to detect the subject. However, the same posture can be captured from a different position to obtain results, as shown for Posture 5 in Fig. 4.

Fig. 6. Example of a posture that the person detection model failed to detect due to heavy object obstruction.


The results of the proposed method were validated against ratings by ergonomic analysts who evaluated the postures using basic ergonomics tools (a goniometer and online angle-measurement software) to simulate a practical study in the workplace. Although this validation approach could be questioned, this choice was made because recent surveys have shown that ergonomic practitioners worldwide still prefer to use basic tools over direct measurement tools to evaluate postures in workplaces (Lowe et al., 2019).

The observation-based posture evaluation method, despite being popular among ergonomics practitioners, is prone to omission errors that can be effectively overcome with the use of the method deployed in this study. The proposed algorithm has proven to be more conservative in posture evaluations than manual evaluation, which will lead investigations for ergonomic intervention to discover more postures that pose WMSD risks. The proposed method can evaluate postures without omission error and with performance similar to, but faster than, manual evaluations. This method, therefore, can aid ergonomic evaluators in conducting RULA-based surveys of occupational postures in workplace conditions.

The posture estimation network discussed in this study can easily be extended to other posture assessment tools, such as ALLA, REBA and OWAS, that evaluate body postures to determine their associated WMSD risks. For instance, REBA evaluates body postures in a similar way to the proposed method, by estimating the angles of the legs, trunk, neck, arms and wrists. The output of the postural estimation network is a set of 2D body joint locations that can be processed with a Python-based script to evaluate postures based on RULA. This separation of logic allows the proposed postural estimation network to be used for the evaluation of postures based on other tools.

5. Conclusion

This study discusses a deep learning technique to estimate posture scores from images of postures at occupational workplaces by detecting the 2D coordinates of body joints, including the wrist joints, to evaluate work postures based on RULA. The study proposes a posture estimation network to detect body key point locations of work postures from images. The accuracy of the network has been validated on a subset of training images and achieves high precision. The network has been tested on common workplace postures recorded in indoor and outdoor environments and on workers performing different activities such as lifting, lowering and pushing. The input images are taken from two perspectives, the front and the side of a subject's posture, depicting the true posture in 3D space. The results of the algorithm were compared with those from manual evaluation by two ergonomics experts and indicated very good agreement on the ICC index. The automation of observation-based techniques for posture assessment using deep learning will produce results with performance similar to that of manual evaluators, eliminate errors due to human mistakes and reduce the time needed for posture evaluation in workplaces. In the future, advanced machine learning algorithms may be developed to estimate postures in 3D space from a single image. To further this end, the present study can be extended to evaluate postures based on different postural screening tools.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

The study was funded by the University of Windsor (grant No. 818140).

References

Andrews, D.M., Fiedler, K.M., Weir, P.L., Callaghan, J.P., 2012. The effect of posture category salience on decision times and errors when using observation-based posture assessment methods. Ergonomics 55 (12), 1548–1558. https://doi.org/10.1080/00140139.2012.726656.
Andriluka, M., Pishchulin, L., Gehler, P., Schiele, B., 2015. Max Planck Institute for Informatics: MPII Human Pose Dataset. http://human-pose.mpi-inf.mpg.de/.
Bernard, B.P., 1997. Musculoskeletal disorders and workplace factors: a critical review of epidemiologic evidence for work-related musculoskeletal disorders of the neck, upper extremity, and lower back. U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health, 97-141. https://www.cdc.gov/niosh/docs/97-141/.
Bisong, E., 2019. Google Colaboratory. In: Building Machine Learning and Deep Learning Models on Google Cloud Platform. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-4470-8_7.
Caruana, R., Lawrence, S., Giles, L., 2001. Overfitting in neural nets: backpropagation, conjugate gradient, and early stopping. In: Advances in Neural Information Processing Systems 13 (NIPS 2000). https://papers.nips.cc/paper/1895-overfitting-in-neural-nets-backpropagation-conjugate-gradient-and-early-stopping.pdf.
Choobineh, A., Tosian, R., Alhamdi, Z., Davarzanie, M., 2004. Ergonomic intervention in carpet mending operation. Appl. Ergon. 35 (5), 493–496. https://doi.org/10.1016/j.apergo.2004.01.008.
Clark, R.A., Pua, Y.H., Fortin, K., Ritchie, C., Webster, K.E., Denehy, L., Bryant, A.L., 2012. Validity of the Microsoft Kinect for assessment of postural control. Gait Posture 36 (3), 372–377. https://doi.org/10.1016/j.gaitpost.2012.03.033.
Crawford, J.O., 2007. The Nordic Musculoskeletal Questionnaire. Occup. Med. 57 (4), 300–301. https://doi.org/10.1093/occmed/kqm036.
Golabchi, A., Han, S., Fayek, A.R., 2015. An application of fuzzy ergonomic assessment for human motion analysis in modular construction. In: Modular and Offsite Construction (MOC) Summit Proceedings.
González, B., Adenso-Díaz, B., Torre, P., 2003. Ergonomic performance and quality relationship: an empirical evidence case. Int. J. Ind. Ergon. 31, 33–40. https://doi.org/10.1016/S0169-8141(02)00116-6.
Hignett, S., McAtamney, L., 2000. Rapid Entire Body Assessment (REBA). Appl. Ergon. 31 (2), 201–205. https://doi.org/10.1016/s0003-6870(99)00039-3.
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q., 2017. Densely connected convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2261–2269. https://arxiv.org/abs/1608.06993.
Ionescu, C., Papava, D., Olaru, V., Sminchisescu, C., 2014. Human3.6M: large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 36 (7), 1325–1339. https://doi.org/10.1109/TPAMI.2013.248.
Jas, Fang, Fred, 2020. COCO-WholeBody, GitHub repository. https://github.com/jin-s13/COCO-WholeBody.
Jin, S., et al., 2020. Whole-body human pose estimation in the wild. In: Computer Vision – ECCV 2020, vol. 12354. https://doi.org/10.1007/978-3-030-58545-7_12.
Karhu, O., Kansi, P., Kuorinka, I., 1977. Correcting working postures in industry: a practical method for analysis. Appl. Ergon. 8 (4), 199–201. https://doi.org/10.1016/0003-6870(77)90164-8.
Kee, D., 2020. An empirical comparison of OWAS, RULA and REBA based on self-reported discomfort. Int. J. Occup. Saf. Ergon. 26 (2), 285–295. https://doi.org/10.1080/10803548.2019.1710933.
Kingma, D.P., Ba, J., 2015. Adam: a method for stochastic optimization. In: International Conference on Learning Representations. https://arxiv.org/abs/1412.6980.
Kong, Y.K., Lee, S.Y., Lee, K.S., Kim, D.M., 2018. Comparisons of ergonomic evaluation tools (ALLA, RULA, REBA and OWAS) for farm work. Int. J. Occup. Saf. Ergon. 24 (2), 218–223. https://doi.org/10.1080/10803548.2017.1306960.
Koo, T.K., Li, M.Y., 2016. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J. Chiropr. Med. 15 (2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012.
Li, S., Chan, A.B., 2014. 3D human pose estimation from monocular images with deep convolutional neural network. In: Asian Conference on Computer Vision. Springer, Cham, pp. 332–347.
Li, L., Xu, X., 2019. A deep learning-based RULA method for working posture assessment. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 63, 1090–1094.
Li, L., Martin, T., Xu, X., 2020. A novel vision-based real-time method for evaluating postural risk factors associated with musculoskeletal disorders. Appl. Ergon. 87, 103138. https://doi.org/10.1016/j.apergo.2020.103138.
Lin, T.Y., et al., 2014. Microsoft COCO: common objects in context. In: Computer Vision – ECCV 2014, vol. 8693. https://doi.org/10.1007/978-3-319-10602-1_48.
Lowe, B., Weir, P., Andrews, D., 2014. Observation-based Posture Assessment: Review of Current Practice and Recommendations for Improvement.
Lowe, B.D., Dempsey, P.G., Jones, E.M., 2019. Ergonomics assessment methods used by ergonomics professionals. Appl. Ergon. 62 (1), 838–842. https://doi.org/10.1016/j.apergo.2019.102882.
Maas, A.L., Hannun, A.Y., Ng, A.Y., 2013. Rectifier nonlinearities improve neural network acoustic models. In: International Conference on Machine Learning, vol. 30. http://robotics.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf.


Martinez, J., Hossain, R., Romero, J., Little, J.J., 2017. A simple yet effective baseline for 3D human pose estimation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2640–2649.
Massaccesi, M., Pagnotta, A., Soccetti, A., Masali, M., Masiero, C., Greco, F., 2003. Investigation of work-related disorders in truck drivers using RULA method. Appl. Ergon. 34 (4), 303–307. https://doi.org/10.1016/S0003-6870(03)00052-8.
McAtamney, L., Corlett, E.N., 1993. RULA: a survey method for the investigation of work-related upper limb disorders. Appl. Ergon. 24 (2), 91–99. https://doi.org/10.1016/0003-6870(93)90080-s.
Middlesworth, M., 2020. The cost of musculoskeletal disorders (MSDs). ErgoPlus. https://ergo-plus.com/cost-of-musculoskeletal-disorders-infographic/.
Peppoloni, L., Filippeschi, A., Ruffaldi, E., Avizzano, C.A., 2015. A novel wearable system for the online assessment of risk for biomechanical load in repetitive efforts. Int. J. Ind. Ergon. 1–11. https://doi.org/10.1016/j.ergon.2015.07.002.
Plantard, P., Shum, H., Le Pierres, A.S., Multon, F., 2017. Validation of an ergonomic assessment method using Kinect data in real workplace conditions. Appl. Ergon. 65, 562–569. https://doi.org/10.1016/j.apergo.2016.10.015.
Putz-Anderson, V., et al., 1997. Musculoskeletal disorders and workplace factors. Centers for Disease Control and Prevention. https://www.cdc.gov/niosh/docs/97-141/pdfs/97-141.pdf.
Ryan, G.A., 1989. The prevalence of musculo-skeletal symptoms in supermarket workers. Ergonomics 32 (4), 359–371. https://doi.org/10.1080/00140138908966103.
Sammut, C., Webb, G.I., 2011. Mean squared error. In: Encyclopedia of Machine Learning. Springer. https://doi.org/10.1007/978-0-387-30164-8_528.
Sanchez-Lite, A., et al., 2013. Novel Ergonomic Postural Assessment Method (NERPA) using product-process computer aided engineering for ergonomic workplace design. PLoS One 8 (8), e72703. https://doi.org/10.1371/journal.pone.0072703.
Sasikumar, V., Binoosh, S., 2020. A model for predicting the risk of musculoskeletal disorders among computer professionals. Int. J. Occup. Saf. Ergon. 26 (2), 384–396. https://doi.org/10.1080/10803548.2018.1480583.
Schneider, E., Irastorza, X., Copsey, S., 2010. OSH in figures: work-related musculoskeletal disorders in the EU - facts and figures. European Agency for Safety and Health at Work. https://osha.europa.eu/en/publications/reports/TERO09009ENC.
Sebbag, E., Felten, R., Sagez, F., et al., 2019. The world-wide burden of musculoskeletal diseases: a systematic analysis of the World Health Organization Burden of Diseases Database. Ann. Rheum. Dis. 78, 844–848. https://doi.org/10.1136/annrheumdis-2019-215142.
Ioffe, S., Szegedy, C., 2015. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: Proceedings of the 32nd International Conference on Machine Learning (ICML'15), pp. 448–456. https://arxiv.org/abs/1502.03167.
Simoneau, S., St-Vincent, M., Chicoine, D., 1996. Work-Related Musculoskeletal Disorders (WMSDs). Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST). https://www.irsst.qc.ca/media/documents/PubIRSST/RG-126-ang.pdf.
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R., 2014. Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15 (1), 1929–1958. https://dl.acm.org/doi/abs/10.5555/2627435.2670313.
Toshev, A., Szegedy, C., 2013. DeepPose: human pose estimation via deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2014.214.
Tuure, V.M., 1992. Determination of physical stress in agricultural work. Int. J. Ind. Ergon. 10 (4), 275–284.
Van Wely, P., 1970. Design and disease. Appl. Ergon. 1 (5), 262–269. https://doi.org/10.1016/0003-6870(70)90075-x.
Van Wendel de Joode, B., Burdorf, A., Verspuy, C., 1997. Physical load in ship maintenance: hazard evaluation by means of a workplace survey. Appl. Ergon. 28 (3), 213–219. https://doi.org/10.1016/s0003-6870(96)00051-8.
Vignais, N., Bernard, F., Touvenot, G., Sagot, J.C., 2017. Physical risk factors identification based on body sensor network combined to videotaping. Appl. Ergon. 65, 410–417. https://doi.org/10.1016/j.apergo.2017.05.003.
Wright, E.J., Haslam, R.A., 1999. Manual handling risks and controls in a soft drinks distribution centre. Appl. Ergon. 30, 311–318. https://doi.org/10.1016/S0003-6870(98)00036-2.
Yamashita, R., Nishio, M., Do, R.K.G., et al., 2018. Convolutional neural networks: an overview and application in radiology. Insights Imaging 9, 611–629. https://doi.org/10.1007/s13244-018-0639-9.
Zhou, X., Huang, Q., Sun, X., Xue, X., Wei, Y., 2017. Towards 3D human pose estimation in the wild: a weakly-supervised approach. In: Computer Vision and Pattern Recognition. https://arxiv.org/abs/1704.02447.