
Volume 10, Issue 3, March 2025    International Journal of Innovative Science and Research Technology

ISSN No: 2456-2165    https://doi.org/10.38124/ijisrt/25mar1274

Enhancing Laboratory Safety with AI: PPE
Detection and Non-Compliant Activity Monitoring
Using Object Detection and Pose Estimation

Aro Praveen1; Nahin Shaikh2; Mohammad Annus3; Gayathri4; Bharani Kumar Depuru5*

1,2,3Research Associate; 4Team Leader; 5Director, AiSPRY

*Corresponding Author: Bharani Kumar Depuru

Publication Date: 2025/04/03

Abstract: Ensuring workplace safety and adhering to regulatory standards in pharmaceutical manufacturing is vital.
However, traditional manual monitoring methods are inefficient, prone to errors, and labor-intensive, resulting in potential
safety risks and non-compliance penalties. This research introduces an automated deep learning framework that employs
video analytics for real-time compliance monitoring, providing a scalable alternative to manual inspection processes.

The system integrates YOLOv11n for detecting Personal Protective Equipment (PPE), such as gloves, masks, and
goggles, identifying violations where PPE is either missing or improperly worn. Additionally, YOLOv8n-Pose is utilized to
assess non-compliant postures, including actions like bending, hand-raising, and face-touching. A logging system tracks
violations with precise timestamps, enabling efficient documentation for audits and regulatory purposes.

A curated video dataset was developed and annotated using Roboflow, featuring both compliant and non-compliant
actions. To enhance the model's robustness, preprocessing techniques such as resizing, contrast enhancement, and data
augmentation were applied. The system’s performance, evaluated using metrics such as mean Average Precision (mAP), F1-score, and precision, demonstrated 90% overall accuracy, with a mAP@50 of 92.1% and a processing speed of 25 frames per second (FPS), fulfilling real-time monitoring criteria.

This solution offers a scalable, real-time alternative to manual inspections, reducing human intervention, improving
workplace safety, ensuring compliance with regulations, and automating the documentation process. Future developments
aim to integrate IoT devices, employ edge computing, and incorporate cloud-based analytics to further enhance safety
monitoring and compliance.

How to Cite: Aro Praveen; Nahin Shaikh; Mohammad Annus; Gayathri; Bharani Kumar Depuru (2025). Enhancing Laboratory
Safety with AI: PPE Detection and Non-Compliant Activity Monitoring Using Object Detection and Pose Estimation.
International Journal of Innovative Science and Research Technology, 10(3), 1895-1904.
https://doi.org/10.38124/ijisrt/25mar1274

I. INTRODUCTION

Ensuring workplace safety and regulatory compliance is crucial in pharmaceutical manufacturing, where strict guidelines protect both product quality and worker well-being [1].

However, manual safety monitoring and reporting remain time-consuming, error-prone, and inefficient, often leading to delayed incident detection. Such delays increase the risk of safety violations, regulatory penalties, and operational setbacks, highlighting the need for an automated, real-time compliance monitoring solution [2].

Recent advancements in computer vision and deep learning have enabled video-based safety monitoring. This research presents an automated framework that integrates object detection and pose estimation to improve compliance tracking. The system employs YOLOv11n to detect Personal Protective Equipment (PPE) violations such as missing or improperly worn gloves, masks, and goggles, while YOLOv8n-Pose is used to recognize unsafe postures and movements, including bending, hand-raising, and face-touching [3]. Unlike conventional PPE detection systems that focus only on equipment compliance, this approach also monitors worker behaviour, capturing actions that might lead to safety risks [4].

IJISRT25MAR1274 www.ijisrt.com 1895



Fig 1 The CRISP-ML(Q) Architecture Followed for this Research Study
(Source: Mind Map - 360DigiTMG)

A key feature of this system is its automated logging mechanism, which records violations with timestamps, providing a structured method for compliance tracking and audits. The combination of PPE detection and behavioural analysis enhances workplace safety by identifying risk-prone actions that may go unnoticed in manual inspections.

To evaluate the system, a diverse dataset was compiled using video footage from pharmaceutical laboratories, covering both compliant and non-compliant scenarios. The dataset underwent rigorous preprocessing, including resizing, contrast enhancement, and data augmentation, to optimize model accuracy.

This research introduces a scalable, real-time compliance monitoring solution that minimizes human intervention, reduces workplace hazards, and streamlines regulatory processes. Future developments will explore integration with IoT and edge computing to enhance deployment flexibility and further improve workplace safety and operational efficiency.

To ensure a structured and rigorous development process, we followed the CRISP-ML(Q) methodology, which emphasizes data understanding, preprocessing, model development, evaluation, and deployment with quality assurance. The CRISP-ML(Q) process adopted in this study is illustrated in [Fig.1], demonstrating the systematic approach taken for data collection, annotation, model training, and validation [5].

II. ARCHITECTURE

Architecture plays a crucial role in the design and development of any intelligent system, providing a structured framework that defines how different components interact and function together. A well-defined architecture ensures scalability, efficiency, and seamless integration of various modules, ultimately improving the system's reliability and performance.

The PPE Detection and Compliance Monitoring System is designed to ensure laboratory safety through an automated deep learning-based approach that integrates object detection and human pose estimation. The architecture follows a structured pipeline that includes data collection, preprocessing, model training, integration, and deployment, ensuring accurate and efficient real-time monitoring [6].


Fig 2 High Level Architecture Diagram Representing PPE Detection and Compliance Monitoring System Incorporating Object
Detection and Pose Estimation Models

 System Workflow:
As depicted in [Fig.2], the system begins with video data collection from open-source platforms, capturing real-world scenarios where compliance needs to be monitored. Frames are extracted and annotated using Roboflow, where Personal Protective Equipment (PPE) components such as hair cover, no hair cover, goggles, no goggles, face masks, gloves, shoes, and lab coats are labelled. To enhance model performance, preprocessing techniques such as image augmentation and resizing are applied, ensuring robustness across varied environments.

For PPE detection, a YOLOv11n model is trained to identify missing protective equipment in real time. In parallel, pose estimation using YOLOv8 extracts key points corresponding to human body joints, which are further analysed through a rule-based approach to identify non-compliant activities such as bending, raising hands, or touching the face. The integration of these two models enables a comprehensive compliance assessment, capturing both equipment violations and unsafe human actions within the laboratory environment [7].

In the model integration phase, the outputs from the object detection and pose estimation models are merged into a unified framework. Fine-tuning is conducted to optimize detection accuracy and reduce false positives, ensuring high precision in compliance monitoring.

The deployment phase uses the Streamlit framework and can run on both a local machine and the cloud, enabling real-time video processing for compliance verification. The system generates log files that record detected violations, facilitating auditability and further analysis. The deployed system operates in a continuous monitoring mode, with regular performance evaluations to ensure accuracy and adaptability to dynamic laboratory environments.

By leveraging deep learning-based object detection, pose estimation, and rule-based compliance verification, this system provides an automated, scalable, and efficient solution for laboratory safety enforcement. The architecture minimizes human intervention, enhances compliance monitoring, and enables real-time enforcement of safety protocols, ensuring a safer working environment in laboratory settings [8].


Fig 3 Low Level Architecture Diagram Representing PPE Detection and Compliance Monitoring System Incorporating Object
Detection and Pose Estimation Models

For a more detailed breakdown of system components, data flow, and processing stages, a Low-Level Architecture (LLA) is provided [Fig.3]. The LLA delves deeper into module-specific interactions, highlighting key functionalities such as data preprocessing, model inference, decision-making logic, and deployment structure. This detailed architectural view further enhances understanding of the system’s real-time processing pipeline [9].

III. DATA COLLECTION AND PREPROCESSING

The success of an AI-driven PPE Detection and Compliance Monitoring System heavily depends on the quality, diversity, and balance of the dataset used for training. A well-structured dataset ensures that the model can generalize effectively, reducing false positives and negatives in real-world laboratory environments.

 Data Collection:
To build a realistic and diverse dataset, video footage was sourced from open platforms such as YouTube, replicating real-world laboratory environments where PPE compliance is crucial. These videos provided varied lighting conditions, camera angles, and subject movements, ensuring that the model learns to adapt to dynamic lab settings.

Using OpenCV, frames were extracted from these videos, forming the foundation of the object detection dataset. However, a raw dataset alone is insufficient; it requires meticulous annotation to make it meaningful. For this, Roboflow was used to manually label 12 PPE-related classes.

This manual annotation step was critical in ensuring accuracy, as precise bounding boxes allow the detection model to differentiate between compliant and non-compliant scenarios effectively [10].

During initial analysis, an imbalance was observed: certain PPE classes had significantly fewer samples. This posed a risk of biased detection, where underrepresented classes might be overlooked by the model. To address this, additional frames were extracted, and targeted augmentation techniques were applied, ensuring each class had sufficient representation.

 Preprocessing for Object Detection: Making Data Model-Ready
To improve model accuracy and simulate real-world variations, the following preprocessing and augmentation steps were applied:

 Preprocessing:

 Auto-Orient
 Resizing
 Auto-Adjust Contrast
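The imbalance analysis described above amounts to a per-class annotation count. A minimal sketch of such a check (the class names and the 5% threshold are illustrative, not the study's actual figures):

```python
from collections import Counter

def underrepresented(labels, min_fraction=0.05):
    """Return class names holding less than min_fraction of all annotations."""
    counts = Counter(labels)
    total = sum(counts.values())
    return sorted(c for c, n in counts.items() if n / total < min_fraction)

# Illustrative annotation list (example class names, not the full 12-class set):
labels = ["gloves"] * 50 + ["face_mask"] * 40 + ["goggles"] * 3
print(underrepresented(labels))  # ['goggles'] -- only 3 of 93 annotations
```

Classes flagged this way are candidates for the extra frame extraction and targeted augmentation described above.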

 Augmentation:

 Horizontal Flip
 Random Cropping
 Saturation Adjustment
 Blur Simulation
 Noise Addition

The collected dataset includes both compliant and non-compliant activities, ensuring a comprehensive understanding of laboratory safety violations. By leveraging pose estimation, the system can identify unsafe actions alongside missing PPE, enhancing overall compliance monitoring.

A robust data collection and preprocessing pipeline is the foundation of any AI-powered monitoring system. By curating a balanced dataset, applying intelligent augmentations, and leveraging pose-based action recognition, this system moves beyond traditional PPE detection: it enforces compliance through a multi-dimensional approach, ensuring a safer laboratory environment.

IV. MODEL BUILDING

 Object Detection
Object detection serves as the foundation of our automated workplace safety framework, enabling the identification and localization of Personal Protective Equipment (PPE) in video frames. This ensures real-time compliance monitoring by flagging safety violations as they occur.

 Overview
The primary objective of object detection in this system is to accurately identify and classify PPE items such as gloves, masks, goggles, and helmets, enabling the distinction between compliant and non-compliant scenarios. This serves as the foundation for further analysis and compliance monitoring. The detection technique is based on bounding boxes, which are used to determine the precise locations of objects within images, ensuring an effective and structured approach to safety enforcement.

 YOLOv11n – The Backbone of Detection
The architecture of YOLOv11n follows a structured workflow where images are divided into grids, and the model predicts bounding boxes along with class probabilities in a single forward pass. It leverages anchor boxes and convolutional layers to enhance object localization, ensuring precise detection.

For training, the model was pre-trained on large-scale datasets and then fine-tuned using a domain-specific dataset containing over 700 manually annotated images spanning 12 PPE classes. Various preprocessing techniques, including resizing, contrast adjustment, and data augmentation, were applied to enhance performance across different lighting and environmental conditions.

During inference, the model generates bounding boxes with confidence scores to indicate detected objects. To further refine detections, it applies confidence thresholding and Non-Max Suppression (NMS) to eliminate low-confidence predictions and redundant detections, ensuring accurate and efficient PPE identification.

 Performance Metrics:

 Accuracy: Achieves a mean Average Precision (mAP@50) of over 92% ([Fig.4] and [Fig.5] show training and validation results).
 Speed: Processes video streams at 25 frames per second (FPS), meeting real-time requirements.
 Robustness: Extensive data augmentation and fine-tuning enable reliable performance across different scenarios.
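The thresholding-plus-NMS step described above can be sketched in plain Python (a minimal single-class version; the [x1, y1, x2, y2] box format and the 0.25/0.45 thresholds are common defaults assumed here, not values taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, conf_thresh=0.25, iou_thresh=0.45):
    """Drop low-confidence boxes, then greedily keep the highest-scoring
    box and suppress any remaining box that overlaps it too much."""
    order = sorted(
        (i for i, s in enumerate(scores) if s >= conf_thresh),
        key=lambda i: scores[i],
        reverse=True,
    )
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of the same glove and one weak detection:
boxes = [[10, 10, 50, 50], [12, 11, 52, 49], [200, 200, 240, 240]]
scores = [0.9, 0.8, 0.1]
print(nms(boxes, scores))  # [0] -- duplicate suppressed, weak box dropped
```

Production detectors run this per class and on GPU, but the selection logic is the same.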

Fig 4 Training Graphs for the YOLO Model, Presenting its Learning Progress and Performance.


Fig 5 Output of Validation Batch

 Integration:
The PPE detection module serves as the first stage in the safety monitoring pipeline. Detected PPE data is seamlessly passed to the pose estimation module (YOLOv8n-Pose), which further analyzes worker behavior. This integration creates a comprehensive real-time compliance system by combining object detection and pose-based activity monitoring.

 Pose Estimation and Non-Compliance Detection (YOLOv8n-Pose)
Non-Compliant Behavior Identification: Successfully recognized unsafe postures and activities such as bending, hand-raising, and face-touching.

 System-Level Outcomes

 Real-Time Logging: The system generated detailed log files that include timestamps and compliance statuses for each detected violation. This automated documentation enhances traceability for audits.

 Compliance and Non-Compliance Activity:

 Compliance Activities in the Pharmaceutical Industry

 Regulatory Compliance – Adhering to government and industry regulations to ensure drug safety and efficacy.
 Good Manufacturing Practices (GMP) Compliance – Following strict guidelines for drug production to maintain quality and hygiene.
 Good Clinical Practices (GCP) Compliance – Conducting ethical and well-monitored clinical trials with proper patient consent.
 Pharmacovigilance Compliance – Monitoring and reporting adverse drug reactions to protect public health.
 Data Integrity and Documentation – Ensuring accurate, secure, and tamper-proof records throughout drug development.

 Non-Compliance Activities in the Pharmaceutical Industry

 Manufacturing Violations – Ignoring GMP standards, leading to contamination or substandard drug production.
 Clinical Trial Misconduct – Conducting trials unethically, such as falsifying data or bypassing approval protocols.
 Marketing and Advertising Violations – Misleading promotions, false claims, or promoting off-label drug use.
 Product Quality Issues – Selling drugs with incorrect labeling, contamination, or potency problems.
 Bribery and Corruption – Engaging in unethical practices like bribing officials for faster approvals or regulatory favors.
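The timestamped logging described under System-Level Outcomes can be sketched as follows (the CSV schema and violation labels are assumptions for illustration, not the authors' exact log format):

```python
import csv
from datetime import datetime, timezone

def log_violation(path, frame_idx, violation, status="NON-COMPLIANT"):
    """Append one timestamped violation record to a CSV audit log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), frame_idx, violation, status]
        )

# Hypothetical violation labels for two consecutive frames:
log_violation("violations.csv", 1042, "no_goggles")
log_violation("violations.csv", 1043, "face_touching")
```

An append-only file with one record per detection keeps the log auditable: each row pins a violation to a UTC timestamp and a frame index that can be cross-checked against the stored video.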

 Pose Estimation:

Fig 6 YOLO Key Point

Table 1 Key Points
Number   Key Point
0        Nose
1        Neck
2        Left Shoulder
3        Left Elbow
4        Left Wrist
5        Right Shoulder
6        Right Elbow
7        Right Wrist
8        Left Hip
9        Left Knee
10       Left Ankle
11       Right Hip
12       Right Knee
13       Right Ankle
14       Left Eye
15       Right Eye
16       Left Ear
17       Right Ear

YOLOv8-Pose extends object detection by predicting key points along with bounding boxes. Each detected human is represented by a bounding box (x, y, width, height, confidence score) and key points {(x_kp, y_kp, conf_kp)} for each joint, where x_kp, y_kp are the pixel coordinates and conf_kp represents the confidence score of the key point prediction. The model follows a single-stage detection approach, directly predicting key points from an input image without requiring a separate detection step. It detects 18 key points [Fig.6] for each person, covering crucial anatomical landmarks:

 Nose: Central reference point on the face, commonly used for orientation detection.
 Neck: Central point connecting the head to the torso, crucial for tracking body posture.
 Left Shoulder: Marks the left shoulder joint, useful in movement tracking and posture correction.
 Left Elbow: Indicates the left elbow joint, essential for tracking arm movement.
 Left Wrist: Tracks the left wrist position, useful in hand gesture recognition.
 Right Shoulder: Represents the right shoulder joint, similar to the left shoulder.
 Right Elbow: Represents the right elbow joint, mirroring the left elbow.
 Right Wrist: Tracks the right wrist position, aiding in hand tracking applications.
 Left Hip: Represents the left hip joint, crucial in gait analysis and activity tracking.
 Left Knee: Tracks the left knee joint, which is important for walking and running motion analysis.
 Left Ankle: Indicates the left ankle joint, useful in foot placement and balance tracking.
 Right Hip: Marks the right hip joint, providing symmetry to the body structure.
 Right Knee: Represents the right knee joint, similar to the left knee in tracking movement.
 Right Ankle: Represents the right ankle joint, aiding in motion analysis and balance assessment.
 Left Eye: Represents the left eye position, useful for facial recognition and gaze tracking.
 Right Eye: Represents the right eye position, similar to the left eye in functionality.
 Left Ear: Indicates the left ear’s location, which is important for head pose estimation.
 Right Ear: Indicates the right ear’s location, aiding in head angle calculations [Table.1].

Training YOLOv8-Pose for key point detection involves optimizing a multi-task loss function, including key point loss (measuring the difference between predicted and ground-truth positions), bounding box loss (ensuring accurate localization), and confidence loss (evaluating certainty in key point predictions). The model is trained on datasets like COCO, which provide human images annotated with key points. To improve robustness to variations in human poses, data augmentation techniques such as flipping, scaling, and rotation are applied during training.

 Angle Calculations for Movement Analysis:

 The function calculate_angle(a, b, c) computes angles between three key points to analyze joint movements.
 Angles are calculated for hips, knees, and elbows, which are crucial for identifying postures like bending and arm movements.

 Action Recognition:

 Jump Detection: Uses ankle height relative to a baseline (calculate_jump) to determine if a person is jumping.
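The calculate_angle(a, b, c) rule can be sketched in pure Python, assuming key points arrive as (x, y) pixel pairs (a minimal version; the sample coordinates are illustrative):

```python
import math

def calculate_angle(a, b, c):
    """Angle at joint b (in degrees) formed by key points a-b-c, each an (x, y) pair."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

# Knee angle from hip, knee, and ankle key points (Table 1 indices 8, 9, 10):
straight_leg = calculate_angle((100, 50), (100, 150), (100, 250))  # collinear joints: 180 deg
bent_leg = calculate_angle((100, 50), (100, 150), (180, 170))      # flexed knee: ~104 deg
```

The same function applied to hip and elbow triplets feeds the posture rules described next; image coordinates have y growing downward, which does not affect the angle.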

 Bending Detection: If knee angles are less than 105 degrees, it classifies the action as "BENDING."
 Face Touching Detection: Uses Euclidean distance to check if the wrist is close to the nose.
 Running Detection: Evaluates ankle speeds, step length, and vertical motion.
 Lying on the Floor Detection: Based on the height of the hips relative to the frame.

V. DEPLOYMENT

The PPE Detection and Compliance Monitoring System is deployed on a cloud-based infrastructure, ensuring efficient real-time processing and accessibility. The system is built using Streamlit, providing an interactive interface for seamless monitoring of compliance violations. The deployment allows video streams to be processed in real time, where the model detects PPE and identifies non-compliant actions.
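The face-touching and lying-down rules above reduce to simple geometry on the detected key points. A minimal sketch (the pixel-distance and frame-fraction thresholds are illustrative assumptions, not the paper's tuned values):

```python
import math

def is_face_touching(wrist, nose, max_dist=40.0):
    """Flag face touching when the wrist lies within max_dist pixels of the nose."""
    return math.dist(wrist, nose) < max_dist

def is_lying_down(left_hip, right_hip, frame_height, floor_fraction=0.85):
    """Flag lying on the floor when the hip midpoint sits in the lowest part of the
    frame (image y grows downward, so a larger y means closer to the floor)."""
    hip_y = (left_hip[1] + right_hip[1]) / 2
    return hip_y > floor_fraction * frame_height

print(is_face_touching((210, 100), (200, 95)))      # True -- wrist ~11 px from nose
print(is_lying_down((300, 650), (340, 655), 720))   # True -- hips near frame bottom
```

In practice such thresholds would be scaled by the person's bounding-box size so the rules hold at any distance from the camera.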

Fig 7 Illustrates the Deployed System's Output

Each detected individual is enclosed in a bounding box, with PPE components labeled near the corresponding body parts. Additionally, pose estimation highlights non-compliant actions such as bending or touching the face. The processed video output is displayed through the interface, allowing users to monitor violations and maintain safety standards effectively [Fig.7].

VI. RESULT

 Object Detection Performance (YOLOv11n)
Mean Average Precision (mAP@50): Achieved 92.1%, indicating that the model reliably detects Personal Protective Equipment (PPE) items.

Overall Detection Accuracy: Registered at 90%, demonstrating robust performance across varied laboratory environments.

 Precision and Recall:

 Precision: ~99% [Fig.8], confirming that the vast majority of identified objects were indeed PPE.
 Recall: ~88% [Fig.9], illustrating the model’s ability to capture most instances of PPE.
 Inference Speed: The model processes video streams at 25 frames per second (FPS), ensuring real-time detection, which is critical for dynamic environments.

Fig 8 Precision- Confidence Curve


Fig 9 Recall- Confidence Curve for Object Detection

The results show that the system strikes a good balance between precision and computational efficiency for real-time application. However, there are limitations to our research. Improving the model's performance would involve including a larger and more diverse dataset. One main challenge for future research is to enable efficient real-time inference suitable for low-power edge devices. For future work, transformer-based models and other advanced deep learning techniques will be explored to further improve detection accuracy.

In addition, integration of the real-time deployment strategy and enhanced interpretability of the model will be the main tasks towards making this approach more universal and scalable for different real-world applications.

VII. CONCLUSION

This research has developed and evaluated object detection and pose estimation models based on YOLO architectures. With high accuracy and computational efficiency, our models can find applications in work safety monitoring and action recognition. Careful preprocessing and augmentation have enhanced the model's ability to generalize across various situations, bolstering its reliability in real-world scenarios. The main contribution of this paper lies in combining object detection and pose estimation, which brings gains in action recognition and benefits sectors like healthcare, security, and industrial automation.

ACKNOWLEDGEMENT

We acknowledge that, with consent from 360DigiTMG, we have used the CRISP-ML(Q) Methodology (ak.1) and the ML Workflow, which are available as open source on the official website of 360DigiTMG (ak.2).

 Funding and Financial Declarations:

 The authors affirm that no financial support, grants, or funding were obtained during the research or the manuscript preparation.
 The authors confirm that they have no financial or non-financial conflicts of interest to disclose.

 Data Availability Statement:
The datasets utilized, generated, and/or analyzed during the current study are not publicly accessible due to internal data privacy policies. However, they can be obtained from the corresponding author upon reasonable request.

FUTURE SCOPE

The future of this automated deep learning framework in pharmaceutical manufacturing safety monitoring is promising, with a significant opportunity to enhance data collection through the integration of IoT devices. By adding sensors like temperature and humidity monitors, the system can gather comprehensive environmental data, improving real-time insights into worker activities and workplace conditions. This multi-sensor approach would strengthen safety protocols and ensure better compliance with regulatory standards.

Additionally, adopting edge computing can improve the system's efficiency by processing data locally, reducing response times and enabling real-time monitoring without relying on cloud infrastructure. This decentralized model would allow the system to function independently at multiple manufacturing sites, improving scalability and resilience while providing faster, more effective insights for large-scale pharmaceutical operations.

REFERENCES

[1]. S. Liu, Y. Yin, and S. Ostadabbas, “In-bed pose estimation: Deep learning with shallow dataset,” IEEE Journal of Translational Engineering in Health and Medicine, vol. 7, p. 4900112, 2019. DOI: 10.1109/JTEHM.2019.2892970.
[2]. R. M. Butler, E. Frassini, T. S. Vijfvinkel, S. van Riel, C. Bachvarov, J. Constandse, M. van der Elst, J. J. van den Dobbelsteen, and B. H. W. Hendriks, “Benchmarking 2D human pose estimators and trackers for workflow analysis in the cardiac catheterization laboratory,” Medical Engineering and Physics, vol. 136, 2025, Art. no. 104289. DOI: 10.1016/j.medengphy.2025.104289.
[3]. V. R. Kumar, P. Waghmare, S. Bukya, B. K. Depuru, and I. Kaliamoorthy, “Forecasting drug demand for optimal medical inventory management: A data-driven approach with advanced machine learning techniques,” International Journal of Innovative Science and Research Technology, vol. 8, no. 9, pp. 221–229, Sep. 2023. DOI: 10.38124/IJISRT20AUG257.
[4]. Bhat, A. Dhadd, B. S. Patil, and B. K. Depuru, “Enhancing automobile manufacturing efficiency using machine learning: Sequence tracking and clamping monitoring with machine learning video analytics and laser light alert system,” International Journal of Innovative Science and Research Technology, vol. 8, no. 8, pp. 1884–1896, Aug. 2023. DOI: 10.1080/00207543.2022.2152897.
[5]. Vinod, D. C. Mohanty, A. John, and B. K. Depuru, “Application of artificial intelligence in poultry farming - Advancing efficiency in poultry farming by automating the egg counting using computer vision system,” Research Square, Aug. 18, 2023. DOI: 10.21203/rs.3.rs-3266412/v1.
[6]. M. Shahin, F. F. Chen, A. Hosseinzadeh, H. K. Koodiani, and H. Bouzary, “Enhanced safety implementation in 5S+1 via object detection algorithms,” The International Journal of Advanced Manufacturing Technology, vol. 126, 2023. DOI: 10.1007/s00170-023-10970-9.
[7]. S. Ludwika and A. P. Rifai, “Deep learning for detection of proper utilization and adequacy of personal protective equipment in manufacturing teaching laboratories,” Safety, vol. 10, no. 1, p. 26, Mar. 2024. DOI: 10.3390/safety10010026.
[8]. Y. Duan, Z. Li, and B. Shi, “Multi-Target Irregular Behavior Recognition of Chemical Laboratory Personnel Based on Improved DeepSORT Method,” Processes, vol. 12, no. 2796, Dec. 2024. DOI: 10.3390/pr12122796.
[9]. L. Ali, F. Alnajjar, M. M. A. Parambil, M. I. Younes, Z. I. Abdelhalim, and H. Aljassmi, “Development of YOLOv5-based real-time smart monitoring system for increasing lab safety awareness in educational institutions,” Sensors, vol. 22, no. 8820, pp. 1–15, Nov. 2022. DOI: 10.3390/s22228820.
[10]. S. Kaur, H. K. Shukla, R. K. Pal, N. Yadav, and S. Singh, “Human activity recognition,” International Journal of Scientific Research in Science, Engineering and Technology (IJSRSET), vol. 9, no. 3, pp. 161–166, May–Jun. 2022. DOI: 10.32628/IJSRSET229342.
