
2024 First International Conference on Technological Innovations and Advance Computing (TIACOMP)

Weapon Detection from Images Using YOLO and OpenCV

Divyanshi Chitravanshi, Ayush Malik, Harsh Saini, Sandhya Avasthi, Kadambri Agrawal, Yash Grover
Department of CSE, ABES Engineering College, Ghaziabad, India

979-8-3503-9211-1/24/$31.00 ©2024 IEEE | DOI: 10.1109/TIACOMP64125.2024.00098

Abstract—Gun violence incidents have sadly claimed many lives annually, making them a major global problem. A completely automated, computer-based approach for identifying popular weaponry, like rifles and pistols, is presented in this paper. Recent advances in deep learning and transfer learning technology have transformed object identification and recognition capabilities. Our suggested method makes use of the YOLO v5 (You Only Look Once) object detection model, which was trained on a private dataset made up of pictures of different types of guns. One important benefit of our methodology is that it incorporates transfer learning techniques, which removes the requirement for the powerful GPUs and large amounts of processing power normally needed for training deep neural networks from scratch. The outcomes show that the YOLO v5 model outperforms both conventional convolutional neural network models and its predecessor, YOLO v4. This concept could potentially help avoid killings and mass shootings by being integrated into monitoring systems, thus saving lives. Additionally, our model and methodology have the potential for the creation of robotic security devices that can identify lethal weapons and lessen the likelihood of an attack, improving public safety in high-risk regions. The potential consequences of properly and efficiently identifying firearms in real-time video feeds could be extensive, affecting public safety protocols, private security companies, and law enforcement organizations.

Keywords—OpenCV, You Only Look Once (YOLO), Object Detection, Weapon Recognition, Firearm Detection

I. INTRODUCTION

The pervasive problem of gun violence has had a seriously negative effect on society all around the world. Many innocent lives are sadly lost as a result of firearm-related incidents every year. Studies have indicated that children who witness or are the victims of such heinous acts of violence are at a heightened risk of developing persistent psychological trauma. Youth who are not shielded from violence may experience serious mental health issues in the short and long term. According to a plethora of research data, portable firearms are the weapons most often used in a variety of criminal acts, such as robbery, burglary, sexual assault, and theft. Identification of suspicious objects can be facilitated for law enforcement agencies with the use of the proposed weapon recognition model [1-3].

The inherent inadequacies of traditional human-based surveillance methods are demonstrated by the ongoing victimization of innocent people on the streets, where 4.2 out of every 80,000 people are attacked each year in Pakistan [10]. Although the human visual system is highly adept at identifying intricate objects, prolonged eye strain can cause headaches and impair the ability to identify minute details. With the help of robust graphics processing units (GPUs) and deep learning and machine learning algorithms applied to huge datasets, automatic computer-based solutions provide higher accuracy [3]. Because of this, these systems are becoming essential for intelligent security and surveillance applications. The implementation of hyperparameter tuning and data augmentation procedures serves to guarantee the resilience and generalization capabilities of the model [4]. This paper's contribution is that we rigorously experiment on both custom video sequences and benchmark datasets to show the effectiveness of our approach. The acquired results highlight YOLOv5's capability for quickly and correctly identifying weapons, offering a viable path for strengthening security infrastructure [5-6].

A. MOTIVATION

The urgent need to improve public safety protocols and lessen the widespread problem of gun violence is what motivates this study. Our goal is to provide law enforcement and security agencies with a potent technical tool by creating an automated system that can accurately identify firearms. With such an instrument, they can foresee possible dangers and possibly avert catastrophic events. Gun violence has had terrible effects on communities all around the world, taking numerous innocent lives in the process. The disturbingly high frequency of these incidents, despite continuous efforts to reduce them, calls for the investigation of novel remedies. In order to enable prompt intervention and maybe prevent mass casualties, an automated system that can consistently and precisely detect firearms in real time can be an essential first line of defense.

Furthermore, situational awareness and response capabilities can be greatly improved by integrating such a sophisticated system into the security frameworks that are already in place. Security staff can more efficiently identify and reduce threats when equipped with state-of-the-art threat detection tools, ultimately creating a safer environment for everyone in the community. This research project is a vital step toward creating all-encompassing and
preventative policies to deal with the urgent problem of gun violence, which still affects communities all over the world.

B. OBJECTIVES

1. To conduct a thorough examination of current developments and studies in object detection, with an emphasis on methods designed for the identification of firearms.

2. To build our suggested system on the latest object detection methods, like the YOLO (You Only Look Once) algorithm, and to use them in combination with reliable computer vision libraries like OpenCV.

3. To conduct a thorough analysis and comparison of different object detection models, taking into account their strengths, weaknesses, and performance metrics in the context of actual deployment scenarios.

This research initiative's main objective is to conduct a comprehensive analysis of the most recent advancements and state-of-the-art approaches in the field of object detection, with a focus on methods designed specifically for the precise identification of firearms. By diving deep into this area, we hope to acquire a comprehensive grasp of cutting-edge methods and how they might be used in our system. Besides, a crucial part of this project is putting sophisticated object identification algorithms—like the YOLO (You Only Look Once) algorithm—into practice and integrating them with potent computer vision libraries like OpenCV [7,8]. Our system will be built around these frameworks and tools, which will allow for precise and effective firearm detection in a variety of settings.

The sections that follow provide a thorough methodology, experimental findings, a discussion, and closing thoughts to build a strong and practical solution that uses cutting-edge technology to address the urgent problem of gun violence.

II. LITERATURE SURVEY

In a study titled "A Comparative Study of YOLOv5 Models Performance for Image Localization and Classification," Horvat, Jelečević, and Gledec (2022) examined the abilities of several YOLOv5 models for image tasks. Their goal was to find the YOLOv5 model with the best overall performance. According to the study, the model with the most parameters, YOLOv5x, produced the best outcomes, while the model with the fewest parameters, YOLOv5n, produced the worst results [1, 9, 10]. This result validates the relationship between a model's learning capacity and the quantity of parameters it uses. Put another way, models with more parameters typically perform better on tasks like image localization and classification because they are better equipped to recognize complicated patterns inside data.

The YOLOv5 object detection algorithm was assessed in a study by Yadav et al. (2022), published in Artificial Intelligence in Agriculture, to determine whether volunteer cotton (VC) plants are present in corn fields. Using footage from unmanned aerial systems (UAS), the study assessed the efficacy of four YOLOv5 variants (s, m, l, and x) in identifying VC plants at three different growth phases (V3, V6, and VT). The results showed that, with a 98% classification accuracy and 96.3% mean Average Precision (mAP), YOLOv5s performed best at the V6 stage. YOLOv5s and YOLOv5m, on the other hand, had the lowest mAP (86.5%) and accuracy (85%) at the VT stage [2]. This could be because VC plants become increasingly visually similar to corn plants at later growth stages, making it more difficult to recognize them. This study indicates the potential of YOLOv5 for precision agriculture applications and emphasizes the effect of the plant growth stage on detection accuracy.

Jung and Choi (2022) looked into ways to enhance the YOLOv5 object detection algorithm, particularly for examining photos taken by drones. They proposed YOLOv5_Ours, a modified model that was trained on a dataset of 3,360 photos. YOLOv5_Ours achieved a mean Average Precision (mAP) of 95.5% as opposed to 94.6%, outperforming the original YOLOv5. Strong results on other metrics, such as the F1-score (88.8%), recall (87.4%), and validation precision (90.7%), further supported this gain. These findings imply that, in a variety of scenarios, YOLOv5 can provide a notable improvement for object detection tasks with drone imagery [3].

Vijayakumar and Vairavasundaram (2024) have published research that offers an extensive analysis of YOLO object detection models, covering versions from YOLOv1 onward. The article explores many important topics, such as the performance measures used to assess the models, the post-processing methods utilized to enhance the outcomes, the availability of training datasets, and typical object identification applications. The article also examines the unique architecture of every YOLO iteration, offering insights into their design decisions [4]. To demonstrate the adaptability and significance of this object detection framework, the authors conclude by highlighting the contributions made by multiple YOLO variants to diverse real-world applications.

In a recent study published in the journal Mathematics, Flores-Calero et al. (2024) investigated the YOLO object identification technique for the detection and recognition of traffic signs. Their systematic review surveyed previous studies in this area. The results demonstrate how widely YOLO has been implemented in intelligent transportation systems, especially for enhanced driver-assistance features and driverless cars. The fact that YOLO is so widely used shows how well it works to identify and recognize traffic signs, which is essential for maintaining safety and improving the capabilities of autonomous vehicles [8, 11].

The research paper [5] examined the efficacy of YOLO models (YOLOv5, YOLOv6, and YOLOv7) in identifying and categorizing items of different sizes, in a study published in the Journal of Advances in Information Technology [5, 12]. The authors used a customized dataset and contrasted feature-based and object-based classification strategies. The findings showed that YOLOv7 performed better than the other models, obtaining notable gains in measures such as Precision, Recall, and mean Average Precision at 50% (mAP@50%). Notably, YOLOv7 demonstrated its capacity to assess object properties effectively by surpassing 90% accuracy in the feature-based classification of small objects. Furthermore, the model demonstrated a 70% accuracy rate in object detection tests. These results show that YOLOv7 has potential for applications that need precise object identification and classification, especially for tiny objects [14].

An image-adaptive YOLO framework, IA-YOLO, was recently presented by Liu et al. (2022) for better object recognition in difficult weather. A differentiable image processing (DIP) module included in this framework dynamically modifies image attributes in response to meteorological circumstances. To do this, IA-YOLO uses a tiny CNN (CNN-PP) to forecast the DIP module's required parameters. To train IA-YOLO, the researchers used YOLOv3 in conjunction with low-light and fog conditions [9, 15]. The results of this training indicate that IA-YOLO may be able to improve object identification accuracy in unfavorable weather situations.

A recent study from 2023 [7] examined the effectiveness of several YOLO object detectors for weed detection in turfgrass environments. The dataset used in the study was named "Weeds," and it included 11,385 weed annotations on over 4,200 photos. Four YOLO models—YOLOv5m, YOLOv7, YOLOv7s, and YOLOv8l—were compared by the researchers. With an accuracy of 0.9476, a mean Average Precision at 50% intersection over union (mAP_0.5) of 0.9795, and a comparable metric at a higher threshold (mAP_0.5:0.95) of 0.8123, YOLOv8l was the best performer. Although YOLOv5m had the highest recall (0.9663), suggesting that it could recognize the majority of weeds, YOLOv8l outperformed YOLOv5m in this particular application by offering a better balance between accuracy and precision [7, 16].

III. PROPOSED METHODOLOGY

This section describes the suggested method for implementing an integrated framework that enables security operations and investigations by progressively identifying and detecting potentially harmful weapons. The framework is intended to use IP cameras to monitor and detect any threats, informing and empowering security staff to respond accordingly. By using this process, the system will be able to identify harmful weapons when they are present at strategic places or access points. The technology can start an alarm process to alert authorities or instructors upon confirmation of a threat. In addition, the structure has a door-locking mechanism that can be triggered if a shooter or someone with a deadly weapon is discovered. Security staff may simultaneously view live images taken by the IP cameras, which allows for quick response and real-time situational awareness. The methodology is illustrated in Fig. 2.

A system for managing information has also been created to log and document all operational actions. This system records each step taken during an incident and acts as a central repository. By examining and drawing lessons from historical occurrences, this database can be extremely helpful in enhancing reaction plans and readiness for upcoming crises. Three primary components make up the broad approach to this research activity, as described below.

1) Dataset Preparation: To train machine learning models successfully, it is essential to obtain a desirable and appropriate dataset. To achieve this, a sizable quantity of weapon photos were gathered by hand from numerous internet sources. Sample photos from the gathered dataset are shown in Fig. 4. We collected a minimum of fifty photographs of each type of weapon. The ".jpg" formatted photos were kept in a dedicated subfolder called "images". To make batch processing easier, all of the photos were resized to 416 by 416 pixels before training.

2) Object Detection: The capacity to identify objects in digital images is a crucial component of computer vision. The discipline of object detection has greatly benefited from recent developments in deep learning. Equation (1) defines the Rectified Linear Unit (ReLU) activation function used by the YOLO (You Only Look Once) algorithm, which acts as a trained object detector. Unwanted values are set to zero by ReLU, which has a non-saturating activation, thus eliminating them from the activation map, as shown in equation (1).

ReLU: g(x) = max(0, x)    (1)

The input data is flattened into a one-dimensional array by the last fully connected layers. After being flattened, the output is fed into a feedforward neural network, which creates a feature vector of specified length in each training iteration using backpropagation. The convolutional layers' output represents the high-level features that these layers are expected to learn as nonlinear combinations.

3) Pretrained Model and Transfer Learning: A pre-trained model is used in conjunction with transfer learning strategies to accelerate the training process and take advantage of preexisting knowledge. Pretrained on the COCO (Common Objects in Context) dataset, the YOLO object detector offers a wide range of object types. Furthermore, the pre-trained model is trained on the COCO and ImageNet datasets, and the weights acquired from that training are applied. As a result, the system can identify and categorize items that belong to the classes included in the training dataset. Three-scale output predictions are created using the Darknet-53 architecture, which uses 53 layers of convolutional neural networks for feature extraction.

The suggested approach incorporates state-of-the-art object identification methods, makes use of transfer learning for effective model training, and incorporates a thorough security architecture for threat detection, alerting, and response coordination in real time.

Fig. 1 Example to show how YOLO works.

This example of the YOLO v5 model depicts how the model actually works. Basically, YOLO splits an image into grids and makes predictions about what bounding boxes and objects will show up in each grid cell. This is not the case with other detection methods, such as R-CNN, which finds candidate regions that may contain objects through a region proposal stage.
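The resizing step from the dataset preparation above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes images arrive as NumPy arrays, uses a hand-rolled nearest-neighbor resampling where `cv2.resize` would normally be used, and pads to a square, a common YOLO-style "letterbox" convention for preserving aspect ratio:

```python
import numpy as np

def letterbox(img, size=416, pad_value=114):
    """Fit an HxWx3 image into a size x size canvas, keeping aspect ratio
    via nearest-neighbor sampling and gray padding (YOLO-style)."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    # Nearest-neighbor index maps (a stand-in for cv2.resize)
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Center the resized image on a padded square canvas
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

frame = np.zeros((300, 600, 3), dtype=np.uint8)  # dummy 300x600 image
print(letterbox(frame).shape)                    # (416, 416, 3)
```

Simple stretching to 416x416 (as the text describes) also works for batching; letterboxing additionally avoids distorting object shapes, which can matter for small weapons.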

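Because the grid-cell prediction scheme described above emits many candidate boxes for the same object, YOLO-style detectors conventionally refine the output with non-maximum suppression (NMS), keeping only the highest-confidence box among heavily overlapping ones. A minimal greedy sketch of that refinement step (illustrative only, not the paper's implementation):

```python
def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression.
    boxes: list of (x1, y1, x2, y2); scores: confidence per box.
    Returns indices of kept boxes, highest score first."""
    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union else 0.0

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)           # highest-scoring remaining box wins
        keep.append(best)
        # Drop every remaining box that overlaps the winner too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

# Two near-duplicate candidate boxes and one distinct box
boxes = [(10, 10, 110, 110), (12, 12, 112, 112), (200, 200, 300, 300)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- the duplicate box 1 is suppressed
```

In practice this step is built into frameworks such as the YOLOv5 codebase, with the IoU threshold exposed as a tunable parameter.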
Fig. 2 Proposed Methodology

The You Only Look Once (YOLO) method of object identification is shown in the image. It deconstructs the procedure into several phases. The input image is first divided into smaller areas by overlaying a grid. After that, YOLO examines each region, estimating the class probability—the chance that a certain object is present—and constructing bounding boxes around those objects (bounding box prediction). To remove overlaps and produce the most reliable detections, the system finally refines these predictions. YOLO is quick because of this one-step approach, but in comparison to other detection techniques, it occasionally sacrifices accuracy for speed. The weapon (gun) detection steps are shown in Fig. 3.

Fig. 3 Model depicting object detection steps using YOLO

This image depicts how our model actually works. First, the image is captured, either from a CCTV camera or a webcam. The object is then recognized using the YOLO algorithm; if it is a weapon, it is detected and a bounding box is drawn on its boundary with the label "Weapon".

IV. EXPERIMENTAL RESULTS

Using real-time CCTV footage captured at a frame rate of one frame per second, with low quality and low light, our method was able to successfully identify guns. Most of the earlier work concentrated on object detection in high-resolution photos and videos. As a result, in real-world circumstances, models that were trained on high-quality datasets proved unsuitable for identifying low-quality objects. After our model's training and testing stages, we carefully examined the outcomes. The model's performance was also assessed using ".mp4"-formatted recorded footage. Fig. 5 presents the outcomes of using our suggested methods. If pertinent samples were included in the training dataset, the model's high accuracy would allow for the trustworthy detection of a variety of weapon types.

Our method's versatility in many real-world situations is one of its main benefits. By training the model on a wide range of datasets that include different lighting situations, object sizes, and image resolutions, we have made sure the model is reliable for real-world scenarios. This is especially important in security and surveillance scenarios, since the surrounding environment can greatly affect the quality of the video that is recorded. Given the difficulties in detecting firearms in low-light and low-resolution scenarios, our approach shows great promise for use in high-risk areas such as public spaces and vital infrastructure, where prompt threat detection is crucial. A sample of the dataset taken into consideration is shown in Fig. 4.

Fig. 4 Sample images from the dataset used in the implementation

Fig. 5 Detected images of weapons shown through bounding boxes

By applying our model, a possible weapon has been identified in the picture by the YOLO v5 model, and an alarm can be raised for additional investigation. Security professionals need to respond quickly to the scene in order to assess the threat and take necessary measures. Table 1 summarizes and compares different versions of the YOLO algorithm.

Table 1. Comparison between different versions of YOLO

Parameter | YOLOv5 | YOLOv6 | YOLOv7 | YOLOv8
Release Date | May 2020 | June 2022 | July 2022 | January 2023
Framework | PyTorch | PyTorch | PyTorch | PyTorch
Architecture/Backbone | CSPNet (Cross Stage Partial Network) | EfficientRep | Extended YOLOv4 (with new head) | Evolved from YOLOv7
Model Variants | v5s, v5m, v5l, v5x | n, s, m, l, x | Regular and X | n, s, m, l, x
Performance (mAP@0.5) | ~0.65 (YOLOv5x) | ~0.67 (YOLOv6x) | ~0.69 (YOLOv7) | ~0.70 (YOLOv8x)
Speed (FPS) | ~140 FPS (YOLOv5s) | ~150 FPS (YOLOv6n) | ~160 FPS (YOLOv7) | ~165 FPS (YOLOv8n)
Input Size | Adjustable | Adjustable | Adjustable | Adjustable
Parameter Count | ~7.5M - 88M | ~9M - 120M | ~37M - 74M | ~6M - 98M
Training Data | COCO | COCO | COCO | COCO
Pre-training | Yes | Yes | Yes | Yes
Additional Features | Auto-Augment, Mosaic data augmentation | Improved label assignment, efficient training tricks | Advanced label assignment, extended anchor-free head | Enhanced augmentation, dynamic anchor box calculation
Advantages | Established, well-documented, excellent balance of speed and accuracy | Improved training techniques, better performance than YOLOv5 | Further improved accuracy and speed, advanced label assignment | Highest accuracy and speed, enhanced augmentation techniques

A useful comparison of YOLOv5, YOLOv6, YOLOv7, and YOLOv8 is provided in this table. Important measurements such as Frames Per Second (FPS) for speed and mean Average Precision (mAP@0.5) for precision allow us to evaluate their performance. The pattern points to gradual enhancements in successive iterations. YOLOv8 offers the highest accuracy, but compared to YOLOv6n and YOLOv7n, it trades off a little bit of speed. YOLOv5s continues to lead in real-time applications that prioritize speed. Furthermore, YOLOv8n is a great option for deployment on devices with little processing power due to its reduced parameter count.

V. CONCLUSION

This work explores the possibility of using the YOLOv5 object detection model in surveillance systems and how it can be applied to identify firearms. With its real-time performance, computational efficiency, and efficacy even in difficult circumstances, this model produced encouraging results. This research has ramifications that go beyond academia, as it may find use in a number of security-related domains, including law enforcement, public areas, transit hubs, and key infrastructure. Advanced computer vision algorithms may identify guns quickly and precisely, which can increase public safety, minimize reaction times, and improve threat prevention.

VI. FUTURE IMPLICATIONS

There are various opportunities to improve the YOLOv5-based weapon identification system in the future. Including multi-modal data sources like aural cues and thermal imaging could provide a more thorough danger assessment. Alert systems and real-time feedback could allow for quick reactions to possible threats. Accuracy can be gradually increased by ongoing model modification based on learning from new data.

Automatic face masking could address privacy concerns, while anomaly detection capabilities could spot departures from regular behavior. Real-world applicability and usability can be strengthened by additional work on scalability, hardware optimization, robustness to environmental changes, and user interface enhancements. The performance of the system can be improved by extensive testing in challenging conditions and collaborative learning between surveillance systems.

Furthermore, it is still crucial to address legal and ethical issues in order to make sure that developments comply with ethical norms and privacy laws. The system can develop into a more effective instrument for guaranteeing security and public safety in a variety of situations by pursuing these improvements. Communities can be made safer and more secure by utilizing advances in artificial intelligence and computer vision. Even though YOLOv5 is useful, it is worth investigating more recent iterations, such as YOLOv8 (launched in January 2023). Subsequent research comparing these models may determine the best option for different deployment scenarios.

REFERENCES
[1] Horvat, Marko, Jelečević, Ljudevit, and Gledec, Gordan. (2022). "A comparative study of YOLOv5 models performance for image localization and classification."
[2] Pappu Kumar Yadav, J. Alex Thomasson, Stephen W. Searcy, Robert G. Hardin, Ulisses Braga-Neto, Sorin C. Popescu, Daniel E. Martin, Roberto Rodriguez, Karem Meza, Juan Enciso, Jorge Solórzano Diaz, Tianyi Wang, "Assessing the performance of YOLOv5 algorithm for detecting volunteer cotton plants in corn fields at three different growth stages," Artificial Intelligence in Agriculture, Volume 6, 2022, Pages 292-303, ISSN 2589-7217.
[3] Jung, Hyun-Ki, and Gi-Sang Choi. 2022. "Improved YOLOv5: Efficient Object Detection Using Drone Images under Various Conditions." Applied Sciences 12, no. 14: 7255. https://doi.org/10.3390/app12147255.
[4] Vijayakumar, A., Vairavasundaram, S. "YOLO-based Object Detection Models: A Review and its Applications." Multimedia Tools and Applications (2024). https://doi.org/10.1007/s11042-024-18872-y.
[5] NgocQuach, Luyl-Da, Khang Nguyen Quoc, Anh Nguyen Quynh, and Hoang Tran Ngoc. "Evaluating the effectiveness of YOLO models in different sized object detection and feature-based classification of small objects." Journal of Advances in Information Technology 14, no. 5 (2023): 907-917.
[6] Liu, Wenyu, Gaofeng Ren, Runsheng Yu, Shi Guo, Jianke Zhu, and Lei Zhang. "Image-adaptive YOLO for object detection in adverse weather conditions." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 2, pp. 1792-1800. 2022.
[7] Sportelli, Mino, Orly Enrique Apolo-Apolo, Marco Fontanelli, Christian Frasconi, Michele Raffaelli, Andrea Peruzzi, and Manuel Perez-Ruiz. "Evaluation of YOLO object detectors for weed detection in different turfgrass scenarios." Applied Sciences 13, no. 14 (2023): 8502.
[8] Flores-Calero, Marco, César A. Astudillo, Diego Guevara, Jessica Maza, Bryan S. Lita, Bryan Defaz, Juan S. Ante, David Zabala-Blanco, and José María Armingol Moreno. 2024. "Traffic Sign Detection and Recognition Using YOLO Object Detection Algorithm: A Systematic Review." Mathematics 12, no. 2: 297. https://doi.org/10.3390/math12020297.
[9] Shubham Deshmukh, Favin Fernandes, Monali Ahire, Devarshi Borse, "Suspicious and Anomaly Detection," 2022.
[10] A. Rahoo, F. A. Alvi, U. Rajput, and I. A. Halepoto, "An Efficient Approach for Firearms Detection using Machine Learning," VFAST Transactions on Software Engineering, vol. 11, no. 2, pp. 94-99, Jun. 2023.

[11] Ragab, Mohammed Gamal, Said Jadid Abdulkader, Amgad Muneer, Alawi Alqushaibi, Ebrahim Hamid Sumiea, Rizwan Qureshi, Safwan Mahmood Al-Selwi, and Hitham Alhussian. "A comprehensive systematic review of YOLO for medical object detection (2018 to 2023)." IEEE Access (2024).
[12] Zhou, Yan. "A YOLO-NL object detector for real-time detection." Expert Systems with Applications 238 (2024): 122256.
[13] A. A. Abins, P. P, R. G C and R. Cheran, "Weapon Recognition in CCTV Videos: Deep Learning Solutions for Rapid Threat Identification," 2024 Second International Conference on Emerging Trends in Information Technology and Engineering (ICETITE), Vellore, India, 2024, pp. 1-8, doi: 10.1109/ic-ETITE58242.2024.10493569.
[14] S. Avasthi, R. Chauhan, and D.P. Acharjya (2022). "Significance of preprocessing techniques on text classification over Hindi and English short texts." In Applications of Artificial Intelligence and Machine Learning: Select Proceedings of ICAAAIML 2021 (pp. 743-751). Singapore: Springer Nature Singapore.
[15] S. Avasthi and R. Chauhan (2024). "Privacy-Preserving Deep Learning Models for Analysis of Patient Data in Cloud Environment." In Computational Intelligence in Healthcare Informatics (pp. 329-347). Singapore: Springer Nature Singapore.
[16] Yadav, Pavinder, Nidhi Gupta, and Pawan Kumar Sharma. "Robust weapon detection in dark environments using YOLOv7-DarkVision." Digital Signal Processing 145 (2024): 104342.
