
Proceedings of the 7th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC 2023)

IEEE Xplore Part Number: CFP23OSV-ART; ISBN: 979-8-3503-4148-5

Next-Gen Security: YOLOv8 for Real-Time Weapon Detection

979-8-3503-4148-5/23/$31.00 ©2023 IEEE | DOI: 10.1109/I-SMAC58438.2023.10290401

Deepali Deshpande, Department of Information Technology, Vishwakarma Institute of Technology, Pune, India, [email protected]
Manas Jain, Department of Artificial Intelligence and Data Science, Vishwakarma Institute of Technology, Pune, India, [email protected]
Adhip Jajoo, Department of Artificial Intelligence and Data Science, Vishwakarma Institute of Technology, Pune, India, [email protected]
Devika Kadam, Department of Artificial Intelligence and Data Science, Vishwakarma Institute of Technology, Pune, India, [email protected]
Harshvardhan Kadam, Department of Artificial Intelligence and Data Science, Vishwakarma Institute of Technology, Pune, India, [email protected]
Aryan Kashyap, Department of Artificial Intelligence and Data Science, Vishwakarma Institute of Technology, Pune, India, [email protected]

Abstract—The swift and accurate identification of weaponry holds paramount importance in military operations to ensure the safety of personnel and the effectiveness of missions. In recent times, deep learning models have emerged as robust solutions for object detection tasks, rendering them valuable tools for enhancing military security. This research study delves into the realm of weapon detection by presenting a novel approach utilizing YOLOv8-Small, a streamlined variant of the renowned You Only Look Once (YOLO) detection framework. The study's primary objective revolves around harnessing the capabilities of YOLOv8-Small for precise weapon detection within military contexts. Through a meticulous design process and rigorous training, the proposed model demonstrates its competence in identifying a diverse range of weapons with remarkable accuracy and efficiency. The experimental results validate the potential applicability of YOLOv8-Small in bolstering military operations, underscoring its utility as a force multiplier on the battlefield. Moreover, the research examines the model's adaptability to varying environmental conditions, a critical factor in real-world military scenarios. The findings reveal the model's capacity to maintain consistent performance across different terrains, lighting conditions, and weather situations. This adaptability significantly enhances its operational viability, ensuring reliable weapon detection even under challenging circumstances. The implications of this research extend to broader military strategies and tactics, where rapid and accurate weapon detection can tip the scales in favor of mission success. The potential integration of YOLOv8-Small with existing military systems holds promise for enhancing situational awareness and proactive threat mitigation. In conclusion, this study presents a pioneering contribution to the field of military weapon detection by leveraging YOLOv8-Small's efficiency and adaptability, providing valuable guidance for military stakeholders seeking innovative solutions to enhance security and paving the way for more effective and safeguarded military operations.

Keywords—Object Recognition, Deep Learning, Convolutional Neural Network, Computer Vision, Single Shot Detection, You Only Look Once version 8-Small (YOLOv8s).

I. INTRODUCTION

Weapon detection is a vital responsibility for ensuring public safety and preventing violent acts. The threat of mass shootings and other violent crimes is growing in today's society, and traditional physical inspection and surveillance approaches are not always successful in finding firearms, particularly in congested or challenging locations. Automating the weapon detection process has shown tremendous promise when deep learning techniques are used [1],[6],[8],[14]. YOLO (You Only Look Once) is one of the most popular deep learning object detection frameworks [4],[5],[12]. The most recent version of YOLO, known as v8, provides a number of improvements over earlier iterations, including increased accuracy, speed, robustness, and scalability. Because it combines these benefits, YOLOv8 is well suited to the task of detecting weapons: it is fast enough to be used in real-time applications, accurate enough to reliably detect weapons, and scalable enough to detect a variety of weapon types. It is also resilient to changes in illumination, pose, and occlusion, making it a dependable tool for weapon detection in a variety of contexts. This research investigates the use of YOLOv8 for weapon detection in security applications, assessing its performance on a range of datasets and in a variety of environments. The study also discusses YOLOv8's drawbacks and potential future upgrades. The findings will contribute to enhancing the resilience, accuracy, and speed of weapon detection systems, and the model's robustness and lightweight nature will enable its deployment on hardware systems.

In the following sections, the research delves into the methodology, experimental setup, results, and discussion to provide a comprehensive analysis of the proposed weapon detection approach using YOLOv8-Small. The research objectives of this study revolve around the development and assessment of a real-time weapon detection system utilizing YOLOv8. The primary goal is to create a robust and efficient system capable of identifying weapons in real-time scenarios. To achieve this, the system is thoroughly evaluated using diverse datasets, ensuring its effectiveness across a range of conditions and scenarios. Additionally, this research aims to contribute to the field by comparing the performance of the developed


system with other contemporary state-of-the-art methods for weapon detection, thereby providing valuable insights into the system's capabilities and potential advancements in the domain of security and surveillance technologies.

II. LITERATURE SURVEY

In recent years, there has been growing concern over gun violence and the use of lethal weapons in various settings. To address these concerns and enhance security measures, researchers have turned to artificial intelligence (AI) and deep learning techniques for weapon detection in security applications. This section provides an overview of several research papers in the domain, shedding light on their methodologies, findings, and limitations, while also drawing connections between them.

P. Shanmugapriya's paper [1] begins by acknowledging the challenges associated with weapon detection, such as the diverse appearance of weapons, clutter and occlusions, and rapidly changing security scenarios. The paper reviews various AI methods applied to weapon detection, including Support Vector Machines (SVMs), Random Forests (RFs), and Deep Neural Networks (DNNs). The authors propose a DNN-based approach that involves feature extraction with a Convolutional Neural Network (CNN) followed by classification. A notable limitation of this study is the absence of real-time testing, which is crucial for practical security applications. Dr. N. Geetha's paper [2] shares a similar concern over gun violence and presents a machine learning-based firearm identification system. The study uses a Support Vector Machine (SVM) to classify images containing firearms based on characteristics such as size, color, and shape. While this approach demonstrates promise, it overlooks the impact of environmental conditions and adversarial attacks, which are critical factors for real-world deployment. Harsh Jain's research [3] focuses on enhancing security through video surveillance systems capable of detecting aberrant actions, including weapon carrying. Two CNN-based algorithms, Single Shot Detector (SSD) and Faster R-CNN, are employed in the study. The results show fair accuracy, but the trade-off between speed and accuracy means that specific requirements and constraints must be considered; the inability of Faster R-CNN to detect firearms in real time and the challenge of gathering a sizable dataset of weapon photographs are notable limitations. Sanam Narejo's work [4] aims to create an accurate gun detection smart surveillance security system using the YOLOv3 model. The system not only distinguishes between types of firearms but also records incident details for future reference. The YOLOv3 pipeline consists of a CNN for feature extraction, a Region Proposal Network (RPN) for object location proposals, and a classifier for weapon classification; the study's innovative approach also includes simulating a real-world scenario with socket programming. Future work is needed to train the model on a larger dataset and reduce false positives. Haitong Lou's paper [5] introduces the DC-YOLOv8 algorithm for small-size target detection, addressing the limitations of human observation and judgment errors in complex environments. The algorithm improves detection accuracy for small objects while maintaining accuracy for larger targets, and experimental results on public datasets demonstrate superior performance compared to YOLOv8, contributing to the advancement of target recognition techniques for small objects.

In conclusion, these research papers collectively contribute to the field of weapon detection using AI and deep learning techniques for security applications. They highlight the importance of addressing real-time detection, environmental conditions, adversarial attacks, and the trade-off between speed and accuracy in practical implementations. Further advancements in these areas are essential to enhance security and protect public safety effectively.

III. APPROACH TO MODEL SELECTION

Modern object detection algorithms can be divided into two categories: two-shot object detection and single-shot object detection.

In two-shot object detection, an image is passed through the algorithm twice before the output is predicted. The first pass generates a set of proposals, or candidate object locations; the second pass then refines these hypotheses into the final predictions. Compared with single-shot object detection, this approach requires more computation but is more accurate. Region-based convolutional neural networks (R-CNN), Fast R-CNN, and Faster R-CNN are a few of the well-known models in this area [3].

Single-stage object detectors have a simplified architecture built specifically for object detection in a single step while taking all region proposals into account. These detectors are computationally efficient because they produce the bounding boxes and class probabilities over all spatial positions of an image at once, but compared with other approaches, single-shot object detection typically performs less accurately. In resource-constrained situations, these strategies can be used to detect objects quickly. The single-shot detector YOLO analyzes an image with a fully convolutional neural network (CNN). Generally speaking, single-shot object detection is better suited to real-time applications, while two-shot object detection suits applications that prioritize accuracy.

Numerous single-stage object detection algorithms have been developed recently, including Deconvolutional Single Shot Detector (DSSD), M2Det, RetinaNet, and RefineDet++ [3][4]. Owing to their complexity and strength, two-stage detectors typically outperform single-stage detection algorithms. However, since the development of You Only Look Once (YOLO) and its successors, efforts to complete object detection in a single step have earned excellent reviews. These techniques use deep neural networks to tackle the localization problem, which is framed as a regression problem. YOLO now rivals both earlier single-stage detectors and two-stage detectors in accuracy and prediction time, and its simple architectural design, low complexity, and straightforward implementation make it one of the most widely used solutions in production. As a result, the YOLO architecture was chosen, since it offers the speed and accuracy needed for real-time object detection. YOLOv8 is the newest and most accurate real-time object detection model.
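The speed-versus-accuracy trade-off behind this choice can be made concrete with a small selection routine. The sketch below is illustrative only and is not part of the original study: it picks the lowest-latency YOLOv8 variant whose reported mAP meets a minimum floor, using the figures reported in Table 1 of this paper.

```python
# Illustrative sketch: choose the lowest-latency YOLOv8 variant that meets
# an accuracy floor, using the mAP(50-95) and CPU ONNX latency figures
# from Table 1. Not part of the original study.

VARIANTS = {
    # name: (mAP 50-95, CPU ONNX latency in ms)
    "YOLOv8n": (37.3, 80.4),
    "YOLOv8s": (38.3, 128.4),
    "YOLOv8m": (39.2, 234.7),
    "YOLOv8l": (39.7, 375.2),
    "YOLOv8x": (41.3, 479.1),
}

def pick_variant(min_map: float) -> str:
    """Return the lowest-latency variant whose mAP meets the floor."""
    eligible = [(lat, name) for name, (m, lat) in VARIANTS.items() if m >= min_map]
    if not eligible:
        raise ValueError("no variant meets the accuracy floor")
    return min(eligible)[1]

# With a 38.0 mAP floor, YOLOv8s is the fastest eligible variant,
# matching the paper's choice for modest hardware.
print(pick_variant(38.0))  # YOLOv8s
```

Raising the accuracy floor pushes the choice toward the larger, slower variants, which is exactly the trade-off the paper weighs when targeting modest edge hardware.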


It best serves our goals because of its balanced configuration of speed and accuracy. YOLOv8 comes in a variety of model variants to meet user needs; these versions are tuned for different use cases and levels of available computing power.

Table 1. Comparison of different versions of YOLOv8

Model   | Size (pixels) | mAP (50-95) | Speed, CPU ONNX (ms)
YOLOv8n | 640           | 37.3        | 80.4
YOLOv8s | 640           | 38.3        | 128.4
YOLOv8m | 640           | 39.2        | 234.7
YOLOv8l | 640           | 39.7        | 375.2
YOLOv8x | 640           | 41.3        | 479.1

After evaluating these variants, it was concluded that the YOLOv8s model offers a good frame rate with excellent accuracy even when operating on a modest computational device. Because the proposed algorithm must detect weapons from autonomous vehicles or cameras placed throughout diverse locations, which may not have powerful processing capabilities, YOLOv8s is a reliable option.

IV. ARCHITECTURE

YOLOv8 does not yet have a published paper, so the research methods and ablation studies used to develop it cannot be assessed directly. Instead, the publicly accessible repository and model information were examined to document what is new in YOLOv8. YOLOv8-Small's architecture comprises the following elements:

Backbone:
• YOLOv8-Small is built on the Darknet architecture, which is composed of several convolutional layers.
• It collects features from the input image at various scales to capture object details.

Neck:
• As its neck component, YOLOv8-Small uses feature pyramid networks (FPNs).
• FPNs enable multi-scale feature fusion, allowing the model to recognize objects of varying sizes.
• Skip connections link low-level features with higher-level layers, aiding the detection of small objects.

Detection Heads:
• Within YOLOv8-Small, several detection heads are responsible for predicting bounding boxes and the probabilities associated with the various classes.
• Each detection head is composed of convolutional layers followed by post-processing operations.
• Anchor boxes, pre-defined bounding box shapes of different scales and aspect ratios, are utilized to enhance object localization.

Training:
• YOLOv8-Small is trained using annotated training data, which includes bounding box annotations and corresponding class labels.
• Classification loss functions, such as cross-entropy loss, are employed to train the model for accurate object classification.
• Localization loss functions, such as the smooth L1 loss, are used to train the model for precise object localization.

Some special features of YOLOv8s are:

Anchor-Free Model:
The YOLOv8 model differs from older YOLO models by being anchor-free: it predicts an object's center directly instead of calculating its offset from pre-defined anchor boxes. The anchor boxes used in previous YOLO versions posed challenges, as they might represent the box distribution of a standard benchmark but not of the custom dataset being used. With anchor-free detection, the model makes fewer box predictions, leading to a faster and more efficient Non-Maximum Suppression (NMS) step. NMS is a post-processing stage that filters candidate detections after inference to retain the most accurate and relevant ones.

Fig 1. Visualization of Anchor Box in YOLO

New Convolutions:
The YOLOv8 model uses a total of 23 convolutional layers. The first 14 layers form the backbone network, which extracts features from the input image; the remaining 9 layers form the neck and head networks, which generate the output predictions. The convolutional layers use a variety of kernel sizes and activation functions. Small kernel sizes (3x3 or 5x5) and Rectified Linear Unit (ReLU) activations are used in the first few layers to extract low-level elements from the source image, such as edges and textures. Later layers use larger kernel sizes (7x7 or 11x11) and ELU activations to extract high-level features from the input image, such as object parts and object shapes.


In YOLOv8, some modifications were made to the fundamental building block. Specifically, the C2f component was used in place of C3 from previous versions, and the initial 6x6 convolution in the stem was replaced with a 3x3 convolution. Despite the increase in kernel size from 1x1 to 3x3, the bottleneck in YOLOv8 remained similar to that of YOLOv5. These changes suggest that YOLOv8 may be moving back towards the ResNet block, which was originally introduced in 2015. In the neck of YOLOv8, features are combined directly without enforcing strict channel proportions, which reduces the overall number of parameters and the size of the tensors in the model. This design choice likely contributes to better efficiency and optimization in the YOLOv8 architecture.

Closing the Mosaic Augmentation:

While model architecture is often the focus of deep learning research, the training procedure of YOLOv5 and YOLOv8 is crucial to their effectiveness. YOLOv8 augments images online during training: at each epoch, the model sees a slightly different variation of the images it has been given. One of these augmentations is mosaic augmentation, which stitches four images together, compelling the model to learn objects in novel locations, under partial occlusion, and against varied surrounding pixels.

The YOLOv8-Small architecture is designed to be lightweight and efficient, making it suitable for real-time object detection applications. It strikes a balance between accuracy and computational efficiency, leveraging techniques such as FPNs, anchor boxes, skip connections, and optimized loss functions to achieve robust and effective object detection performance.

V. METHODOLOGY

A. Data Collection and Pre-processing

The first phase of the research involved constructing a customized dataset tailored to the specific requirements of the weapon detection model. Preprocessing of the dataset involved the following steps:

1. Data Collection and Annotation: The project began by curating a diverse dataset containing images captured from various scenarios. The present dataset contains 9633 images, divided into 5 classes: Pistol, Missile, Gun, Grenade, and Knife [4],[2]. Each image was meticulously annotated with bounding boxes around weapons. [2]

2. Data Cleaning: The annotated dataset underwent a thorough quality control process to identify and rectify labeling errors, missing annotations, and inconsistencies.

3. Data Augmentation: To enhance the model's ability to generalize, augmentation techniques were applied to a range of the data. [8]

4. Data Splitting: The dataset was split into three subsets: a training set, a validation set, and a testing set. A 75-15-10 split ratio was chosen to ensure sufficient data for training while enabling robust evaluation. [1]

B. Detection Procedure of YOLOv8s

The detection procedure in the YOLOv8s model is as follows. The input image is resized to a fixed size of 640x640 pixels. The image is then partitioned into a 13x13 grid of cells, with each cell tasked with detecting objects within its designated image area. For each cell, the model predicts 5 bounding boxes, each of which has the following attributes:
• x, y: the coordinates of the center of the bounding box relative to the cell.
• w, h: the width and height of the bounding box.
• Confidence: the probability that the bounding box contains an object.
• Class_id: the ID of the object class that the bounding box is most likely to contain.

The model then applies a non-maximum suppression algorithm to the bounding boxes to remove any overlapping boxes that are unlikely to contain objects. The remaining bounding boxes are ranked by their confidence scores, and the top-scoring boxes are returned as the detection results. The YOLOv8s model is a single-stage object detector: it predicts all of the bounding boxes and class labels for an image in a single pass, which makes it faster than two-stage object detection models.

The dataset of approximately 9633 images was accompanied by annotation files. To generate these annotations, a cloud-based platform was used that allowed the researchers to draw bounding boxes around weapons in each image and assign corresponding labels to them. This meticulous annotation process ensured that the model would be trained on accurately labeled data, which is critical for achieving high detection accuracy.

Subsequently, the YOLOv8-Small architecture was chosen for the object detection model, as it offers the real-time detection capabilities that make it well suited to this application. The architecture detects weapons in images and video streams with the efficiency essential for security applications that require quick response times.

During the training phase, the dataset was fed into the neural network and thorough experiments were conducted to optimize the model's performance. To obtain the best possible results, the YOLOv8-Small model was trained for a range of 25 to 40 epochs; this training duration allowed the model to be fine-tuned.
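The post-processing steps described in Section V-B, confidence filtering followed by non-maximum suppression, can be sketched in a few lines. The code below is a minimal, framework-free illustration; the 0.5 confidence and IoU thresholds are assumed values for the example, not figures stated in this paper.

```python
# Minimal sketch of the detection post-processing described above: filter
# candidate boxes by confidence, then apply greedy non-maximum suppression
# (NMS). Boxes are (x1, y1, x2, y2, confidence); the 0.5 thresholds are
# assumed values for illustration.

def iou(a, b):
    """Intersection over Union of two corner-format boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, conf_thresh=0.5, iou_thresh=0.5):
    """Keep the highest-confidence box, drop overlapping ones, repeat."""
    boxes = sorted((b for b in boxes if b[4] >= conf_thresh),
                   key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < iou_thresh for k in kept):
            kept.append(b)
    return kept

# Two heavily overlapping candidates plus one distinct box:
candidates = [
    (100, 100, 200, 200, 0.90),  # kept (highest confidence)
    (105, 105, 205, 205, 0.80),  # suppressed: IoU with the first > 0.5
    (400, 300, 480, 380, 0.75),  # kept: no overlap
    (10, 10, 50, 50, 0.30),      # dropped by the confidence filter
]
print(nms(candidates))  # two boxes survive
```

This greedy scheme is what makes the anchor-free design pay off in practice: fewer candidate boxes per cell means fewer pairwise IoU comparisons at this stage.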


This fine-tuning achieved optimal performance on the specific dataset. The objective was to minimize the loss function, focusing on both box_loss (the regression loss for bounding boxes) and class_loss (the categorical classification loss). This ensured the model acquired the ability to precisely predict bounding boxes and accurately categorize weapons within the images.

C. Validation of the Model

After a well-trained model with minimal loss was obtained, its performance was validated on a separate validation set previously reserved for this purpose. During validation, the hyperparameters were fine-tuned to further enhance the model's generalization capabilities. Through this process, a balance was achieved between avoiding overfitting on the training data and maintaining good performance on unseen data.

D. Testing of the Model

Finally, with a well-optimized and validated model, inferences were run on the test dataset to evaluate the efficiency of the selected approach. This step involved passing the test images through the model to detect and visualize the presence of weapons. YOLOv8 first predicts a set of bounding boxes around objects in an image; it then uses a confidence score to determine which bounding boxes are likely to contain objects; finally, it removes overlapping bounding boxes so that only one bounding box is predicted for each object in the image. The remaining bounding boxes with high confidence scores are the predicted locations of objects in the image. Key performance metrics such as precision, recall, and Intersection over Union (IoU) were also measured to quantitatively assess the model's accuracy and effectiveness in detecting weapons.

E. Factors to Improve Efficiency of the Model

The efficiency of the model can be further increased by considering the following points:

• Using a larger dataset: A larger dataset helps train YOLO models to be more accurate and robust. At present the dataset has 9633 images; increasing it to approximately 25,000 would improve efficiency considerably. [10]

• Using quantization: Quantization converts the weights and activations of a neural network from floating-point values to integer values. This can make the model much smaller and faster without sacrificing much accuracy; post-training quantization can be used to speed up the present model.

• Using a faster hardware accelerator: YOLO models can be very computationally expensive to train and run. Using a faster hardware accelerator, such as a GPU or TPU, can significantly improve performance efficiency. [4]

• Optimizing hyperparameters: Experimenting with different hyperparameters, such as learning rate, batch size, epochs, and regularization, can improve results. Hyperparameter optimization techniques like grid search or Bayesian optimization can help find the best set of hyperparameters for the task.

The combination of the meticulously curated dataset, the choice of the YOLOv8-Small architecture, extensive training, and careful hyperparameter tuning resulted in an efficient and reliable weapon detection model. By focusing on real-time detection capabilities, the model is well suited for security applications, contributing to enhanced safety and threat detection in various real-world scenarios.

VI. RESULTS

The project developed a YOLOv8-based weapon detection model for web API deployment and hardware integration. The YOLOv8-Small object detection model achieves 48.6% mAP on the custom test set, meaning it correctly detects small objects 48.6% of the time at a confidence of at least 50%. This is a significant improvement over previous YOLO models, which had difficulty detecting small objects. Real-time testing showed an average speed of 25 frames per second on CPU, which can be much higher depending on the efficiency of the GPU. This swift processing makes the model highly suitable for deployment in real-world applications where real-time response is critical. Notably, the model showed excellent performance in identifying concealed weapons and distinguishing them from non-threatening objects, reducing the risk of false alarms and ensuring reliable detection.

To validate the model's adaptability for web deployment and hardware integration, it was successfully implemented as an API service, enabling users to access weapon detection functionality easily. Additionally, the model was built so that it can be easily integrated into a hardware system with a camera module, showcasing its potential for seamless integration into security setups and autonomous surveillance systems.

The following images depict the model's capability to accurately identify and locate various types of weapons within complex and cluttered scenes. The model effectively highlights instances of firearms, knives, and other potentially dangerous objects, demonstrating its utility in enhancing public safety and security measures. The result images also show the model's robustness across different lighting conditions, angles, and perspectives, solidifying its potential as a valuable tool for real-world weapon detection applications.
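The post-training quantization suggested in Section V-E can be illustrated with a minimal symmetric int8 scheme. This is a generic sketch of the idea (a single per-tensor scale maps float weights to integers), not the specific quantizer of any YOLO toolchain; real toolchains typically use per-channel scales, calibration data, and activation quantization as well.

```python
# Minimal sketch of symmetric post-training quantization: map float
# weights to int8 with a single per-tensor scale, then dequantize.
# Illustrates the idea from Section V-E only; production quantizers
# are considerably more sophisticated.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of nonzero float weights."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.82, -0.41, 0.05, -1.27, 0.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# Each value now fits in one byte instead of four, and the round-trip
# error is bounded by half a quantization step (s / 2).
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, max_err)
```

The storage saving (roughly 4x for weights) and the cheaper integer arithmetic are what make a quantized detector attractive on the modest edge hardware this paper targets.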


VII. CONCLUSION

Weapon detection using the YOLO object detection framework has shown promise in automating the detection and classification of weapons in real time. The YOLO architecture, with its multi-scale feature fusion, enables accurate localization of weapons in images and video frames, and continuing advancements in the YOLO framework may lead to further improvements in weapon detection accuracy and efficiency. It is important to stay updated with the latest research in the field to explore the most recent advancements in weapon detection using the YOLO framework. In conclusion, the research presents a robust and accurate YOLOv8-based weapon detection model, showcasing its efficacy for real-time deployment on the web via API and its suitability for hardware integration. With its remarkable accuracy and real-time performance, the model can significantly contribute to enhancing public safety and security in various domains.

Fig 2. Validation image generated by the model
Fig 3(a). Result on test data
Fig 3(b). Result on test data
Fig 4(a). Real-time testing of rifle for the present model
Fig 4(b). Real-time testing of firearms for the present model

REFERENCES
[1] P. Shanmugapriya, Gurram Yugandhar Reddy, and Jammalamadaka Mahendra Kumar, "Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications", IRJET, 2022.
[2] N. Geetha, Akash Kumar K. S, Akshita B. P, and Arjun M, Coimbatore Institute of Technology, "Weapon Detection in Surveillance System", IJERT, 2021.
[3] Harsh Jain, Aditya Vikram, Mohana, Ankit Kashyap, and Ayush Jain, Telecommunication Engineering, RV College of Engineering, Bengaluru, Karnataka, India, "Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications", ICESC, 2020.
[4] Sanam Narejo, Bishwajeet Pandey, Doris Esenarro Vargas, Ciro Rodriguez, and M. Rizwan Anjum, "Weapon Detection Using YOLO V3 for Smart Surveillance System", Recent Trends in Advanced Robotic Systems, 2021.
[5] Haitong Lou, Xuehu Duan, Junmei Guo, Haiying Liu, Jason Gu, Lingyun Bi, and Haonan Chen, "DC-YOLOv8: Small Size Object Detection Algorithm Based on Camera Sensor", 2023.
[6] Alan Agurto, Yong Li, Gui Yun Tian, Nick Bowring, and Stephen Lockwood, "A Review of Concealed Weapon Detection and Research in Perspective".
[7] Milind Rane, Manas Jain, Aryan Kashyap, Adhip Jajoo, Harshvardhan Kadam, and Devika Kadam, Vishwakarma Institute of Technology, Pune, "Mine Detecting Military Bot using IoT", ESCI, 2023.
[8] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", in Proceedings of the 25th International Conference on Neural Information Processing Systems, Volume 1, Curran Associates Inc., Lake Tahoe, Nevada, 2012.
[9] C. Wang and K. Huang, "How to Use Bag-of-Words Model Better for Image Classification", Image and Vision Computing, 2015.
[10] E. Okafor, "Comparative Study Between Deep Learning and Bag of Visual Words for Wild-Animal Recognition", 2016.
[11] Turaga, "Machine Recognition of Human Activities: A Survey", IEEE Transactions on Circuits and Systems for Video Technology, 2008.
[12] Y. Zhang, H. Wang, and F. Xu, "Object Detection and Recognition of Intelligent Service Robot Based on Deep Learning", IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), 2017.
[13] Bhatnagar, "An Ensemble of Deep Learning and Feature Based Models for Financial Sentiment Analysis", 2017.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks", in Advances in Neural Information Processing Systems, 2012.
[15] Martinez-Martin, "Object Detection and Recognition for Assistive Robots: Experimentation and Implementation", IEEE Robotics & Automation Magazine, 2017.
