
2020 7th International Conference on Signal Processing and Integrated Networks (SPIN)

Artificial Intelligence based Missile Guidance System

Darshan Diwani1, Archana Chougule2, Debajyoti Mukhopadhyay3
1,2 Sanjay Ghodawat University, Kolhapur, India
3 WIDiCoReL Research Lab, Mumbai University, Mumbai, India
1 [email protected]
2 [email protected]
3 [email protected]

Abstract - In the 20th century, wars have been won by the nations with superior air power, as in the U.S. invasions of Afghanistan and Iraq. Such wars caused many civilian deaths, since the missiles heavily used in them were not equipped with autonomy and intelligence. With the escalating cost of missiles and the potential damage an intruding aircraft can cause, there is a need for an autonomous, intelligent missile guidance system that can choose and track specified targets on its own, or choose among many targets which to hit. The major contribution of this paper in this direction is to automate detection of the target object and identification of its exact location. The paper proposes the use of artificial intelligence for identification of the object and its location, and also explains how this information can be used for exact automated positioning of the missile.

Keywords—Intelligence in missiles, intelligence in guidance systems, object detection for UAVs, YOLO, object detection

I. INTRODUCTION

Recent advances in Artificial Intelligence, such as deploying intelligent agents (IA), hold promise for improving the performance of guidance systems and making them intelligent. The word agent denotes an entity that evaluates different options and makes a suitable choice on its own, without any human interaction. Intelligent agents (IA) are software-based entities that perform exactly these functions. They are distinguished by general qualities such as independence, autonomy and social ability. Such agents fall into the category of Artificial Intelligence and have the capacity to solve complex problems on their own. Intelligent agents can be classified by their features, for example as connection agents, informational agents and priority agents. An informational agent gives access to a huge collection of information sources. A connection agent draws given information and passes it to the users of the system. Researchers have already made efforts to use intelligent agents to automate and guide missile vision systems [11], [12]. This paper introduces improved vision guidance for priority-based object tracking and missile direction using the popular YOLO algorithm. The algorithm helps to improve the accuracy of missile target identification and to avoid mis-hits. Details of the YOLO algorithm are given below.

A. YOLO Algorithm

YOLO is a fast, cutting-edge object detection algorithm; the name is an abbreviation of You Only Look Once [6]. The YOLO algorithm applies a neural network to the entire image at once, rather than dividing the image and applying the network to sub-images. The biggest benefit of using YOLO is its speed: it can process 45 frames per second with ease. When trained on real photos of objects and tested on artistic work, YOLO beats top detection methods such as R-CNN by a wide margin in accuracy. YOLO is also capable of learning a generalized representation of objects.

B. Working of YOLO Algorithm

Other neural network approaches, such as the region convolutional neural network (R-CNN), perform detection on many candidate regions, which leads to many predictions for different regions of an image frame. In contrast, YOLO behaves like a fully convolutional neural network (FCNN): it feeds the image frame to the network once and produces the prediction as output.

Fig. 1. Working of YOLO Algorithm
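The single-pass idea can be illustrated with a small sketch. The `stub_network` function below is a hypothetical stand-in for a trained CNN, not the real YOLO model; the point is only to contrast how many forward passes each style of detector needs.

```python
import numpy as np

def stub_network(image_patch):
    """Hypothetical stand-in for a CNN forward pass; returns a fake score."""
    return float(image_patch.mean())

def region_based_detect(frame, regions):
    """R-CNN-style detection: one forward pass per candidate region."""
    passes = 0
    scores = []
    for (x, y, w, h) in regions:
        scores.append(stub_network(frame[y:y + h, x:x + w]))
        passes += 1
    return scores, passes

def yolo_style_detect(frame):
    """YOLO-style detection: a single forward pass over the entire frame."""
    score = stub_network(frame)
    return score, 1  # one pass, regardless of how many regions exist

frame = np.random.rand(416, 416, 3)
regions = [(0, 0, 100, 100), (50, 50, 200, 200), (300, 300, 100, 100)]
_, rcnn_passes = region_based_detect(frame, regions)
_, yolo_passes = yolo_style_detect(frame)
print(rcnn_passes, yolo_passes)  # prints: 3 1
```

The cost of the region-based approach grows with the number of candidate regions, while the YOLO-style pass stays constant, which is why the full-image formulation is faster on streaming video.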

978-1-7281-5475-6/20/$31.00 ©2020 IEEE 873



In most cases the object bounding box may be larger than the grid cell itself. Because of that, object detection must be reframed as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. A single CNN predicts, in parallel, N bounding boxes for detected objects and a matrix of class probabilities for those N boxes. The YOLO algorithm is trained on full images and directly optimizes overall detection performance. This consolidated model has vast benefits over traditional methods of object detection. First, YOLO is extremely fast. Since detection is reframed as a single regression problem, no complicated pipeline is needed: the convolutional network is simply run on an unseen picture at test time to predict detections. This allows streaming video frames to be processed directly with latency below 30 milliseconds. Secondly, the YOLO algorithm reasons globally about the image when making predictions. Sliding-window algorithms and region-proposal-based methods divide the original image into smaller pieces before processing; unlike them, YOLO sees the whole image during training and testing, so it encodes contextual information about objects and their appearance on its own. Fast R-CNN, a top and currently trending object detection method, mistakes background patches in an image for objects because it cannot see the larger context. YOLO performs much better in this respect, making 50% fewer background errors than Fast R-CNN. YOLO also learns generalized representations of objects and their labels, so it produces few or no errors when applied to unexpected images. YOLO divides the entire image into an N x N grid and then obtains the bounding boxes to be drawn around objects, along with predicted probabilities for each of these object regions. The strategy used to obtain these probability maps in YOLO is logistic regression. The obtained bounding boxes are then ranked by the associated probabilities of the object regions, and YOLO uses independent logistic classifiers for class prediction, i.e. to obtain the detected object's label. The YOLO algorithm accepts the entire image frame as input at once and predicts the object bounding box coordinates and object class probabilities for those boxes. The working of YOLO is shown in Figure 1.

II. LITERATURE SURVEY

Artificial neural networks have been used as the preferred choice for intelligent missile guidance, including proportional navigation guidance [9]. The use of video streams to acquire and locate targets has the potential to reduce cost compared to the use of active sensors. One such technique presents initial results for a system that combines visual sensing to locate and point at the target [10]. The system uses foreground segmentation to identify new objects or elements in a video stream frame, then applies Speeded Up Robust Features (SURF) feature detection and a LAN-based IP camera module to determine their position in the real world [1]. Based on analysis of the toy rocket's flight attributes, the system can then generate the altitude and launch angles that allow a toy missile to intercept the target. A real-time, state-of-the-art object detection and tracking technique for video stream frames has been developed by combining detection and tracking into a dynamic Kalman model [3]. At the object detection step, the object of interest is automatically detected from a saliency map processed via the image background cue at each frame; at the tracking stage, a Kalman filter is deployed to obtain a raw prediction of the object state, which is further refined by an on-board detector combined with the saliency map and visual information between two successive frames of the video stream [4]. Compared with existing methods, that approach does not require any hand-operated initialization for tracking, runs much faster than trackers of the same category, and obtains competitive performance on a huge number of image sequences; comprehensive analysis illustrates its impressive and exceptional performance. Real-time object detection is essential for many applications of Unmanned Aerial Vehicles (UAVs), such as exploration and surveillance, search-and-rescue, and infrastructure survey. In recent times, Convolutional Neural Networks (CNNs) have stood out as a prominent class of techniques for identifying image content, and are widely considered in the computer vision community to be the easy-to-adopt standard approach for most problems [2], [8]. However, CNN-based object detection algorithms are too complex to run on an ordinary set of processors and are exceedingly computationally demanding; they typically require high-powered Graphics Processing Units (GPUs), which consume much power and add weight, especially for a lightweight and low-cost drone. It is therefore better to move the computation to an off-board computing cloud, where R-CNN can be applied to detect hundreds of objects in real time.

III. DESIGN AND SETUP

This section describes the design of the missile guidance system. It includes hardware and software details, the quadcopter system used for streaming, the object detection technique, the rocket motor ignition method and the overall system architecture.

A. Scope of Work

The goal is to design a prototype of a missile guidance system that autonomously selects targets to hit based on the priorities given to it. This guidance system will be tested using a UAV (Unmanned Aerial Vehicle) equipped with toy missiles. The missile guidance system is designed using the YOLO algorithm. The UAV is built on top of the Naza flight controller with an IP camera as its payload, which streams real-time data to the ground station where our intelligent agent runs.


B. Hardware and Software Used

We use YOLO v3 to detect objects in the image frames received from the camera streaming module carried by our UAV. The objects detected in a frame are then analyzed according to the priorities given to our intelligent agent, which chooses the targets to hit, if any. The missile prototype is built using Rocket Candy (R-Candy) propellant. A spark plug is connected to a NodeMCU/Arduino, which ignites the missile based on commands from the intelligent agent.

Hardware: F450 quadcopter frame, 2200 mAh 4S lithium polymer battery, 4 x 930KV brushless DC motors, 4 x SimonK electronic speed controllers, Arduino Mega, nichrome wire, IP camera.

Software:
Arduino IDE: This integrated development environment is used to program the missile triggering module, which is responsible for igniting the nichrome wire connected to the Rocket Candy motor of the missile.
Spyder IDE: This integrated development environment is used to program the intelligent agent in Python together with the YOLO deep learning algorithm.

C. Unmanned Aerial Vehicle

Unmanned aerial vehicles (UAVs) are a category of aerial vehicles that fly without a human pilot on board. Typically, unmanned vehicle systems consist of the aircraft component, payloads and a ground station to control various aspects of the aircraft. The UAV is built with the NAZA M Lite flight controller, which lets us test our missile guidance system aerially in real time.

Fig. 2. Quadcopter System Architecture

A quadcopter has a total of 4 motors fixed on a symmetric quadcopter frame; in the X configuration every arm is aligned at ninety degrees. Two motors rotate clockwise, while the other two rotate counter-clockwise to create the opposing force required to stay stable. Figure 2 represents the system architecture of the designed quadcopter, showing each component of the quadcopter and how it works.

D. Camera Streaming Module

An Internet (IP) camera is a camera unit that sends and receives data over a local area network (LAN) or over the Internet. Processing the video stream on the UAV itself requires a large on-board processor, which increases the overall system cost considerably. In this project, the use of an IP camera on board the UAV makes it easy to bring the live stream to the ground control station for further video processing.

Network configuration is a comparatively simple process for most devices; generally the setup is as easy as connecting to the Wi-Fi network. Some camera models require a basic understanding of Internet technology to get them running, but most can be used as plug-and-play devices. Most camera modules nowadays come with their own setup and connection tools, and integrating such cameras has become easy thanks to the documentation they provide.

Fig. 3. Camera calibration setup, showing the defined axes, the target, and the central point of the captured image

E. Video Processing and Object Detection

The video stream obtained using the camera module is the input to the object detection module, which in this case is YOLO. Object detection consists in identifying the locations in the image frame at which certain objects are present, as well as labeling those objects. Alternative methods, like R-CNN and its variants, use a multi-stage pipeline to perform this task in several steps. This makes them slow and hard to optimize, because every component must be trained separately. The biggest benefit of using YOLO is its speed: it can process 45 frames per second with ease. When it is


trained on real photos of objects and tested on artistic work, YOLO beats top detection methods like R-CNN by a wide margin in accuracy. In mAP (mean Average Precision) measured at 0.5 Intersection over Union (IoU), YOLO v3 is on par with Focal Loss but about 4x faster. Moreover, accuracy and speed can easily be traded off by altering the overall size of the model. The YOLO algorithm accepts the entire image frame as input at once and predicts the object bounding box coordinates and object class probabilities for these boxes. Camera calibration is used to find the exact location of the object from the image, as shown in Figure 3. Figure 4 shows a comparison of the YOLO algorithm with other algorithms.
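The decoding step described above can be sketched in a few lines. The snippet below is illustrative only: the output tensor is random dummy data standing in for the real network's output, and the grid size, class count and threshold are assumptions. It applies the logistic (sigmoid) function to the objectness score, uses independent logistic classifiers for the class scores, and ranks the surviving boxes by their associated probabilities.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_grid(output, conf_threshold=0.5):
    """Decode an S x S x (5 + C) YOLO-style output map.

    Each grid cell predicts one box: (tx, ty, tw, th, objectness)
    followed by C class scores. Returns detections sorted by confidence.
    """
    S, _, depth = output.shape
    C = depth - 5
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            objectness = sigmoid(cell[4])        # logistic on objectness score
            class_probs = sigmoid(cell[5:])      # independent logistic classifiers
            class_id = int(np.argmax(class_probs))
            confidence = objectness * class_probs[class_id]
            if confidence >= conf_threshold:
                # box center is an offset from the cell's top-left corner,
                # normalized to [0, 1] image coordinates
                cx = (col + sigmoid(cell[0])) / S
                cy = (row + sigmoid(cell[1])) / S
                detections.append((confidence, class_id, cx, cy))
    # rank boxes by their associated probabilities, highest first
    return sorted(detections, key=lambda d: d[0], reverse=True)

rng = np.random.default_rng(0)
out = rng.normal(size=(7, 7, 5 + 3))    # dummy 7x7 grid, 3 classes
dets = decode_grid(out, conf_threshold=0.5)
for conf, cls, cx, cy in dets[:3]:
    print(f"class {cls} conf {conf:.2f} at ({cx:.2f}, {cy:.2f})")
```

A real YOLO v3 head predicts several boxes per cell at three scales and follows this decoding with non-maximum suppression, but the per-cell logistic decoding and confidence ranking are the same in spirit.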

Fig. 4. YOLO vs. other algorithms

F. Rocket Motor Ignition Module

Arduino is a widely used open-source electronics platform based on easy-to-use hardware and software. Arduino boards can read inputs from various sensors and react to those inputs by producing output signals; they are easily programmed using the open-source Arduino IDE. The missile prototype, which carries the rocket fuel, is ignited by means of this triggering module, which is based on an Arduino microcontroller. The estimated target coordinates, if any, obtained by our YOLO algorithm are sent to this module. The system uses a nichrome-metal-based ignition system: after receiving the coordinates from the feature extraction module, the Arduino sends a high signal to a relay, which then ignites the nichrome wire connected to the rocket fuel. Figure 5 shows the pinout for the rocket motor ignition module.

Fig. 5. Rocket Motor Ignition Module Pinout

G. System Architecture

Figure 6 represents the overall system architecture of the proposed system, showing each component of the system, how the system works, and its flow. The video stream is taken from the IP camera and goes through pre-processing stages that enhance the features of the video stream. The processed video frames are then passed to the object detection module, which in our case is the YOLO algorithm itself. YOLO analyzes every frame obtained and tries to detect objects using our trained model. YOLO then passes the detected objects to the intelligent agent, which checks the tagged labels and, based on their priorities, signals the rocket motor ignition module.

Fig. 6. Block diagram of system architecture
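The agent step of this pipeline can be sketched as follows. This is a minimal illustration, not the system's actual code: the priority table, the label names and the `signal_ignition` stub are hypothetical, and the real agent would receive detections from YOLO and drive the Arduino relay rather than return a string.

```python
# Hypothetical priority table: a higher number means a higher-risk target.
PRIORITIES = {"tank": 3, "aircraft": 2, "truck": 1}

def choose_target(detections, priorities=PRIORITIES):
    """Pick the highest-priority detected object; break ties by confidence.

    `detections` is a list of (label, confidence, box) tuples, as they
    would arrive from the object detection module.
    """
    known = [d for d in detections if d[0] in priorities]
    if not known:
        return None  # no prioritized target in this frame
    return max(known, key=lambda d: (priorities[d[0]], d[1]))

def signal_ignition(target):
    """Stub for the command sent to the rocket motor ignition module."""
    label, conf, box = target
    return f"IGNITE target={label} conf={conf:.2f} box={box}"

detections = [("truck", 0.91, (120, 80, 60, 40)),
              ("tank", 0.77, (300, 200, 90, 55)),
              ("person", 0.88, (10, 10, 20, 50))]
target = choose_target(detections)
if target is not None:
    print(signal_ignition(target))  # tank outranks truck despite lower confidence
```

Sorting by priority first and confidence second means the agent engages the highest-risk class present, and only uses the detector's confidence to choose among targets of the same class.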


IV. IMPLEMENTATION OF THE INTELLIGENT AGENT

Here we discuss how the intelligent agent is implemented, from training to testing and feature extraction.

A. Training of System

Every deep learning task needs training of the system, which in turn requires a dataset to work on. The system is implemented using images from Google's OpenImagesV4 dataset [7] and the publicly available COCO dataset. It is a huge dataset with almost 500 classes of objects and their labels, and it also contains bounding box annotations for these objects. The open-source code, named darknet, is a neural network framework written in CUDA and C.

B. Train-test split

As with every machine learning algorithm, the data must be split into a training set and a test set to evaluate the results.
1. Training set: a random part of the data from our dataset used to train the model. Depending on the requirement, the algorithm randomly chooses 70-90% of the data for this set.
2. Test set: a random part of the data from our dataset used to test the model. Depending on the requirement, the algorithm randomly chooses 10-30% of the data for this set.

Fig. 7. Prototype of UAV

C. Data Annotation

Data annotation is a technique used in machine learning and computer vision to label data in such a way that a machine can understand it. This step is usually done by humans using data annotation software built to store the huge amount of data generated. The bounding box, the most commonly used technique for image annotation, highlights an object in the image to make it recognizable to machines, training them to learn from these data and give a relevant output. The annotated images are used as datasets in machine learning when building an AI-based model that can use the deep learning process to help humans perform various tasks without human intervention.

Fig. 8. Prototype of Guidance System

D. Feature Extraction and Recognition

We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box. It also predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in it. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision. Our system divides the input image into an S x S grid. If the center of an object falls into a grid cell, that grid cell is responsible for detecting that object. Prototypes of the UAV and the missile guidance system were developed for testing the proposed approach, as shown in Figure 7 and Figure 8.

V. RESULTS

Autonomous missile guidance systems can be helpful in modern-day wars to minimize unwanted destruction. The proposed system aims to lower this destruction by providing an easy-to-train intelligent agent which chooses the target to hit without any human interaction. The proposed methodology is based on a priority-trained intelligent agent. The system processes the video frames and tries to detect the objects in real time. The detected objects are then passed to the intelligent agent, which chooses the high-risk target to hit based on the priorities given to it during training and object labelling. Figure 9 shows an example of an object detected using the proposed approach and the developed object tracking system for UAVs.

VI. FUTURE SCOPE

In order to achieve more accuracy in detecting objects from a distance, ZigBee can be used instead of Wi-Fi. ZigBee's high accuracy and quick response can make this system powerful.
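The grid-cell assignment rule from Section IV-D (the cell containing an object's center is responsible for that object) can be sketched in a few lines; the image size and box center used here are illustrative values, not data from the system.

```python
def responsible_cell(box_center, image_size, S=7):
    """Return the (row, col) of the S x S grid cell containing the box center.

    box_center: (x, y) in pixels; image_size: (width, height) in pixels.
    """
    x, y = box_center
    width, height = image_size
    col = min(int(x / width * S), S - 1)   # clamp centers on the right/bottom edge
    row = min(int(y / height * S), S - 1)
    return row, col

# An object centered at (208, 104) in a 416x416 image falls in cell (1, 3)
print(responsible_cell((208, 104), (416, 416)))  # prints: (1, 3)
```

During training, only the responsible cell's predictions are penalized for missing that object, which is what lets a single network divide the detection work across the grid.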


VII. CONCLUSION

The Artificial Intelligence based Missile Guidance System was successfully executed using deep learning and artificial intelligence. The method takes a video stream as input, detects the objects in each video frame in real time, and then decides on one high-risk target among many to hit.

Fig. 9. Object Detection Frame

REFERENCES

[1] Jangwon Lee, Jingya Wang and David Crandall. Real-time object detection for unmanned aerial vehicles based on cloud-based convolutional neural network. International Research Journal of Engineering and Technology, 2018.

[2] Ben Itzstein, Kit Axelrod and Michael West. A self-targeting missile system using computer vision. International Journal of Innovative Research in Science and Engineering and Technology, 02, 2016.

[3] Pengfei Zhu, Longyin Wen, Xiao Bian, Haibin Ling and Qinghua Hu. Vision meets drones: A challenge. International Journal of Computer Science and Network, 04, 2018.

[4] V. Krishnabrahmam, K. N. Swamy and N. Bharadwaj. Guided missile with intelligent agent. Defence Science Journal, 50(1):25-30, 2009.

[5] Yuanwei Wu, Yao Sui and Guanghui Wang. Vision-based real-time aerial object localization and tracking for UAV sensing system. International Journal of Innovative Research in Computer and Communication Engineering, 07, 2018.

[6] Joseph Redmon, Santosh Divvala, Ross Girshick and Ali Farhadi. You Only Look Once: Unified, real-time object detection. arXiv:1506.02640v5 [cs.CV], 9 May 2016.

[7] Google Open Images Dataset, https://opensource.google/projects/open-images-dataset

[8] Jangwon Lee, Jingya Wang, David Crandall, Selma Sabanovic and Geoffrey Fox. Real-time object detection for unmanned aerial vehicles based on cloud-based convolutional neural networks. School of Informatics and Computing, Indiana University, Bloomington, IN 47408, USA.

[9] Arvind Rajagopalan, Farhan A. Faruqi and D. (Nanda) Nandagopal. Intelligent missile guidance using artificial neural networks. Artificial Intelligence Research, 4(1), 2015.

[10] Kit Axelrod, Ben Itzstein and Michael West. A self-targeting missile system using computer vision. Experimental Robotics Major Project, University of Sydney.

[11] Qiang Gao, Yijie Zou, Jianhua Zhang, Sheng Liu, Zhen Xie and Shengyong Chen. Missile vision guidance based on adaptive image filtering. 2015 IEEE International Conference on Information and Automation, DOI: 10.1109/ICInfA.2015.7279500.

[12] Bahaaeldin Gamal Abdelaty, Mohamed Abdallah Soliman and Ahmed Nasr Ouda. Reducing human effort of the optical tracking of anti-tank guided missile targets via embedded tracking system design. American Journal of Artificial Intelligence, 2(2):30-35, 2018. doi: 10.11648/j.ajai.20180202.13.

