Artificial Intelligence Based Missile Guidance System: Darshan Diwani, Archana Chougule, Debajyoti Mukhopadhyay
The object bounding box can come out larger than the grid cell itself in most cases. Because of this, we reframe object detection as a single regression problem, straight from image pixels to bounding box coordinates and class probabilities. A single CNN predicts N bounding boxes for detected objects in parallel, along with a matrix of class probabilities for those N boxes. The YOLO algorithm is trained on full images and directly optimizes overall detection performance. This consolidated module has substantial benefits over traditional object detection methods. First, YOLO is extremely fast. Since object detection is reframed as a single regression problem, no complicated pipeline is needed: at test time we simply run our convolutional network on an unseen image to predict detections. This lets us process streaming video frames directly, with latency below 30 milliseconds. Second, YOLO reasons globally about the image when making predictions. Sliding-window algorithms and region-proposal-based methods divide the original image into smaller pieces before processing; YOLO, by contrast, sees the entire image during both training and testing, so it implicitly encodes contextual information about objects and their appearance. Fast R-CNN, a top-performing object detection method, mistakes background patches in an image for objects because it cannot see the larger context; YOLO makes about 50% fewer background errors than Fast R-CNN. YOLO also learns generalizable representations of objects and their labels, so it produces few or no errors when applied to unexpected images.
YOLO divides the entire image into an N x N grid and then predicts the bounding boxes to be drawn around detected objects, together with a probability for each of these object regions; the probability map is obtained using logistic regression. The predicted bounding boxes are then ranked by the probabilities associated with their object regions. YOLO uses independent logistic classifiers for class prediction, i.e., to assign each detected object its label. The algorithm accepts the entire image frame as input at once and predicts bounding box coordinates and class probabilities for these boxes. The working of YOLO is shown in Figure 1.

II. LITERATURE SURVEY

Artificial neural networks are a preferred choice for intelligent missile guidance, including proportional navigation guidance [9]. Using video streams to acquire and locate targets can reduce cost compared with active sensors. One such technique presents initial results for a system that combines visual sensing to locate and point at the target [10]. The system uses foreground segmentation to identify new objects or elements in a video stream frame, then applies Speeded Up Robust Features (SURF) detection and a LAN-based IP camera module to determine their position in the real world [1]. Based on an analysis of the toy rocket's flight attributes, the system can then generate the altitude and launch angles that allow a toy missile to intercept the target. A real-time, state-of-the-art technique for detecting and tracking objects in video stream frames has been developed by combining object detection and tracking in a dynamic Kalman model [3]. At the detection step, the object of interest is automatically detected from a saliency map computed from the image background cue at each frame; at the tracking stage, a Kalman filter produces a raw prediction of the object state, which is then refined by an on-board detector combined with the saliency map and the motion information between two successive frames of the video stream [4]. Compared with existing methods, this approach requires no manual initialization for tracking, runs much faster than trackers of the same category, and achieves competitive performance on a large number of image sequences; comprehensive analysis illustrates its effectiveness and strong performance. Real-time, state-of-the-art object detection is essential for a wide range of Unmanned Aerial Vehicle (UAV) applications such as exploration and surveillance, search-and-rescue, and infrastructure survey. In recent years, Convolutional Neural Networks (CNNs) have emerged as a prominent class of techniques for identifying image content and are widely considered in the computer vision community to be the standard approach for most such problems [2], [8]. However, CNN-based object detection algorithms are too complex to run on ordinary processors and are exceedingly computationally demanding: they typically require high-powered Graphics Processing Units (GPUs), which draw considerable power and add weight, a serious constraint for a lightweight, low-cost drone. It is therefore preferable to move the computation to an off-board computing cloud. We then apply R-CNN to detect hundreds of objects in real time.

III. DESIGN AND SETUP

This section describes the design of the missile guidance system: the hardware and software used, the quadcopter system used for streaming, the object detection technique, the rocket motor ignition method, and the overall system architecture.

A. Scope of Work

The goal is to design a prototype missile guidance system that autonomously selects targets to hit based on the priorities given to it. The guidance system will be tested using a UAV (Unmanned Aerial Vehicle) equipped with toy missiles and will be designed using the YOLO algorithm. The UAV will be built on top of a Naza flight controller, carrying an IP camera as its payload that streams real-time video to the ground station where our intelligent agent runs.
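The YOLO scoring scheme described above (independent logistic classifiers per class, and boxes ranked by their associated probabilities) can be illustrated with a minimal Python sketch. The box tuple format and the confidence threshold below are illustrative assumptions, not the exact Darknet interface.

```python
import math

def sigmoid(x):
    # Independent logistic activation, as YOLO uses for objectness and
    # per-class scores (no softmax across classes).
    return 1.0 / (1.0 + math.exp(-x))

def rank_detections(raw_boxes, conf_threshold=0.5):
    """raw_boxes: list of (x, y, w, h, objectness_logit, class_logits).

    Returns (score, class_index, box) tuples, highest score first.
    """
    detections = []
    for (x, y, w, h, obj_logit, class_logits) in raw_boxes:
        objectness = sigmoid(obj_logit)
        if objectness < conf_threshold:
            continue  # discard low-confidence object regions
        # Each class gets its own independent logistic classifier.
        class_scores = [sigmoid(l) for l in class_logits]
        best = max(range(len(class_scores)), key=lambda i: class_scores[i])
        detections.append((objectness * class_scores[best], best, (x, y, w, h)))
    # Boxes are then arranged by their associated probabilities.
    detections.sort(key=lambda d: d[0], reverse=True)
    return detections
```

A real pipeline would follow this ranking with non-maximum suppression to drop overlapping boxes.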
2020 7th International Conference on Signal Processing and Integrated Networks (SPIN)
B. Hardware and Software Used

We use YOLO v3 to detect objects in the image frames received from the camera streaming module carried by our UAV. The objects detected in each frame are analyzed against the priorities given to our intelligent agent, which then chooses the targets to hit, if any. The missile prototype is built using Rocket Candy (R-Candy) propellant. A spark plug is connected to a NodeMCU/Arduino, which ignites the missile on command from the intelligent agent.
Hardware: F450 quadcopter frame, 2200 mAh 4S lithium polymer battery, 4 x 930KV brushless DC motors, 4 x SimonK electronic speed controllers, Arduino Mega, nichrome wire, IP camera.
Software:
Arduino IDE: This integrated development environment is used to program the missile triggering module, which is responsible for igniting the nichrome wire connected to the missile's Rocket Candy motor.
Spyder IDE: This integrated development environment is used to program the intelligent agent in Python, together with the YOLO deep learning algorithm.

C. Unmanned Aerial Vehicle

Unmanned aerial vehicles (UAVs) are a category of aerial vehicles that fly without a human pilot on board. Typically, an unmanned vehicle system consists of the aircraft, its payloads, and a ground station that controls various aspects of the aircraft. Our UAV is built around the NAZA-M Lite flight controller, which lets us test the missile guidance system aerially in real time. The system architecture of the designed quadcopter, showing each of its components and how the quadcopter works, is presented in a separate figure.

D. Camera Streaming Module

An IP camera is a camera unit that sends and receives data over a local area network (LAN) or over the internet. Processing the video stream on the UAV itself would require a large on-board processor, which increases overall system cost considerably. In this project, using an IP camera on board the UAV makes it easy to deliver the live stream to the ground control station for further video processing.
Network configuration is a comparatively simple process for most devices; generally, setup is as easy as connecting to the Wi-Fi network. Some camera models require a basic understanding of internet technology to get them running, but most can be used as plug-and-play devices. Most camera modules nowadays come with their own setup and connection tools, and integrating such cameras has become easy thanks to the documentation they provide.
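On the ground-station side, the camera streaming module can be read with OpenCV's `VideoCapture`, which accepts a network stream URL. This is a minimal sketch assuming an MJPEG-over-HTTP camera; the `/video` path, default port, and address are illustrative assumptions, so consult the camera's own documentation.

```python
def mjpeg_url(host, port=8080, user=None, password=None):
    # Build the HTTP stream URL for a typical LAN IP camera.
    # The "/video" path is an assumption -- check your camera's manual.
    auth = f"{user}:{password}@" if user and password else ""
    return f"http://{auth}{host}:{port}/video"

def read_stream(url):
    """Yield frames from the camera; requires OpenCV (opencv-python)."""
    import cv2  # imported here so the URL helper stays dependency-free
    cap = cv2.VideoCapture(url)
    try:
        while cap.isOpened():
            ok, frame = cap.read()  # one BGR frame from the stream
            if not ok:
                break
            yield frame  # ready to hand to the detection module
    finally:
        cap.release()
```

Usage would look like `for frame in read_stream(mjpeg_url("192.168.1.10")): ...`, with each frame passed on to the YOLO detector.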
Fig. 4. YOLO V/s Others

F. Rocket Motor Ignition Module

Arduino is a widely used open-source electronics platform based on easy-to-use hardware and software. Arduino boards can read inputs from various sensors and react to those inputs by emitting output signals, and they are easily programmed using the open-source Arduino IDE. The missile prototype, which carries the rocket fuel, is ignited by this triggering module, built around an Arduino-based microcontroller. Target estimates produced by our YOLO algorithm, if any, are sent to this module. The system uses a nichrome-wire ignition system: after receiving the coordinates from the feature extraction module, the Arduino sends a high signal to a relay, which then heats the nichrome wire connected to the rocket fuel. Figure 5 shows the pinout of the rocket motor ignition module.

G. System Architecture

Figure 6 presents the overall system architecture of the proposed system, showing each component, how the system works, and the flow through it. The video stream is taken from the IP camera and passes through pre-processing stages that enhance the features of the video stream. The processed video frames are then passed to the object detection module, which in our case is the YOLO algorithm itself. YOLO analyzes every frame and tries to detect objects using our trained model, then passes the detected objects to the intelligent agent, which checks the tagged labels and, based on their priorities, signals the rocket motor ignition module.
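The agent-side flow just described, where detections are checked against their priorities before the ignition module is signalled, can be sketched as follows. The priority table, serial port name, baud rate, and `IGNITE` command are illustrative assumptions rather than the exact protocol used by the system.

```python
# Illustrative priority table: lower rank = higher-risk target.
PRIORITIES = {"tank": 0, "truck": 1, "car": 2}

def select_target(detections, priorities=PRIORITIES):
    """detections: (label, confidence, box) tuples from the detector.

    Returns the highest-priority detection, ties broken by confidence,
    or None when no detected label is in the priority table.
    """
    candidates = [d for d in detections if d[0] in priorities]
    if not candidates:
        return None
    return min(candidates, key=lambda d: (priorities[d[0]], -d[1]))

def signal_ignition(port="/dev/ttyUSB0", baud=9600):
    """Send the (assumed) ignition command to the Arduino; needs pyserial."""
    import serial
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(b"IGNITE\n")  # relay then heats the nichrome wire
```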
IV. IMPLEMENTATION OF THE INTELLIGENT AGENT

Here we discuss how our intelligent agent is implemented, from training through testing and feature extraction.

A. Training of System

Every deep learning task needs system training, which in turn requires a dataset. The system is implemented using images from Google's Open Images V4 dataset [7] and the publicly available COCO dataset. This is a huge dataset with almost 500 object classes and their labels, and it also contains bounding box annotations for these objects. The open-source code, named Darknet, is a neural network framework written in CUDA and C.

B. Train-test split

As with every machine learning algorithm, the data must be split into a training set and a test set in order to evaluate results.
1. Training set: a random portion of the dataset used to train the model. Depending on requirements, the algorithm randomly chooses 70-90% of the data for this set.
2. Test set: a random portion of the dataset used to test the model. Depending on requirements, the algorithm randomly chooses 10-30% of the data for this set.

Fig. 7. Prototype of UAV

C. Data Annotation

Data annotation is a technique used in machine learning and computer vision to label data in such a way that a machine can understand it. This step is usually performed by humans using data annotation software designed to handle the huge amount of data generated. The bounding box, the most commonly used image annotation technique, highlights an object in the image to make it recognizable to machines, which are trained to learn from these data and give relevant output. The annotated images are used as datasets when building an AI-based model that can apply the deep learning process on its own, helping humans perform various tasks without human intervention.

Fig. 8. Prototype of Guidance System

D. Feature Extraction and Recognition

We unify the separate components of object detection into a single neural network. Our network uses features from the entire image to predict each bounding box, and it predicts all bounding boxes across all classes for an image simultaneously. This means our network reasons globally about the full image and all the objects in it. The YOLO design enables end-to-end training and real-time speeds while maintaining high average precision. Our system divides the input image into an S * S grid; if the center of an object falls into a grid cell, that grid cell is responsible for detecting that object. Prototypes of the UAV and the missile guidance system were developed to test the proposed approach, as shown in Figures 7 and 8.

V. RESULTS

Autonomous missile guidance systems can be helpful in modern warfare to minimize unwanted destruction. The proposed system aims to lower this destruction by providing an easy-to-train intelligent agent that chooses the target to hit without any human intervention. The proposed methodology is based on a priority-trained intelligent agent. The system processes the video frames and detects objects in real time; the detected objects are then passed to the intelligent agent, which chooses the highest-risk target to hit based on the priorities given to it during training and object labelling. Figure 9 shows an example of an object detected using the proposed approach and the developed object tracking system for UAVs.

VI. FUTURE SCOPE

To achieve more accuracy in detecting objects from a distance, ZigBee could be used instead of Wi-Fi. ZigBee's high accuracy and quick response could make this system more powerful.
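The S * S grid responsibility rule described in Section IV-D can be sketched in Python. The grid size S = 7 and the 448-pixel image side used in the test are illustrative choices, not values stated by the system.

```python
def responsible_cell(cx, cy, img_w, img_h, S=7):
    """Return (row, col) of the grid cell containing the object's center.

    The image is divided into an S x S grid; the cell into which the
    object's center falls is responsible for detecting that object.
    """
    # min(..., S - 1) keeps a center on the far edge inside the grid.
    col = min(int(cx * S / img_w), S - 1)
    row = min(int(cy * S / img_h), S - 1)
    return row, col
```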