Object Detection in UAVs
International Journal of Trend in Scientific Research and Development (IJTSRD), Volume 6, Issue 3, March-April 2022. Available online: www.ijtsrd.com, e-ISSN: 2456-6470
1Coordinator, 2Assistant, 3Student
1,2,3Department of Information Technology, Niranjana Majithia College of Commerce, Mumbai, Maharashtra, India
INTRODUCTION
The Project A.I DRONE covers both robotics and A.I development and usage. The project intends to combine pathfinding algorithms, and to develop new, more efficient ones from them, to be applied in the fields of mapping, traversing, and examining data. The idea of a fully automated flying device has eluded generations of minds, yet it promises the gift of flight which we desire, and we can fulfil this desire by seeing through the eyes of the drones. The idea of a flying machine is not a new one, but it has always been an intriguing one. To create a machine that is capable of automated flight is our ambition and goal; the vast range of applications of the same is nothing but a bonus to us developers.
[1] With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. [2] There is undoubtedly hype around drones and their applications for private and professional users. Based on a brief overview of the development of the drone industry in recent years, this article examines the co-evolution of drone technology and the entrepreneurial activity linked to it.
The project also includes construction of a drone that is completely controlled by A.I. The drone will have collision detection and on-board stabilization for smooth functioning, as well as speed detection and basic awareness of its surroundings, which will be given by the user to the system. The drone will be implemented with different operation modes, which will include (a sketch of the mode dispatch follows this list):
A.I mode (the drone takes commands directly from the A.I)
Manual mode (the drone takes commands from a user)
Follow mode (the drone follows the user)
Hover/Idle (when no commands are coming in)
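In the flight software these modes map naturally onto a small state machine that selects where the next command comes from. The following minimal Python sketch illustrates one way this dispatch could look; every name in it (OperationMode, the drone's ai_planner, radio, tracker, and stabilizer attributes) is a hypothetical illustration, not an existing API:

    # Hypothetical mode dispatch; none of these names come from a real library.
    from enum import Enum, auto

    class OperationMode(Enum):
        AI = auto()       # commands come directly from the A.I
        MANUAL = auto()   # commands come from the user
        FOLLOW = auto()   # the drone follows the user
        HOVER = auto()    # no commands are coming in

    def next_command(drone, mode):
        """Select the command source for the current control cycle."""
        if mode is OperationMode.AI:
            return drone.ai_planner.next_command()
        if mode is OperationMode.MANUAL:
            return drone.radio.last_user_command()
        if mode is OperationMode.FOLLOW:
            return drone.tracker.command_towards_user()
        return drone.stabilizer.hold_position()  # HOVER/idle fallback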
The amalgamation of the two should give an A.I which can map the data and find the shortest and fastest node points to reach the final state: a drone that can be used in surveillance without requiring a user to be present, and that can also find the shortest path to its destination on its own.
[23] Many UAV studies have tried to detect and track certain types of objects, such as vehicles [7, 8], people including moving pedestrians [9, 10], and landmarks for autonomous navigation and landing [11, 12], in real time. However, only a few consider detecting multiple objects [13], despite the fact that detecting multiple target objects is obviously important for many applications of UAVs. In our view, the main reasons for this gap between application needs and technical capabilities are three practical but critical limitations: (1) object recognition algorithms often need to be hand-tuned to particular object and context types; (2) it is difficult to build and store a variety of target object models, especially when the objects are diverse in appearance; and (3) real-time object detection demands high computing power even to detect single objects, much less when many target objects are involved.
There are plenty of drone kits and AI packages for object detection available on the market, and the combined application of both is also seen. The most prominent AI for object detection is [1] the OpenCV object detector, which is open source and free to use for educational purposes. The same also served as inspiration for the problem of spatial awareness for the drone, which we encountered at the start. Its application is easier said than done: integrating the different types of systems available into a stable program is a completely different endeavour. Similarly, many small I/O-based drones are available at very cheap prices (though they are very one-dimensional in application and fragile in build); such drones are, simply put, just input and output devices with no AI in control, and instead depend on human input to function. They do have some prominent features, including camera and audio recording and transmission, which give the user a perception and sense of the environment in which the drone is operating. Pathfinding algorithms that work well for grid mapping and for finding the shortest path between two nodes or points include (see the sketch after this list):
Dijkstra's Algorithm
DFS Algorithm
BFS Algorithm
Greedy Best-First Algorithm
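On a uniform-cost occupancy grid these algorithms are closely related: Dijkstra's algorithm reduces to breadth-first search when every move has the same cost, so BFS already returns a shortest path between two cells. A minimal, self-contained sketch in Python (the grid and coordinates are illustrative):

    from collections import deque

    def bfs_shortest_path(grid, start, goal):
        """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
        rows, cols = len(grid), len(grid[0])
        parent = {start: None}          # doubles as the visited set
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:            # walk the parent chain back to start
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parent[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                    parent[nxt] = cell
                    queue.append(nxt)
        return None                     # no route exists

    # Example: route around a wall on a 3x3 map.
    grid = [[0, 1, 0],
            [0, 1, 0],
            [0, 0, 0]]
    print(bfs_shortest_path(grid, (0, 0), (0, 2)))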
The number of unmanned aerial vehicles (UAVs) is growing rapidly: in the US alone, approximately 3.55 million small UAVs were expected to be deployed for consumer use by 2020. [6] Artificial intelligence and drones are a match made in tech heaven. Pairing the real-time machine learning technology of AI with the exploratory abilities of unmanned drones gives ground-level operators a human-like eye in the sky. More than ever before, drones play key problem-solving roles in a variety of sectors, including defense, agriculture, natural disaster relief, security, and construction. With their ability to increase efficiency and improve safety, drones have become important tools for everyone from firefighters to farmers, and thanks to artificial intelligence software, drones can now process what they see and report back in real time. Below are six companies that install AI technology in drones:
DroneSense
Neurala
Scale
Skycatch
Applied Aeronautics
AeroVironment
Fig no. 1: Unmanned aerial vehicle
LITERATURE REVIEW
Recent work on UAV-based vehicle detection appears to focus largely on the detection of moving vehicles [14], [15], [16] or the specific tracking of identified ground objects [17]. The work of [14] uses an approach based on identifying consistently moving subsets of edges within an overall flight sequence as a moving vehicle, using a graph-cuts-driven technique. Previously, [15] followed a similar methodology through the use of camera motion estimation and Kalman-filter-based tracking of a moving object within the scene, but extended over optical/IR sensing. In [16] the authors present an approach based on layered segmentation and background stabilization combined with real-time tracking, which then leads to the classification of identified moving objects as {people | vehicle} based on [18]. The more general work of [17] makes use of the classical mean-shift
tracking approach to track generic ground-object descriptors, including but not limited to vehicles, from a UAV image sequence, but does not explicitly tackle the initial object detection problem. In all of these cases [14], [15], [16] the detection of vehicles is primarily driven by the isolation of a moving component from the overall scene. By contrast, recent work in people detection [19] investigates the problem of people detection in UAV aerial imagery independently of movement, using modern classifier approaches [20] aided by multi-spectral (optical/IR) imagery. However, [19] is aided by the IR temperature characteristics of human bodies, which cannot be readily relied upon in the vehicle detection case. Overall, work on the specific detection of vehicles, encompassing both static and dynamic vehicles, within UAV imagery is limited, and current work specifically addressing this problem [21] relies upon auxiliary scene information and additional thermal/IR sensing. Here we present an approach applying the object detection methodology of [22] to the detection of both static and dynamic vehicles using only an optical camera, based on a perspective viewpoint from a medium-level UAV platform.
Proposed System
The following is the system we intend to create and the requirements for the same. The working and application of the A.I are also included. Further improvements and changes can be made if needed.

A. A.I Training
Training data is paramount to the success of any AI model or project. Think of it as garbage in, garbage out: if you train a model with poor-quality data, how can you expect it to perform? You can't, and it won't. Having the right algorithm does not mean we are done; we need to train the A.I with the right data set. If the wrong data set is used, the A.I will not work correctly, and the effectiveness of the A.I is directly related to the quality of the dataset. Once the training of the A.I is completed, we validate it in the next step. In validation we provide the A.I with a new data set and check how effective it is. As with the training phase, you will want to evaluate the results so you can confirm the AI is behaving as expected, and account for any new variables you may not have considered previously; overfitting, if present in the system, can be found at the validation stage. Once validation is completed, we give the A.I a real-world test: a data set without any tags or targets is provided to the A.I, and if the A.I succeeds in making accurate decisions on this unstructured data set, it is ready to go. We can repeat the validation process until we are satisfied with the results the A.I produces.
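As a concrete, minimal sketch of this train / validate / real-world-test flow, using scikit-learn purely as an illustration (the load_dataset helper, the model choice, and the 0.10 accuracy-gap threshold are placeholder assumptions, not part of the actual system):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_dataset()  # hypothetical helper returning features and labels

    # Hold out data the model never sees during training.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

    model = RandomForestClassifier()
    model.fit(X_train, y_train)

    # Validation on fresh labelled data exposes overfitting: high training
    # accuracy combined with much lower validation accuracy.
    train_acc = accuracy_score(y_train, model.predict(X_train))
    val_acc = accuracy_score(y_val, model.predict(X_val))
    if train_acc - val_acc > 0.10:
        print("Likely overfitting; revisit the data set or the model.")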
There are three key points to training AI well:

1. High-quality data
Training an AI requires high-quality data; if you use poor-quality data, or collected data that is not relevant, you will face many problems while performing the task. If the AI does not get high-quality data, it produces undesirable results and can become biased. Even when building advanced, robust models, the reality is that noisy and incomplete data remain the biggest hurdles to effective end-to-end solutions. To avoid these problems there are two main lines on which to focus: 1) clean the data you have, and 2) generate more data to help train the needed models.

2. Accurate data annotation
Data annotation is the process of attaching meaning to data. The process can be manual, but it is usually performed or assisted by software and still requires a human touch. Data annotation is the most important part of data processing for machine learning algorithms, particularly for supervised learning, in which both input and output data are annotated for classification. The annotations can take many forms, such as image, text, video, or audio annotation. AI systems require a massive amount of data to establish a foundation for reliable learning patterns: we need thousands of training images even for a simple application, like a model able to differentiate a dog from a cat.

3. A culture of experiments
Approaching experiments is not an easy task, which is why you should think of ways to facilitate your desired results. Experiments here mean adding things to the insights from the data you have, and planning processes on a test-and-learn basis to see how they respond. AI can make mistakes during training; errors and mistakes are a valuable and normal part of the AI training process. Experiments will help create an AI that is even better and more innovative for your goal.

B. Interfacing AI with UAV
The main challenge in interfacing the command module (a Raspberry Pi 3) with the drone flight controller (a CC3D) is to generate stable PWM signals using the GPIO pins on the RPi3 board. We use Python 3 for general control on the RPi3 board itself. The pigpio library is used to generate PWM signals with pulse widths from 1000 µs to 2000 µs; the flight controller interprets 1000 µs as the minimum input value and 2000 µs as the maximum. The pigpio library lets us run the pigpio daemon, which continuously generates the stream of PWM signals; its output jitter was the lowest compared to other GPIO libraries. A minimal sketch follows.
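The sketch below drives one flight-controller input channel in this way. The pigpio calls (pigpio.pi, set_servo_pulsewidth) are the library's documented interface, but the GPIO pin number and the pulse sequence are illustrative assumptions; the pigpio daemon (pigpiod) must already be running:

    import time
    import pigpio

    THROTTLE_GPIO = 18           # BCM pin wired to a CC3D input channel (assumed)
    MIN_US, MAX_US = 1000, 2000  # pulse widths the flight controller expects

    pi = pigpio.pi()             # connect to the local pigpiod daemon
    if not pi.connected:
        raise RuntimeError("pigpiod is not running")

    pi.set_servo_pulsewidth(THROTTLE_GPIO, MIN_US)  # minimum input / idle
    time.sleep(2)
    pi.set_servo_pulsewidth(THROTTLE_GPIO, 1200)    # a small throttle input
    time.sleep(2)
    pi.set_servo_pulsewidth(THROTTLE_GPIO, 0)       # stop generating pulses
    pi.stop()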
Fig no. 2: Ground control station interface

C. The Object Detection Module
This module is integrated with all the other modules during the functioning of the drone, as it helps the drone stay aware of the environment and take the necessary actions for its own safety. The module identifies objects with the help of a camera and analysis of the video: it detects the different objects in each frame, compares them against the database to recognize each object, and then decides the course of action based on the proximity of the same. The initial module will only react to a small number of objects, as a much larger database and a more powerful AI would be needed for the module to function at a human level.

Fig no. 3: Object detection

OpenCV object detection plays a key role in detecting the various types of objects while flying in mid-air. High-performance on-board image processing and a neural network are used for object detection, classification, and tracking in flight. In AI, computer vision plays a big role in training visual-perception-based machine learning and deep learning models to work in a real-life environment, and artificial intelligence keeps improving with the high-quality training data used while developing the machine learning models.
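As an illustration of this per-frame detect-and-decide loop, the short OpenCV sketch below uses the library's built-in HOG person detector as a stand-in for whichever model the final module ships with; the camera index and the quit key are assumptions:

    import cv2

    # OpenCV's stock HOG descriptor with its pre-trained person detector.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)            # on-board camera stream (assumed index)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect people in the current frame; each box is (x, y, w, h).
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()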
Role of Object Detection in Artificial Intelligence
The main motive for integrating object detection into AI is to create a visual perception model that can assess the situation and perform the task itself, without human intervention, while taking the right decisions. The whole process involves methods of acquiring the datasets, then processing, analyzing, and understanding the digital images in order to utilize them in real-world scenarios. Object detection plays a substantial role in developing machine learning models for different sectors, industries, and fields; from object detection to expression recognition, it provides detailed information about various things to machines around the world. By making useful information available to the machine learning algorithms, it lets them perceive the information precisely, learn from it, and take the right decision for the next action. Object detection helps make AI more and more intelligent, with correct information that a machine can only see when objects are precisely labelled using image annotation techniques. Object detection plays a role in the following fields:
Face Recognition
Video Surveillance
Object Detection
Object Recognition
Medical Imaging
Localization and Mapping
Augmented Reality/Virtual Reality
Human Expressions & Emotional Analysis
Transforming paperwork into digital data

Face recognition is a type of biometric software that maps an individual's facial features mathematically and stores them as a face print. The system uses deep learning techniques to compare a live capture or digital image to the stored face print in order to verify an individual's identity. Once the recognized face matches a stored image, attendance is marked in the corresponding Excel sheet for that person. The other reason for choosing face recognition as the biometric parameter is that this technology reduces physical contact with objects and records, providing the touch-free environment which the whole world is adopting these days. An automated attendance system using a machine learning approach automatically detects and recognizes faces and marks attendance, which saves time and maintains a record of the collected data.
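A hedged sketch of the face-print comparison step, using the open-source face_recognition library as one possible realization (the image file names and the CSV attendance format are placeholder choices, not the system described above):

    import csv
    from datetime import datetime

    import face_recognition

    # Build the stored "face print" from a known reference photo.
    known_image = face_recognition.load_image_file("person.jpg")      # assumed file
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Compare every face in a live capture against the stored encoding.
    capture = face_recognition.load_image_file("capture.jpg")         # assumed file
    for encoding in face_recognition.face_encodings(capture):
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            # Mark attendance with a timestamp in a CSV sheet.
            with open("attendance.csv", "a", newline="") as f:
                csv.writer(f).writerow(["person", datetime.now().isoformat()])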
System Requirements
A. Software
Pycharm
SQLite databases
Librepilot GCS
Pigpio library

B. Hardware requirements
F450 quadcopter drone frame
Raspberry Pi 3B+
CC3D flight controller
A2212/10T brushless motors
ESCs
Camera (3 MP)
2200 mAh 11.1 V 3S LiPo battery and charger
Four 1405R propellers
Barometric sensor

These are the major components, though more or fewer components might be required during the actual assembly of the drone. The intent is to keep the cost to a minimum and create a stable, functioning drone.

CONCLUSIONS
Overall, from the results presented we can see the successful detection of objects. The results suggest that the cloud-based approach could allow speed-ups of nearly an order of magnitude, approaching real-time performance even when detecting objects of various categories. We demonstrated our approach in terms of recognition accuracy and speed, and in a simple target-searching scenario. Our approach enables UAVs, especially lightweight, low-cost consumer UAVs, to use state-of-the-art object detection algorithms. In essence, we have learned that the capabilities of A.I and its development will grow far beyond the present in the coming decade, and we will see more and more automation in day-to-day life. UAVs are being put to everyday use, and with the current lockdown and the COVID-19 pandemic, the need to exclude contact is being fulfilled by robotics and A.I. Being able to create and train new algorithms will help us cast our will onto machines and thus improve the standard of living.

REFERENCES
[1] T. Lee, S. Mckeever, and J. Courtney, "Flying Free: A Research Overview of Deep Learning in Drone Navigation Autonomy." https://www.mdpi.com/2504-446X/5/2/52/pdf
[2] F. Giones and A. Brem, "From toys to tools: The co-evolution of technological and entrepreneurial developments in the drone industry." https://www.sciencedirect.com/science/article/abs/pii/S0007681317301210?via%3Dihub
[3] TELUS International, "How to train AI." https://www.telusinternational.com/articles/how-to-train-ai
[4] T. Sundar Srinivas, T. Goutham, and M. Senthil Kumaran, "Face Recognition based Smart Attendance System Using IoT," International Research Journal of Engineering and Technology (IRJET), vol. 9, no. 3, p. 182, Mar. 2022. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net
[5] P. M. Wyder, Y.-S. Chen, A. J. Lasrado, R. J. Pelles, R. Kwiatkowski, E. O. A. Comas, R. Kennedy, A. Mangla, Z. Huang, X. Hu, Z. Xiong, T. Aharoni, T.-C. Chuang, and H. Lipson, "Autonomous drone hunter operating by deep learning and all-onboard computations in GPS-denied environments." https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0225092
[6] S. Daley, "Fighting Fires and Saving Elephants: How 12 Companies are Using the AI Drone to Solve Big Problems." https://builtin.com/artificial-intelligence/drones-ai-companies
[7] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV," in Proc. 24th International Conference on Unmanned Air Vehicle Systems, pp. 29.1-29.9, 2009.
[8] J. Gleason, A. V. Nefian, X. Bouyssounousse, T. Fong, and G. Bebis, "Vehicle detection from aerial imagery," in Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 2065-2070, 2011.
[9] A. Gaszczak, T. P. Breckon, and J. Han, "Real-time people and vehicle detection from UAV imagery," in Proc. SPIE Conference Intelligent Robots and Computer Vision XXVIII: Algorithms and Techniques, vol. 7878, 2011.
[10] H. Lim and S. N. Sinha, "Monocular localization of a moving person onboard a quadrotor MAV," in Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 2182-2189, 2015.
[11] J. Engel, J. Sturm, and D. Cremers, "Scale-aware navigation of a low-cost quadcopter with a monocular camera," Robotics and Autonomous Systems, vol. 62, no. 11, pp. 1646-1656, 2014.
[12] C. Forster, M. Faessler, F. Fontana, M. Werlberger, and D. Scaramuzza, "Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles," in Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 111-118, 2015.
[13] F. S. Leira, T. A. Johansen, and T. I. Fossen, "Automatic detection, classification and tracking of objects in the ocean surface from UAVs using a thermal camera," in Proc. IEEE Aerospace Conference, pp. 1-10, 2015.
[14] K. Kaaniche, B. Champion, C. Pegard, and P. Vasseur, "A vision algorithm for dynamic detection of moving vehicles with a UAV," in Proc. International Conference on Robotics and Automation, pp. 1878-1883, April 2005.
[15] J. Kang, K. Gajera, I. Cohen, and G. Medioni, "Detection and tracking of moving objects from overlapping EO and IR sensors," in Proc. International Conference on Computer Vision and Pattern Recognition, p. 123, June 2004.
[16] J. Xiao, C. Yang, F. Han, and H. Cheng, "Vehicle and person tracking in aerial videos," in Proc. International Workshop on Multimodal Technologies for Perception of Humans, pp. 203-214, Berlin, Heidelberg: Springer-Verlag, 2008.
[17] H. Helble and S. Cameron, "OATS: Oxford aerial tracking system," Robotics and Autonomous Systems, vol. 55, no. 9, pp. 661-666, 2007.
[18] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, June 2005.
[19] P. Rudol and P. Doherty, "Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery," in Proc. IEEE Aerospace Conference, pp. 1-8, March 2008.
[20] P. Viola and M. J. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I-511 - I-518, 2001.
[21] S. Hinz and U. Stilla, "Car detection in aerial thermal images by local and global evidence accumulation," Pattern Recognition Letters, vol. 27, no. 4, pp. 308-315, 2006.
[22] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.
[23] T. P. Breckon, S. E. Barnes, M. L. Eichner, and K. Wahren, "Autonomous real-time vehicle detection from a medium-level UAV."