Drone-Assisted Outdoor Parking Algorithm For Sedan-Type Self-Driving Vehicles
In Partial Fulfillment
of the Requirements for the Degree
by
Jecan Alicer
Elysa M. Apas
Adrian Kendrick N. Emnacen
Jean Roy C. Lastimosa
Table of Contents
List of Tables iv
List of Figures v
Chapter 1 INTRODUCTION 1
1.1 Background of the Study 1
1.2 Problem Statement 4
1.3 Objectives 4
1.4 Purpose of the Study 5
1.4.1 Self-Driving Car Owners 5
1.4.2 Autonomous Vehicle Industry 5
1.4.3 Drone Technology Industry 5
1.4.4 General Public 6
1.5 Scope and Limitations 6
1.6 Definition of Terms 6
Chapter 2 THEORETICAL BACKGROUND 8
2.1 Theories 8
2.1.1 Web Camera Operation 8
2.1.2 Artificial Intelligence and Machine Learning 10
2.1.3 Computer Vision 11
2.1.4 Hough Line Transform 11
2.1.5 Canny Edge Detection 14
2.1.6 Gaussian Blur 17
2.1.7 Convolutional Neural Networks 19
2.1.8 Scaled-YOLOv4 20
2.1.9 Wi-Fi 22
2.2 Literature Review 23
2.2.1 Autonomous Parking Systems for Self-Driving Cars 23
2.2.2 Parking Availability Recognition and Prediction 24
2.2.3 Drone-Assisted Systems 26
BIBLIOGRAPHY 52
List of Tables
Page
List of Figures
Page
Chapter 1
INTRODUCTION
In this chapter, the researchers discussed the statistics that signify the importance of this study. Self-driving vehicles and their autonomous parking, or algorithm-assisted parking, features were introduced and defined. The researchers presented how the autonomous parking feature of self-driving vehicles may affect traditional car drivers and be exclusive to indoor parking areas only. Through this drone-assisted algorithm, the researchers sought to provide a way for self-driving cars to utilize their autonomous parking feature in outdoor parking areas.
1.1 Background of the Study

Parking lots are widely perceived as safe because of the low speeds at which drivers must travel in them. However, this perception actually makes parking lots far more dangerous than necessary [24]. Every year, parking lots witness a concerning number of accidents, primarily due to human distractions such as texting and social media use. The National Safety Council reports that parking lots witness over 50,000 accidents annually, accounting for 20% of all car crashes. These incidents result in 500 or more fatalities and injure an additional 60,000 individuals each year [25]. The 2021 Metro Manila Accident Reporting and Analysis System (MMARAS) report further sheds light on the specific risks faced by parked vehicles. According to the report, 1,524 incidents were recorded as ‘hit parked vehicle’, indicating a notable vulnerability in these areas [26]. Additionally, hit-and-run incidents in 2021 numbered well over a thousand, further underscoring the critical importance of addressing safety concerns in parking lots. Implementing practical solutions such as clear signage, safety equipment, improved lighting, and routine maintenance is paramount to mitigate the frequency and severity of accidents in parking areas, ensuring the safety of both property and lives.
In recent years, advancements in technology have presented opportunities to address the challenges posed by human-caused accidents on roadways and parking areas. One such innovation is the self-driving car. The global market for self-driving cars was worth about $20.25 billion in 2021. According to expert predictions, the market for self-driving cars will keep growing at an average annual rate of 13.9% during the forecast period [1]. There has been a significant increase in partnerships, collaborations, and investments dedicated to developing self-driving vehicles within the self-driving car market. This growth is expected to continue due to substantial guidance and support from both governments and private sectors in various countries, further expanding automated driving vehicle technology over the projected period. In addition, the most commonly produced and available self-driving cars nowadays are sedan-type; the sedan is expected to be the largest segment in the autonomous vehicle market [27]. Figure 1.1 shows the forecasted market size of self-driving cars by region between the years 2018 and 2030.
Figure 1.1: Self-Driving Cars Market Size, By Region, 2018-2030 [1]
individuals [2].
Concerns have been raised about the potential effects of autonomous parking systems for self-driving cars on the length of time traditional car drivers must wait in parking lines. Because self-driving cars require dedicated parking spaces and time-consuming navigation to designated areas, they may significantly extend the waiting periods for non-autonomous vehicles, thereby hindering their ability to efficiently utilize self-parking features [3]. These dedicated parking spaces are designed for structured indoor parking facilities only, which also makes self-driving cars incapable of utilizing their autonomous parking feature in outdoor parking areas. To make this possible, other sensors or external devices may be incorporated into the system.
1.2 Problem Statement

This study aims to further enhance the autonomous parking capabilities of self-driving vehicles, specifically sedan types. An advanced approach to outdoor parking availability detection and recognition is proposed to achieve convenience, timeliness, and safety.
1.3 Objectives
• To enable the ability to locate a vacant outdoor parking space with image processing.
• To utilize a combination of line and edge detection methods, namely the Hough Line Transform and Canny Edge Detection.
1.4 Purpose of the Study

1.4.1 Self-Driving Car Owners

Self-driving car owners are privileged enough to experience the features of a self-driving car (i.e., automatic parking, obstacle detection). This study offers an additional feature for self-driving car owners to experience: a parking availability locator. It is less time-consuming and more convenient for self-driving car drivers.
1.4.2 Autonomous Vehicle Industry

The designed algorithm will be implemented and integrated into self-driving cars. This improves self-driving cars’ parking capabilities, thereby enhancing the marketability and competitiveness of the autonomous vehicle industry.
1.4.3 Drone Technology Industry

The integration of drones with self-driving cars for parking purposes can create new opportunities for the drone technology industry, giving rise to the development of specialized drone systems and services tailored for autonomous vehicle applications.
1.4.4 General Public
The study has the potential to improve overall road safety by reducing accidents
caused by human error during parking maneuvers. It can also make parking easier and
less time-consuming for the public using self-driving car services.
1.5 Scope and Limitations

This study focuses solely on creating an algorithm for the DJI Mini 2 drone to assist sedan-type self-driving vehicles in parking in outdoor parking areas. This study will not cover other factors that might affect the performance of the drone, such as weather conditions and the drone’s hardware.
1.6 Definition of Terms

Drone: an aerial device; the system’s external device that assists self-driving vehicles in autonomous parking.
Self-driving vehicles: also known as autonomous cars or driverless cars, refer
to vehicles that are capable of traveling and navigating without human intervention.
Sedan: a type of self-driving vehicle, characterized by its four doors, that has an
average length in the range of 180 to 195 inches (4.5 to 5 meters).
Chapter 2
THEORETICAL BACKGROUND
In this chapter, the theoretical basis and review of related literature of drone-assisted autonomous parking systems and technologies for self-driving vehicles were discussed.
2.1 Theories
This section discusses the theories that structured the design and implementation of the proposed algorithm.
2.1.1 Web Camera Operation

Webcams, or “web cameras,” are digital devices that have seamlessly integrated into daily life, playing a pivotal role in capturing and transmitting live video or still images to computers and the internet. The operation of a webcam can be broken down into a few key steps. At the heart of every webcam is an image sensor. A small lens positioned at the front of the camera collects incoming light, and the sensor behind it contains a grid of minuscule light detectors that convert light into an electrical signal. Once collected, the image is processed and transformed into a digital format represented by a series of zeros and ones. This digital format is vital for compatibility with computers and online platforms.
Webcams primarily connect to computers using a USB cable, though some models
can also operate wirelessly via Wi-Fi. The USB connection serves the dual purpose of
providing power to the webcam and facilitating the transfer of digital data from the image
sensor to the computer. Additionally, webcams are accompanied by specialized software
that enables seamless interaction with the computer. This software allows users to control
the camera, adjust settings, and access features like autofocus and exposure modifications
[28].
Beyond the delineations of integrated, wired, and wireless webcams, two distinct categories of webcams come into focus, each characterized by its unique capabilities and intended purposes: IP cameras and network cameras. Network cameras are typically integrated into computers or available as off-the-shelf purchases at personal electronics retailers. Prominent brands such as Microsoft, Logitech, and Razer are synonymous with network cameras, which are primarily tailored to meet short-term usage needs. IP webcams, on the other hand, are designed with a distinctive purpose. Engineered for continuous 24/7 surveillance, these devices frequently boast superior video quality compared to their network camera counterparts. As a result, IP cameras have garnered attention in the realms of security systems, pet monitoring, and applications demanding extended periods of use.
webcams may strike a chord [29].
2.1.2 Artificial Intelligence and Machine Learning

Artificial Intelligence
AI refers to the development of computer systems or other technologies that can carry out tasks which traditionally require human intelligence (e.g., problem solving, understanding human language, making decisions, etc.) [30]. AI has the potential to have a transformational impact on business on a par with prior general-purpose technologies [4]. Through the years, technology has been growing, and AI has been implemented and utilized in growing businesses and industries. AI has witnessed exponential growth and adoption, catalyzed by advancements in machine learning, deep learning, natural language processing, computer vision, and robotics. These developments have fueled the integration of AI into diverse industries, including healthcare, finance, manufacturing, transportation, and entertainment, with the promise of revolutionizing processes, optimizing resource allocation, and unlocking new frontiers of innovation.
Machine Learning
On the other hand, Machine Learning is a subset of AI; it is one of the most exciting recent technologies in artificial intelligence [5]. Through the use of algorithms and statistical models, machine learning enables machines to improve at a certain task by learning from data (e.g., facial recognition, text prediction, product recommendations, predictive analysis, etc.) [31]. This refers to a computer’s capacity to continuously improve performance without requiring humans to explicitly describe how to carry out each task, and it is the most significant general-purpose technology of this time. Machine learning has improved significantly in recent years and is now much more accessible. With this, systems that can learn how to carry out tasks on their own can now be created [4].
2.1.3 Computer Vision

Computer vision is a technology that enables computers to understand the visuals captured by their cameras, in the form of images or videos, the same way humans see and understand through their eyes. It is also a field of artificial intelligence (AI) that extracts useful information from digital photos, videos, and other visual inputs and offers recommendations or actions in response to that information. If AI gives computers the ability to think, computer vision gives them the ability to see, observe, and comprehend [32]. In this study, computer vision is utilized to enable the drone to comprehend and recognize the scenery its camera captures. Analyzing videos and images captured by unmanned aerial vehicles or aerial drones is an emerging application attracting significant attention from researchers in various areas of computer vision [6]. This will allow the drone to recognize and analyze whether what the camera sees is a vacant parking spot or not.
2.1.4 Hough Line Transform

The Hough Line Transform is a computer vision technique that is utilized to detect straight lines in an image [33]. It is used in image analysis, computer vision, and digital image processing. Originally, the Hough Transform was aimed solely at identifying straight lines. The technique was later generalized to also detect other shapes, such as circles and ellipses [34].
Slope-intercept form:

y = mx + b (2.1)

Where:
m = slope of the line
b = y-intercept

For the Hough Line Transform, lines are represented in the polar system:

r = x cos θ + y sin θ (2.2)

Where:
r = perpendicular distance from the origin to the line
θ = angle between the x-axis and the perpendicular drawn from the origin to the line

The angle θ is measured in a counterclockwise direction from the positive direction of the x-axis. It is important to note that the direction of measurement can vary depending on how the coordinate system is defined. Moreover, a more efficient implementation of the Hough Line Transform is the Probabilistic Hough Line Transform; in OpenCV, the Standard Hough Line Transform uses the HoughLines() function and the Probabilistic Hough Line Transform uses the HoughLinesP() function.

In the context of the Hough Line Transform procedure, an essential initial step entails the establishment of a 2D array or accumulator, with an initial value of zero.
Figure 2.1: (r, θ) Line Parameterization [34]
This array functions as a structured data repository for the storage of values associated with two distinct parameters: r and θ. Within this array, the rows are designated to correspond to the r parameter, while the columns are dedicated to the θ parameter. The dimensions of this array are contingent upon the desired level of precision sought in the Hough Transform operation. For instance, when aiming for an angular accuracy of 1 degree, it necessitates the allocation of 180 columns, as the maximum angle for a straight line equals 180 degrees. Concerning the r parameter, its upper limit is contingent upon the diagonal length of the image. Employing a one-pixel accuracy criterion, the number of rows in the array is established to align with the diagonal length of the image.

In this study, the Hough Line Transform will be utilized for the drone to detect straight lines in a parking lot image, in order to identify the area and availability of a certain parking space.
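To make the preceding description concrete, the following is a minimal Python/OpenCV sketch of the Standard Hough Line Transform applied to a single aerial image. The file names, the vote threshold of 120, and the Canny thresholds are illustrative assumptions, not values used in the study.

```python
import cv2
import numpy as np

# Minimal sketch: detect straight parking-lot markings in a single image.
# "parking_lot.jpg" is a hypothetical file name used only for illustration.
img = cv2.imread("parking_lot.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# Each detected line is returned as (r, theta) in the polar parameterization
# r = x*cos(theta) + y*sin(theta) described above.
lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=120)

if lines is not None:
    for r, theta in lines[:, 0]:
        a, b = np.cos(theta), np.sin(theta)
        x0, y0 = a * r, b * r
        # Extend the infinite line far enough to span the image for drawing.
        p1 = (int(x0 + 2000 * (-b)), int(y0 + 2000 * a))
        p2 = (int(x0 - 2000 * (-b)), int(y0 - 2000 * a))
        cv2.line(img, p1, p2, (0, 255, 0), 2)

cv2.imwrite("parking_lot_hough.jpg", img)
```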
2.1.5 Canny Edge Detection
The smoothed image is then filtered to obtain its intensity gradients in both the horizontal (Gx) and vertical (Gy) directions. These gradients represent the first derivative of the image in the horizontal and vertical directions.

Edge gradient for each pixel:

Edge Gradient (G) = √(Gx² + Gy²) (2.3)

Where:
Gx = gradient in the horizontal direction
Gy = gradient in the vertical direction

Angle (θ) = tan⁻¹(Gy / Gx) (2.4)

Where:
θ = resultant gradient’s angle of direction with respect to Gx
Gx = gradient in the horizontal direction
Gy = gradient in the vertical direction
Figure 2.2 shows a representation of how the non-maximum suppression works. A full
scan of the image is done from points C to A to B. The gradient direction is perpendicular
to the edge. Point A is on the edge while points B and C are on the gradient direction.
If a pixel’s intensity gradient is higher than maxVal, it is sure to be an edge. If it is lower than minVal, it is sure to be a non-edge, so it is discarded. Moreover, the intermediary values between these thresholds are categorized as either edges or non-edges based on their connectivity. If they are linked to pixels identified as “sure-edges,” they are acknowledged as part of the edges; otherwise, they are also excluded.
Figure 2.3 shows a sample graph for hysteresis thresholding. In this graph, edge A is identified as a “sure-edge” because its intensity is above maxVal. Even though edge C falls below maxVal, it is still considered a valid edge because it is connected to edge A, forming a complete curve. On the other hand, edge B, although above minVal and in the same region as edge C, is discarded because it is not connected to any “sure-edge.” Selecting appropriate minVal and maxVal values is crucial for obtaining accurate results. This stage also helps eliminate small pixel noises by assuming that edges are long lines [38].
In this study, the Canny Edge Detection method will be utilized to identify and extract lines from the drone’s captured image. All the aforementioned processes are encapsulated into a single function in OpenCV, the cv.Canny() function. The output of this will be used as an input for the Hough Line Transform.
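A minimal sketch of how cv.Canny() might be called in practice is shown below; the file names and the minVal/maxVal pair of 100/200 are illustrative assumptions rather than the study’s tuned settings.

```python
import cv2

# Minimal sketch of Canny edge detection with hysteresis thresholds.
# "parking_lot.jpg" is a hypothetical file name used only for illustration.
gray = cv2.imread("parking_lot.jpg", cv2.IMREAD_GRAYSCALE)

# threshold1 / threshold2 play the roles of minVal / maxVal described above:
# gradients above maxVal are sure edges, below minVal are discarded.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

cv2.imwrite("parking_lot_edges.jpg", edges)
```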
2.1.6 Gaussian Blur
Gaussian blur’s versatility is evident in its applications, including noise reduction, softening edges, and serving as a crucial preprocessing step in computer vision and image recognition. It plays a vital role in privacy protection by obscuring sensitive information and simulating depth of field in photography.
In one dimension, the Gaussian function is:

G(x) = (1 / √(2πσ²)) · e^(−x² / (2σ²)) (2.5)

Where:
x = distance from the origin along the horizontal axis
σ = standard deviation of the Gaussian distribution

In two dimensions, the function is the product of two such Gaussians, one per direction:

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)) (2.6)

Where:
x = distance from the origin along the horizontal axis
y = distance from the origin along the vertical axis
σ = standard deviation of the Gaussian distribution

This mathematical representation ensures a smoother image [39].
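As a brief illustration, the snippet below applies OpenCV’s built-in Gaussian blur, which implements the smoothing described by Equations 2.5 and 2.6; the 5x5 kernel, the sigma of 1.4, and the file names are assumptions chosen only for the example.

```python
import cv2

# Minimal sketch: Gaussian smoothing as a preprocessing step before
# edge detection. Kernel size and sigma are illustrative assumptions.
gray = cv2.imread("parking_lot.jpg", cv2.IMREAD_GRAYSCALE)

# OpenCV builds the convolution kernel from the 2-D Gaussian in Eq. 2.6.
smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)

cv2.imwrite("parking_lot_blurred.jpg", smoothed)
```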
Gaussian blur’s scalability and adaptability are evident in its application to low-end and high-end devices. On low-end devices, it optimizes computation orders and balances feature map sizes, addressing factors such as memory bandwidth and access cost. On high-end devices like GPUs, Gaussian blur contributes to efficient resource utilization through careful consideration of scaling factors and receptive fields. The implementation of Gaussian blur is exemplified in the design of Scaled-YOLOv4, a model setting a new standard in object detection. This model demonstrates efficiency and adaptability across various GPU types, showcasing its prowess in achieving comparable or superior performance. The results obtained from Scaled-YOLOv4 on a V100 GPU highlight its efficiency and performance metrics, solidifying its status as a benchmark in object detection [40].
Gaussian blur stands as a powerful and versatile tool in image processing, offering a nuanced approach to noise reduction, edge smoothing, and overall image enhancement. Its integration within advanced models like Scaled-YOLOv4 underscores its adaptability to various GPU types, positioning it as a benchmark in object detection and emphasizing its significance in the future of computer vision [41].
utilization, enhancing efficiency without compromising speed. Overall, incorporating Gaussian blur into self-driving algorithms proves strategic, aligning with specific requirements, improving safety, and contributing to responsible, reliable autonomous systems [42], [39].
2.1.7 Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are a key tool in Deep Learning, especially in image processing. They distinguish themselves by preserving the spatial connections between pixels in a picture, as opposed to conventional neural networks that flatten images into 1D arrays of pixel values. This spatial integrity enables the preservation of elements such as edges and forms. In color images with several channels (such as red, green, and blue), CNNs address each channel independently while keeping the overall picture structure.
Figure 2.4 shows the architecture of CNNs. These function by applying convolutional filters, which are effectively learned sets of weights applied to pixel values. During training, these filters move over the picture, creating feature maps that emphasize certain elements such as curves or edges. CNNs use the ReLU activation function to increase accuracy and minimize processing complexity. Integrating Convolutional Neural Networks (CNNs) into a drone-assisted parking algorithm for self-driving vehicles is crucial for its success. Drones capture rich visual data of the parking environment, and CNNs excel in processing and recognizing patterns within this data. By automatically extracting relevant features, such as parking spaces, obstacles, and other vehicles, CNNs enable the algorithm to make informed decisions. Their ability to understand spatial hierarchies is vital for interpreting the layout of the parking lot and determining optimal parking spots. Moreover, CNNs offer real-time processing capabilities, adapting to diverse conditions and ensuring timely feedback for the self-driving vehicle. The adaptability of CNNs to variability in lighting and environmental factors, coupled with their high accuracy in computer vision tasks, enhances the overall reliability and effectiveness of the parking algorithm, contributing to safer and more efficient self-driving vehicle parking [44].
Furthermore, CNNs use pooling techniques to efficiently reduce feature map size without sacrificing crucial information, making them indispensable for image-related Deep Learning applications [45].
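For illustration only, the sketch below defines a tiny convolution–ReLU–pooling network in PyTorch of the kind described above; the layer sizes, the 64x64 input, and the two-class (vacant/occupied) head are assumptions and do not represent the network used in this study.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a tiny CNN (convolution -> ReLU -> pooling),
# not the network used in this study.
class TinyParkingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # RGB input, 16 learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling halves the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)     # hypothetical vacant/occupied head

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Example: a batch containing one 64x64 RGB patch.
logits = TinyParkingCNN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```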
2.1.8 Scaled-YOLOv4
Variants such as YOLOv4-CSP include Cross Stage Partial Connections (CSP) to improve information flow inside the model, hence increasing object identification performance even further.
Scaled-YOLOv4 is used in a variety of applications, with its principal purpose being object detection tasks. It is extremely useful in applications requiring exact object recognition and localization, such as surveillance systems, autonomous cars, and robots. It assists with environment awareness in self-driving automobiles by identifying people, vehicles, traffic signs, and road markings. Scaled-YOLOv4 is used in security and surveillance systems for real-time object detection and tracking. It aids in inventory management and consumer tracking in retail, as well as monitoring stock levels and evaluating shopper behavior. It is used in the medical profession to detect anatomical structures and abnormalities in medical pictures such as X-rays and MRIs. It also has uses in agriculture, environmental monitoring, industrial automation, and bespoke item identification, making it a useful tool across a wide range of sectors and use cases [46]. Figure 2.5 shows the architecture of Scaled-YOLOv4.
Incorporating Scaled-YOLOv4 (You Only Look Once version 4) into a drone-assisted parking algorithm for self-driving vehicles is of paramount importance for several key reasons. Renowned for its exceptional accuracy in object detection, YOLOv4 ensures the precise identification of parking spaces, obstacles, and other vehicles, thereby facilitating safe and effective navigation. Its efficiency and real-time processing capabilities are especially crucial in dynamic parking environments, where quick and accurate detection is imperative for timely decision-making by the self-driving vehicle. The single-pass architecture of YOLOv4 further reduces latency, aligning seamlessly with the fast-paced demands of parking scenarios. The model’s incorporation of multi-scale feature extraction is beneficial for detecting objects of various sizes, addressing the diverse scale of elements in parking lots. Scaled-YOLOv4’s adaptability to different scales and robustness to environmental variability make it well-suited for real-world applications where parking conditions may vary. Furthermore, the model’s enhanced generalization capabilities ensure reliable performance across different parking lots and environments, reinforcing its utility in a wide range of practical situations. Overall, leveraging Scaled-YOLOv4 in the parking algorithm significantly enhances the accuracy, efficiency, and adaptability of object detection, empowering self-driving vehicles with the capabilities needed for secure and proficient parking maneuvers in diverse and dynamic settings [7].
2.1.9 Wi-Fi
communications with current Wi-Fi-enabled wireless devices, including wireless routers
and wireless access points [48].
The term ”Wi-Fi” itself refers to the well-known wireless network technology that
makes a WLAN possible. It is known that the technology used on a daily basis, such as
smartphones, laptops, smart TVs, etc., is Wi-Fi capable. This indicates that these gadgets
are compatible with frequencies in the 2.4 GHz and 5 GHz bandwidths. To avoid
interference, these frequencies are different from those that are utilized for phone, TV,
and radio broadcasts [49].
2.2 Literature Review

2.2.1 Autonomous Parking Systems for Self-Driving Cars

The self-driving car industry has made significant advancements during the past decade. Beyond the substantial advancements they contribute to the overall effectiveness, convenience, and safety of roads and transportation networks, these new capabilities will have major global effects that could radically change society [8]. Self-driving cars are equipped with an autonomy system. According to Badue et al. [9], the architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is typically broken down into numerous subsystems that are in charge of functions like self-driving car localization, mapping of static obstacles, tracking of moving obstacles, road mapping, and detection and identification of traffic signals, among others. This made autonomous parking possible for self-driving vehicles.
Table 2.1: Mentioned Methods of Autonomous Parking

Author(s)          Method/Approach
Lee et al. [10]    LiDAR-based
Wang et al. [11]   Bird’s eye view vision system based
Various versions and designs of autonomous parking systems for self-driving cars have been implemented and released, some commercially and some through research. One of these is the LiDAR-based automatic parking system by Lee et al. [10], wherein an HDL-32E LiDAR is utilized to overcome the deficiencies of ultrasonic sensors and cameras. A paper [12] collating a review of literature on automatic parking discusses the different theories involved in various automatic parking system designs. These include visual perception, ultrasonic sensors and radar technology, path planning, control algorithms based on fuzzy theory, neural networks, image processing and recognition technology, and digital signal processing technology. Moreover, Wang et al. [11] introduce an automatic parking method based on a bird’s eye view vision system.
2.2.2 Parking Availability Recognition and Prediction

Locating an available parking space has been an inconvenient and time-wasting duty for any driver, especially considering the growing number of vehicles as time passes by. To provide a solution to this problem, with the help of advanced technologies, parking space availability recognition has been implemented. The recognition of vacant parking spaces is one of the crucial elements for creating a fully automated parking help system. Different approaches to this have been designed and implemented utilizing various devices and algorithms.
Lee and Seo [14] introduced an algorithm for camera-based parking slot availability recognition that was based on slot-context analysis. This algorithm included a slot-validation step that probabilistically identifies multiple slot contexts, offering greater adaptability for irregular patterns. The robustness of the proposed algorithm is demonstrated through simulations and real-world vehicle-level experiments conducted in diverse conditions. Another approach by Caicedo et al. [15] is based on a calibrated discrete choice model for selecting parking alternatives. The paper proposes a methodology for predicting real-time parking space availability in intelligent parking reservation (IPR) architectures—a technology-driven system that enables vehicles to book parking spaces in advance utilizing a variety of digital tools and platforms. This system uses three subroutines to predict parking availability, distribute simulated parking demand, and anticipate future departures. A work by Zheng et al. [16] aims to forecast the availability rate of parking spaces. It aims to provide a solution to the key obstacles to parking availability prediction, which are the accuracy of long-term predictions, the interaction of parking lots in an area, and how user activities affect parking availability. Algorithms for modelling the occupancy rate and for availability prediction (i.e., regression tree, support vector regression, and neural networks) were described.
Table 2.2: Mentioned Methods of Parking Availability Recognition and Prediction

Author(s)            Method/Approach
Lee and Seo [14]     Camera-based slot-context analysis
Caicedo et al. [15]  Based on a calibrated discrete choice model
Zheng et al. [16]    Occupancy rate modelling
The significance of these works is how they contribute to drivers. When parking slot availability information is not disclosed in advance, drivers may move slowly and waste time searching for available on-street parking spaces [17]. Therefore, a system that predicts the availability of parking spaces is essential.
2.2.3 Drone-Assisted Systems

[19] discusses the science, technology, and future of autonomous drones. It states how these drones could have a significant impact on everyday activities including transportation, communication, agriculture, disaster preparedness, and environmental protection. The application of drones to the aforementioned areas will be referred to as drone-assisted systems.
One example of where drones are most widely used is in monitoring or surveillance systems. An intelligent drone-based surveillance system by Sarkar et al. [20] utilizes a DJI Matrice 100 drone with a Zenmuse camera with 7x zoom to capture images, scan the environment, and detect obstacles in real time. This work was conducted with the objective of developing a UAV-based smart parking lot monitoring system with license plate detection, with the utilization of a computer. Drones were also employed in a study by Dasilva et al. [21] as programmable aerial eyes for an effective and economical surveillance system. Illegal parking detection was the main objective of this work, specifically detecting unregistered vehicles that were unauthorized to park in reserved lots at an institutional area (i.e., Barry University). Aside from surveillance systems, an article [22] introduces another drone-assisted system that is primarily used in intelligent transportation systems (ITS).
Table 2.3: Mentioned Drone-Assisted Systems
Chapter 3
RESEARCH METHODOLOGY
In this chapter, the research methods utilized were discussed and elaborated. This includes the system’s block diagram, flowcharts, software and hardware structures, and the description of experiments to test the effectiveness of the proposed algorithm.
Figure 3.1 shows the two-way communication system between the drone’s algorithm and the self-driving AI. During the initiation stage of the device, the microprocessor prioritizes the drone’s flight pattern sent from the self-driving AI’s GPS via a dedicated 5G wireless connection. The AI will then display the parking lot area with the drone’s flight path. As the drone follows the set path, the microprocessor will stream the video feed in real time to the self-driving vehicle’s AI. The microprocessor will then construct a coordinate map leading towards the designated area, which will assist the navigation system of the vehicle. As for the drone’s health status, it will be continuously monitored by the microprocessor and shared with the vehicle’s display.
Figure 3.2 shows the operation that initiates the parking space detection and navigation process, commencing with drone flight initialization and precise coordinate generation for navigation. The drone is deployed to scan and map the parking lot, simultaneously sharing its live video feed via a dedicated 5G wireless connection with the self-driving vehicle’s AI system. The AI processes the live video feed, actively identifying vacant parking spaces by analyzing the images in real time. It distinguishes between occupied and vacant spaces, assigning labels for navigation guidance. The drone generates navigation instructions for the vehicle, facilitating its journey to the designated parking spot. Successful parking is achieved when the vehicle reaches its spot unobstructed by other vehicles, culminating the operation. This system leverages drone technology, a dedicated 5G connection, and real-time video processing by the vehicle’s AI to optimize parking space utilization in dynamic environments.
Figure 3.3 shows the real-time Hough Line Transform algorithm, which begins by initializing video capture and creating a continuous loop for processing frames in real time. Each frame is captured and subjected to preprocessing steps, including grayscale conversion, Gaussian blur, and Canny edge detection, aimed at enhancing line detection accuracy. The core of the algorithm employs the Hough Line Transform to identify line segments in the image. These line segments are drawn on the original frame using cv2.line. The resulting frame, showcasing the detected lines, is displayed in real time through cv2.imshow. The algorithm continuously checks for key presses, providing a means to break the loop, and finally releases resources, ensuring the seamless execution of the real-time Hough Line Transform.
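Under the assumption that the drone’s video feed is exposed to OpenCV as an ordinary capture device, a minimal Python sketch of the loop described above could look as follows; the capture index and the Canny and Hough parameter values are illustrative assumptions rather than the algorithm’s actual settings.

```python
import cv2
import numpy as np

# Sketch of the real-time loop described above; parameters are illustrative.
cap = cv2.VideoCapture(0)  # 0 = default camera / assumed drone video stream

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocessing: grayscale -> Gaussian blur -> Canny edges.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Probabilistic Hough Line Transform returns finite line segments.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imshow("Detected lines", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # a key press breaks the loop
        break

cap.release()
cv2.destroyAllWindows()
```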
Figure 3.4 shows the drone’s subroutine. Throughout the vehicle’s navigation, the
drone assumes the role of ongoing monitoring. It observes assigned parking spaces and,
if an incoming vehicle occupies an initially designated spot, the drone recalculates new
coordinates for guidance to the nearest available parking space within the same lot. This
process ensures efficient parking space allocation and real-time adaptability.
3.1.1 Hardware
DJI Mini 2 Drone

Figure 3.5 shows the DJI Mini 2 drone. Weighing approximately 249 grams and measuring 138 mm x 81 mm x 58 mm, its compact frame is equipped with a 12-megapixel camera for capturing photos and 720p video for live sharing. The drone boasts a maximum flight time of up to 31 minutes, can withstand wind speeds of 8.5 to 10.5 meters per second, can fly at up to 16 meters per second when there is no wind resistance, and can cover distances of around 15.7 km.
The DJI Mini 2 drone has become a versatile tool in various fields, including research and development. The technology holds the potential to address intricate challenges in autonomous vehicle navigation, obstacle detection, and information gathering with regard to its autonomous parking system. In this study, the drone will serve as the camera, or eye, of the proposed algorithm and will assist the RC rover, which is a prototype of a sedan-type self-driving vehicle.
Raspberry Pi 4 Model B
The Python programming language is utilized in this study because Python is one of the most utilized programming languages in self-driving vehicles, alongside C and C++ [50]. In addition, Elon Musk’s Tesla, one of the most famous manufacturers of self-driving vehicles, operates on an operating system built on the Python programming language [51]. Python is utilized so that the proposed algorithm is compatible with an actual sedan-type self-driving vehicle, even though only a prototype is utilized in the experiments conducted.
Remote-controlled Rover
Figure 3.7 shows the KEYESTUDIO Smart Car Robot 4WD Programmable DIY Starter Kit, which serves as a prototype for the self-driving vehicle in this study. This kit provides a comprehensive platform to emulate the core functionality of a self-driving car, offering a 4-wheel-drive chassis, motors, wheels, and sensors. By interfacing this kit with an autonomous system, control algorithms and navigation strategies can be developed and tested in a controlled and cost-effective environment. Its adaptability and compatibility with the Raspberry Pi 4 Model B enable the researchers to mimic the essential components and functionalities of a self-driving vehicle, allowing them to explore and validate autonomous driving technologies effectively.
3.1.2 Software
The GUI consists of four main buttons, namely the “Flight Data” button, the “Video Feed” button, the “Controls” button, and the “Help” button. Each main button opens its corresponding window, containing different function buttons, when the user clicks it. Figure 3.8 shows the display when the “Video Feed” button is clicked, and Figure 3.9 shows the display when the “Controls” button is clicked.
Figure 3.8: GUI for Drone Algorithm (Default Window)
Figure 3.8 shows the default window of the application, which is the Video Feed window. It displays a live video stream from the drone’s camera. The predefined waypoints are also displayed, along with their corresponding longitude, latitude, and altitude.
Figure 3.9: GUI for Drone Algorithm (Controls Window)
Figure 3.9 shows the Controls window, which is utilized in controlling the drone. The “ASCEND” button is clicked for the drone to take flight and travel to the predefined location and path. On the other hand, the “LAND” button is clicked to land the drone.
This section contains all the details of the experiments, which include the drone’s battery consumption measurement, the scanning accuracy test of the drone algorithm, the obstacle detection and tolerance test, and the latency measurement.
The objective of the experiment is to measure the battery consumption rate of the DJI
Mini 2 drone while traveling from one waypoint to another, with varying altitude.
The materials required are:
1. Prepare the DJI Mini 2 Drone, the DJI Mini 2 Drone Controller, and a mobile phone installed with the DJI Fly App.
2. Assemble the drone’s controller by attaching its joysticks to their designated spots.
3. Place the mobile phone on the controller’s cellphone holder and plug the cord into the mobile phone’s charging port, which is preferably USB Type-C. Use an adapter if necessary.
4. Turn on the drone’s controller and open the DJI Fly App on the mobile phone.
5. Turn on the DJI Mini 2 Drone. Make sure that its battery indicator lights are all ON, which means the drone’s battery is full.
6. On the DJI Fly App, select DJI Mini 2 as the drone that will be utilized. Wait until the drone connects with the mobile phone and a live video stream from the drone appears.
8. Measure the drone’s initial battery percentage at Waypoint i. Record the value in Table 3.1.
9. On the DJI Fly App, long-press the “Take Off” button for the drone to take flight.
10. Manually control the drone and let it travel around the edges of the parking lot. The corners of the parking lot will be labeled as the waypoints.
11. As the drone hovers from one waypoint to another, measure and record the altitude in meters (h), the battery percentage (BP) of the drone, and the elapsed time (t) as it arrives at each waypoint or corner. Record the values in Table 3.1.
Table 3.1: Battery Consumption Measurement

Waypoint    h (m)   BP (%)   t (min)   ∆h (m)   ∆BP (%)   ∆t (min)   %BP/min
i                                       —        —         —          —
A
B
C
D
E
F
G
A (final)
12. When the drone returns to Waypoint A, land the drone manually. Measure the drone’s final battery percentage at Waypoint A. Record the value in Table 3.1.
14. Close the DJI Fly App, remove the mobile phone from the controller, and disassemble the joysticks.
15. Calculate the change in altitude (∆h) each time the drone travels from one waypoint to another by using the formula:

∆h = h2 − h1 (3.1)

Where:
h1 = preceding altitude
h2 = altitude at the current waypoint

Record the values in Table 3.1.
16. Calculate the change in battery percentage (∆BP) each time the drone travels from one waypoint to another by using the formula:

∆BP = BP2 − BP1 (3.2)

Where:
BP1 = preceding battery percentage
BP2 = battery percentage at the current waypoint
17. Calculate the change in elapsed time (∆t) as the drone travels from one waypoint to another by using the formula:

∆t = t2 − t1 (3.3)

Where:
t1 = preceding elapsed time
t2 = elapsed time at the current waypoint
18. Calculate the decreasing rate of the battery percentage per minute (%BP/min) as the drone travels from one waypoint to another by using the formula:

%BP/min = ∆BP / ∆t (3.4)

Where:
∆BP = change in battery percentage
∆t = change in elapsed time
19. Compare and observe how the varying altitude and elapsed time affect the rate at which the drone’s battery percentage decreases.
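A small Python sketch of the calculations in Equations 3.1 to 3.4 is shown below; the sample readings are taken from the first rows of Table 4.1 purely for illustration.

```python
# Minimal sketch of Equations 3.1-3.4; sample readings follow the first
# rows of Table 4.1 and are used here only for illustration.
waypoints = [
    # (label, altitude_m, battery_pct, elapsed_min)
    ("i", 0, 98, 0.00),
    ("A", 7, 94, 1.15),
    ("B", 11, 93, 1.53),
]

for prev, curr in zip(waypoints, waypoints[1:]):
    dh = curr[1] - prev[1]            # delta h   (Eq. 3.1)
    dbp = curr[2] - prev[2]           # delta BP  (Eq. 3.2)
    dt = curr[3] - prev[3]            # delta t   (Eq. 3.3)
    rate = dbp / dt if dt else 0.0    # %BP per minute (Eq. 3.4)
    print(f"{prev[0]} -> {curr[0]}: dh={dh:+} m, dBP={dbp:+}%, "
          f"dt={dt:.2f} min, rate={rate:.2f} %/min")
```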
The objective of the experiment is to determine the accuracy of the drone’s algorithm
and to assess the drone’s capability to scan its surroundings and collect necessary data for
autonomous navigation and decision-making.
The materials required are:
2. Raspberry Pi 4 Model B
2. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4
Model B.
3. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.
5. Click the Drone Algorithm Application that can be seen on the screen to display the GUI.
6. Place the DJI Mini 2 Drone at its initial location and turn it on.
8. Click the “ASCEND” button on the screen for the drone to take flight and travel to the predefined location and path.
9. As the drone hovers, monitor the quality of the live video feed.
10. Check if the drone was able to recognize vacant parking spaces correctly or if there were errors.
11. Record the number of vacant parking spaces the drone recognizes (PR) and the number of actual vacant parking spaces (PA) that are visible in the live video feed in Table 3.2. Observe and compare the recorded values.
12. Calculate the percentage error between the recognized (PR) and actual (PA) number of vacant parking spaces by using the formula:

Percentage Error (%E) = (PA − PR) / PA × 100% (3.5)

Where:
PA = actual number of vacant parking spaces visible in the live video feed
PR = number of vacant parking spaces recognized by the drone

A short code sketch of this calculation is given after this procedure.
13. Observe and record in Table 3.2 how frequently the drone incorrectly labeled an occupied parking space as vacant.
14. Click the “LAND” button on the screen to return the drone to its initial location.
17. Close the Drone Algorithm Application, turn off the microcomputer, and unplug the power cord.
18. Observe the collected data to assess the drone’s capability to scan its surroundings and collect the necessary data for autonomous navigation and decision-making.
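A minimal Python sketch of Equation 3.5 is given below; the sample counts in the usage line are placeholders, not experimental results.

```python
# Minimal sketch of Equation 3.5; the sample counts are placeholders.
def percentage_error(actual: int, recognized: int) -> float:
    """Percentage error between actual and recognized vacant spaces."""
    return (actual - recognized) / actual * 100.0

print(percentage_error(actual=12, recognized=11))  # ~8.33% for these example counts
```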
The objective of the experiment is to assess the ability of the drone, equipped with the proposed algorithm, to effectively navigate the self-driving vehicle while responding to obstacles.
The materials required are:
2. RC Rover
3. Raspberry Pi 4 Model B
1. Prepare the DJI Mini 2 Drone, RC Rover, Raspberry Pi 4 Model B, modem, objects for obstacle scenarios, monitor, keyboard, and mouse.
2. Randomly place the objects for obstacle scenarios within the predefined location through which the drone and RC Rover will be travelling.
3. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4 Model B.
4. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.
6. Click the Drone Algorithm Application that can be seen on the screen to display the GUI.
7. Place the DJI Mini 2 Drone at its initial location and turn it on.
10. Click the “ASCEND” button on the screen for the drone to take flight and travel to the predefined location and path.
11. Monitor the drone’s ability to detect and respond to obstacles in real time.
12. Record the number of detected obstacles (OD) and the number of actual obstacles (OA) in Table 3.3. Observe and compare the recorded values.
13. Calculate the percentage error between the number of detected obstacles (OD) and the number of actual obstacles (OA) by using the formula:

Percentage Error (%E) = (OA − OD) / OA × 100% (3.6)

Where:
OA = actual number of obstacles
OD = number of obstacles detected by the drone
Table 3.3: Obstacle Detection and Tolerance Test
14. When the drone is done scanning the predefined location and detecting obstacles, a map with a route from the rover’s initial location to a vacant parking space will be displayed. Control the rover to trace and follow the route on the map.
15. Monitor and record in Table 3.3 whether the given route was obstacle-free.
16. Click the “LAND” button on the screen to return the drone to its initial location.
19. Close the Drone Algorithm Application, turn off the microcomputer, and unplug the power cord.
20. Remove the objects for obstacle scenarios from the experiment setting.
21. Observe the collected data to assess the drone’s obstacle tolerance and its ability to navigate safely in dynamic environments.
The objective of the experiment is to measure the data transmission time of the drone,
equipped with the proposed algorithm, with respect to distance.
The materials required are:
2. Raspberry Pi 4 Model B
2. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4
Model B.
3. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.
5. Click the Drone Algorithm Application that can be seen on the screen to display the GUI.
6. Place the DJI Mini 2 Drone at its initial location and turn it on.
8. Click the “ASCEND” button on the screen for the drone to take flight and travel to the predefined location and path.
9. The drone is programmed to send data to the microcomputer every 10 meters it has travelled. As the drone hovers, the distance travelled and the delay of the most recently received data are displayed on the screen. Record the latency (in milliseconds) of the data transmission in Table 3.4 at every 10 meters the drone has travelled.
10. Repeat Procedure 9 until the drone has travelled 100 meters.
11. Click the “LAND” button on the screen to return the drone to its initial location.
13. Close the Drone Algorithm Application, turn off the microcomputer, and unplug the power cord.
14. Observe the collected data to assess the quality, reliability, and operational limits of the drone’s wireless communication system.
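One possible way to obtain such latency figures is sketched below, assuming the drone-side sender timestamps each packet and the receiving microcomputer compares the stamp against its own clock; the clock-synchronization assumption (e.g., via NTP) is an illustration and is not specified by the procedure above.

```python
import time

# Hedged sketch: estimate transmission latency from a sender timestamp.
# Assumes the sender and receiver clocks are synchronized (e.g., via NTP),
# which is an illustrative assumption, not a detail of the actual system.
def latency_ms(sent_timestamp: float) -> float:
    return (time.time() - sent_timestamp) * 1000.0

# Example: a packet stamped 35 ms ago would report roughly 35 ms of latency.
stamped = time.time() - 0.035
print(f"{latency_ms(stamped):.1f} ms")
```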
Chapter 4
RESULTS AND DISCUSSIONS
In this chapter, the data and results from the four experiments in the previous chapter
were tabulated and thoroughly discussed individually.
In this experiment, the battery consumption of the drone while traveling from one waypoint to another was measured and recorded, along with its altitude and elapsed time upon arrival at each waypoint.
Table 4.1: Battery Consumption Measurement Results

Waypoint    h (m)   BP (%)   t (min)   ∆h (m)   ∆BP (%)   ∆t (min)   %BP/min
i           0       98       0.00      —        —         —          —
A           7       94       1.15      +7       -4        1.15       -3.48
B           11      93       1.53      +4       -1        0.38       -2.63
C           12      92       2.03      +1       -1        0.50       -2.00
D           12      92       2.25      0        0         0.22       0.00
E           12      91       2.55      0        -1        0.30       -3.33
F           20      84       4.05      +8       -7        1.50       -4.67
G           20      80       5.10      +1       -4        1.05       -3.81
A (final)   0       76       6.13      -20      -4        1.03       -3.88
As shown in Table 4.1, the battery percentage gradually decreases as the drone travels from one waypoint to another. The altitude was manually varied to observe how the change in altitude affects the decreasing rate of the drone’s battery percentage. It was observed that the greater the change in the drone’s altitude, the greater the decreasing rate of the drone’s battery percentage per minute.
BIBLIOGRAPHY
Published
[1] Self-driving Cars Market Size Global Report, 2022 - 2030, en, May 2022.
[3] K.-W. Min and J.-D. Choi, “Design and implementation of autonomous vehicle valet parking system,” in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), ISSN: 2153-0017, Oct. 2013,
[4] E. Brynjolfsson and A. McAfee, “Artificial intelligence, for real,” Harvard Business Review, vol. 1, pp. 1–31, 2017.
ISSN: 1573-7462. DOI: 10.1007/s10462-020-09943-1. [Online]. Available:
https://doi.org/10.1007/s10462-020-09943-1 (visited on 10/03/2023).
https://ieeexplore.ieee.org/abstract/document/8220479 (visited on
10/02/2023).
[9] C. Badue et al., “Self-driving cars: A survey,” Expert Systems with Applications, vol. 165, p. 113816, Mar. 2021, ISSN: 0957-4174. DOI: 10.1016/j.
[11] C. Wang, H. Zhang, M. Yang, X. Wang, L. Ye, and C. Guo, “Automatic Parking Based on a Bird’s Eye View Vision System,” en, Advances in Mechanical Engineering, vol. 6, p. 847406, Jan. 2014, Publisher: SAGE Publications,
[13] S. Ma, H. Jiang, M. Han, J. Xie, and C. Li, “Research on Automatic Parking Systems Based on Parking Scene Recognition,” IEEE Access, vol. 5, pp. 21901–21917, 2017, Conference Name: IEEE Access, ISSN: 2169-3536. DOI:
[14] S. Lee and S.-W. Seo, “Available parking slot recognition based on slot context analysis,” en, IET Intelligent Transport Systems, vol. 10, no. 9, pp. 594–, [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1049/iet-its.2015.0226 (visited on 10/08/2023).
pp. 7281–7290, Jun. 2012, Publisher: Pergamon, ISSN: 0957-4174. DOI: 10.1016/j.eswa.2012.01.091. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S0957417412001042 (visited on 10/08/2023).
Conference on Intelligent Sensors, Sensor Networks and Information Pro-
[17] W. Shao, Y. Zhang, B. Guo, K. Qin, J. Chan, and F. D. Salim, “Parking Availability Prediction with Long Short Term Memory Model,” en, in Green, Pervasive, and Cloud Computing, S. Li, Ed., ser. Lecture Notes in Computer Science, Cham: Springer International Publishing, 2019, pp. 124–137, ISBN: 978-3-
[19] D. Floreano and R. J. Wood, “Science, technology and the future of small autonomous drones,” en, Nature, vol. 521, no. 7553, pp. 460–466, May 2015,
drone-based-surveillance--application-to-parking-lot-monitoring/
parking lots,” in The 21st World Multi-Conference on Systemics, Cybernetics
and Informatics (WMSCI 2017) Proceedings, 2017.
[22] W. Shi, H. Zhou, J. Li, W. Xu, N. Zhang, and X. Shen, “Drone Assisted Vehicular Networks: Architecture, Challenges and Opportunities,” IEEE Network, vol. 32, no. 3, pp. 130–137, May 2018, Conference Name: IEEE Network, ISSN: 1558-. [Online]. Available: https://ieeexplore.ieee.org/abstract/document/8253543 (visited on 10/08/2023).
https://ieeexplore.ieee.org/abstract/document/8690977 (visited on
10/08/2023).
Electronic Sources
[24] PARKING LOT ACCIDENTS: More Common and Dangerous Than You Think, en. [Online]. Available: https://www.spadalawgroup.com/blog/parking-lot-accidents-more-
[25] RMG, How Often Do Car Crashes Occur in Parking Lots And Garages, en-
[26] D. Laurel, Rear-enders, sideswipes were the most common types of accidents in Metro Manila in 2021, en, Sep. 2022. [Online]. Available: https://www.topgear.com.ph/news/motoring-news/mmaras-impacts-2021-a962-20220914 (visited on 12/29/2023).
[27] S. Cho, Autonomous / Self-Driving Cars Market Size, Share, Growth, Forecast 2030, en, Sep. 2023. [Online]. Available: https://www.linkedin.com/pulse/autonomous-self-driving-cars-market-size-share-growth-shu-
[28] O. Brown, What Is a Webcam? Here’s Everything You Need to Know, en-US,
webcam?fbclid=IwAR06QWa8AWFX7-xyRskT3ddMBNPe1oO8GDIXy66fMjU6XgvwNPZ1p1yhkXk
(visited on 12/06/2023).
[29] J. Ledford, What is a Computer Webcam and How Does It Work? en, Sec-
[30] Machine Learning vs. AI: Differences, Uses, and Benefits, en, Jun. 2023.
Available: https://www.tableau.com/learn/articles/machine-learning-
[34] Hough transform, en, Page Version ID: 1173787350, Sep. 2023. [Online].
[35] T. Kacmajor, Hough Lines Transform Explained, en, Nov. 2020. [Online].
Available: https://medium.com/@tomasz.kacmajor/hough-lines-transform-
[36] Line detection in python with OpenCV — Houghline method, en-US, Section: Python, May 2017. [Online]. Available: https://www.geeksforgeeks.org/
[37] Canny edge detector, en, Page Version ID: 1177834947, Sep. 2023. [Online].
Available: https://en.wikipedia.org/w/index.php?title=Canny_edge_
Available: https://www.sciencedirect.com/topics/engineering/gaussian-blur
(visited on 11/13/2023).
[40] Scaled-YOLOv4 is Now the Best Model for Object Detection. [Online].
[41] A. Kumar, Computer Vision: Gaussian Filter from Scratch. en, Mar. 2019.
[43] Basic CNN Architecture: Explaining 5 Layers of Convolutional Neural Net-
[45] Convolutional Neural Networks: The Theory, no. [Online]. Available: https://www.bouvet.no/bouvet-deler/understanding-convolutional-neural-networks-part-1 (visited on 10/08/2023).
https://blog.roboflow.com/scaled-yolov4-tops-efficientdet/ (visited on
11/27/2023).
[48] What Is Wi-Fi? - Definition and Types, en. [Online]. Available: https://www.cisco.com/c/en/us/products/wireless/what-is-wifi.html (visited on 10/10/2023).
[49] S. Panzer, What Is Wireless Fidelity and is it the same as WiFi? en-US, Mar.
10-programming-languages-utilized-autonomous-vehicles-abbott (visited on
01/05/2024).
[51] Why Python is the favourite programming language of Elon Musk, en, Sep. 2022. [Online]. Available: https://content.techgig.com/technology/why-python-is-the-favourite-programming-language-of-elon-musk/