
Drone-Assisted Outdoor Parking

Algorithm for Sedan-Type Self-Driving


Vehicles

A Design / Capstone Project


Presented to the Department of Electronics Engineering
Cebu Institute of Technology - University
Cebu City, Philippines

In Partial Fulfillment
of the Requirements for the Degree

Bachelor of Science in Electronics Engineering

by
Jecan Alicer
Elysa M. Apas
Adrian Kendrick N. Emnacen
Jean Roy C. Lastimosa
Table of Contents

List of Tables iv
List of Figures v
Chapter 1 INTRODUCTION 1
1.1 Background of the Study 1
1.2 Problem Statement 4
1.3 Objectives 4
1.4 Purpose of the Study 5
1.4.1 Self-Driving Car Owners 5
1.4.2 Autonomous Vehicle Industry 5
1.4.3 Drone Technology Industry 5
1.4.4 General Public 6
1.5 Scope and Limitations 6
1.6 Definition of Terms 6
Chapter 2 THEORETICAL BACKGROUND 8
2.1 Theories 8
2.1.1 Web Camera Operation 8
2.1.2 Artificial Intelligence and Machine Learning 10
2.1.3 Computer Vision 11
2.1.4 Hough Line Transform 11
2.1.5 Canny Edge Detection 14
2.1.6 Gaussian Blur 17

2.1.7 Convolutional Neural Networks 19
2.1.8 Scaled-YOLOv4 20
2.1.9 Wi-Fi 22
2.2 Literature Review 23
2.2.1 Autonomous Parking Systems for Self-Driving Cars 23
2.2.2 Parking Availability Recognition and Prediction 24
2.2.3 Drone-Assisted Systems 26

Chapter 3 RESEARCH METHODOLOGY 28


3.1 Proposed Method 28
3.1.1 Hardware 32
3.1.2 Software 35
3.2 Description of Experiments 37
3.2.1 Battery Consumption Measurement 37
3.2.2 Scanning Accuracy Test 41
3.2.3 Obstacle Detection and Tolerance Test 44
3.2.4 Latency Measurement 47
Chapter 4 RESULTS AND DISCUSSIONS 50
4.1 Battery Consumption Measurement 50

BIBLIOGRAPHY 52

List of Tables
Page

2.1 Mentioned Methods of Autonomous Parking 24


2.2 Mentioned Parking Availability Recognition/Prediction Approaches 25
2.3 Mentioned Drone-Assisted Systems 27

3.1 Battery Consumption Measurement 39


3.2 Scanning Accuracy Test 43
3.3 Obstacle Detection and Tolerance Test 46
3.4 Latency Measurement 48

4.1 Battery Consumption Measurement Data 50

List of Figures
Page

1.1 Self-Driving Cars Market Size, By Region, 2018-2030 [1] 3

2.1 (r, θ) Line Parameterization [34] 13


2.2 Non-Maximum Suppression [38] 15
2.3 Hysteresis Thresholding [38] 16
2.4 CNN Architecture [43] 19
2.5 Scaled-YOLOv4 Architecture [47] 21

3.1 Drone Algorithm Block Diagram 28


3.2 System Process for Drone Algorithm 29
3.3 System Process for Hough Line Transform Algorithm 30
3.4 System Process for Drone Subroutine 31
3.5 DJI Mini 2 Drone 32
3.6 Raspberry Pi 4 Model B 33
3.7 KEYESTUDIO Smart Car Robot 34
3.8 GUI for Drone Algorithm (Default Window) 36
3.9 GUI for Drone Algorithm (Controls Window) 37

Chapter 1

INTRODUCTION

In this chapter, the researchers discussed the statistics that signify the importance of this study. Self-driving vehicles and their autonomous parking, or algorithm-assisted parking, features were introduced and defined. The researchers presented how the autonomous parking feature of self-driving vehicles may affect traditional car drivers and remain exclusive to indoor parking areas. Through this drone-assisted algorithm, the researchers sought to provide a way for self-driving cars to utilize their autonomous parking feature in outdoor parking areas.

1.1 Background of the Study

Parking lots are widely perceived as safe due to the low speeds drivers must maintain in such places. However, this perception actually makes parking lots far more dangerous than they need to be [24]. Every year, parking lots witness a concerning number of accidents, primarily due to human distractions such as texting and social media use. The National Safety Council reports that parking lots witness over 50,000 accidents annually, accounting for 20% of all car crashes. These incidents result in 500 or more fatalities and cause injuries to an additional 60,000 individuals each year [25]. The 2021 Metro Manila Accident Reporting and Analysis System (MMARAS) report further sheds light on the specific risks faced by parked vehicles. According to the report, 1,524 incidents were recorded as 'hit parked vehicle', indicating a notable vulnerability in these areas [26]. Additionally, hit-and-run incidents in 2021 numbered well over a thousand, further underscoring the critical importance of addressing safety concerns in parking lots. Implementing practical solutions like clear signage, safety equipment, improved lighting, and routine maintenance is paramount to mitigate the frequency and severity of accidents in parking areas, ensuring the safety of both property and lives.

In recent years, advancements in technology have presented opportunities to address the challenges posed by human-caused accidents on roadways and parking areas. One such innovation is the self-driving car. The global market for self-driving cars was worth about $20.25 billion in 2021. According to the prediction of experts, the market for self-driving cars will keep growing at an average annual rate of 13.9% during the forecast period [1]. There has been a significant increase in partnerships, collaborations, and investments dedicated to developing self-driving vehicles within the self-driving car market. This growth is expected to continue due to substantial guidance and support from both governments and private sectors in various countries, further expanding automated driving vehicle technology over the projected time. In addition to that, the most commonly produced and available self-driving cars nowadays are sedan-type. The sedan is expected to be the largest segment in the market of autonomous vehicles [27]. Figure 1.1 shows the forecasted market size of self-driving cars by region, between the years 2018 to 2030.

Figure 1.1: Self-Driving Cars Market Size, By Region, 2018-2030 [1]

A significant aspect of self-driving cars is their algorithm-assisted parking feature, or autonomous parking feature. It is a technology-driven approach aimed at optimizing parking processes and minimizing the risks associated with human parking errors. Algorithm-assisted parking systems leverage sensors, cameras, and sophisticated algorithms to guide vehicles into parking spaces with precision and efficiency. The motivation for exploring algorithm-assisted parking solutions arises from the pressing need to enhance road safety and reduce accident rates. As highlighted in the previous section, a significant proportion of accidents, especially in transportation, can be attributed to human error. These errors include misjudging distances, improper parking techniques, and failures to detect obstacles, all of which can lead to fender benders, collisions, and property damage. Algorithm-assisted parking systems hold the potential to substantially mitigate these risks. By automating the parking process and minimizing human involvement, these systems aim to eliminate common parking errors. This not only enhances safety for drivers and pedestrians but also reduces the incidence of parking-related accidents, which can lead to financial losses and increased insurance premiums for individuals [2].

Concerns have been raised about the potential effects of autonomous parking systems for self-driving cars on the length of time traditional car drivers must wait in parking lines. As self-driving cars require dedicated parking spaces and time-consuming navigation to designated areas, they may significantly extend the waiting periods for non-autonomous vehicles, thereby hindering their ability to efficiently utilize self-parking features [3]. These dedicated parking spaces are designed in structured indoor parking areas only, which also makes self-driving cars incapable of utilizing their autonomous parking feature in outdoor parking areas. To make this possible, other sensors or external devices may be incorporated into the system.

This study aims to further enhance the autonomous parking capabilities of self-driving vehicles, specifically sedan types. An advanced approach to outdoor parking availability detection and recognition is proposed to achieve convenience, timeliness, and safety.

1.2 Problem Statement

The autonomous parking capability of sedan-type self-driving vehicles remains limited to indoor parking spaces and might impact parking queueing times for non-autonomous car drivers. This exclusivity hinders self-driving vehicles from efficiently utilizing their autonomous parking features.

1.3 Objectives

The main objective of this study is to design a drone-assisted outdoor parking


algorithm for sedan-type self-driving cars to make autonomous parking inclusive. The
specific objectives of the algorithm are as follows:

• To locate a vacant outdoor parking space with image processing.

• To utilize a combination of line and edge detection methods, which are Hough Line
Transform and Canny Edge Detection.

1.4 Purpose of the Study

The success of this research can be beneficial to the following:

1.4.1 Self-Driving Car Owners

Self-driving car owners are privileged enough to experience the features of a self-driving car (e.g., automatic parking, obstacle detection). This study offers an additional feature for self-driving car owners to experience: a parking availability locator, which makes parking less time-consuming and more convenient for self-driving car drivers.

1.4.2 Autonomous Vehicle Industry

The designed algorithm will be implemented and integrated into self-driving cars. This improves self-driving cars' parking capabilities, thereby enhancing the marketability and competitiveness of the autonomous vehicle industry.

1.4.3 Drone Technology Industry

The integration of drones with self-driving cars for parking purposes can lead to new opportunities for the drone technology industry, giving rise to the development of specialized drone systems and services tailored for autonomous vehicle applications.

1.4.4 General Public

The study has the potential to improve overall road safety by reducing accidents
caused by human error during parking maneuvers. It can also make parking easier and
less time-consuming for the public using self-driving car services.

1.5 Scope and Limitations

This study focuses solely on creating an algorithm for the DJI Mini 2 drone to assist sedan-type self-driving vehicles in parking in outdoor parking areas. This study will not cover other factors that might affect the performance of the drone, such as weather conditions and the drone's hardware.

1.6 Definition of Terms

The following are the significant terms utilized in this study:

Algorithm: a step-by-step procedure or set of instructions embedded in the drone that is designed to assist self-driving cars in autonomous parking.

Autonomous parking: also known as self-parking or algorithm-assisted parking, refers to a technology that allows the vehicle to park itself without human intervention.

Communication system: refers to the communication between the drone and


self-driving vehicle.

Drone: an aerial device, the system’s external device that assists self-driving vehicles
in autonomous parking.

Queueing/waiting: the process of vehicles waiting in line or queue to access a


parking space, often leading to time delays and increased stress for drivers.

Self-driving vehicles: also known as autonomous cars or driverless cars, refer
to vehicles that are capable of traveling and navigating without human intervention.

Sedan: a type of self-driving vehicle, characterized by its four doors, that has an
average length in the range of 180 to 195 inches (4.5 to 5 meters).

Chapter 2

THEORETICAL BACKGROUND

In this chapter, the theoretical basis and review of related literature of drone-assisted autonomous parking systems and technologies for self-driving vehicles were discussed.
2.1 Theories

This section discusses the theories that structured the design and implementation of the proposed algorithm.

2.1.1 Web Camera Operation

In the contemporary digital milieu, webcams have emerged as unassuming yet


indispensable devices. Primarily recognized for their role in facilitating real-time video
streaming for online meetings, web conferencing, and educational contexts, these
versatile instruments have found utility in an array of other domains. It is crucial to
acknowledge that the capabilities of webcams can vary significantly.

Webcams, or “web cameras,” are digital devices that have seamlessly integrated into daily life, playing a pivotal role in capturing and transmitting live video or still images to computers and the internet. The operation of a webcam can be broken down into a few key steps. At the heart of every webcam is an image sensor. This sensor is paired with a small lens positioned at the front of the camera, which collects incoming light. The sensor contains a grid of minuscule light detectors, converting light into an electrical signal. Once collected, the image is processed, transforming it into a digital format represented by a series of zeros and ones. This digital format is vital for compatibility with computers and online platforms.

Webcams primarily connect to computers using a USB cable, though some models
can also operate wirelessly via Wi-Fi. The USB connection serves the dual purpose of
providing power to the webcam and facilitating the transfer of digital data from the image
sensor to the computer. Additionally, webcams are accompanied by specialized software
that enables seamless interaction with the computer. This software allows users to control
the camera, adjust settings, and access features like autofocus and exposure modifications
[28].

Beyond the delineations of integrated, wired, and wireless webcams, two distinct categories of webcams come into focus, each characterized by its unique capabilities and intended purposes: IP cameras and network cameras. Network cameras are typically integrated into computers or available as off-the-shelf purchases at personal electronics retailers. Prominent brands such as Microsoft, Logitech, and Razer are synonymous with network cameras, which are primarily tailored to meet short-term usage needs. IP webcams, on the other hand, are designed with a distinctive purpose. Engineered for continuous 24/7 surveillance, these devices frequently boast superior video quality compared to their network camera counterparts. As a result, IP cameras have garnered attention in the realms of security systems, pet monitoring, and applications demanding extended periods of use.

Webcams find application across a diverse spectrum of functions, predominantly serving as conduits for live video streaming between distant locations. Their roles encompass facilitating online meetings, supporting educational endeavors, and enhancing social interactions with friends, family, and colleagues. While some webcams extend their capabilities to capturing still images, the quality of these images may vary. Conversely, IP cameras tend to serve more commercial purposes, with increasing popularity in home security systems, baby monitors, and pet surveillance. For individuals familiar with digital cameras, the functioning of webcams may strike a chord [29].

2.1.2 Artificial Intelligence and Machine Learning

Though often utilized interchangeably, Artificial Intelligence (AI) and Machine


Learning (ML) are two distinct, though related, concepts that aim for machines or
computers to perform certain tasks.

Artificial Intelligence

AI refers to the development of computer systems or other technologies that can carry out tasks which traditionally require human intelligence (e.g., problem solving, understanding human language, making decisions, etc.) [30]. AI has the potential to have a transformational impact on business on a par with prior general-purpose technologies [4]. Through the years, technology has been growing and AI has been implemented and utilized in growing businesses and industries. AI has witnessed exponential growth and adoption, catalyzed by advancements in machine learning, deep learning, natural language processing, computer vision, and robotics. These developments have fueled the integration of AI into diverse industries, including healthcare, finance, manufacturing, transportation, and entertainment, with the promise of revolutionizing processes, optimizing resource allocation, and unlocking new frontiers of innovation.

Machine Learning

On the other hand, Machine Learning is a subset of AI; it is one of the most exciting recent technologies in artificial intelligence [5]. Through the use of algorithms and statistical models, machine learning enables machines to improve at a certain task by learning from data (e.g., facial recognition, text prediction, product recommendations, predictive analysis, etc.) [31]. This refers to a computer's capacity to continuously improve performance without requiring humans to explicitly describe how to carry out each task, and it is the most significant general-purpose technology in this time. Machine learning has improved significantly in recent years and is now much more accessible. With this, systems that can learn how to carry out tasks on their own can now be created [4].

2.1.3 Computer Vision

Computer vision is a technology that enables computers to understand the visuals captured by their cameras, in the form of images or videos, the same way humans see and understand through human eyes. It is also a field of artificial intelligence (AI) that extracts useful information from digital photos, videos, and other visual inputs and offers recommendations or actions in response to that information. If AI gives computers the ability to think, computer vision gives them the ability to see, observe, and comprehend [32]. In this study, computer vision is utilized to enable the drone to comprehend and recognize the scenery its camera captures. Analyzing videos and images captured by unmanned aerial vehicles or aerial drones is an emerging application attracting significant attention from researchers in various areas of computer vision [6]. This will allow the drone to recognize and analyze whether what the camera is seeing is a vacant parking spot or not.

2.1.4 Hough Line Transform

The Hough Line Transform is a computer vision technique that is utilized to detect straight lines in an image [33]. It is used in image analysis, computer vision, and digital image processing. Originally, the Hough Transform was aimed solely at identifying straight lines. This technique was later generalized to also detect other shapes, such as circles, ellipses, etc. [34].

A line can be expressed in different forms. One common representation is the Slope-Intercept form:

y = mx + b (2.1)

Where:

m = slope of the line

b = y-intercept

x and y = Cartesian coordinates

For Hough Line Transform, lines are represented in the polar system:

r = x cos θ + y sin θ (2.2)

Where:

r = perpendicular distance from the origin to the line

θ = angle formed between the perpendicular line and the x-axis

The angle 'θ' is measured in a counterclockwise direction from the positive direction of the x-axis. It is important to note that the direction of measurement can vary depending on how the coordinate system is defined. Moreover, a more efficient implementation of the Hough Line Transform is the Probabilistic Hough Line Transform. It gives as output the extremes of the detected lines (x0, y0, x1, y1) [35].


This representation is commonly used in OpenCV, an open-source computer vision library. The functions available in OpenCV for the Hough Line Transform are the Standard Hough Line Transform, which uses the HoughLines() function, and the Probabilistic Hough Line Transform, which uses the HoughLinesP() function [33].

In the context of the Hough Line Transform procedure, an essential initial step entails the establishment of a 2D array or accumulator, with an initial value of zero.

Figure 2.1: (r, θ) Line Parameterization [34]

This array functions as a structured data repository for the storage of values associated with two distinct parameters: 'r' and 'θ'. Within this array, the rows are designated to correspond to the 'r' parameter, while the columns are dedicated to the 'θ' parameter. The dimensions of this array are contingent upon the desired level of precision sought in the Hough Transform operation. For instance, when aiming for an angular accuracy of 1 degree, it necessitates the allocation of 180 columns, as the maximum degree for a straight line equals 180. Concerning the 'r' parameter, its upper limit is contingent upon the diagonal length of the image. Employing a one-pixel accuracy criterion, the number of rows in the array is established to align with the diagonal length of the image, thereby encompassing the entire spectrum of feasible 'r' values [36].
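To make the voting procedure described above concrete, the following is a minimal sketch of Hough accumulator voting in Python with NumPy; the resolution choices, array layout, and variable names are illustrative assumptions rather than the study's actual implementation.

import numpy as np

def hough_accumulator(edge_img, theta_res_deg=1):
    # Vote each edge pixel into an (r, theta) accumulator (illustrative sketch).
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # maximum possible |r| (image diagonal)
    thetas = np.deg2rad(np.arange(0, 180, theta_res_deg))
    # Rows cover r in [-diag, diag] at one-pixel accuracy; columns cover theta.
    accumulator = np.zeros((2 * diag + 1, len(thetas)), dtype=np.int32)

    ys, xs = np.nonzero(edge_img)                # coordinates of edge pixels
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            r = int(round(x * np.cos(theta) + y * np.sin(theta)))
            accumulator[r + diag, t_idx] += 1    # shift r so the row index is non-negative
    return accumulator, thetas

Peaks in the returned accumulator correspond to prominent straight lines, such as the painted slot markings in a parking lot image.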

In this study, the Hough Line Transform will be utilized for the drone to detect
straight lines from a parking lot image, in order to identify the area and availability of a
certain parking space.

2.1.5 Canny Edge Detection

Canny Edge Detection is an image processing technique developed by John F. Canny in 1986 to detect a wide range of edges in images [37]. It is a multi-stage algorithm, which includes noise reduction using a Gaussian filter, gradient calculation, non-maximum suppression, and hysteresis thresholding. The first step is noise reduction using a Gaussian filter, or Gaussian smoothing, wherein the input image is convolved with a 5x5 Gaussian filter to reduce noise. This step helps to ensure that the subsequent edge detection is less sensitive to small variations in pixel values, since edge detection is known to be susceptible to noise. Next is the gradient calculation, wherein the smoothened image is filtered with gradient masks (i.e., a Sobel kernel) to calculate the gradients in both the horizontal (Gx) and vertical (Gy) directions. These gradients represent the first derivative in the horizontal (Gx) and vertical (Gy) directions.

Edge gradient for each pixel:

Edge Gradient (G) = √(Gx² + Gy²) (2.3)

Where:

G = Edge Gradient or Gradient Magnitude
Gx = gradient in the horizontal direction
Gy = gradient in the vertical direction

Resultant gradient’s angle of direction with respect to Gx:

Angle (θ) = tan⁻¹(Gy / Gx) (2.4)

Where:

θ = resultant gradient's angle of direction with respect to Gx
Gx = gradient in the horizontal direction
Gy = gradient in the vertical direction

The direction of the gradient is consistently perpendicular to edges, and it is quantized to one of four angles that correspond to vertical, horizontal, and two diagonal directions. After calculating the gradient magnitudes and directions comes non-maximum suppression, which involves suppressing all gradient values except for the local maxima, which are considered potential edges. The idea is to thin out the edges to a one-pixel width.

Figure 2.2: Non-Maximum Suppression [38]

Figure 2.2 shows a representation of how the non-maximum suppression works. A full

scan of the image is done from points C to A to B. The gradient direction is perpendicular

to the edge. Point A is on the edge while points B and C are on the gradient direction.

Point A is then examined in conjunction with points B and C to determine if they


collectively create a local maximum. If this criterion is met, the point is taken into
consideration for the next stage; otherwise, it is suppressed or set to zero.
The last stage is edge tracking by hysteresis, or hysteresis thresholding. In this step, it is determined which edges are real and which are not. Two thresholds are utilized, which are minVal and maxVal. If an edge's intensity is higher than maxVal, it is sure to be an edge. If it is lower than minVal, it is sure to be a non-edge, so it is discarded. Moreover, the intermediary values between these thresholds are categorized as either edges or non-edges based on their connectivity. If they are linked to pixels identified as "sure-edges," they are acknowledged as part of the edges; otherwise, they are also excluded.

Figure 2.3: Hysteresis Thresholding [38]

Figure 2.3 shows a sample graph for hysteresis thresholding. In this graph, edge A is identified as a "sure-edge" because its intensity is above maxVal. Even though edge C falls below maxVal, it is still considered a valid edge because it is connected to edge A, forming a complete curve. On the other hand, edge B, although above minVal and in the same region as edge C, is discarded because it is not connected to any "sure-edge." Selecting appropriate minVal and maxVal values is crucial for obtaining accurate results. This stage also helps eliminate small pixel noise by assuming that edges are long lines [38].

In this study, the Canny Edge Detection method will be utilized to identify and extract lines from the drone's captured image. All the aforementioned processes are encapsulated into a single function in OpenCV, which is the cv.Canny() function. Its output will be used as an input for the Hough Line Transform.
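As a concrete illustration of the stages above, the following is a minimal sketch of applying Canny Edge Detection with OpenCV in Python; the file names and the threshold values are assumptions for illustration, not the thresholds used in this study.

import cv2

# Read an aerial image in grayscale (file name is illustrative).
img = cv2.imread("parking_lot.jpg", cv2.IMREAD_GRAYSCALE)

# Smooth first to reduce noise, as recommended before edge detection.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# The two arguments after the image are the hysteresis thresholds (minVal, maxVal).
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("parking_lot_edges.jpg", edges)

The resulting edge map can then be passed directly to the Hough Line Transform described in the previous subsection.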

2.1.6 Gaussian Blur

Gaussian blur, a fundamental image processing technique, is renowned for its


capability to enhance images by reducing noise and smoothing details. At its core, this
filter applies a Gaussian function to each pixel in an image, creating a visually pleasing
and smoother result. The process involves a convolution operation with a kernel, derived
from the Gaussian function, serving as weights during the blurring process. This method,
emphasizing the central pixel and diminishing as it moves away from the center, is
effective in noise reduction and edge smoothing.

Gaussian blur's versatility is evident in its applications, including noise reduction, softening edges, and serving as a crucial preprocessing step in computer vision and image recognition. It plays a vital role in privacy protection by obscuring sensitive information and simulating depth of field in photography.

The mathematical foundation of Gaussian blur is represented by a formula that


assigns weights to pixels based on their distance from the center.
One-dimensional Gaussian function:

G(x) = (1 / √(2πσ²)) · e^(−x² / (2σ²)) (2.5)

Where:

x = distance from the center of the kernel
σ = standard deviation of the Gaussian distribution

and the two-dimensional version is

G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²)) (2.6)

This mathematical representation ensures a smoother image [39].
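For illustration, the following is a minimal sketch of applying a Gaussian blur with OpenCV in Python; the kernel size and sigma are example values, and the file names are assumptions.

import cv2

# Load an image (file name is illustrative).
img = cv2.imread("parking_lot.jpg")

# Apply a 5x5 Gaussian kernel; sigma=0 lets OpenCV derive sigma from the kernel size.
smoothed = cv2.GaussianBlur(img, (5, 5), 0)

cv2.imwrite("parking_lot_smoothed.jpg", smoothed)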
Gaussian blur's scalability and adaptability are evident in its application to low-end and high-end devices. On low-end devices, it optimizes computation orders and balances feature map sizes, addressing factors such as memory bandwidth and access cost. On high-end devices like GPUs, Gaussian blur contributes to efficient resource utilization through careful consideration of scaling factors and receptive fields. The implementation of Gaussian blur is exemplified in the design of Scaled-YOLOv4, a model setting a new standard in object detection. This model demonstrates efficiency and adaptability across various GPU types, showcasing its prowess in achieving comparable or superior performance. The results obtained from Scaled-YOLOv4 on a GPU V100 highlight its efficiency and performance metrics, solidifying its status as a benchmark in object detection [40].

Gaussian blur stands as a powerful and versatile tool in image processing, offering a nuanced approach to noise reduction, edge smoothing, and overall image enhancement. Its integration within advanced models like Scaled-YOLOv4 underscores its adaptability to various GPU types, positioning it as a benchmark in object detection and emphasizing its significance in the future of computer vision [41].

Gaussian blur is a fundamental image processing technique with versatile applications, particularly in algorithms like the Drone-Assisted Parking Algorithm for Self-Driving Vehicles. In this context, it addresses challenges such as noisy images, playing a crucial role in enhancing perceptual clarity. The algorithm benefits from Gaussian blur's edge-smoothing capabilities, improving object detection and recognition during parking maneuvers. Additionally, Gaussian blur contributes to privacy protection by obscuring sensitive information in captured images. Its ability to simulate depth of field aids in achieving optimal focus, while its scalability ensures effectiveness across varied environmental conditions. The judicious application of Gaussian blur in the Drone-Assisted Parking Algorithm optimizes resource utilization, enhancing efficiency without compromising speed. Overall, incorporating Gaussian blur into self-driving algorithms proves strategic, aligning with specific requirements, improving safety, and contributing to responsible, reliable autonomous systems [42] [39].

2.1.7 Convolutional Neural Networks

Convolutional Neural Networks (CNNs) are a key tool in Deep Learning, especially in image processing. They distinguish themselves by preserving the spatial connections between pixels in a picture, as opposed to conventional neural networks that flatten images into 1D arrays of pixel values. This spatial integrity enables the preservation of elements such as edges and forms. CNNs address each channel independently while keeping the overall picture structure in color images with several channels (such as red, green, and blue).

Figure 2.4: CNN Architecture [43]

Figure 2.4 shows the architecture of CNNs. These function by applying convolutional filters, which are effectively learnt sets of weights applied to pixel values. During training, these filters move over the picture, creating feature maps that emphasize certain elements such as curves or edges. CNNs use the ReLU Activation Function to increase accuracy and minimize processing complexity. Integrating Convolutional Neural Networks (CNNs) into a Drone-Assisted Parking Algorithm for Self-Driving Vehicles is crucial for its success. Drones capture rich visual data of the parking environment, and CNNs excel in processing and recognizing patterns within this data. By automatically extracting relevant features, such as parking spaces, obstacles, and other vehicles, CNNs enable the algorithm to make informed decisions. Their ability to understand spatial hierarchies is vital for interpreting the layout of the parking lot and determining optimal parking spots. Moreover, CNNs offer real-time processing capabilities, adapting to diverse conditions and ensuring timely feedback for the self-driving vehicle. The adaptability of CNNs to variability in lighting and environmental factors, coupled with their high accuracy in computer vision tasks, enhances the overall reliability and effectiveness of the parking algorithm, contributing to safer and more efficient self-driving vehicle parking [44].

Furthermore, CNNs use pooling techniques to efficiently reduce feature map size without sacrificing crucial information, making them indispensable for image-related Deep Learning applications [45].
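To make the convolution, ReLU, and pooling stages described above concrete, the following is a minimal PyTorch sketch of a small CNN classifier; the layer sizes, input resolution, and the two-class output (e.g., vacant versus occupied parking slot) are illustrative assumptions, not the network used in this study.

import torch
import torch.nn as nn

class TinyParkingCNN(nn.Module):
    # Minimal CNN: convolution -> ReLU -> pooling -> fully connected classifier.
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learned convolutional filters
            nn.ReLU(),                                    # ReLU activation
            nn.MaxPool2d(2),                              # pooling halves the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)      # spatial structure is preserved through the conv stack
        x = torch.flatten(x, 1)   # flatten only before the final classifier
        return self.classifier(x)

# Example: a batch of one 64x64 RGB crop of a parking slot.
model = TinyParkingCNN()
logits = model(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])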

2.1.8 Scaled-YOLOv4

Scaled-YOLOv4 is a sophisticated object identification model that can locate and identify things in pictures and video frames. Its main job is to divide input photos into grids and forecast bounding boxes around objects, providing information about the object's class, confidence score, and coordinates. The flexibility of Scaled-YOLOv4 under diverse resource restrictions, establishing a balance between accuracy and processing economy, distinguishes it. It optimizes model width, depth, and resolution using a compound scaling technique, making it suited for a broad range of devices and circumstances. Furthermore, variations such as YOLOv4-CSP include Cross Stage Partial Connections (CSP) to improve information flow inside the model, hence increasing object identification performance even further.

Scaled-YOLOv4 is used in a variety of applications, with its principal purpose being in object detection tasks. It is extremely useful in applications requiring exact object recognition and localization, such as surveillance systems, autonomous cars, and robots. It assists with environment awareness in self-driving automobiles by identifying people, vehicles, traffic signs, and road markings. Scaled-YOLOv4 is used in security and surveillance systems for real-time object detection and tracking. It aids in inventory management and consumer tracking in retail, as well as monitoring stock levels and evaluating shopper behavior. It is used in the medical profession to detect anatomical structures and abnormalities in medical pictures such as X-rays and MRIs. It also has uses in agriculture, environmental monitoring, industrial automation, and bespoke item identification, making it a useful tool across a wide range of sectors and use cases [46]. Figure 2.5 shows the architecture of the Scaled-YOLOv4.

Figure 2.5: Scaled-YOLOv4 Architecture [47]

Incorporating Scaled-YOLOv4 (You Only Look Once version 4) into a Drone-Assisted Parking Algorithm for Self-Driving Vehicles is of paramount importance for several key reasons. Renowned for its exceptional accuracy in object detection, YOLOv4 ensures the precise identification of parking spaces, obstacles, and other vehicles, thereby facilitating safe and effective navigation. Its efficiency and real-time processing capabilities are especially crucial in dynamic parking environments, where quick and accurate detection is imperative for timely decision-making by the self-driving vehicle. The single-pass architecture of YOLOv4 further reduces latency, aligning seamlessly with the fast-paced demands of parking scenarios. The model's incorporation of multi-scale feature extraction is beneficial for detecting objects of various sizes, addressing the diverse scale of elements in parking lots. Scaled-YOLOv4's adaptability to different scales and robustness to environmental variability make it well-suited for real-world applications where parking conditions may vary. Furthermore, the model's enhanced generalization capabilities ensure reliable performance across different parking lots and environments, reinforcing its utility in a wide range of practical situations. Overall, leveraging Scaled-YOLOv4 in the parking algorithm significantly enhances the accuracy, efficiency, and adaptability of object detection, empowering self-driving vehicles with the capabilities needed for secure and proficient parking maneuvers in diverse and dynamic settings [7].

2.1.9 Wi-Fi

Wireless-Fidelity (Wi-Fi) is a wireless networking technology that allows devices such


as computers, smartphones, or other devices to connect to the internet or communicate
with one another wirelessly within a particular area. It is a brand name created by a
marketing firm that’s meant to serve as an interoperability seal for marketing efforts. The
IEEE 802.11 standard defines the protocols that enable

communications with current Wi-Fi-enabled wireless devices, including wireless routers
and wireless access points [48].
The term ”Wi-Fi” itself refers to the well-known wireless network technology that
makes a WLAN possible. It is known that the technology used on a daily basis, such as
smartphones, laptops, smart TVs, etc., is Wi-Fi capable. This indicates that these gadgets
are compatible with frequencies in the 2.4 GHz and 5 GHz bandwidths. To avoid
interference, these frequencies are different from those that are utilized for phone, TV,
and radio broadcasts [49].

2.2 Literature Review

This section consists of an overview of literature related to autonomous parking systems for self-driving cars, parking availability recognition and prediction, and drone-assisted systems, which are highly relevant to the study.

2.2.1 Autonomous Parking Systems for Self-Driving Cars

The self-driving car industry has made significant advancements during the past decade. Not to mention the substantial advancements they contribute to the overall effectiveness, convenience, and safety of roads and transportation networks, these new capabilities will have major global effects that could radically change society [8]. Self-driving cars are equipped with an autonomy system. According to Badue et al. [9], the architecture of the autonomy system of self-driving cars is typically organized into the perception system and the decision-making system. The perception system is typically broken down into numerous subsystems that are in charge of functions like self-driving car localization, mapping of static obstacles, tracking of moving obstacles, road mapping, and detection and identification of traffic signals, among others. This made autonomous parking possible for self-driving vehicles.

Table 2.1: Mentioned Methods of Autonomous Parking

Author(s) Method/Approach
Lee et al. [10] LiDAR-based
Wang et al. [11] Bird’s eye view vision system based

Various versions and designs of autonomous parking systems for self-driving cars have been implemented and released, some commercially and some through research. One of these is the LiDAR-based automatic parking by Lee et al. [10], wherein an HDL-32E LiDAR is utilized to overcome the deficiencies of ultrasonic sensors and cameras. A paper [12] collating a review of literature on automatic parking discusses different theories involved in different designed automatic parking systems. These included visual perception, ultrasonic sensors and radar technology, path planning, control algorithms based on fuzzy theory, neural networks, image processing and recognition technology, digital signal processing technology, etc. Moreover, Wang et al. [11] introduce an automatic parking method based on a bird's eye view vision system.

The mentioned works may have different approaches to creating an effective autonomous parking system, but they share the same objective, which is to accomplish parking tasks efficiently and safely without a driver, enhance driving comfort, and significantly lower the risk of parking accidents [13].

2.2.2 Parking Availability Recognition and Prediction

Locating an available parking space has been an inconvenient and time-wasting duty for any driver, especially considering the growing number of vehicles as time passes. To solve this problem, with the help of advanced technologies, parking space availability recognition has been implemented. The recognition of vacant parking spaces is one of the crucial elements for creating a fully automated parking help system. Different approaches to this have been designed and implemented utilizing various devices and algorithms.

Lee and Seo [14] introduced an algorithm for camera-based parking slot availability recognition that was based on slot-context analysis. This algorithm included a slot-validation step that probabilistically identifies multiple slot contexts, offering greater adaptability for irregular patterns. The robustness of the proposed algorithm is demonstrated through simulations and real-world vehicle-level experiments conducted in diverse conditions. Another approach by Caicedo et al. [15] is based on a calibrated discrete choice model for selecting parking alternatives. The paper proposes a methodology for predicting real-time parking space availability in intelligent parking reservation (IPR) architectures, a technology-driven system that enables vehicles to book parking spaces in advance utilizing a variety of digital tools and platforms. This system uses three subroutines to predict parking availability, distribute simulated parking demand, and anticipate future departures. A work by Zheng et al. [16] aims to forecast the availability rate of parking spaces. It aims to provide a solution to the key obstacles to parking availability prediction, which are the accuracy of long-term predictions, the interaction of parking lots in an area, and how user activities affect parking availability. Algorithms for modelling the occupancy rate and for availability prediction (i.e., regression tree, support vector regression, and neural networks) were described.

Table 2.2: Mentioned Parking Availability Recognition/Prediction Approaches

Author(s) Method/Approach
Lee and Seo [14] Camera-based slot-context analysis
Caicedo et al. [15] Based on a calibrated discrete choice model
Zheng et al. [16] Occupancy rate modelling

The significance of these works is how they contribute to drivers. When parking slot availability information is not disclosed in advance, drivers may move slowly and waste time searching for available on-street parking spaces [17]. Therefore, a system that predicts the availability of parking spaces is essential.

2.2.3 Drone-Assisted Systems

Unmanned aerial systems (UASs), also referred to as drones or unmanned aerial


vehicles (UAVs), are the aircrafts that have the ability to fly without a pilot and
passengers [18]. They are mostly remote-controlled, but as technology advances, drones
have been modified to achieve a lesser human intervention for the drone controls. With
that said, autonomous drones have been manufactured. A paper

[19] discusses the science, technology, and the future of autonomous drones. It states
how these drones could have a significant impact on everyday activities including
transportation, communication, agriculture, disaster preparedness, and environmental
protection. The application of drones to the aforementioned areas will be referred to as
drone-assisted systems.

One example of where drones are most widely used is in monitoring or surveillance systems. An intelligent drone-based surveillance system by Sarkar et al. [20] utilizes a DJI Matrice 100 drone with a Zenmuse camera with 7x zoom to capture images, scan the environment, and detect obstacles in real time. This work was conducted with the objective of developing a UAV-based smart parking lot monitoring system and license plate detection, with the utilization of a computer. Drones were also employed in a study by Dasilva et al. [21] as programmable aerial eyes for an effective and economical surveillance system. Illegal parking detection was the main objective of this work, specifically detecting unregistered vehicles that were unauthorized to park in reserved lots at an institutional area (i.e., Barry University). Aside from surveillance systems, an article [22] introduces another drone-assisted system that is primarily used in intelligent transportation systems (ITS),

Table 2.3: Mentioned Drone-Assisted Systems

Author(s) Type of System


Sarkar et al. [20] Surveillance system
Dasilva et al. [21] University surveillance system
Shi et al. [22] Vehicular networks or intelligent transportation system
Saputro et al. [23] Hybrid communication system for mobile roadside units

which is the Drone-Assisted Vehicular Networks (DAVN). It makes connections for vehicles available everywhere by effectively fusing the networking and communication technologies of drones and connected vehicles. In relation to that, Saputro et al. [23] designed a secure drone-based hybrid communication system for mobile roadside units (RSUs) to take emergency situations into account for intelligent transportation systems. Integration of drones into existing infrastructures results in more efficient and secure systems.

Chapter 3

RESEARCH METHODOLOGY

In this chapter, the research methods utilized were discussed and elaborated. This includes the system's block diagram, flowcharts, software and hardware structures, and the description of experiments to test the effectiveness of the proposed algorithm.

3.1 Proposed Method

Figure 3.1: Drone Algorithm Block Diagram

Figure 3.1 shows the two-way communication system between the drone's algorithm and the self-driving AI. During the initiation stage of the device, the microprocessor prioritizes processing the drone's flight pattern sent from the self-driving AI's GPS via a dedicated 5G wireless connection. The AI will then display the parking lot area with the drone's flight path. As the drone follows the set path, the microprocessor will stream the video feed in real time to the self-driving vehicle's AI. The microprocessor will then construct a coordinated map leading towards the designated area, which will assist the navigation system of the vehicle. As for the drone's health status, it will be continuously monitored by the microprocessor and shared with the vehicle's display.

Figure 3.2: System Process for Drone Algorithm

Figure 3.2 shows the operation that initiates the parking space detection and navigation process, commencing with drone flight initialization and precise coordinate generation for navigation. The drone is deployed to scan and map the parking lot, simultaneously sharing its live video feed via a dedicated 5G wireless connection with the self-driving vehicle's AI system. The AI processes the live video feed, actively identifying vacant parking spaces by analyzing the images in real time. It distinguishes between occupied and vacant spaces, assigning labels for navigation guidance. The drone generates navigation instructions for the vehicle, facilitating its journey to the designated parking spot. Successful parking is achieved when the vehicle reaches its spot unobstructed by other vehicles, culminating the operation. This innovative system leverages drone technology, a dedicated 5G connection, and real-time video processing by the vehicle's AI to optimize parking space utilization in dynamic environments.

Figure 3.3: System Process for Hough Line Transform Algorithm

Figure 3.3 shows the real-time Hough Line Transform algorithm, initiated by initializing video capture and creating a continuous loop for processing frames in real time. Each frame is captured and subjected to preprocessing steps, including grayscale conversion, Gaussian blur, and Canny edge detection, aimed at enhancing line detection accuracy. The core of the algorithm employs the Hough Line Transform, expressed in code as lines = cv2.HoughLinesP(edges, 1, np.pi/180, threshold=50, minLineLength=100, maxLineGap=10), which identifies line segments in the image. These line segments are drawn on the original frame using cv2.line. The resulting frame, showcasing the detected lines, is displayed in real time through cv2.imshow. The algorithm continuously checks for key presses, providing a means to break the loop, and finally releases resources, ensuring the seamless execution of the real-time Hough Line Transform. A sketch of this loop follows.
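The following is a minimal sketch of the loop described above, assuming an OpenCV-accessible video source; the camera index, blur kernel, and Canny thresholds are illustrative and would differ for the drone's actual feed.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # 0 is an illustrative camera index; a drone stream URL could be used instead

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Preprocessing: grayscale conversion, Gaussian blur, Canny edge detection.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Probabilistic Hough Line Transform, as described for Figure 3.3.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=100, maxLineGap=10)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

    cv2.imshow("Detected lines", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # a key press breaks the loop
        break

cap.release()
cv2.destroyAllWindows()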

Figure 3.4: System Process for Drone Subroutine

Figure 3.4 shows the drone’s subroutine. Throughout the vehicle’s navigation, the
drone assumes the role of ongoing monitoring. It observes assigned parking spaces and,
if an incoming vehicle occupies an initially designated spot, the drone recalculates new
coordinates for guidance to the nearest available parking space within the same lot. This
process ensures efficient parking space allocation and real-time adaptability.

3.1.1 Hardware

This section discusses the hardware components to be utilized in this study.

DJI Mini 2 Drone

Figure 3.5: DJI Mini 2 Drone

Figure 3.5 shows the DJI Mini 2 Drone. Weighing approximately 249 grams and measuring 138 mm x 81 mm x 58 mm, its compact frame is equipped with a 12-megapixel camera for capturing photos and 720p video for live sharing. The drone boasts a maximum flight time of up to 31 minutes, withstands wind speeds of up to 8.5-10.5 meters per second, can fly at 16 meters per second in calm conditions, and can cover distances of around 15.7 km.

The DJI Mini 2 Drone has become a versatile tool in various fields, including research and development. The technology holds the potential to address intricate challenges in autonomous vehicle navigation, obstacle detection, and information gathering with regard to autonomous parking. In this study, the drone will serve as the camera or eye for the proposed algorithm and will assist the RC rover, which is a prototype of a sedan-type self-driving vehicle.

Raspberry Pi 4 Model B

Figure 3.6: Raspberry Pi 4 Model B

Figure 3.6 shows the Raspberry Pi 4 Model B. It is a powerful single-board computer that plays a pivotal role in this study as it serves as the interface for the self-driving prototype. With its quad-core ARM Cortex-A72 processor and options for varying memory configurations (up to 8GB RAM), the Raspberry Pi 4 delivers the computational horsepower necessary for real-time data processing and decision-making in autonomous driving scenarios. Its versatile I/O ports, including USB, HDMI, and GPIO pins, enable seamless connectivity to the self-driving car's sensors, cameras, and communication systems. Additionally, the Raspberry Pi's compatibility with various programming languages and software libraries, combined with its compact form factor, makes it an ideal hub for processing sensor data, executing algorithms, and facilitating real-time communication, ensuring the successful integration of the autonomous system.

The Python programming language is utilized in this study because Python is one of the most widely used programming languages in self-driving vehicles, alongside C and C++ [50]. In addition to that, Elon Musk's Tesla, one of the most famous manufacturers of self-driving vehicles, operates on an operating system built on the Python programming language [51]. Python is utilized so that the proposed algorithm remains compatible with an actual sedan-type self-driving vehicle, even though only a prototype is utilized in the experiments conducted.

Remote-controlled Rover

Figure 3.7: KEYESTUDIO Smart Car Robot

Figure 3.7 shows the KEYESTUDIO Smart Car Robot 4WD Programmable DIY Starter Kit, which serves as a prototype for the self-driving vehicle in this study. This kit provides a comprehensive platform to emulate the core functionality of a self-driving car, offering a 4-wheel drive chassis, motors, wheels, and sensors. By interfacing this kit with an autonomous system, control algorithms and navigation strategies can be developed and tested in a controlled and cost-effective environment. Its adaptability and compatibility with the Raspberry Pi 4 Model B enable the researchers to mimic the essential components and functionalities of a self-driving vehicle, allowing them to explore and validate autonomous driving technologies effectively.

3.1.2 Software

A Drone Algorithm Application is utilized in this study to achieve direct communication with the DJI Mini 2 Drone from the Raspberry Pi 4 Model B. Figures 3.8 and 3.9 show the graphical user interface of the application. It serves as an intuitive and user-friendly interface that consolidates all essential components and functions relevant to the study. It provides access to critical features, including drone control, data visualization, and real-time monitoring. Users can effortlessly interact with the autonomous system, enabling them to execute predefined flight missions, adjust parameters, and access vital telemetry data. The GUI streamlines data presentation and analysis, enhancing the efficiency of data interpretation and decision-making processes.

The GUI consists of four main buttons, namely, the “Flight Data” button, “Video Feed” button, “Controls” button, and “Help” button. Each main button displays its corresponding window, containing different function buttons, which opens as the user clicks the main button. Figure 3.8 shows the display when the “Video Feed” button is clicked, and Figure 3.9 shows the display when the “Controls” button is clicked.

Figure 3.8: GUI for Drone Algorithm (Default Window)

Figure 3.8 shows the default window of the application, which is the Video Feed
window. It displays a live video stream from the drone’s camera. The predefined
waypoints are also displayed, along with their corresponding longitude, latitude, altitude,

and distance from the starting point (i.e., Waypoint A).

Figure 3.9: GUI for Drone Algorithm (Controls Window)

Figure 3.9 shows the Controls window, which is utilized in controlling the drone. The “ASCEND” button is clicked for the drone to take flight and travel to the predefined location and path. On the other hand, the “LAND” button is clicked to land the drone.
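As a rough illustration of this control layout, the following is a minimal Tkinter sketch of a window with ASCEND and LAND buttons; the callback bodies are placeholders, since the actual application's drone-control code is not reproduced here.

import tkinter as tk

def ascend():
    # Placeholder: the real application would command the drone to take off
    # and follow the predefined path here.
    print("ASCEND command sent")

def land():
    # Placeholder: the real application would command the drone to land here.
    print("LAND command sent")

root = tk.Tk()
root.title("Drone Algorithm - Controls")

tk.Button(root, text="ASCEND", width=20, command=ascend).pack(padx=20, pady=10)
tk.Button(root, text="LAND", width=20, command=land).pack(padx=20, pady=10)

root.mainloop()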

3.2 Description of Experiments

This section contains all the details of the experiments, which includes the drone’s
battery consumption measurement, scanning accuracy test of the drone algorithm,
obstacle detection and tolerance test, and latency measurement.

3.2.1 Battery Consumption Measurement

The objective of the experiment is to measure the battery consumption rate of the DJI
Mini 2 drone while traveling from one waypoint to another, with varying altitude.

The materials required are:

1. DJI Mini 2 Drone

2. DJI Mini 2 Drone Controller

3. Mobile Phone installed with DJI Fly App

The procedure is as follows:

1. Prepare the DJI Mini 2 Drone, DJI Mini 2 Drone Controller, and a mobile phone
installed with DJI Fly App.

2. Assemble the drone’s controller by attaching its joysticks to their designated spots.

3. Place the mobile phone on the controller’s cellphone holder and plug the cord on
the mobile phone’s charging port, which is preferably a USB Type C. Use an
adapter if necessary.

4. Turn on the drone’s controller and open the DJI Fly App on the mobile phone

5. Turn on the DJI Mini 2 Drone. Make sure that its battery indicator lights are all
ON, which means the drone’s battery is full.

6. On the DJI Fly App, select DJI Mini 2 as the drone that will be utilized. Wait until the drone connects with the mobile phone and a live video stream from the drone appears.

7. Place the DJI Mini 2 on its initial location, labeled as Waypoint i.

8. Measure the drone’s initial battery percentage at Waypoint i. Record the value in
Table 3.1.

9. On the DJI Fly App, long press the “Take Off” button for the drone to take flight.

10. Manually control the drone and let it travel around the edges of the parking lot. The
corners of the parking lot will be labeled as the Waypoints.

11. As the drone hovers from one waypoint to another, measure and record the altitude in meters (h), battery percentage (BP) of the drone, and elapsed time (t) as it arrives at each waypoint or corner.
Record the values in Table 3.1.

Table 3.1: Battery Consumption Measurement

Waypoints h (m) BP (%) t (min) ∆h (m) ∆BP (%) ∆t (min) %BP/min

i — — — —
A
B
C
D
E
F
G
A (final)

12. When the drone returns to Waypoint A, land the drone manually. Measure the
drone’s final battery percentage at Waypoint A. Record the value in Table 3.1

13. Turn off the DJI Mini 2 Drone.

14. Close the DJI Fly App, remove the mobile phone from the controller, and
disassemble the joysticks.

15. Calculate the change in altitude (∆h) each time the drone travels from one waypoint
to another by using the formula:

∆h = h2 − h1 (3.1)

Where:

h1 = preceding altitude
h2 = altitude at the current waypoint

Record the values in Table 3.1.

16. Calculate the change in battery percentage (∆BP) each time the drone travels from
one waypoint to another by using the formula:

∆BP = BP2 − BP1 (3.2)

Where:

BP1 = preceding battery percentage
BP2 = battery percentage at the current waypoint

Record the values in Table 3.1.

17. Calculate the change in elapsed time (∆t) as the drone travels from one waypoint to
another by using the formula:

∆t = t2 − t1 (3.3)

Where:

t1 = preceding elapsed time
t2 = elapsed time at the current waypoint

Record the values in Table 3.1.

18. Calculate the decreasing rate of the battery percentage per minute (%BP/min)
as the drone travels from one waypoint to another by using the formula:

%BP/min = ∆BP / ∆t (3.4)

Where:

∆BP = change in battery percentage
∆t = change in time, in minutes

Record the values in Table 3.1. (A short computation sketch illustrating
Equations 3.1 to 3.4 is given after this procedure.)

19. Observe and compare how the varying altitude and elapsed time affect the rate at
which the drone’s battery percentage decreases.
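
As an illustration, a minimal Python sketch of the calculations in Equations 3.1 to 3.4 is given below. The waypoint readings in it are hypothetical values, not measured data.

# Sketch of the calculations in Equations 3.1 to 3.4 using hypothetical
# readings: (waypoint, altitude h in m, battery BP in %, elapsed time t in min).
readings = [
    ("i", 0, 100, 0.0),
    ("A", 7, 96, 1.2),
    ("B", 11, 95, 1.6),
]

for (_, h1, bp1, t1), (name, h2, bp2, t2) in zip(readings, readings[1:]):
    dh = h2 - h1       # Eq. 3.1: change in altitude
    dbp = bp2 - bp1    # Eq. 3.2: change in battery percentage
    dt = t2 - t1       # Eq. 3.3: change in elapsed time
    rate = dbp / dt    # Eq. 3.4: battery percentage change per minute
    print(f"{name}: dh = {dh:+} m, dBP = {dbp:+}%, dt = {dt:.2f} min, rate = {rate:.2f} %/min")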

3.2.2 Scanning Accuracy Test

The objective of the experiment is to determine the accuracy of the drone’s algorithm
and to assess the drone’s capability to scan its surroundings and collect necessary data for
autonomous navigation and decision-making.
The materials required are:

1. DJI Mini 2 Drone

2. Raspberry Pi 4 Model B

3. Modem for Wi-Fi connection

4. Monitor, keyboard, and mouse

The procedure is as follows:

1. Prepare the DJI Mini 2 Drone, Raspberry Pi 4 Model B, modem, monitor,


keyboard, and mouse.

2. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4
Model B.

3. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.

4. Connect the microcomputer to the designated modem’s Wi-Fi.

5. Click the Drone Algorithm Application that can be seen on the screen to display
the GUI.

6. Place the DJI Mini 2 Drone at its initial location and turn it on.

7. Connect the drone to the designated modem’s Wi-Fi.

8. Click the “ASCEND” button on the screen for the drone to take flight and travel to
the predefined location and path.

9. As the drone hovers, monitor the quality of the live video feed.

10. Check if the drone was able to recognize vacant parking spaces correctly or if there
were errors.

11. Record the number of vacant parking spaces the drone recognizes (PR) and the
number of actual vacant parking spaces (PA) that are visible in the live video feed in
Table 3.2. Observe and compare the recorded values.

12. Calculate the percentage error between the recognized (PR) and actual (PA) number of
vacant parking spaces by using the formula:

Percentage Error (%E) = (PA − PR) / PA (3.5)

Where:

PA = number of actual vacant parking spaces
PR = number of vacant parking spaces the drone recognizes

Record the values in Table 3.2. (A computation sketch for Equation 3.5 is given
after this procedure.)

13. Observe and record in Table 3.2 how frequently the drone incorrectly labeled an
occupied parking space as vacant.

Table 3.2: Scanning Accuracy Test

Trial PR PA %Error No. of Incorrect Labels


1
2
3
4
5
6
7
8
9
10
...
100

14. Click the “LAND” button on the screen to return the drone to its initial
location.

15. Repeat Procedures 8 to 14 until 100 trials are completed.

16. Turn off the DJI Mini 2 Drone.

17. Close the Drone Algorithm Application, turn off the microcomputer, and unplug
the power cord.

18. Observe the collected data to assess the drone’s capability to scan its surroundings
and collect necessary data for autonomous navigation and decision-making.
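
For illustration, a minimal Python sketch of the per-trial computation for Equation 3.5, together with a simple summary over trials, is given below. The trial values are hypothetical and only show how the recorded data could be processed; they are not results of this study.

# Sketch of the Equation 3.5 computation for hypothetical trials
# (PR = vacant spaces the drone recognizes, PA = actual vacant spaces).
trials = [
    {"PR": 9, "PA": 10, "incorrect_labels": 1},
    {"PR": 10, "PA": 10, "incorrect_labels": 0},
    {"PR": 7, "PA": 8, "incorrect_labels": 2},
]

errors = []
for i, t in enumerate(trials, start=1):
    pct_error = (t["PA"] - t["PR"]) / t["PA"] * 100  # Eq. 3.5, expressed in percent
    errors.append(pct_error)
    print(f"Trial {i}: %Error = {pct_error:.1f}%, incorrect labels = {t['incorrect_labels']}")

print(f"Mean %Error over {len(errors)} trials: {sum(errors) / len(errors):.1f}%")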

3.2.3 Obstacle Detection and Tolerance Test

The objective of the experiment is to assess the ability of the drone, equipped with the
proposed algorithm, to effectively navigate the self-driving vehicle while responding to
obstacles.
The materials required are:

1. DJI Mini 2 Drone

2. RC Rover

3. Raspberry Pi 4 Model B

4. Modem for Wi-Fi connection

5. Objects for obstacle scenarios

6. Monitor, keyboard, and mouse

The procedure is as follows:

1. Prepare the DJI Mini 2 Drone, RC Rover, Raspberry Pi 4 Model B, modem, objects
for obstacle scenarios, monitor, keyboard, and mouse.

2. Randomly place the objects for obstacle scenarios within the predefined location
that the drone and RC Rover will travel through.

3. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4
Model B.

4. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.

5. Connect the microcomputer to the designated modem’s Wi-Fi.

6. Click the Drone Algorithm Application that can be seen on the screen to display
the GUI.

7. Place the DJI Mini 2 Drone at its initial location and turn it on.

8. Connect the drone to the designated modem’s Wi-Fi.

9. Place and position the RC Rover at its initial location.

10. Click the “ASCEND” button on the screen for the drone to take flight and travel to
the predefined location and path.

11. Monitor the drone’s ability to detect and respond to obstacles in real time.

12. Record the number of detected obstacles (OD) and the number of actual
obstacles (OA) in Table 3.3. Observe and compare the recorded values.

13. Calculate the percentage error between the number of detected obstacles (OD)
and the actual number of obstacles (OA) by using the formula:

Percentage Error (%E) = (OA − OD) / OA (3.6)

Where:

OA = number of actual obstacles
OD = number of detected obstacles

Record the values in Table 3.3.

Table 3.3: Obstacle Detection and Tolerance Test

Trial OD OA %Error Is the route obstacle-free? (Yes/No)


1
2
3
4
5
6
7
8
9
10
...
100

14. When the drone is done scanning the predefined location and detecting obstacles,
a map showing a route from the rover’s initial location to a vacant parking space
will be displayed.
Control the rover to trace and follow the route on the map.

15. Monitor and record in Table 3.3 whether the given route was obstacle-free. (A sketch of one way this check could be automated is given after this procedure.)

16. Click the “LAND” button on the screen to return the drone to its initial
location.

17. Repeat Procedures 9 to 16 until 100 trials are completed.

18. Turn off the DJI Mini 2 Drone.

19. Close the Drone Algorithm Application, turn off the microcomputer, and unplug
the power cord.

20. Remove the objects for obstacle scenarios from the experiment setting.

21. Observe the collected data to assess the drone’s obstacle tolerance and its ability to
navigate safely in dynamic environments.
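
The procedure does not fix a numerical criterion for judging whether a route is obstacle-free. As one possible illustration only, the following Python sketch flags a route as blocked if any detected obstacle lies within a chosen clearance radius of any route segment; the coordinates, units, and clearance value are hypothetical assumptions, not the mapping convention actually used by the system.

import math

def point_segment_distance(p, a, b):
    # Shortest distance from point p to the 2-D segment a-b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                      # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))                    # clamp to the segment
    cx, cy = ax + t * dx, ay + t * dy            # closest point on the segment
    return math.hypot(px - cx, py - cy)

def route_is_obstacle_free(route, obstacles, clearance):
    # The route is obstacle-free if every obstacle is farther than the
    # clearance distance from every segment of the route.
    return all(
        point_segment_distance(obs, a, b) > clearance
        for a, b in zip(route, route[1:])
        for obs in obstacles
    )

route = [(0, 0), (5, 0), (5, 8)]    # rover start -> turn -> vacant space (hypothetical)
obstacles = [(2.5, 0.4), (7, 7)]    # detected obstacle positions (hypothetical)
print(route_is_obstacle_free(route, obstacles, clearance=1.0))  # False: the first obstacle is too close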

3.2.4 Latency Measurement

The objective of the experiment is to measure the data transmission time of the drone,
equipped with the proposed algorithm, with respect to distance.
The materials required are:

1. DJI Mini 2 Drone

2. Raspberry Pi 4 Model B

3. Modem for Wi-Fi connection

4. Monitor, keyboard, and mouse

The procedure is as follows:

1. Prepare the DJI Mini 2 Drone, Raspberry Pi 4 Model B, modem, monitor,


keyboard, and mouse.

2. Connect the monitor, keyboard, and mouse to the USB ports of the Raspberry Pi 4
Model B.

3. Plug in the power cord of the Raspberry Pi 4 Model B to turn on the microcomputer.

4. Connect the microcomputer to the designated modem’s Wi-Fi.

5. Click the Drone Algorithm Application that can be seen on the screen to display
the GUI.

6. Place the DJI Mini 2 Drone at its initial location and turn it on.

7. Connect the drone to the designated modem’s Wi-Fi.

8. Click the “ASCEND” button on the screen for the drone to take flight and travel to
the predefined location and path.

9. The drone is programmed to send data to the microcomputer for every 10 meters it
travels. As the drone hovers, the distance travelled and the delay of the most recently
received data are displayed on the screen.
Record the latency (in milliseconds) of the data transmission in Table 3.4 at every
10 meters the drone has travelled. (A sketch of one way this latency could be
measured is given after this procedure.)

Table 3.4: Latency Measurement

Distance (meters) Latency (milliseconds)


10
20
30
40
50
60
70
80
90
100

10. Repeat Procedure 9 until the drone has travelled 100 meters.

11. Click the “LAND” button on the screen to return the drone to its initial
location.

12. Turn off the DJI Mini 2 Drone.

13. Close the Drone Algorithm Application, turn off the microcomputer, and unplug
the power cord.

14. Observe the collected data to assess the quality, reliability, and operational limits of
the drone’s wireless communication system.
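
The exact mechanism by which the displayed delay is computed is not detailed here. As an illustration only, the following Python sketch shows one way per-message latency could be measured: each message carries its send timestamp and the receiver logs the difference on arrival. The demo runs the sender and receiver on the same machine over UDP loopback, so the address, port, and clock-synchronization details are assumptions rather than the actual drone-to-microcomputer link (which would require synchronized clocks, e.g., via NTP).

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999   # placeholder address and port

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((HOST, PORT))
    for _ in range(3):
        data, _ = sock.recvfrom(1024)
        distance_m, t_sent = data.decode().split(",")
        latency_ms = (time.time() - float(t_sent)) * 1000.0
        print(f"{distance_m} m travelled: latency = {latency_ms:.2f} ms")
    sock.close()

threading.Thread(target=receiver, daemon=True).start()
time.sleep(0.2)                                  # let the receiver bind first

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for distance in (10, 20, 30):                    # one message per 10 m travelled
    sender.sendto(f"{distance},{time.time()}".encode(), (HOST, PORT))
    time.sleep(0.1)
sender.close()
time.sleep(0.2)                                  # allow the receiver to finish printing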

Chapter 4

RESULTS AND DISCUSSIONS

In this chapter, the data and results from the four experiments described in the previous
chapter are tabulated and discussed individually.

4.1 Battery Consumption Measurement

In this experiment, the battery consumption of the drone while traveling from one
waypoint to another was measured and recorded, along with its altitude and elapsed time
as it arrives at each waypoint.

Table 4.1: Battery Consumption Measurement Data

Waypoints h BP t ∆h ∆BP ∆t %BP/min

i 0m 98% 0 min — — — —
A 7m 94% 1.15 min +7 m -4% 1.15 min -3.48 %/min
B 11 m 93% 1.53 min +4 m -1% 0.38 min -2.63 %/min
C 12 m 92% 2.03 min +1 m -1% 0.50 min -2.00 %/min
D 12 m 92% 2.25 min 0 m 0% 0.22 min 0.00 %/min
E 12 m 91% 2.55 min 0 m -1% 0.30 min -3.33 %/min
F 20 m 84% 4.05 min +8 m -7% 1.50 min -4.67 %/min
G 20 m 80% 5.10 min +1 m -4% 1.05 min -3.81 %/min
A (final) 0m 76% 6.13 min -20 m -4% 1.03 min -3.88 %/min

As shown in Table 4.1, the battery percentage gradually decreases as the drone travels
from one waypoint to another. The altitude was manually varied to observe how a change
in altitude affects the decreasing rate of the drone’s battery percentage. It was observed
that the greater the change in the drone’s altitude, the greater the rate at which the
battery percentage decreases per minute.
