Object & Potholes Detection To Control Car Speed Using IoT: Final Report
ABSTRACT
The quality of the roads on which automobiles are driven is essential to ensuring that every vehicle, manual or automatic, can complete its journey satisfactorily. Road defects such as potholes and speed bumps can lead to vehicle wear and even fatal traffic accidents. As a result, recognizing and characterizing these anomalies helps to reduce the likelihood of crashes and vehicle damage. The identification of street abnormalities is made more difficult by the fact that street images are intrinsically multivariate, containing significant amounts of duplicated data and heavily contaminated measurement noise. This study offers automatic color image processing of potholes on highways using a YOLO deep learning model, applied either to video frames or to photos taken with a smartphone camera. A lightweight architecture, composed of seven properly coordinated and interconnected layers, was chosen to make training and usage faster. Every pixel in the original image is used, with no resizing involved, and conventional stride and pooling procedures are applied to retain as much information as possible. De-hazing the incoming frames enhances the developed model's capability to identify potholes in foggy environments, and the detections are used to control the vehicle speed through an IoT model using a decision-making process.
Keywords: YOLO deep learning model, smartphone camera, IoT model, decision-making process.
TABLE OF CONTENTS

Acknowledgement
Abstract
List of Figures
1. Introduction
2. Literature survey
3. Scope of the project
4. Methodology
5. Details of design, working and processes
6. Result and applications
7. Conclusion and future scope
8. Appendix
9. References and bibliography
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
In recent years, automated systems have proliferated across numerous industries, with
technology playing a pivotal role in their growth. The advent of autonomous technology has
greatly simplified human life. The automation of transportation and surveillance systems has had
several positive outcomes. Highways are the backbone of any transportation system since they
connect its most important nodes. Potholes are a major issue for highway transportation networks, and autonomous systems must not put their users at risk. According to official statistics provided by the Indian government, 4,869 people lost their lives in accidents caused by potholes in 2015. This underlines the need for well-maintained roadways.
Globally, the COVID-19 pandemic has been devastating. Among the many sectors severely
impacted by the lockdowns is road maintenance. The situation on the roads has deteriorated
because of this. Hence, a system that can monitor the roads without human intervention is necessary. This study introduces a Deep Learning and Image Processing-based
approach to pothole detection and measurement estimation. Convolutional Neural Networks
have formed the basis for several novel object detection algorithms in recent years. In order to
find potholes, this article suggests using the YOLO (You Only Look Once) approach. After
training the YOLO algorithm with a custom dataset that contains dry and wet craters of varying
sizes and shapes, the results are evaluated using Intersection over Union (IoU) and mean average
precision (mAP). The model can recognize a wide variety of potholes with a reasonable degree of accuracy. In addition, the proposed image-processing pothole size estimator dramatically reduces the overall time required for road maintenance by providing reasonably accurate dimensions of the detected potholes through the application of triangular similarity.
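As a rough illustration of the triangular-similarity idea mentioned above, the sketch below converts a detected bounding-box width in pixels into an approximate physical width. The focal-length calibration values and the pothole distance are illustrative assumptions, not values taken from this project.

def focal_length_px(known_distance_cm, known_width_cm, width_in_pixels):
    # Calibration: F = (P * D) / W for a reference object of known size.
    return (width_in_pixels * known_distance_cm) / known_width_cm

def real_width_cm(focal_px, distance_cm, bbox_width_px):
    # Triangular similarity inverted: W = (P * D) / F for a detected pothole.
    return (bbox_width_px * distance_cm) / focal_px

# Calibrate with a 30 cm reference object seen 120 px wide at 300 cm (assumed values),
# then estimate the width of a pothole whose bounding box is 250 px wide at about 4 m.
F = focal_length_px(known_distance_cm=300.0, known_width_cm=30.0, width_in_pixels=120.0)
print(f"Estimated pothole width: {real_width_cm(F, 400.0, 250.0):.1f} cm")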
CHAPTER 2
LITERATURE SURVEY
1] Yifan Pan et al. present an approach for detecting asphalt road pavement distresses from UAV multispectral imagery (MSI) using SVM, artificial neural network, and RF learning algorithms.
The case study in the suburb of Shihezi City indicated that spatial features (i.e., texture and
geometry) contributed much more to the accuracy of the cracks and potholes detection than the
spectral ones for both RGB and 12-band MSI. The three types of features extracted from the UAV MSI achieve the best classification accuracy and the shortest running time when an 18-tree RF classifier is used.
The overall accuracy of the classification of cracks, potholes, and nondistressed pavements is
98.3% with the UAV MSI. In addition to the feature set and classifier, the spatial resolution of
pavement imagery is also a decisive factor for the performance of the RF classifier. The
comparison study of the simulated multiple resolution imagery showed that the spatial resolution
should not exceed the minimum scale of pavement distress objects; otherwise some small
damages (i.e., cracks) may be missed even in the segmentation procedure. In conclusion, the
flexible UAV platform configured with multispectral remote sensors provides a valuable tool for monitoring asphalt pavement condition.
Future Work/Limitations: In future work, more UAV pavement images of different
road areas and segments could be used to further evaluate the performance of these models and
parameters on the detection of potholes and cracks. Because of the spatial resolution limitation, the UAV pavement images used in the paper still cannot capture cracks whose width is less than 13.54 mm. Therefore, higher-resolution pavement images should be obtained to further increase the accuracy of pavement condition evaluation. In this paper, the authors focus only on two
common asphalt pavement damages, potholes and cracks. The next work will consider more
types of road surfaces and pavement damages, such as cement road, gravel road, rutting, and
road roughness. Other remote sensing data including LiDAR and radar by UAV also have a great
potential on the pavement condition monitoring. For example, LiDAR can directly acquire the
elevation information of the road surface. By means of it, the depth of potholes, cracks, and
rutting can be characterized directly by the LiDAR data. Additionally, other advanced learning
algorithms, such as convolutional neural networks, could also be introduced into pavement distress detection. Moreover, integrating the software and individually coded algorithms into a single tool could speed up the detection of pavement distresses in the future.
2] Jihad Dib et al. observe that research has been focusing extensively on machine vision more than on other techniques. Every technique has its own limitations and weaknesses that could pose a significant risk to service users, making it unusable for real-time navigation of autonomous vehicles and platforms, and this limitation has long restrained the capability of such vehicles. Finding a complete system that provides autonomous avoidance of negative obstacles has always been a challenging task because of the stochastic nature of pavements and footpaths: potholes and cracks exist in different shapes and may be filled with water, ice, or dirt, or may reflect strong light. Every such case limits a certain detection system, ranging from RGB cameras, for which water/ice, low light, and strong light are limitations, to thermal cameras, for which high temperature is a limitation, to reflective lasers, for which reflection caused by water/ice is a limitation. A further constraint lies
in the processing technique or power needed, as some systems require a heavy amount of
computation, while others require a large amount of power in order to power the
sensor/processor. An additional issue is the real-time functionality as not all systems can be used
in real-time and this task should be fulfilled in real-time with a very low runtime as the detection
should be as close to instant as possible in order to provide an accurate avoidance. Finally, the
size of the system could be in some cases a limitation as some systems require larger equipment
or larger power source which could not be mounted onto the autonomous vehicle. This could be
managed in most of the cases but it has to be considered as an important factor for this task. One
additional limitation could be the ability to mass-produce the system which in most cases could
be managed but the cost of the system might be high for the users.
3] Yujie Wu et al. propose a video object detection algorithm guided by object blur degree evaluation. The authors improve the weight assignment for the aggregated frames using a blur prior. In particular, a blur mapping network is introduced to label each pixel as either blurred or non-blurred. Because the authors only care about the blur degree of the objects and not the background, a saliency detection network is adopted to focus on the objects. Calibrated by the saliency map, a calibrated blur map focused on the object blur degree is obtained to calculate the weight for each
frame. The extensive experiments demonstrate that the proposed method outperforms state-of-
the-art video object detection algorithms with affordable increased computation.
Future Work/Limitations: However, the blur mapping and saliency networks may fail in unusual cases where the objects are too small to be distinguished, which can be addressed in future work. Furthermore, another important degrading factor in video object detection is rare poses; the authors plan to design a special module to tackle rare poses in the future, which would benefit video object detection accuracy.
4] Min Qiao et al. propose a double-branch network for salient object detection (SOD). To improve detection accuracy and efficiency, the authors propose the inclusion of EPEM, CAM and FFM components. The EPEM provides multiscale edge features for the edge branch to improve accuracy.
Experimental results show that the EPEM yields more accurate results for complex objects than
single-scale methods. Extracting features from two branches may cause information redundancy
and require expensive computations. The CAM can suppress redundant information and raise the
detection efficiency. The FFM can combine the information of the two branches and improve the
accuracy through feedback. In this paper, the authors propose a fine version and a rough version. The
fine version can greatly improve the detection accuracy. The rough version can greatly improve
the detection speed with little reduction in accuracy. In addition, our method obtains better
results in detecting objects with complex shapes. Evaluating the proposed method on 9 datasets, the authors find that it outperforms 8 state-of-the-art methods under a variety of evaluation metrics and runs at a real-time speed of 73 FPS.
6] Amel Ali Alhussan et al. proposed a new approach for classifying potholes and
plain roads. The proposed approach is based on employing the deep network ResNet-50 for
extracting high-level features from the input image. In addition, the significant features are
selected using the binary dipper throated optimization algorithm. On the other hand, the dataset
is balanced using a proposed optimized SMOTE algorithm. Moreover, the random forest
classifier is employed for classifying the selected features. This classifier is optimized using the
continuous dipper throated optimization algorithm to achieve the best performance. To prove the
superiority of the proposed approach, several experiments were conducted to compare the
proposed approach to other optimization methods and three classifiers.
Future Work/Limitations: In addition, a statistical analysis is performed to
assess the stability and efficiency of the proposed approach. The results emphasized the
effectiveness and superiority of the proposed approach.
7] Dong Chen et al. note that high-quality road conditions enhance the comfort and safety of the driver and passengers. A road surface without obvious defects allows vehicles to travel as fast as reasonably possible, supporting the efficiency of everyday life and production. However, temperature, external forces, overloading, and human damage all degrade roads, and this poses a significant safety risk to vehicles and motorists if the damage cannot be observed and resolved in time. Although researchers in the past have proposed various methods, these are inadequate in spatial and temporal resolution and in efficiency. In this context, the authors propose installing acceleration sensors on the wheel steering lever of the vehicle to capture the
structural vibrations caused by the contact between the vehicle wheels and the road surface. The
road roughness is judged by analyzing the amplitude intensity at 60–90 Hz frequency, and real-
time analysis and Spatio-temporal information are fused to form a set of rapid reflectometry
methods for road quality via the IoT platform. The cumulative analysis of abnormal trajectory
information on the server-side provides a set of road defect information maps with a real-time
resolution by Web-GIS. It provides a set of reference methods for road management in smart
cities and also provides a reference for future vehicle-road cooperation for intelligent driving. In
the subsequent research, the authors will explore the capability of the method to invert other road
features at other frequencies. Finally, an attempt has been made to use Google Plus code for
geospatial location description in smart city applications in this context. The use of this class of
GeoSOT-based (GEOgraphical coordinates Subdividing grid with one-dimension integral coding
On 2n-Tree) methods helps to discretize geospatial space at multiple scales, and physical objects
can be quickly associated with data within the same area by means of unique integer-type codes
for each block, like the Google Plus Code corresponding to the road vibration information under
the block in this context. GeoSOT’s representation of spatial location is cutting-edge.
Future Work/Limitations: In the future, in addition to Google Plus Code, Beidou Grid
Code will also become more mature, and all observation data obtained based on sensor web and
remote sensing methods can be expressed by Beidou Grid Code, which brings excellent
development space for the cost, power consumption, and computational pressure of the system of
geospatial observation.
8] Darryn Anton Jordan et al. provided an overview of the PDCL system and
presented results from initial measurements. A detailed comparison of the error in depth map
recovery for several video encoders revealed that Nvidia’s H.265 encoder should be used for all
future measurement campaigns. Furthermore, the disparity maps produced through depth map
flattening show great promise for CNN-based detection. Unfortunately, however, the RDMs
produced by the radar proved ineffective in the detection of potholes.
Future Work/Limitations: Future work includes the implementation of plane detection
methods for depth maps, such as random sample consensus (RANSAC). Alternative mappings
for depth map colorization should be investigated to potentially improve dynamic range. Finally,
the temporal performance of the video encoders should be analyzed for a more comprehensive
comparison.
9] Kazutoshi Akita et al. proposed a method for estimating object-scale proposals
for scale-optimized object detection using SR. With images that are rescaled by the appropriate
SR scaling factor, an object detector can work better than in the original size image. A variety of
experimental results validated that our proposed RDSP network can capture the rough locations
of objects depending on contextual information. The authors qualitatively and quantitatively verified
that object detectors using our scale proposals outperform those without the scale proposals.
Future Work/Limitations: Since the proposed method can also be applied to many
other computer vision tasks (e.g., human pose estimation, face detection, and human tracking)
that capture tiny objects, the authors would like to extend their proposals to these tasks in future work.
10] Gerasimos Arvanitis et al. propose a cooperative obstacle detection and rendering
scheme that utilizes LiDAR data and driving patterns to identify obstacles within the road range.
Our system allows for information sharing between connected vehicles, enabling drivers to be
notified about incoming potholes even when there is no direct line-of-sight. This cooperative
driving scheme increases situational awareness and reduces the risk of accidents caused by
unexpected obstacles. Our method is based on the analysis of point clouds which is challenged
by the lack of benchmark datasets obtained from LiDAR devices. To overcome this problem,
the authors created their own synthetic dataset and added it to the maps of the CARLA simulator,
thereby creating realistic driving environments. The comparison of our method with other state-
of-the-art approaches, regarding the accuracy of pothole detection in real datasets, has shown its
effectiveness providing very promising outcomes. Our proposed approach can be extended to
cover a wider range of road hazards beyond potholes, such as debris or uneven road surfaces. By
utilizing the same LiDAR sensor technology, these hazards can be detected and similar AR visualizations provided to drivers. Moreover, the authors plan to investigate the integration of other sensing
modalities, such as RGB-D cameras, which could provide additional visual information to
improve the accuracy of obstacle detection and enhance the situational awareness of drivers.
Future Work/Limitations: In addition, the methodology can be further improved by incorporating machine learning algorithms to enhance the accuracy and efficiency of obstacle detection and classification. The authors plan to explore the use of deep learning models, which have shown promising results in various computer vision tasks, to enhance their point cloud processing system. Lastly, the authors envision that the proposed approach could be applied beyond personal
vehicles, such as in autonomous vehicles and public transportation systems. By leveraging V2X
communication, our cooperative obstacle detection and rendering scheme could provide a safer
driving experience for all road users.
12] Mingu Jeong et al. proposed a system to detect abandoned objects by using
background matting with Dense ASPP. The proposed system was able to reduce false positives
through Pre-Processing, Abandoned Object Recognition(AOR), and Abandoned Object Decision
by Feature Correlation(AODFC). to solve the errors of the existing abandoned object detection
methods. In Pre-Processing, image normalization is used to address issues such as
communication noise, and illumination changes are identified to eliminate false positives. The
AOR system detects abandoned objects by using background matting and removing human
object information to overcome the difficulty of detecting abandoned objects due to occlusion
and similar objects. Finally, the AODFC System detects the final abandoned object through
feature correlation analysis based on the abandoned object coordinates found in the AOR to
reduce false positives. In addressing the challenges posed by background subtraction in
abandoned object detection, the proposed research introduces a novel approach that significantly
mitigates common issues such as sensitivity to lighting variations, the complexity of background
modeling, and the handling of shadows and complex backgrounds. Traditional methods often
falter when faced with these dynamic environmental factors, leading to decreased accuracy and
reliability. The proposed method leverages advanced deep learning techniques to effectively
overcome these obstacles, enhancing the system’s ability to distinguish abandoned objects from
their surroundings with greater precision.
Future Work/Limitations: By eliminating the aforementioned problems, the proposed
system not only improves the detection of abandoned objects but also sets a foundation for future
advancements in the field. The use of deep learning offers a flexible and robust framework
capable of adapting to the nuanced variations of outdoor environments, thus significantly
reducing false positives and negatives that have plagued previous methodologies. Looking
forward, the authors acknowledge the potential for further refinement and optimization of the proposed system. Future research directions include stabilizing and streamlining the deep learning architecture to enhance system performance and efficiency. Additionally, the authors
advocate for the expansion of the proposed method to encompass the detection of hazardous
objects such as explosives and drugs. This progression would mark a significant step forward in
the development of comprehensive security and surveillance systems capable of addressing a
wider range of threats.
13] Keong-Hun Choi et al. note that in object detection by supervised learning, the class type and location of each object are used as labels for every training image, and the same labels are required in order to detect objects in an environment different from the training environment. In this paper, the authors proposed a reinforcement learning-based object detection
method that only requires images and the number of objects on images as labels. A transformer-
based object proposal model, an evaluation model using the corresponding area, and a reward
configuration are proposed. The model for evaluating the presented object candidate area was
trained based on supervised learning. An existing object detection dataset was used for training.
Experimental results show that the proposed algorithm can cope with unseen environments using
labels of images and the number of objects on images. However, the proposed method has the
following areas for improvement. An object evaluation model is trained using the existing object
detection dataset in supervised learning. Additionally, the proposed method only differentiates
whether the detected region is an object.
Future Work/Limitations: For future research, the authors plan to extend the proposed algorithm in two ways. First, they will consider a direction that does not require the number of objects as the label. Second, they want to add object-type classification to the proposed method.
14] Hanjun Wang et al. propose an improved YOLOv8n model for the task of detecting foreign objects in transmission lines. This model effectively improves detection accuracy while maintaining a fast detection speed. The authors made two key improvements to the baseline model: first, they introduced the ECA attention mechanism, applying it to feature maps of different sizes to improve the dependency relationships between channels. Second, a small-object detection layer was added to the detection head, enhancing the model's ability to recognize small targets and reducing the impact of shooting distance on the detection task. The improved YOLOv8n model significantly enhances performance on this task, with improved mAP compared to the baseline YOLOv8n model, while the detection speed remains high and there is no significant decrease in per-image inference speed compared to the baseline. This work has broad application scenarios, which can greatly improve the
efficiency of foreign object detection in transmission lines, and can be applied to other aspects of
power inspection through transfer learning.
CHAPTER 3
SCOPE OF THE PROJECT
3.1 Viso.ai
Potholes cost American drivers an estimated $3 billion annually in vehicle repairs and
other expenses. This number does not include the cost of accidents caused by potholes. In a
survey by IAM Roadsmart (formerly the Institute of Advanced Motorists), nine out of ten of the
2,000 drivers surveyed said that they were affected by potholes last year. More than half (54%)
said they had to swerve or brake sharply to avoid an impact with a pothole. Pothole detection is
an essential part of maintaining roads and ensuring safe driving conditions. It is a challenging
task that requires accurate detection and monitoring of road conditions in real-time. With
advancements in computer vision, the Internet of Things (IoT) and Artificial Intelligence (AI),
the video feeds of distributed cameras can be analyzed with deep learning models to inspect road
conditions with AI. Computer vision applications for pothole detection have a wide range of use
cases, including road maintenance, smart city, asset management, transportation, and road
management systems. The development of automated pothole detection systems has made it
possible to detect potholes faster, enabling timely repairs and minimizing the cost of road
maintenance.
Reference: https://viso.ai/application/pothole-detection/
Potholes Detection is my individual project for CASA0018 — Deep Learning for Sensor
Network, one of the courses in MSc Connected Environments from the Centre for Advanced
Spatial Analysis (CASA), The Bartlett, UCL. My experience has positioned me at the
intersection of the IoT solution, cloud technology and business analytics and my aim is to apply
my specialist knowledge to the digital twin solutions in the smart city industry. Road
infrastructure is playing an imperative role in achieving the United Nations’ Sustainable
Development Goal of providing access to safe, affordable, accessible and sustainable transport
systems for all, improving road safety, notably (United Nations, 2015).
Reference: https://vivian-ku.medium.com/real-time-potholes-detection-an-aiot-application-to-maintain-road-safety-and-facilitate-city-dbb7bccefe0e
CHAPTER 4
METHODOLOGY
The methodology for object and pothole detection to control car speed using IoT is developed under the waterfall model architecture, as shown in Figure 1 below. The sequential phases of the waterfall model, as applied to this project, are described below.
4.1 Requirement Analysis: Requirement analysis is done based on the following points:
Base paper for object and pothole detection to control car speed using IoT in a foggy environment
4.2 System Design: The object and pothole detection system to control car speed using IoT is designed using the following hardware and software.
Hardware Specification:
CPU : Intel Core i5
RAM : 8 GB
HDD : 500 GB
Microcontroller : Arduino UNO
Camera : 48 MP
Sensor : Ultrasonic sensor
Motor Driver : L298N
Software Specification:
4.3 Implementation:
The proposed system is designed using the following modules; a minimal code sketch of the frame-handling steps is given after the lists.
4.3.1 Module A: Frame Pre-processing
Frame Collection
Frame Object Formation
Frame Resizing
Image De-hazing
4.3.2 Module B: YOLO
Network Layer
Optimization Function
Dense Layer
Neuron Segmentation
Pothole Identification
Segregation of potholes
Micro Controller activation to slow down the vehicle wheel
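The sketch below, assuming OpenCV is used for frame handling, shows how Module A's frame collection, resizing, and image de-hazing steps could be chained. The CLAHE-based contrast boost stands in for the de-hazing filter, and the video file name and frame size are illustrative assumptions.

import cv2

def dehaze(frame):
    # Boost local contrast on the luminance channel as a crude de-haze step.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def frame_stream(video_path="road.mp4", size=(640, 640)):
    # Frame Collection -> Frame Resizing -> Image De-hazing, yielded one frame at a time.
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield dehaze(cv2.resize(frame, size))
    cap.release()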
The developed software is deployed on a laptop with the above-mentioned configuration, with the help of the listed software. Since the software has been tested for quick recovery, maintenance of the system is not a challenging task. Moreover, the tools and software used are open source, so there is no question of licensing the required software.
CHAPTER 5
DETAILS OF DESIGN, WORKING AND PROCESSES
The Use Case Diagram depicts the various use cases performed by the user in the proposed model. The use cases include live image capture, dehazing filter, preprocessing, labelling, YOLO detection, decision making, alert generation, and slowing down the vehicle.
Step 1: Training - The system uses the image to locate the pothole. The first step of the process is to locate the pothole in the image so that an alert can be generated. The pothole recognition module uses the YOLOv8 approach to identify potholes. This model must be trained before it can be applied to pothole recognition. The initial stages of training involve downloading the Roboflow dataset and installing the ultralytics package for the YOLOv8 model. Visit https://public.roboflow.com/object-detection/pothole to obtain the pothole recognition dataset and then connect Roboflow to your API key. The downloaded dataset is then scanned to obtain the directory's file list, which in turn is used to count the files in the directory.
A total of 465 files are utilized for training. After the files are sorted alphabetically, they are moved to the destination directory and shuffled. After recalculating the directory counts, there are 419 files in the training directory and 179 in the other directory. Once the Roboflow data has been incorporated and the pothole dataset shuffled, the YOLOv8 model can be trained for the object detection task. The detection model is trained for 200 epochs with pretrained weights, using a batch size of 32 and an image size of 640. The project runs are archived in a zip file in the given directory once the YOLOv8 model has been trained.
YOLOv8, one of the YOLO variants, is a Convolutional Neural Network (CNN). Through the innovative and efficient application of CNN components, it accomplishes object identification with enhanced accuracy. YOLO employs max pooling layers, batch normalization and dropout, 24 parameterized convolutional layers, and other techniques to regularize the model and avoid overfitting. The top of the model consists of two fully connected layers. Following decomposition and reduction by the initial convolutional layers, the channels are max-pooled with a 2x2 kernel and a stride of 2. Max pooling is applied uniformly across all levels of the model. The kernel sizes of the subsequent convolutional layers grow progressively larger. The ReLU activation function is used throughout this layer architecture; all layers use the same activation function except the fully connected layers, which employ a linear activation function. Training produces a .pt file, YOLOv8's trained weights file, which is then used to report whether a pothole is present. The same method is applied to the road humps dataset, which can be found at https://universe.roboflow.com/detection-system/humps-bumps-potholes-detection/dataset/8. The YOLOv8 model is described in depth in Table 2.
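In outline, the training run described above could look like the following sketch using the roboflow and ultralytics Python packages. The API key and the workspace/project/version identifiers are placeholders to be replaced with the values shown on the Roboflow dataset page; only the epoch count (200), batch size (32), and image size (640) come from this report.

from roboflow import Roboflow          # pip install roboflow
from ultralytics import YOLO           # pip install ultralytics

# Download the pothole dataset in YOLOv8 format (identifiers are placeholders;
# use the workspace/project/version shown on the Roboflow dataset page).
rf = Roboflow(api_key="YOUR_API_KEY")
dataset = rf.workspace("your-workspace").project("pothole").version(1).download("yolov8")

# Train YOLOv8 with the settings quoted above: 200 epochs, batch size 32,
# image size 640; the trained weights land in runs/detect/train/weights/best.pt.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=200, batch=32, imgsz=640)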
Step 2: Testing - Here, video input containing potholes is fed to the system and frames are extracted in real time. A defog filter is applied to the extracted live frames by re-arranging the alpha filter. Then the trained model file (.pt) is applied to the live streaming frames in order to locate the potholes. From these detections, the upper-left corner locations of the bounding rectangles are determined. The red and white markers for road humps and potholes are visible in the output, and the stability of the checked frames can also be observed. Potholes marked in white have lower confidence levels, indicating that they are less pronounced, whereas those marked in red have higher confidence.
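A minimal sketch of this testing loop is given below. The alpha/beta values of the simple contrast-based defog, the 0.5 confidence threshold separating red and white markers, and the video file name are assumptions for illustration.

import cv2
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")   # trained .pt file from Step 1

cap = cv2.VideoCapture("foggy_road.mp4")            # or 0 for a live camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 640))
    frame = cv2.convertScaleAbs(frame, alpha=1.4, beta=10)   # simple alpha-based defog
    for box in model(frame, verbose=False)[0].boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])      # upper-left / lower-right corners
        conf = float(box.conf[0])
        colour = (0, 0, 255) if conf >= 0.5 else (255, 255, 255)  # red vs white marker
        cv2.rectangle(frame, (x1, y1), (x2, y2), colour, 2)
    cv2.imshow("potholes", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()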
Step 3: Controlling Car Speed - When obstacles such as bumps or potholes in the road are recognized, a signal is sent to the linked microcontroller, the Arduino UNO, telling it to slow down for a certain amount of time. The microcontroller then drives the L298N motor driver, which controls the speed and rotational direction of a DC electric motor using square-wave pulse width modulation (PWM). A broader pulse width causes the motor to rotate at a faster rate; the specific pulse width, however, varies for each motor type. Based on the detected potholes and road humps, along with the object distance obtained from the ultrasonic sensor, the car speed is controlled through the DC motor.
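The laptop-to-Arduino link is not detailed in this report; the sketch below assumes a simple serial protocol (via pyserial) in which the detection script sends a PWM duty value that the Arduino firmware is expected to forward to the L298N enable pin with analogWrite. The port name, baud rate, message format, duty values, and distance threshold are all assumptions.

import time
import serial  # pip install pyserial

# Assumed serial link to the Arduino UNO; the port name and baud rate differ per setup.
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
time.sleep(2)  # give the UNO time to reset after the port is opened

def set_speed(duty):
    # Send a PWM duty cycle (0-255) as a newline-terminated ASCII command; the Arduino
    # firmware is assumed to pass it to the L298N enable pin via analogWrite.
    arduino.write(f"SPEED {max(0, min(255, duty))}\n".encode())

def on_detection(pothole_seen, obstacle_cm):
    # Slow down when a pothole/hump is detected or the ultrasonic range is short.
    if pothole_seen or obstacle_cm < 50:
        set_speed(90)    # reduced duty -> narrower pulses -> slower wheel
        time.sleep(3)    # hold the reduced speed for a fixed interval
        set_speed(200)   # resume normal speed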
CHAPTER 6
RESULT AND APPLICATIONS
Test Case : YOLO_TEST
Module : YOLOv8
Input : Frames from the input video
Result : The video frames are tested for potholes, and detected potholes are marked and shown in different colors based on their size.
Status : Successful
CHAPTER 7
CONCLUSION AND FUTURE SCOPE
Future Work
The system will be expanded in the future to incorporate surveillance cars, enabling
accurate autonomous road condition monitoring. These surveillance vehicles would also be
equipped with GPS modules so they could track the exact positions of the potholes. The amount
of road damage and the quantity of raw materials required to patch the potholes could both be
approximated using the expected sizes of the holes. The majority of planning and inspection can
therefore be finished remotely.
CHAPTER 8
APPENDIX
8.1 Object & pothole detection to control car speed using IoT in a foggy environment
https://in.mathworks.com/help/visionhdl/ug/pothole-detection.html
https://devpost.com/software/detector
https://discuss.streamlit.io/t/real-time-pothole-detection-web-application-seeking-your-feedback/49888
https://www.datacamp.com/blog/yolo-object-detection-explained
https://www.v7labs.com/blog/yolo-object-detection
CHAPTER 9
REFERENCES AND BIBLIOGRAPHY
2] J. Dib, K. Sirlantzis and G. Howells, "A Review on Negative Road Anomaly Detection
Methods," in IEEE Access, vol. 8, pp. 57298-57316, 2020, doi:
10.1109/ACCESS.2020.2982220.
3] Y. Wu, H. Zhang, Y. Li, Y. Yang and D. Yuan, "Video Object Detection Guided by Object
Blur Evaluation," in IEEE Access, vol. 8, pp. 208554-208565, 2020, doi:
10.1109/ACCESS.2020.3038913.
4] M. Qiao, G. Zhou, Q. L. Liu and L. Zhang, "Salient Object Detection: An Accurate and
Efficient Method for Complex Shape Objects," in IEEE Access, vol. 9, pp. 169220-169230,
2021, doi: 10.1109/ACCESS.2021.3138782.
7] D. Chen, N. Chen, X. Zhang and Y. Guan, "Real-Time Road Pothole Mapping Based on
Vibration Analysis in Smart City," in IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, vol. 15, pp. 6972-6984, 2022, doi:
10.1109/JSTARS.2022.3200147.
11] C. Sharma, S. Ghosh, K. B. A. Shenoy and G. Poornalatha, "A Novel Multiclass Object
Detection Dataset Enriched with Frequency Data," in IEEE Access, vol. 12, pp. 85551-85564,
2024, doi: 10.1109/ACCESS.2024.3416168.
12] M. Jeong, D. Kim and J. Paik, "Practical Abandoned Object Detection in Real-World
Scenarios: Enhancements Using Background Matting with Dense ASPP," in IEEE Access, vol.
12, pp. 60808-60825, 2024, doi: 10.1109/ACCESS.2024.3395172.
13] K. -H. Choi and J. -E. Ha, "Object Detection Method Using Image and Number of Objects
on Image as Label," in IEEE Access, vol. 12, pp. 121915-121931, 2024, doi:
10.1109/ACCESS.2024.3452728.
14] H. Wang, S. Luo and Q. Wang, "Improved YOLOv8n for Foreign-Object Detection in Power
Transmission Lines," in IEEE Access, vol. 12, pp. 121433-121440, 2024, doi:
10.1109/ACCESS.2024.3452782.