
Algorithmic Analysis and Development for

Enhanced Human-Autonomous Vehicle Coexistence


Samir Rana Jay Upadhyay Ashish Joshi
Dept. of CSE Dept. of CSE Dept. of CSE
Graphic Era Hill University Graphic Era Hill University Graphic Era Hill University
Dehradun Dehradun Dehradun
[email protected] [email protected] [email protected]

Aditya Pratap Prashant Kholiya


Dept. of CSE Dept. of CSE
Graphic Era Hill University Graphic Era Hill University
Dehradun Dehradun
[email protected] [email protected]

Abstract— This research paper delves into the world of autonomous vehicle algorithms, conducting a thorough analysis and development of essential features. We focus on comparing and evaluating these components along with the algorithms that drive them. Lane detection ensures precise lane-keeping and enhances road safety. The study demonstrates how accurate lane detection algorithms help autonomous vehicles navigate with confidence. Speed control assesses algorithms governing vehicle speed to improve road safety, reduce accidents, and save fuel. We concentrate on their role in providing safety-centric speed recommendations and ensuring compliance with speed limits. We also focus on the need for a standardized development framework to pave the way forward. Our investigation extends to a driver-assist speeding feature, offering recommended speeds directly to the driver's smartphone. This innovative feature empowers drivers to make informed decisions, augmenting road safety and traffic rule compliance. Collision detection uses advanced algorithms to identify potential collision risks. Autonomous vehicles make rapid decisions to prevent accidents, ensuring passenger and road user safety. We explore algorithms such as CNN, RCNN, Canny edge detection, and the Sobel operator, each playing a unique role in enhancing autonomous driving features. In summary, this research paper offers insights into the interplay of algorithms and critical autonomous driving features. Comprehensive analysis and comparative evaluation advance the autonomous vehicle landscape, enhancing road safety and efficiency for all road users.

Keywords—Autonomous Vehicles, Algorithm Analysis, Human-Autonomous Vehicle Coexistence

I. INTRODUCTION

In the ever-evolving landscape of autonomous vehicles, the promise of a transportation revolution hinges on the intricate interplay between cutting-edge algorithms and the essential features that empower these vehicles to autonomously perceive, interpret, and respond to their surroundings. This research paper embarks on an in-depth exploration titled "Algorithmic Analysis and Development for Enhanced Human-Autonomous Vehicle Coexistence," with a particular emphasis on the comparative evaluation of algorithms for key functionalities that play a pivotal role in the realm of autonomous driving. The automotive industry is at a crossroads, poised to transition into a new era of transportation, where self-driving cars coexist harmoniously with traditional vehicles.

Fig. 1. Representation of an Autonomous Vehicle.

This transformation promises to bring a multitude of benefits, ranging from enhanced safety and traffic efficiency to reduced fuel consumption and environmental conservation. However, this transition is not without its challenges. The complexities of autonomous vehicle development, coupled with fragmented efforts, escalating costs, and heightened safety concerns, have underscored the pressing need for a standardized development framework to pave the way forward.

Fig. 2. Data Flow in an Autonomous Vehicle.

This research encompasses a combined investigation into multiple pivotal features that collectively define the capabilities and safety of autonomous vehicles; Figure 2 displays a complete data flow of the hardware and software components of an autonomous vehicle. Among these, object detection stands out as the capability that allows the vehicle to identify and respond to objects in its environment. The vehicle's perception capabilities are greatly influenced by the algorithms used, which include CNN (Convolutional Neural Network), RCNN (Region-based Convolutional Neural Network), and YOLO (You Only Look Once). To find the best strategy, the research performs a detailed analysis and comparison of these object detection algorithms. The deployment of automatic braking systems is important because it is a basic safety feature that is linked to the precision and dependability of object detection algorithms. The study also looks into the essential elements of lane detection and traffic sign recognition. In order to ensure safe and law-abiding autonomous driving, advanced algorithms such as CNN and RCNN for traffic sign detection are examined, with particular emphasis on how precisely they identify and interpret traffic signs.

To make sure that the car keeps exact lane discipline, lane detection uses algorithms such as the Sobel operator and Canny edge detection. This research project stands out for its careful investigation of over-speeding detection and real-time speed estimation. These new features provide a quick view from a smartphone and can help people drive more efficiently, improving safety, peace of mind, and traffic control on the road.

More importantly, the research in this article demonstrates the importance of key features in practical tools. They provide the basis for vehicles to be aware of their surroundings, make informed decisions, communicate with traffic, and comply with traffic laws. This research aims to improve cooperation between drivers and road users by providing valuable information through analysis and comparison.
II. LITERATURE SURVEY

As the automotive industry undergoes major technological changes, electric vehicles are leading the way. Autonomous vehicles have the potential to revolutionize transportation by increasing efficiency and safety, but their adoption faces many problems, including business and management issues. In response to these problems, the idea emerged to create and analyze the algorithms behind autonomous vehicles.

This paper [1] provides an overview of the progress in this field while highlighting the challenges faced by autonomous vehicles. As technology advances, it is clear that the future is driverless cars. This paper [2] explores the future of the automobile, focusing on the modification and integration of new technologies into conventional vehicles, turning them into near-autonomous cars. The future belongs to autonomous vehicles and smart driving applications, and with so much advancement in the automobile sector, the existing infrastructure will be improved by human-autonomous systems.

In the past few years, we have seen rapid growth in the automotive sector, with highly capable self-driving cars such as Tesla changing the way we look at self-driving vehicles. With the rise of the automotive industry, however, there is a concern about security which needs to be considered. Over time, different safety regulations have been developed, but different companies use their own technologies and algorithms for the research and development of their self-driving or autonomous vehicles. Our framework serves as a centralized framework for all companies and a fundamental framework for all available safety regulations.

In the realm of autonomous vehicle algorithm analysis and development, prior research has laid a strong foundation for the current study. The work of Firoz Khan and his co-authors, as presented in the paper "Autonomous vehicles: A study of implementation and security" [3], underscores the significance of safety and the importance of robust algorithms in advancing autonomous vehicles. The present study, titled "Algorithmic Analysis and Development for Enhanced Human-Autonomous Vehicle Coexistence," draws inspiration from the foundational work by Khan and his team. It builds upon their insights, underscoring the need for real-time data analysis and the integration of modern technology.

This research zeroes in on critical components that are pivotal in shaping the future of autonomous vehicles, including lane detection, speed control, collision detection, object recognition, and a pioneering driver-assist system that offers real-time speed recommendations directly to the driver's smartphone. This article [4] reviews the methods employed for lane detection in an autonomous vehicle.

Our paper also provides a comparative analysis of the algorithms that are best suited for the features mentioned above. These papers [5][6][7] explain the computational approach and working of Canny edge detection and the improvements made for better efficiency. Other algorithms, such as the Sobel operator, can also be used for better edge detection results; these papers [8][9][10][11] present a comprehensive and comparative analysis of image edge detection techniques.

Object detection, and the algorithms used for it, plays an important role in autonomous vehicle software. For features such as traffic sign and signal detection, vehicle detection, and obstacle detection, advanced algorithms like CNN, R-CNN, and YOLO are used. These papers [12][13][14][15][16] give a comparative and detailed analysis of these algorithms and discuss how these deep learning algorithms perform object and image detection.

III. METHODOLOGIES

Our research aims to establish a minimalistic framework for autonomous vehicles while introducing an innovative feature for real-time speed detection and alert systems to enhance coexistence between human-driven and autonomous vehicles. We conducted our study through a systematic three-phase approach.

Phase 1: Lane Detection
In the first phase, we focused on the critical aspect of lane detection. Our analysis revolved around a comparative assessment of two prominent techniques: Canny edge detection and the Sobel operator. The objective was to identify the most suitable method for accurate lane detection, a fundamental component for autonomous vehicles.

First of all, we perform camera calibration for accurate spatial mapping. Pinhole cameras can introduce significant distortion to images, and OpenCV provides two models for handling distortion and obtaining accurate spatial mapping: the radial distortion model and the tangential distortion model.
1. Radial Distortion Model: Radial distortion is caused by imperfections in the camera's lens, making straight lines appear curved. OpenCV represents radial distortion with a mathematical model that includes the distortion coefficients k1, k2, k3. The distorted coordinates (x_distorted, y_distorted) are calculated using the formulas:

   x_distorted = x(1 + k1*r^2 + k2*r^4 + k3*r^6)
   y_distorted = y(1 + k1*r^2 + k2*r^4 + k3*r^6)

   where r is the radial distance from the image centre.

2. Tangential Distortion Model: OpenCV accounts for tangential distortion with the distortion coefficients p1, p2. The formula to calculate x_distorted is:

   x_distorted = x + [2*p1*x*y + p2*(r^2 + 2*x^2)]

   The formula to calculate y_distorted is:

   y_distorted = y + [p1*(r^2 + 2*y^2) + 2*p2*x*y]

To perform camera calibration, five distortion coefficients (k1, k2, p1, p2, k3) need to be determined, along with specific camera parameters such as the focal length (fx, fy) and the optical centre (cx, cy). These parameters are used to create a 3x3 camera matrix.

Extrinsic parameters involve rotation and translation vectors, describing the transformation from a 3D point in world coordinates to the camera's image coordinates.

Once the camera is calibrated, the distortion introduced by the lens can be corrected. OpenCV provides methods to undistort images based on the calibration parameters. This allows for accurate spatial mapping and the removal of distortions, making images suitable for further computer vision tasks.
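The calibration and undistortion step described above can be sketched with OpenCV as follows. This is a minimal illustration rather than the authors' exact code; the chessboard pattern size, the file paths, and the variable names are assumptions.

```python
import glob

import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard used for calibration (illustrative choice).
pattern_size = (9, 6)
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []  # 3D points in the world, 2D points in the image
image_size = None

for path in glob.glob("calibration/*.jpg"):  # hypothetical calibration images
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the 3x3 camera matrix, the distortion coefficients (k1, k2, p1, p2, k3),
# and the extrinsic rotation/translation vectors for each calibration view.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None
)

# Undistort a driving frame before any further lane-detection processing.
frame = cv2.imread("frame.jpg")  # hypothetical input frame
undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)
```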
In the lane detection process, the next crucial step involves image thresholding. This step plays a pivotal role in reducing noise and emphasizing prominent lane lines within the image. In direct or simple thresholding, the same rule is applied to each pixel in the image: the output is set to 0 if the pixel value is less than the specified threshold; otherwise it is set to the maximum value. OpenCV's cv.threshold function is commonly employed for this purpose. This function takes the source grayscale image as the first argument, the threshold value as the second argument, and the maximum value assigned to pixels exceeding the threshold as the third argument, yielding results like those in Fig. 3.

Fig. 3. Thresholding result.

However, in cases where an image shows different lighting conditions in different areas, using the same global threshold may not be ideal. To address this limitation, adaptive thresholding is applied: an algorithm dynamically determines the threshold for each pixel. This method effectively treats different areas of the same image separately, improving results especially in images with uneven illumination.
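A minimal sketch of the two thresholding variants discussed above, using OpenCV's cv.threshold for the global case and cv.adaptiveThreshold for the per-pixel adaptive case; the threshold value of 127, the 11x11 block size, and the input file name are illustrative assumptions.

```python
import cv2

gray = cv2.imread("lane_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame

# Simple (global) thresholding: pixels below 127 become 0, the rest become 255.
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

# Adaptive thresholding: the threshold is computed per pixel from an 11x11
# neighbourhood, which copes better with uneven illumination on the road.
adaptive = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2
)
```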
The next step in lane detection is a perspective transformation that converts the driver's view into a top-down view. This transformation is very useful when the image needs to be rectified: once applied, it maps the lane region so that the lane markings run in straight lines.
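The bird's-eye-view step can be sketched with OpenCV's perspective-transform functions as below. The four source points outlining the lane region are assumptions that would normally be tuned for the camera mounting.

```python
import cv2
import numpy as np

frame = cv2.imread("undistorted_frame.jpg")  # hypothetical undistorted frame
h, w = frame.shape[:2]

# Assumed trapezoid around the lane in the driver's view (tune per camera setup).
src = np.float32([[w * 0.43, h * 0.65], [w * 0.57, h * 0.65],
                  [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
# Corresponding rectangle in the top-down view.
dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                  [w * 0.75, h], [w * 0.25, h]])

M = cv2.getPerspectiveTransform(src, dst)      # driver view -> top-down view
Minv = cv2.getPerspectiveTransform(dst, src)   # inverse, to project lanes back later
top_down = cv2.warpPerspective(frame, M, (w, h))
```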
In the realm of lane detection, edge detection methods such as the Canny and Sobel operators are frequently employed for identifying lane markings.

The Canny Edge Detection algorithm, created by John F. Canny, operates through multiple stages:

1. Noise reduction: A 5x5 Gaussian filter is applied as the first step to reduce the effect of noise in the image. This filter helps smooth the image.
2. Finding the intensity gradient of the image: The smoothed image is filtered in the horizontal and vertical directions by the Sobel kernel to obtain its first derivatives in both directions. Using these derivatives, the algorithm calculates the edge gradient magnitude and direction for each pixel. The gradient direction is always perpendicular to the edge and is rounded to one of four angles representing vertical, horizontal, and the two diagonals.

3. Non-maximum suppression: Once the direction and magnitude of the gradient are obtained, the full image is scanned to remove unwanted pixels that do not constitute edges. At each pixel, the algorithm checks whether the pixel is a local maximum along the direction of the gradient. If so, it is kept for the next stage; otherwise it is suppressed (set to zero). In essence, this stage yields a binary image with thin edges.

Fig. 4. Binary image with thin edges.

4. Hysteresis thresholding: This stage determines which edges are really edges and which are not. Two threshold values are used: minVal and maxVal. Pixels with gradient intensity above maxVal are considered sure edges, while pixels below minVal are treated as non-edges and discarded. Pixels that fall between these thresholds are classified as edges or non-edges based on their connectivity: if they are connected to "sure edge" pixels, they are considered part of the edge; otherwise they are thrown away. Choosing minVal and maxVal carefully is important for accurate results. This stage also helps remove small pixel noise, on the assumption that edges are long lines.

The Canny Edge Detection algorithm therefore yields a binary image with well-defined, strong edges from the original image, a crucial step in identifying and highlighting lane markings.

Fig. 5. Original image and edge image.
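A minimal sketch of how this pipeline is typically invoked with OpenCV: the Gaussian smoothing is applied explicitly, while gradient computation, non-maximum suppression, and hysteresis thresholding are handled inside cv2.Canny. The 50/150 threshold values and the input file name are illustrative assumptions.

```python
import cv2

gray = cv2.imread("top_down_lane.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Stage 1: 5x5 Gaussian smoothing to suppress noise.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Stages 2-4: gradient, non-maximum suppression, and hysteresis thresholding,
# with assumed minVal=50 and maxVal=150 that would be tuned for the footage.
edges = cv2.Canny(blurred, 50, 150)
```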

The Sobel operator is a two-dimensional spatial gradient measurement technique widely used to identify high-spatial-frequency regions corresponding to edges in images. Its operation involves a pair of 3x3 convolution kernels, each designed to be maximally responsive to edges running either vertically or horizontally relative to the pixel grid. As shown in Fig. 6, the two kernels are essentially the same, with one kernel rotated by 90°, similar to the Roberts Cross operator.

Fig. 6. Sobel Convolution Kernels.

The main purpose of the Sobel operator is to capture the approximate gradient magnitude at each point in the grayscale input image. Convolving the image with each kernel produces a separate measurement of the gradient component in that direction. Here, direction 0 indicates that the direction of maximum contrast from black to white runs from left to right across the image, and other angles are measured counterclockwise from this reference. Usually the output presented to the user is the gradient magnitude obtained by combining the two convolution results. The Sobel operator is instrumental in highlighting edges in images, emphasizing variations in intensity and aiding in the extraction of important features from the visual data.
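A minimal sketch of the Sobel computation described above, combining the horizontal and vertical gradient components into a magnitude and direction. The threshold of 100 and the input file name are assumptions; the explicit kernel arrays are shown only to mirror Fig. 6, since cv2.Sobel applies the equivalent kernels internally.

```python
import cv2
import numpy as np

gray = cv2.imread("top_down_lane.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# The two standard 3x3 Sobel kernels (shown for reference; one is the transpose
# of the other, matching the rotated-kernel relationship described above).
kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
ky = kx.T

gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal gradient component
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical gradient component

magnitude = np.sqrt(gx ** 2 + gy ** 2)            # approximate gradient magnitude
direction = np.arctan2(gy, gx)                    # angle from the direction-0 reference

# Keep only strong-gradient pixels as candidate lane edges (threshold assumed).
edges = (magnitude > 100).astype(np.uint8) * 255
```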
The comparison between Canny edge detection and the Sobel operator reveals distinct characteristics and trade-offs. The Sobel operator is praised for its simplicity, attributed to its use of approximate gradient calculations. However, a significant drawback is its vulnerability to a poor signal-to-noise ratio: as noise increases, the accuracy of the edge gradient magnitude decreases, leading to inaccurate results. In scenarios with elevated noise levels, neither the Sobel operator nor other similar operators such as Prewitt or Roberts Cross can effectively detect appropriate edges, often discarding noise at the expense of losing genuine edge information.

Fig. 7. Original image and after applying the Sobel Operator.

On the other hand, the Canny edge detection method, despite its greater computational complexity and time consumption, offers several advantages. It exhibits a superior signal-to-noise ratio, contributing to more accurate edge detection. The application of non-maximal suppression in the Canny algorithm results in thin, precisely single-pixel-wide edges in the output, enhancing the clarity of detected edges. The thresholding method in Canny further contributes to robust edge detection. Unlike gradient-based operators such as Sobel, Prewitt, and Roberts Cross, which may detect thick and rough edges, Canny's design prioritizes thin and smooth edges. This is crucial for subsequent matching processes, where detailed and accurate edge information is essential.
Phase 2: Object and Traffic Sign Detection

Moving on to the second phase, our main attention turns to object identification. We carried out a full comparison and analysis of the CNN, RCNN, and YOLO models. In this phase we not only explore the benefits and limitations of these models but also establish their importance for features like traffic sign detection and obstacle detection, which are integral for confirming safe autonomous driving.

For the classification module we used the GTSRB (German Traffic Sign Recognition Benchmark), approximately 51,800 images: 39,200 images for training data and 12,600 for validation. The dimensions of these images ranged from 15 x 15 to 250 x 250 pixels, and 43 different traffic sign classes were present in our dataset for data collection and processing.

The GTSDB (German Traffic Sign Detection Benchmark) dataset, which consists of 900 images with a resolution of 1360 x 800 pixels, was used in the detection module. In this phase, distinct subsets for training and validation were made from this dataset.

To increase the quality of our dataset we took the help of a number of data enhancement strategies. In particular, we applied a number of pre-processing steps in the classification phase: we resized all images to 32x32 pixels, converted them to grayscale, and normalized the pixel values to a fixed range. This helps our models learn faster and ensures that the input data for the Convolutional Neural Network (CNN) remains consistent under all circumstances.

To make our dataset more varied, we also used augmentations such as flipping and rotating the images. All these enhancements are very important for making the dataset more diverse and for giving precise and accurate results.

We split the GTSRB dataset into three parts: training, testing, and validation, distributed as 60% for training, 20% for testing, and 20% for validation. To take care of uneven distribution and include variation among classes we used augmentation techniques. TensorFlow was used for building the classification models. Our model combined convolutional layers, the ReLU activation function, and a categorical cross-entropy loss function. For optimization we used the Adam optimizer with a learning rate of 0.001, as sketched below.
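A minimal TensorFlow/Keras sketch of the kind of classifier described above: 32x32 grayscale inputs, convolutional layers with ReLU activations, categorical cross-entropy loss, and the Adam optimizer with a learning rate of 0.001. The exact layer sizes and the dropout layer are assumptions, since the paper does not list the full architecture.

```python
import tensorflow as tf

NUM_CLASSES = 43  # GTSRB traffic sign classes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),                   # 32x32 grayscale input
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),                        # assumed regularization
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="categorical_crossentropy",   # labels one-hot encoded over the 43 classes
    metrics=["accuracy"],
)

# x_train: (N, 32, 32, 1) normalized images, y_train: one-hot labels (hypothetical arrays).
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20, batch_size=64)
```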
advance method YOLOv4 and R-CNN. Using these methods loc1: Stores the coordinates of the starting position of the
allowed us to accurately detect and locate traffic sign. We
vehicle.
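The obstacle-detection and brake-recommendation logic can be illustrated with OpenCV's DNN module running pre-trained YOLOv4 weights. This is a sketch under assumptions: the config/weight/class-name file names, the 416x416 input size, and the box-height heuristic used as a crude proximity cue are not taken from the paper, and the authors' customized network is not reproduced here.

```python
import cv2
import numpy as np

# Assumed standard Darknet files for a pre-trained YOLOv4 model.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

with open("coco.names") as f:                  # class names matching the weights used
    class_names = [line.strip() for line in f]

def recommend_action(frame):
    """Return a speed recommendation based on detected obstacles (illustrative rule)."""
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    frame_h = frame.shape[0]
    for class_id, box in zip(np.array(class_ids).flatten(), boxes):
        x, y, w, h = box
        # A tall bounding box relative to the frame is treated as a nearby obstacle.
        if h / frame_h > 0.4:
            return f"Obstacle ({class_names[int(class_id)]}): reduce speed and apply brakes"
    return "Clear: maintain recommended speed"

frame = cv2.imread("road_frame.jpg")           # hypothetical camera frame
print(recommend_action(frame))
```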
Phase 3: Speed Detection and Alert System

In the third phase our focus was to develop new advanced features: an alert system and quick speed identification. We came up with two approaches. The first evaluates the vehicle's speed based on the two nearest locations obtained with the help of GPS. Second, we used OpenCV functions to find an object and calculated the distance between our location and the object that was found. With the use of this data, the app instantly notifies the driver to adjust the speed and apply the brakes.

Our system not only calculates the vehicle's speed through GPS but also employs computer vision to assess and evaluate the surroundings. In this process, the system detects objects in the vehicle's vicinity; depending on the detected objects and their distances, the application issues alerts, recommending speed adjustments for safety. Beyond this, the system cross-references the current location with legal speed limits, providing a comprehensive review to the driver and ensuring adherence to traffic regulations. This integrated approach enhances road safety by offering instant alerts suited to the driving environment. This feature comprises two integral components:

1st Part: In this section, our objective was to convey the coordinates of the vehicle to our function, where we calculate both the average and instantaneous speed of the vehicle. Utilizing the Indian navigation system (NavIC, developed by ISRO), we obtained the coordinates of our location, storing them in three 1D arrays: loc1, loc2, and loc3. Each array encompasses two variables, one for the x-coordinate and one for the y-coordinate. These coordinates, representing the vehicle's position, undergo conversion, equating to a change of 1 unit for every meter traveled. Later, these coordinates are passed to the 2nd part to extract the average and instantaneous speed.

loc1: Stores the coordinates of the starting position of the vehicle.

loc2: Stores the coordinates of the previous position of the vehicle.

loc3: Stores the coordinates of the current position of the vehicle.

Besides these parameters, a variable is passed to the 2nd part that starts at 1 and is incremented by 1 with each pass. The function calculates and returns the average and current speed of the vehicle. The obtained speeds are then checked against the user-entered speed limit. If either speed is higher than the limit, an alert is triggered, warning the driver to slow down.

2nd Part: In this step, we used the location data obtained in the 1st part to calculate the distance the vehicle has traveled. Leveraging Python's math library, we applied various formulas to determine the distance between two points, reflecting the vehicle's travel distance. The calculated speed is subsequently passed back to the main function (1st part) for further processing.
Formulas Used for Average Speed:

Explanation:

• Line 1: Obtaining the start and current location coordinates and calculating the total distance traveled.

• Line 2: Using a variable x that increments with each pass to determine the average speed (since the function is called every second).

• Line 3: Converting the speed to kilometers per hour and returning it to the 1st part.

Formula Used for Instant Speed:

Explanation:

• Line 1: Passing the coordinates of the previous and current locations to find the current instance's speed in meters per second.

• Line 2: Converting the speed to kilometers per hour and returning it to the 1st part.
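The formulas referenced by the line-by-line explanations above appear only as figures in the original paper. The sketch below reconstructs the described logic under the stated assumptions (coordinates already converted so that one unit equals one meter, and the function being called once per second), together with the speed-limit check from the 1st part; the sample coordinate values are hypothetical.

```python
import math

def average_speed_kmph(loc1, loc3, x):
    """Lines 1-3: total distance from start to current position, divided by the
    number of one-second passes, converted to km/h."""
    total_distance_m = math.dist(loc1, loc3)   # coordinates are in meters (1 unit = 1 m)
    avg_mps = total_distance_m / x             # x increments by 1 on every one-second pass
    return avg_mps * 3.6                       # m/s -> km/h

def instant_speed_kmph(loc2, loc3):
    """Lines 1-2: distance covered between the previous and current positions
    in one second, converted to km/h."""
    inst_mps = math.dist(loc2, loc3)           # meters travelled in the last second
    return inst_mps * 3.6

# Hypothetical positions (meters) and pass counter, as described in the 1st part.
loc1, loc2, loc3, x = (0.0, 0.0), (52.0, 10.0), (66.0, 14.0), 5
avg, inst = average_speed_kmph(loc1, loc3, x), instant_speed_kmph(loc2, loc3)

speed_limit_kmph = 50.0                        # user-entered limit
if avg > speed_limit_kmph or inst > speed_limit_kmph:
    print(f"Alert: slow down (avg {avg:.1f} km/h, instant {inst:.1f} km/h)")
```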
Our system also makes use of GPS and computer vision to give a thorough evaluation of the environment around the vehicle. This entails locating adjacent objects, estimating their separation from the car, and sending out alerts in response to their existence and closeness. The app offers instant alerts suitable for the driving situation and suggests adjusting speed based on the objects detected and their distance.

IV. CONCLUSION

Overall, the three-phase method explained in this study adds value to autonomous driving research by offering a framework for resolving significant issues and enhancing overall dependability and safety. It provides the necessary features needed for autonomous vehicles and for their coexistence with non-autonomous vehicles through mobile software. The comparative analysis carried out at every phase demonstrates how crucial algorithm selection is to obtaining quality outcomes and a standardized framework.

V. REFERENCES

[1] D. Parekh et al., "A Review on Autonomous Vehicles: Progress, Methods and Challenges," Electronics, vol. 11, no. 14, p. 2162, Jul. 2022, doi: 10.3390/electronics11142162.
[2] M. V. Rajasekhar and A. K. Jaswal, "Autonomous vehicles: The future of automobiles," 2015 IEEE International Transportation Electrification Conference (ITEC), Chennai, India, 2015, pp. 1-6, doi: 10.1109/ITEC-India.2015.7386874.
[3] F. Khan, L. Ramasamy, S. Kadry, M. N. Meqdad and Y. Nam, "Autonomous vehicles: A study of implementation and security," International Journal of Electrical and Computer Engineering, vol. 11, no. 4, pp. 3013-3021, 2021, doi: 10.11591/ijece.v11i4.pp3013-3021.
[4] N. J. Zakaria, M. I. Shapiai, R. A. Ghani, M. N. M. Yassin, M. Z. Ibrahim and N. Wahid, "Lane Detection in Autonomous Vehicles: A Systematic Review," IEEE Access, vol. 11, pp. 3729-3765, 2023, doi: 10.1109/ACCESS.2023.3234442.
[5] J. Canny, "A Computational Approach to Edge Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, Nov. 1986, doi: 10.1109/TPAMI.1986.4767851.
[6] W. Rong, Z. Li, W. Zhang and L. Sun, "An improved Canny edge detection algorithm," 2014 IEEE International Conference on Mechatronics and Automation, Tianjin, China, 2014, pp. 577-582, doi: 10.1109/ICMA.2014.6885761.
[7] E. Akbari Sekehravani, E. Babulak and M. Masoodi, "Implementing canny edge detection algorithm for noisy image," Bulletin of Electrical Engineering and Informatics, vol. 9, no. 4, pp. 1404-1410, 2020, doi: 10.11591/eei.v9i4.1837.
[8] A. Wanto, S. Deni Rizki, S. Andini, S. Surmayanti, N. L. W. S. R. Ginantra and H. Aspan, "Combination of Sobel+Prewitt Edge Detection Method with Roberts+Canny on Passion Flower Image Identification," Journal of Physics: Conference Series, vol. 1933, no. 1, 2021, doi: 10.1088/1742-6596/1933/1/012037.
[9] M. Ansari, D. Kurchaniya and M. Dixit, "A Comprehensive Analysis of Image Edge Detection Techniques," International Journal of Multimedia and Ubiquitous Engineering, vol. 12, no. 11, pp. 1-12, 2017, doi: 10.14257/ijmue.2017.12.11.01.
[10] W. Zheng and K. Liu, "Research on Edge Detection Algorithm in Digital Image Processing," 2017, doi: 10.2991/msmee-17.2017.227.
[11] A. Ahmed, "Comparative study among Sobel, Prewitt and Canny edge detection operators used in image processing," Journal of Theoretical and Applied Information Technology, vol. 96, 2018.
[12] S. Srivastava, A. Divekar, C. Anilkumar, I. Naik, V. Kulkarni and V. Pattabiraman, "Comparative Analysis of Deep Learning Image Detection Algorithms," 2020, doi: 10.21203/rs.3.rs-132774/v1.
[13] "R-CNN, Fast R-CNN, Faster R-CNN, YOLO — Object Detection Algorithms," Towards Data Science.
[14] V. Viswanatha, R. K. Chandana and A. C. Ramachandra, "Real Time Object Detection System with YOLO and CNN Models: A Review," Journal of Xi'an University of Architecture & Technology, vol. XIV, no. 7, 2022, doi: 10.37896/JXAT14.07/315415.
[15] R. Cheng, "A survey: Comparison between Convolutional Neural Network and YOLO in image identification," Journal of Physics: Conference Series, vol. 1453, 012139, 2020, doi: 10.1088/1742-6596/1453/1/012139.
[16] W. Farag and Z. Saleh, "Road Lane-Lines Detection in Real-Time for Advanced Driving Assistance Systems," 2018, pp. 1-8, doi: 10.1109/3ICT.2018.8855797.
