
Paladyn, Journal of Behavioral Robotics 2023; 14: 20220117

Research Article

Amar Shukla, Ankit Verma, Hussain Falih Mahdi*, Tanupriya Choudhury*, and Thipendra Pal Singh

Path reader and intelligent lane navigator by autonomous vehicle
https://doi.org/10.1515/pjbr-2022-0117
received November 25, 2022; accepted March 30, 2023
Abstract: The Internet of Things (IoT) is a network of physical devices, such as widgets, structures, and other objects, which can house programs, sensors, actuators, and screen configurations that allow the objects to assemble, control, display, and exchange data. The aim of this research was to develop an autonomous system with automated navigation. Using this approach, we are able to apply deep neural networks to automatic navigation as well as to the identification of potholes and road conditions. Additionally, the system displays potholes in traffic and the correct lane on the screen, and it stresses how important it is to select the path from one node to the next.

Keywords: convolution neural network, pavement condition, congestion condition, pothole condition, traffic light condition

* Corresponding author: Hussain Falih Mahdi, Computer and Software, College of Engineering, University of Diyala, Baqubah, Iraq, e-mail: [email protected]
* Corresponding author: Tanupriya Choudhury, CSE Department, Symbiosis Institute of Technology, Symbiosis International University, Pune, Maharashtra, 412115, India; School of Computer Science, University of Petroleum and Energy Studies (UPES), Dehradun, 248007, Uttarakhand, India; CSE Department, Daffodil International University, Daffodil Smart City, Birulia 1216, Bangladesh; CSE Department, Graphic Era Hill University, Dehradun, 248002, Uttarakhand, India, e-mail: [email protected], [email protected], [email protected]
Amar Shukla: School of Computer Science, University of Petroleum and Energy Studies (UPES), Dehradun, 248007, Uttarakhand, India, e-mail: [email protected]
Ankit Verma: Wartin Labs Technologies LLP, H-187 WorkWings, Noida, Uttar Pradesh 201301, India, e-mail: [email protected]
Thipendra Pal Singh: School of Computer Science Engineering and Technology, Bennett University, Greater Noida, Uttar Pradesh, 201310, India; School of Computer Science, University of Petroleum and Energy Studies (UPES), Dehradun, 248007, Uttarakhand, India, e-mail: [email protected], [email protected]

1 Introduction

Globally, nearly 3,287 people die every day in car accidents. Major causes include drivers falling asleep at the wheel or trying to stop the car from very high speed. Just as the industrial revolution freed humanity from physical drudgery, artificial intelligence (AI) has the potential to free humans from mental drudgery.

To reduce the number of accidents that occur daily, it is critical to reduce the amount of human error; it would be remarkable if all we had to do was enter our destination into our schedule and keep working until we reached it, without making any mental or physical mistakes.

A self-driving car can not only prevent accidents but also relieve us of minor daily activities. The Internet of Things (IoT) is a network of everyday items, such as motorized vehicles, the Internet, televisions, and other devices, that are linked together, enabling new types of communication between things and people as well as between things themselves. Building the IoT has advanced in recent years, adding another dimension to the world of information and communication technology.

Home automation, or the Smart Home, can be described as the introduction of technology into the home environment to provide convenience, comfort, security, and essential functionality to its inhabitants; in addition, an improved home environment can substantially raise personal satisfaction. With the arrival of the IoT, research on and adoption of home automation are becoming progressively standard.

Self-driving cars can help persons who are unable to drive on their own due to infirmities such as blindness. According to studies, driver error is cited as a cause in 94% of crashes, and self-driving vehicles can help eliminate it. Because an autonomous vehicle does not require rest like people and can operate continuously for hours, it can reduce traffic congestion, save fuel, and cut greenhouse gas emissions.

Reduced travel time: Travel by car should be safe whether the car is going slowly or rapidly.

Open Access. © 2023 the author(s), published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.
Higher speeds are likely to be possible, because computers will eliminate human error as a cause of accidents. Less expensive insurance: If car insurance companies join the autonomous-car movement, rates could decrease significantly; risk allocation depends more on the vehicle than on the driver, so insurance premiums can be expected to go down. Redirected resources: our emergency services' efforts and resources can be redirected to where they are needed.

To proceed further with the process, a structural diagram shows the entire basic procedure of the study. Figure 1 contains the basic flow: data are gathered from the environment and then processed into the different maps of the path observer; the suitable path is then chosen, taking lane tracking into consideration; and control is exercised while moving in the lane by reading the environment.

Figure 1: Flow diagram of the proposed process.

2 Literature review

We can better examine and grasp new research projects if we have a broader view of the algorithms that can be deployed alongside existing ones.

How is technology helping transform the world? There have been great changes in the technology of self-driving and automatic cars since the 1920s, when the first ever radio-controlled vehicle was introduced. In the years that followed, many automatic electric cars powered by embedded circuits were seen on the roads, and by 1960 automatic cars with the same electronic guidance system came into the picture. In the 1980s, vision-guided autonomous vehicles (AVs), a great achievement of the technology at that time, were introduced. Similar or slightly modified forms of this technology are still being used today [1].

We can increase people's trust in driverless cars. Although public trust is crucial to widespread adoption, it is also the main obstacle. The goal of that study was to determine which variables are most important in increasing the use of driverless cars. According to its quantitative research, a vehicle's ability to meet performance expectations and its reliability were important adoption determinants; the major concerns raised had to do with privacy, such as location, security, and the like [2].

Robotic platforms allow for incremental development of manual processes, and path planning has become a critical area even when the environment inside and outside a building is unknown. The challenge is to make the algorithms as intelligent or pre-established as possible and to arrive at the destination in the most efficient manner. One of the most significant issues in this field is finding a path that is free of static and dynamic obstacles. That article proposed a methodology to cover the critical points and reach the initial key point in a dynamic environment, with the implementation details of the robotic platform: the primary computation takes place inside a Raspberry Pi B+ module, and other modules include a compass, wheel encoders, and ultrasonic sensors [3].

Convolution neural networks (CNNs) have been used to create a self-driving automobile based on monocular vision. The authors of that research sought to develop a method for mapping raw input photos to a specific steering angle predicted by the CNN. The CNN was trained using data acquired from a vehicular road platform (a Raspberry Pi 3 with a front-mounted camera), namely photos of the road and time-synchronized steering-wheel angles obtained through manual driving. Whether road markers were present or not, the speed reached was 5-8 km/h [4].

A CNN maps raw images taken from a front-mounted camera on the vehicle directly to the steering wheel. This system works very well on traffic-filled roads with or without markings. The only training data provided were human steering angles, which were then used to predict the particular angle at which the car should be steered. The system has smaller networks and better performance, as a minimal number of processing steps are required, and it performs better because its components can self-optimize [5].

Efficient and extremely compact CNNs were generated in their study, which makes use of a novel sparse connection topology.
Because of the sparseness of inter-layer filter dependencies, this results in a significant reduction in processing power consumption as well as in the number of parameters required, without sacrificing CNN accuracy. The article's findings indicated that the system's accuracy was greater than that of CNN's cutting-edge architecture: compared to previous models, the model required 40% fewer parameters and was 31% faster on the CPU, while preserving greater or similar efficiency [6].

By comparing different models of CNN while implementing them on a self-driving car, they test which model is the best and proves most efficient in a simulated environment. The CNN was trained on manually obtained driving data and on previously obtained data from end-to-end deep learning techniques. After training, the CNN is tested in the driving simulator by checking its ability to reduce the distance the car must travel back to the lane center, the heading error, and the root mean square error. The conclusion drawn was that adding long short-term memory layers to the CNN produced better steering, because the network took into account the previously predicted values and not just the newly predicted value or a single instance [7].

Level 2 automatic cars are implemented by the authors by taking the inputs from the front-facing camera on the vehicle and feeding them as steering inputs. The network requires minimal human intervention, as most variable features are learnt from the camera inputs themselves. The data set used is from NVidia and Udacity, and when the CNN is given real inputs, it can adapt to real-environment driving given a controlled environment. The setup consists of an ultrasonic sensor to detect obstacles and a red-green-blue depth camera working at 10 Hz that outputs a steering angle [8].

O'Shea and Nash [9] have described the various Artificial Neural Networks (ANNs) and their types, most significantly the CNN. CNNs are mostly used to solve difficult image-driven tasks that require pattern recognition. They have a precise and simple architecture and are easy to implement; this study gave great insight into ANNs and especially CNNs.

Unlike typical cars [10], self-driving cars need not park anywhere; instead, they can drive, fly, or cruise (circle around). Vehicles are thereby enticed to clog roads together. According to San Francisco's downtown data, self-driving cars might roughly treble the number of vehicles entering, leaving, and inside cities. Planned trips lengthen due to parking and cruising, and parking subsidies may have the unintended consequence of worsening congestion. According to the study's conclusions, the introduction of congestion pricing in cities in the near future will be heavily reliant on AVs. Congestion pricing should incorporate a time-based penalty as well as a distance- or energy-based fee to internalize the various externalities associated with driving.

Vehicle speed, eye gaze, and hand gestures [11] all reveal a driver's purpose and attentiveness. The appearance and behavior of a car indicate to passengers whether the driver is likely to pay attention to the road. This research aims to enable passengers to comprehend an autonomous car's awareness and to express its intent to pedestrians, which might be difficult if explicit interfaces are avoided. A study of an AV's mission and awareness with respect to pedestrians was conducted: four user-interface prototypes were designed and tested on a Segway and on cars. It is possible to taste, touch, smell, and hear things in the environment and to combine the senses in doing so.

Deep learning-based vehicle control systems [12] are becoming more common. Before building a vehicle controller, engineers must rigorously test it under various driving conditions. Recent improvements in deep learning algorithms promise to solve challenging non-linear control problems and to transfer knowledge from earlier events to new situations. These significant advances have received little attention. This study uncovers current and valuable information on intelligent transportation systems, which is vital for the field's future; control and perception are interwoven in this research.

Modern autonomous driving systems [13] rely on prior mapping. Although prevalent in cities, precise maps are difficult to develop, preserve, and transmit, and rural areas have high turnover, making exact mapping challenging. A self-driving automobile was tested in the countryside to ensure its functionality. The car uses its local sensing system to detect road conditions; the system calculates the car's distance and speed through recursive residual filtering and odometry, allowing it to navigate complex road networks easily.

This AI product's features [14] should assist in minimizing traffic congestion, road accidents, and social exclusion. Future human transportation will have AI-powered drivers. Despite the apparent benefits, people are still wary about driverless cars, and people's trust in machines may help build autonomous systems. This study assesses the acceptance of autonomous technologies; that is, future studies should examine user trust and approval. Changes to the roadway and subsurface infrastructure impacted traffic, community attitudes and concerns, potentially transferable behaviors and requests, other business models, and strategy. Malaysian law enforcement agencies must identify critical elements to investigate AV manipulators' conspiracy claims appropriately.

A family of nonlinear under-actuated systems [15] was found to be soluble. The vehicle's lateral dynamic control system incorporates the usage of forward and backward controls.
Even if the findings of theoretical studies on AV lateral control can be applied to multiple circumstances, the results can still be used in other applications. In the study, the performance of the closed-loop system was compared to that of a typical human driver.

AVs will completely revolutionize ground transportation [16]. In the future, new cars that can judge and drive themselves are expected to replace traditional cars. Sensors help self-driving cars sense and comprehend their surroundings, whereas 5G allows them to sense and comprehend distant environments. Local perception, like human perception, can be helpful for short-range vehicle control. Even though people's perspectives have broadened, they can still prepare for the future and drive with greater caution while adhering to a set of norms (safety, energy management, traffic optimization, comfort). Faults can emerge as a result of background noise, ambient circumstances, or manufacturing problems, regardless of how well an electronic sensor has previously worked. The most practical solution to the shortcomings of individual sensors is to integrate them. The goal of this research is to discuss performance optimization for local automated driving systems in automobiles.

Table 1: Detailed study of the current models with various parameters. (The table, rotated in the original layout, surveys studies [17]-[25], covering models such as AV Path tracking, Multi-Level Cloud Scheduling (AV), Taxonomy (AV), AV Impact (AV), Collision Avoidance (AV), Autono Vi-Sim, AVE (LIDAR), and a Self-Diagnosis System, and records each model's approach, advantages, and issues and challenges.)

Table 1 summarizes the multifunctionality of the AV and the strategic issues and challenges, which can be taken as the premium objectives of this article:
– An AV with a specific algorithm should contain a larger number of parameters for validation and testing.
– The involvement of the Deep Learning and AI models can be enriched further to make the system more agile and up to date.
– It should contain more fusion-based approaches for innovative vehicle systems.
– Effectiveness and accuracy of the fusion approaches can be increased.

3 Problem formulation

As mentioned in the literature, and given the different challenges identified through the survey, automatic navigation requires specific road measures. These measures are kept in consideration; one by one, we discuss the standards and the factors that could affect the data processing. The following observations should be taken as reference:
– We need to verify all the road conditions and the feasibility of receiving and processing the data through the system.
– Then, considering all the parameters, we need to design the mode of solution.
Some factors that we will consider in the optimization of fuel-efficient routes are as follows:
– International Roughness Index (IRI). The roughness parameter in Table 2 is calculated from the vertical oscillations of the vehicle chassis per road section (generally 100 m); its unit is mm/m. The table describes the roughness factor for the different surface parameters, ranging up to 5.6 mm/m, with the corresponding characteristics for each condition.

Table 2: Pavement condition

Pavement condition | Characteristics | Roughness IRI (mm/m)
PV1 | Good driving limit exceeded | ≤1.39
PV2 | Smooth surface | 1.4–2.69
PV3 | Uneven surface condition | 2.7–4.19
PV4 | Border of road uneven | 4.2–5.59
PV5 | Irregular road, undrivable conditions | >5.6

PV1: very good pavement, PV2: good pavement, PV3: fair pavement, PV4: poor pavement, PV5: very poor pavement.

Taking the shortest route and avoiding congested, overcrowded paths (Table 3) should be preferred, since congestion can result in increased fuel consumption. Congestion is graded on a scale of 1 to 5. The table gives a congestion factor for several measurably distinct conditions; depending on where the measured value falls within these ranges, a road is chosen or avoided.

Table 3: Congestion condition

Congestion condition | Characteristics | Congestion factor
CC1 | No traffic measurability | ≤1
CC2 | Slight rush | ≤2
CC3 | Specific crowd in area | ≤3
CC4 | Surface of the road is not prevalent | ≤4
CC5 | Surface is very uneven and undrivable | ≤5

CC1: very good congestion condition, CC2: good congestion condition, CC3: fair congestion condition, CC4: poor congestion condition, CC5: very poor congestion condition.

We can also save by taking a route with fewer traffic lights (Table 4), as opposed to spending minutes standing in line at signals and wasting fuel. This can be classified on a scale of 1-5. These are the most important factors to consider when determining the best driving parameters for safety; they also define the conditions that make it favourable to move from one node to another by optimizing and selecting the best conditions for the journey.

The chosen path should have fewer potholes. Potholes on the road can be categorized on a scale of 1 to 5 (Table 5).
Table 4: Traffic light availability

Light condition | Characteristics | Traffic light factor
TL1 | Light visibility is good; all the objects in the navigation are clearly visible | ≤1
TL2 | All the objects in the navigation are visible | ≤2
TL3 | Driving visibility is there | ≤3
TL4 | Uneven driving visibility | ≤4
TL5 | No light visibility, uneven traffic conditions | ≤5

TL1: very good traffic light condition, TL2: good traffic light condition, TL3: fair traffic light condition, TL4: poor traffic light condition, TL5: very poor traffic light condition.

Table 5: Pothole condition

Pothole condition | Characteristics | Road factor
PC1 | Smooth driving condition with no potholes | ≤1
PC2 | Rare pothole over a long distance travelled | ≤2
PC3 | Potholes are minor; driving is possible | ≤3
PC4 | Many potholes; slow and steady driving can be done | ≤4
PC5 | Many potholes; uneven phase for driving | ≤5
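The condition bands of Tables 2-5 can be read as simple classifiers: Table 2 maps a measured IRI value to a PV band, while Tables 3-5 share a ≤1 .. ≤5 scale. A minimal sketch in Python (the function names are my own, not from the article):

```python
import math

def pavement_class(iri: float) -> str:
    # Table 2: International Roughness Index (mm/m) -> PV1..PV5 band.
    if iri <= 1.39:
        return "PV1"
    if iri <= 2.69:
        return "PV2"
    if iri <= 4.19:
        return "PV3"
    if iri <= 5.59:
        return "PV4"
    return "PV5"

def condition_class(prefix: str, factor: float) -> str:
    # Tables 3-5: a measured factor f falls into the first class whose
    # bound it does not exceed, clamped to the 1..5 scale.
    level = min(5, max(1, math.ceil(factor)))
    return f"{prefix}{level}"

print(pavement_class(3.5), condition_class("CC", 2.3), condition_class("PC", 0.4))  # PV3 CC3 PC1
```

The same lookup works for congestion (CC), traffic light (TL), and pothole (PC) factors, since all three tables use the same grading scale.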
Potholes are also a key element in determining whether or not to travel on a road, and these conditions may be used to avoid catastrophic accidents and to offer a smooth driving experience to those who travel on that route. Potholes create distinct circumstances on the road, where they disrupt smooth driving, and they are another factor required for determining the road conditions and evaluating the distance.

3.1 Designing and development

First, calculate the optimized distance by consulting the effecting pavement condition (EPV) from equation (1). These factors help to analyze the proper navigation. We will convert the required map into a graph where each place on the map is depicted as a node; this helps to detect the potholes in an accurate manner.

A car has to enter the area or place where it wants to start the journey and set the destination area. If the car has to go from one place to another, then all the routes possible according to the map are depicted as in Figure 2. What is added to each distance here is the factor effecting pavement condition (FEPV) value, which will help us find the most fuel-efficient path in the final route:

F_epv = D_p + (P_r + C_f + T_l + R_f)/4,   (1)

where D_p = distance, P_r = pavement roughness, C_f = congestion factor, T_l = traffic light factor, and R_f = road factor.

After this, we have to design an algorithm for the aforementioned problem; this project also aims to choose the best optimized algorithm to find the shortest path between two points entered by the user (Table 6).

Table 6: Finding out the navigation path through the proposed approach

Algorithm: Finding out the navigation path through the proposed approach
Input: the distance values between nodes
Output: path navigator and observed value
Begin
  n_ij = 0 if i = j; D_ij = length(n_i, n_j); C_ij = 0; otherwise NULL
  for K = 0 to A-1
    for J = 0 to A-1
      n_ij(K+1) = min(n_ij(K), EPV(n_ij(K) + n_ij(K) + d_ij(K)))
    End for
  End for
End

4 Methodology

First, we need to set up the car, as shown in Figure 2, with the required hardware. Take all four motors and connect jumper wires to them. Since our H-bridge can only handle two motors at a time, connect the motors together in pairs. Assemble all the motors on the plastic plates, remembering to cross-couple them so that the motors move in the same direction.

The open source computer vision library stores the image in the BGR color format; however, we need to change it to the RGB color format, which is important for adjusting the settings of the view we have. We will use a setup function for our camera to stabilize, and then we will take the region of interest around these four corners.

We will take a sample region of interest as shown in Figure 3. For this we will define the region we want to identify, so that the lane stays in the camera's focus while the car moves in the forward direction. In the implementation process, we will convert our RGB image into gray scale to get a clear vision through the camera.

Figure 2: These two figures contain the frame for the region of interest and the actual region of interest calculated.

Figure 3: Calculation of the right and left position of the lane and the gray scale image.
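Returning to the path-selection step, the Table 6 pseudocode can be read as a Floyd-Warshall-style all-pairs relaxation whose edge costs come from equation (1). A minimal sketch under that reading; the three-node map and its factor values are invented for illustration:

```python
INF = float("inf")

def f_epv(dist, pr, cf, tl, rf):
    # Equation (1): distance plus the mean of the four road-condition factors.
    return dist + (pr + cf + tl + rf) / 4.0

def best_paths(edges, n):
    # edges[(i, j)] = (distance, Pr, Cf, Tl, Rf).  Relax every pair (i, j)
    # through every intermediate node K, as in the Table 6 pseudocode.
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), params in edges.items():
        d[i][j] = f_epv(*params)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Hypothetical 3-node map: the direct road 0 -> 2 is long and rough,
# the detour through node 1 is short and smooth.
edges = {
    (0, 2): (10.0, 4, 4, 4, 4),  # cost 10 + 4 = 14
    (0, 1): (4.0, 1, 1, 1, 1),   # cost 4 + 1 = 5
    (1, 2): (5.0, 1, 1, 1, 1),   # cost 5 + 1 = 6
}
d = best_paths(edges, 3)
print(d[0][2])  # 11.0 -- the smooth detour beats the rough direct road
```

The point of weighting edges with F_epv rather than raw distance is visible here: the geometrically shorter road loses once its pavement, congestion, traffic light, and road factors are averaged in.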
We define the threshold manually, initially setting a specific value and creating a histogram for all values above this threshold; these are converted into white pixels, while all remaining values become black pixels. The next step involves identifying all edges and corners within the lane using the Canny edge detection technique. This process facilitates easier object identification for our autonomous car.

Prior to image processing, we convert our RGB image in Figure 4 into gray scale for easier manipulation. We define the threshold manually by setting a specific value: pixels greater than the threshold are turned white, while all remaining pixels are turned black.

The next step is finding all the edges and corners in the lane, with the help of Canny edge detection, so that our car can identify objects easily. Canny edge detection basically detects sudden changes in the image gradient. To get the Canny edges, we apply the Sobel operator to our thresholded image.

In the Sobel operator, suppose G_x is an image in which each pixel contains the horizontal derivative and G_y is an image in which each pixel contains the vertical derivative; then G = sqrt(G_x^2 + G_y^2), where G represents the image gradient. We then find the exact position of the lane, i.e., the right position and the left position: the next step is finding the left and right positions of our lane, within which our autonomous car will move during its journey. Green lines depict the lane finder.

In the next step, we find the lane center using the left lane position and the right lane position; the blue color in Figure 5 shows the lane center.

Figure 4: Calculation of the center of the lane and calibrating the lane center with the frame center.

In the next step, we calibrate our lane center with the frame center (Figure 5). The green line depicts the lane center and the blue line depicts the frame center; we shift the frame center towards the left so that it can be calibrated with the lane center. In the next step, we move our autonomous car in different directions and check the resulting difference between the lane center and the frame center.

Figure 5: Difference between lane center and frame center.

In the following stage, we use CNNs (Figure 6). CNNs are a type of neural network that has proven to be particularly effective in image recognition and categorization.

Figure 6: CNN model.
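The grayscale, threshold, gradient, and lane-center calibration steps described above can be sketched with NumPy standing in for the OpenCV calls (cv2.cvtColor, cv2.threshold, cv2.Canny); the synthetic two-stripe frame is invented for illustration:

```python
import numpy as np

def to_gray(bgr):
    # OpenCV loads images as BGR; convert to a luminance gray-scale image.
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def threshold(gray, t):
    # Pixels above the manual threshold become white (1), the rest black (0).
    return (gray > t).astype(np.uint8)

def sobel_magnitude(img):
    # G = sqrt(Gx^2 + Gy^2): central-difference horizontal and vertical
    # derivatives, large wherever the image gradient changes suddenly.
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:].astype(float) - img[:, :-2].astype(float)
    gy[1:-1, :] = img[2:, :].astype(float) - img[:-2, :].astype(float)
    return np.sqrt(gx ** 2 + gy ** 2)

def lane_positions(binary):
    # Column histogram of white pixels; the peak on each half of the frame
    # gives the left and right lane positions, their midpoint the lane center.
    hist = binary.sum(axis=0)
    mid = binary.shape[1] // 2
    left = int(np.argmax(hist[:mid]))
    right = mid + int(np.argmax(hist[mid:]))
    return left, right, (left + right) // 2

# Synthetic 100x200 frame with two white lane stripes.
frame = np.zeros((100, 200, 3), dtype=np.uint8)
frame[:, 40:45] = 255
frame[:, 150:155] = 255
binary = threshold(to_gray(frame), 128)
edges = sobel_magnitude(binary)  # gradient magnitude, large at stripe borders
left, right, lane_center = lane_positions(binary)
offset = lane_center - frame.shape[1] // 2  # steering error vs frame center
print(left, right, lane_center, offset)  # 40 150 95 -5
```

A negative offset, as here, means the lane center sits left of the frame center, so the frame center must be shifted left to calibrate, exactly the adjustment described for Figure 5.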


8  Amar Shukla et al.

The CNN architecture classifies images and is primarily used for recognition tasks such as character recognition; its main building blocks are convolution, filters, the non-linear (ReLU) activation function, pooling (sub-sampling), and fully connected layers.

Since we are working with a CNN, we use the Conv2D and MaxPool2D layers available in Keras. After importing the Sequential model, we first add some convolution layers: the first two use 32 filters with a 5x5 kernel matrix, which is convolved over the original image to extract its important features. The kernel matrix is applied across the complete image matrix. We then incorporate a down-sampling filter, MaxPool2D, which reduces the image's dimensions; shrinking the image in this way simplifies further manipulation and analysis. The pooling size must also be chosen carefully. Combining convolution and pooling in this stage allows our model to learn more information.

Next, we add two more convolution layers, with 64 filters and further down-sampling, at the end. After pooling, we move on to the dropout layer, a regularization approach that randomly sets the weights of a fraction of the nodes in the layer to zero. Because some nodes randomly disconnect from the network, the remaining network is forced to reach a distributed solution. This strategy works well for increasing generalization and controlling overfitting. ReLU is the rectified linear activation function max(0, x); it is used to introduce non-linearity into the system.

The flatten layer converts the final feature maps into a single 1D vector representation. Flattening is necessary after the convolution and max-pooling layers in order to use fully connected layers; essentially, it combines all of the previously learned local features from the convolution layers.

As an alternative to going deeper, we built an ANN classifier on top of the features from the previous layer. The final layer produces a distribution of the likelihood of each class, which is displayed on the screen.
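The layer operations described above — convolution with a small kernel, ReLU activation, max pooling, and flattening — can be sketched in plain Python. This is a toy illustration only, not the Keras implementation used in the project; the helper names and the dummy 12x12 image are ours.

```python
# Toy sketch of the CNN layer operations: 5x5 convolution, ReLU,
# 2x2 max pooling, and flattening (pure Python, illustrative only).

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most DL libraries)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool2d(fmap, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    out = []
    for i in range(0, len(fmap) - size + 1, size):
        row = []
        for j in range(0, len(fmap[0]) - size + 1, size):
            row.append(max(fmap[i + a][j + b]
                           for a in range(size) for b in range(size)))
        out.append(row)
    return out

def relu(fmap):
    """Element-wise max(0, x), the rectified linear activation."""
    return [[max(0.0, v) for v in row] for row in fmap]

def flatten(fmap):
    """Convert a 2D feature map into a single 1D vector."""
    return [v for row in fmap for v in row]

# A dummy 12x12 "image" and a 5x5 kernel, mirroring the first convolution layer.
image = [[float((i * 12 + j) % 7 - 3) for j in range(12)] for i in range(12)]
kernel = [[0.04] * 5 for _ in range(5)]

fmap = relu(conv2d(image, kernel))   # 12x12 -> 8x8 feature map
pooled = max_pool2d(fmap)            # 8x8 -> 4x4 after 2x2 pooling
vec = flatten(pooled)                # 4x4 -> 16-element vector
```

In the real model, the Conv2D and MaxPool2D layers apply this per filter (32 or 64 of them) with learned kernels, and dropout and dense layers follow the flatten step.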

5 Result and analysis

Table 7: Analysis of pavement by considering the EPV factor

Pavement | Level | Noise | SS | RG | LC | EPV      | Optimal path
PV1      | 0     | 0     | 0  | 0  | 0  | NULL     | 0
PV2      | 0.25  | 1     | 1  | 0  | 0  | Marginal | 0.0055
PV3      | 0.5   | 1     | 1  | 0  | 0  | Good     | 0.0066
PV4      | 0.75  | 1     | 1  | 1  | 0  | Pleasant | 0.0074
PV5      | 1     | 1     | 1  | 1  | 1  | Best     | 0.0084

PV: pavement level, SS: smoothness, RG: roughness, LC: localization.

The process thoroughly evaluated the crucial aspects of the path by testing various parameters. While updating the results, these parameters were analyzed in relation to the condition of the pavement.

Table 7 contains the optimal path efficiency considering the EPV factor at the various pavement levels; each level scores the five constraints on a scale from 0 to 1. This table defines the optimized path by

Figure 7: Optimal path by consideration of EPV factor and pavement level with different parameters.

Table 8: Analysis of congestion condition by considering the EPV factor

Congestion condition | Traffic system | Object identification | Noise | EPV calculation | Optimal path
CC1                  | 0              | 0                     | 0     | 0               | 0
CC2                  | 23             | 22                    | 33.4  | 26.1            | 0.66
CC3                  | 22             | 28                    | 38    | 29.3            | 0.79
CC4                  | 25             | 29                    | 39    | 31              | 0.81
CC5                  | 46             | 45                    | 50    | 47              | 0.88
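The EPV values in Table 8 are consistent with a simple arithmetic mean of the three measured factors (traffic system, object identification, and noise). The snippet below is our reading of that relationship, not code from the paper.

```python
# Rows of Table 8: (traffic system, object identification, noise, reported EPV).
# Hypothesis (ours): the EPV column is the arithmetic mean of the three factors.
rows = {
    "CC1": (0, 0, 0, 0),
    "CC2": (23, 22, 33.4, 26.1),
    "CC3": (22, 28, 38, 29.3),
    "CC4": (25, 29, 39, 31),
    "CC5": (46, 45, 50, 47),
}

def epv(traffic, objects, noise):
    """Candidate EPV: mean of the three factor readings, to one decimal."""
    return round((traffic + objects + noise) / 3, 1)

# Each computed mean matches the table's EPV column to one decimal place.
checks = {name: epv(t, o, n) == round(e, 1)
          for name, (t, o, n, e) in rows.items()}
```

Under this reading, the congestion levels with the best driving conditions (CC4 and CC5) are exactly those with the highest EPV and optimal-path scores.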

Figure 8: Analysis of the congestion condition.

checking the noise value of the road, its smoothness, roughness, localization, and finally, the EPV factor.

Figure 7 contains the detailed analysis of the relation between the optimal path, the EPV, and the pavement condition, and describes how feasible the system's optimal decision-making is.

Table 8 contains the analysis of the congestion condition considering the EPV factor; the observations were most favourable at levels 4 and 5, where the driving condition is best under the multifactor analysis (Figure 8).

Table 9 analyzes the traffic condition and pothole condition, balancing the major features for optimal detection of the path so as to identify the feasibility of driving. The table describes the conditions and readings obtained during the road testing phase.

Table 9: Analysis of pothole condition and traffic light condition by considering the EPV factor

Pothole | TL  | TS | OI | Noise | EPV | Optimal path
PP1     | TL1 | 8  | 12 | 8     | 9   | 0.3
PP2     | TL2 | 28 | 27 | 46.9  | 34  | 0.68
PP3     | TL3 | 29 | 54 | 56    | 46  | 0.74
PP4     | TL4 | 29 | 59 | 39    | 42  | 0.87
PP5     | TL5 | 46 | 45 | 50    | 47  | 0.96

TL: traffic light, TS: traffic system, OI: object identification, OP: optimal path.

Figure 9 describes the functional analysis of the optimal travel path, in which the final route is decided on the basis of the pothole condition and the traffic light condition by using the EPV factor.

The figure also shows the setup model, which was tested on different road structures and domains; it shows promising observations under the strategic road conditions.

Table 10 compares the proposed system with existing systems in terms of the following parameters: sensor fusion, perception, localization, mapping, and efficiency. The proposed system shows better competency than the existing systems in deciding the path.

5.1 Cost of hardware components

The overall set of components required to build this project is very economical compared to the other components that could have been used. Table 11 contains the cost of the hardware components in USD and INR.
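The cost figures in Table 11 imply a single INR-to-USD conversion rate; the arithmetic check below is ours — the rate of roughly 74.2 INR per USD is inferred from the table's own totals, not stated in the paper, and we read the garbled "Rasbiri pipe" entry as the Raspberry Pi.

```python
# Table 11 line items as (name, INR, USD); names as we read them from the table.
items = [
    ("Arduino UNO R3", 700, 9.43),
    ("Robot wheel", 20, 0.27),
    ("Wooden board", 50, 0.67),
    ("Raspberry Pi", 2348, 31.64),
    ("Connecting pipes", 10, 0.13),
    ("Artificial lanes", 200, 2.69),
]
total_inr = sum(inr for _, inr, _ in items)   # reported total: 3,328 INR
total_usd = sum(usd for _, _, usd in items)   # reported total: 44.83 USD
rate = total_inr / total_usd                  # implied rate, about 74.2 INR/USD

# Every line item's USD price matches INR / rate to within about a cent.
consistent = all(abs(inr / rate - usd) < 0.02 for _, inr, usd in items)
```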

Figure 9: Analysis of pothole condition and traffic light condition.
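The route decision analyzed in Figure 9 — score each candidate pothole/traffic-light condition and take the one with the highest optimal-path value — can be illustrated with Table 9's numbers. The helper below is our illustrative sketch, not the paper's code.

```python
# Optimal-path scores from Table 9, keyed by (pothole, traffic light) pair.
scores = {
    ("PP1", "TL1"): 0.3,
    ("PP2", "TL2"): 0.68,
    ("PP3", "TL3"): 0.74,
    ("PP4", "TL4"): 0.87,
    ("PP5", "TL5"): 0.96,
}

def best_route(candidates):
    """Pick the candidate condition pair with the highest optimal-path score."""
    return max(candidates, key=scores.get)

chosen = best_route(scores)   # ("PP5", "TL5"), score 0.96
```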

Table 10: Existing system comparison

Model                                          | SS | PC | LC | MP
AV linear quadratic Gaussian (LQG) control [26] | ✓ | ✗ | ✓ | ✓
Hybrid cost and time path AV [27]               | ✓ | ✗ | ✓ | ✓
Physics-based path AV [28]                      | ✓ | ✓ | ✗ | ✓
Predictive maneuver AV [29]                     | ✓ | ✗ | ✓ | ✓
Proposed system                                 | ✓ | ✓ | ✓ | ✓

SS: sensors, PC: perception, LC: localization, MP: mapping.

Table 11: Cost analysis

S.No  | Hardware name    | INR   | USD
1     | Arduino UNO R3   | 700   | 9.43
2     | Robot wheel      | 20    | 0.27
3     | Wooden board     | 50    | 0.67
4     | Raspberry Pi     | 2,348 | 31.64
5     | Connecting pipes | 10    | 0.13
6     | Artificial lanes | 200   | 2.69
Total |                  | 3,328 | 44.83

6 Conclusion

Based on the findings from the system analysis, our team compared the results based on sensors, perception, and localization. We suggested that the system have a multifunction choice for choosing the pathway, taking the EPV component into account to ensure it includes the most prevalent path. With this approach, we were able to identify the most efficient approach to take while operating in a hostile environment. This was an effective way to determine the best course of action, as it takes into account multiple factors such as sensors, perception, and localization. Furthermore, it strengthens our team's commitment to providing maximum efficiency with minimum effort.

Funding information: The authors state no funding involved.

Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

Conflict of interest: The authors state no conflict of interest.

Informed consent: Informed consent was obtained from all individuals included in this study.

Ethical approval: The conducted research is not related to either human or animal use.

Data availability statement: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

[1] J. Borenstein and Y. Koren, "Obstacle avoidance with ultrasonic sensors," IEEE J. Robot. Autom., vol. 4, no. 2, pp. 213–218, 1988.

[2] Y. Wang, E. K. Teoh, and D. Shen, "Lane detection and tracking using B-Snake," Image Vis. Comput., vol. 22, no. 4, pp. 269–280, 2004.
[3] N. P. Pawar and M. M. Patil, "Driver assistance system based on Raspberry Pi," Int. J. Comput. Appl., vol. 95, no. 16, p. 16, 2014.
[4] T. D. Do, M. T. Duong, Q. V. Dang, and M. H. Le, "Real-time self-driving car navigation using deep neural network," in 2018 4th International Conference on Green Technology and Sustainable Development (GTSD), 2018, pp. 7–12.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, et al., "End to end learning for self-driving cars," arXiv preprint arXiv:1604.07316, 2016.
[6] Y. Ioannou, D. Robertson, R. Cipolla, and A. Criminisi, "Deep roots: Improving CNN efficiency with hierarchical filter groups," CoRR, abs/1605.06489, 2016.
[7] J. del Egio, L. M. Bergasa, E. Romera, C. Gómez Huélamo, J. Araluce, and R. Barea, "Self-driving a car in simulation through a CNN," in Workshop of Physical Agents, 2018, pp. 31–43.
[8] A. Agnihotri, P. Saraf, and K. R. Bapnad, "A convolutional neural network approach towards self-driving cars," in 2019 IEEE 16th India Council International Conference (INDICON), 2019, pp. 1–4.
[9] K. O'Shea and R. Nash, "An introduction to convolutional neural networks," arXiv preprint arXiv:1511.08458, 2015.
[10] A. Millard-Ball, "The autonomous vehicle parking problem," Transp. Policy, vol. 75, pp. 99–108, 2019.
[11] K. Mahadevan, S. Somanath, and E. Sharlin, Communicating awareness and intent in autonomous vehicle-pedestrian interaction, New York, NY, USA, Association for Computing Machinery, 2018.
[12] S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, "A survey of deep learning applications to autonomous vehicle control," IEEE Trans. Intell. Transportation Syst., vol. 22, no. 2, pp. 712–733, 2021.
[13] T. Ort, L. Paull, and D. Rus, "Autonomous vehicle navigation in rural environments without detailed prior maps," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 2040–2047.
[14] N. Adnan, S. M. Nordin, M. A. bin Bahruddin, and M. Ali, "How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle," Transportation Res. Part A: Policy Pract., vol. 118, pp. 819–836, 2018.
[15] J. Jiang and A. Astolfi, "Lateral control of an autonomous vehicle," IEEE Trans. Intell. Veh., vol. 3, no. 2, pp. 228–237, 2018.
[16] J. Fayyad, M. A. Jaradat, D. Gruyer, and H. Najjaran, "Deep learning sensor fusion for autonomous vehicle perception and localization: A review," Sensors, vol. 20, no. 15, p. 4220, 2020.
[17] H. Gao, B. Cheng, J. Wang, K. Li, J. Zhao, and D. Li, "Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment," IEEE Trans. Ind. Inform., vol. 14, no. 9, pp. 4224–4231, 2018.
[18] M. Khayyat, A. Alshahrani, S. Alharbi, I. Elgendy, A. Paramonov, and A. Koucheryavy, "Multilevel service-provisioning-based autonomous vehicle applications," Sustainability, vol. 12, no. 6, p. 2497, 2020.
[19] R. McCall, F. McGee, A. Mirnig, A. Meschtscherjakov, N. Louveton, T. Engel, et al., "A taxonomy of autonomous vehicle handover situations," Transportation Res. Part A: Policy Pract., vol. 124, pp. 507–522, 2019.
[20] S. A. Fayazi and A. Vahidi, "Mixed-integer linear programming for optimal scheduling of autonomous vehicle intersection crossing," IEEE Trans. Intell. Veh., vol. 3, no. 3, pp. 287–299, 2018.
[21] D. Stanek, R. T. Milam, E. Huang, and Y. A. Wang, Measuring autonomous vehicle impacts on congested networks using simulation, Transportation Research Board 97th Annual Meeting, 2018.
[22] A. Best, S. Narang, L. Pasqualin, D. Barber, and D. Manocha, "Autonovi-sim: Autonomous vehicle simulation platform with weather, sensing, and traffic control," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1048–1056.
[23] X. He, Y. Liu, C. Lv, X. Ji, and Y. Liu, "Emergency steering control of autonomous vehicle for collision avoidance and stabilisation," Veh. Syst. Dyn., vol. 57, no. 8, pp. 1163–1187, 2019.
[24] C. Sun, X. Zhang, Q. Zhou, and Y. Tian, "A model predictive controller with switched tracking error for autonomous vehicle path tracking," IEEE Access, vol. 7, pp. 53103–53114, 2019.
[25] Y. Jeong, S. Son, E. Jeong, and B. Lee, "An integrated self-diagnosis system for an autonomous vehicle based on an IoT gateway and deep learning," Appl. Sci., vol. 8, no. 7, p. 1164, 2018.
[26] K. Lee, S. Jeon, H. Kim, and D. Kum, "Optimal path tracking control of autonomous vehicle: Adaptive full-state linear quadratic Gaussian (LQG) control," IEEE Access, vol. 7, pp. 109120–109133, 2019.
[27] H. Fazlollahtabar and S. Hassanli, "Hybrid cost and time path planning for multiple autonomous guided vehicles," Appl. Intell., vol. 48, no. 2, pp. 482–498, 2018.
[28] B. Sebastian and P. Ben-Tzvi, "Physics based path planning for autonomous tracked vehicle in challenging terrain," J. Intell. Robotic Syst., vol. 95, no. 2, pp. 511–526, 2019.
[29] Q. Wang, B. Ayalew, and T. Weiskircher, "Predictive maneuver planning for an autonomous vehicle in public highway traffic," IEEE Trans. Intell. Transportation Syst., vol. 20, no. 4, pp. 1303–1315, 2019.
