Path Reader and Intelligent Lane Navigator by Autonomous Vehicle
Research Article
Amar Shukla, Ankit Verma, Hussain Falih Mahdi*, Tanupriya Choudhury*, and Thipendra Pal Singh
Open Access. © 2023 the author(s), published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.
processing power consumption as well as a reduction in the number of parameters required, without sacrificing CNN accuracy. The article's findings indicated that the system's accuracy was greater than that of CNN's cutting-edge architecture. When compared to previous models, the model required 40% fewer parameters and was 31% faster on the CPU, while preserving greater or similar efficiency [6].

By comparing different CNN models while implementing them on a self-driving car, the authors test which model is the best and proves to be the most efficient in a simulated environment. The CNN was trained on data obtained manually by driving a car and on previously obtained data from end-to-end deep learning techniques. When training is done, the CNN is tested in the driving simulator by checking its ability to reduce the distance traveled by the car to reach the center, the heading error, and the root mean square error. The conclusion drawn was that adding long short-term memory (LSTM) layers to the CNN produced better steering of the car, since they took into account the values previously predicted by the CNN and not just the newly predicted value of a single instance [7].

Level 2 automatic cars are implemented by the authors by taking the inputs from the front-facing camera on the vehicle and feeding them as steering inputs. The network requires minimal human intervention, as most variable features are learnt from the camera inputs themselves. The data sets used are from NVIDIA and Udacity, and when the CNN is given real inputs it can adapt to real-environment driving given a controlled environment. The setup consists of an ultrasonic sensor that detects obstacles and an RGB-depth camera working at 10 Hz, which outputs a steering angle [8].

O'Shea and Nash [9] have described the various Artificial Neural Networks (ANNs) and their types, most significantly the CNN. CNNs are mostly used to solve difficult image-driven tasks that require pattern recognition. They have a precise and simple architecture and are easy to implement; this study gave great insight into ANNs and especially CNNs.

Unlike typical cars [10], self-driving cars can park anywhere; alternatively, they can keep driving or cruise (circle around). Vehicles are thus enticed to clog roads. According to San Francisco's downtown data, self-driving cars might roughly treble the number of vehicles entering, leaving, and inside cities. Planned travels extend due to parking and cruising. Parking subsidies may have the unintended consequence of worsening congestion. According to the study's conclusions, the introduction of congestion pricing in cities in the near future will be heavily reliant on AVs. Congestion pricing should incorporate a time-based penalty as well as a distance- or energy-based fee to internalize the various externalities associated with driving.

Vehicle speed, eye-gazing, and hand gestures [11] all reveal a driver's purpose and attentiveness. The appearance and behavior of a car indicate to passengers whether the driver is likely to pay attention to the road. This research aims to enable passengers to comprehend an autonomous car's awareness and to express its intent to pedestrians, which might be difficult if explicit interfaces are avoided. A study of communicating an AV's mission and awareness to pedestrians was conducted: four user interface prototypes were designed and tested on a Segway and on cars. It is possible to taste, touch, smell, and hear things in the environment and to combine these senses to do so.

Deep learning-based vehicle control systems [12] are becoming more common. Before building a vehicle controller, engineers must rigorously test it under various driving conditions. Recent improvements in deep learning algorithms promise to solve challenging non-linear control problems and to transfer knowledge from earlier events to new situations. These significant advances have received little attention. This study uncovers current and valuable information on intelligent transportation systems, which is vital for the field's future. Control and perception are interwoven in this research.

Modern autonomous driving systems [13] rely on detailed prior mapping. Although prevalent in cities, precise maps are difficult to develop, preserve, and transmit. Rural areas have high turnover, making exact mapping challenging. A self-driving automobile was tested in the countryside to ensure its functionality. The car uses its local sensing system to detect road conditions. This system calculates the car's distance and speed through recursive residual filtering and odometry, allowing it to navigate complex road networks easily.

These AI product features [14] should assist in minimizing traffic congestion, road accidents, and social exclusion. Future human transportation will have AI-powered drivers. Despite their apparent benefits, people are still wary about driverless cars. People's trust in machines may help build autonomous systems. This study assesses the acceptance of autonomous technologies; that is, future studies should examine user trust and approval. Changes to the roadway and subsurface infrastructure affected traffic, community attitudes and concerns, potentially transferable behaviors and requests, other business models, and strategy. Malaysian law enforcement agencies must identify critical elements to investigate AV manipulators' conspiracy claims appropriately.

A family of nonlinear under-actuated systems [15] was found to be soluble. The vehicle's lateral dynamic control system incorporates the usage of forwarding and backward controls. Even if the findings of theoretical studies […]
Table 1: Detailed study of the current models with various parameters
3 Problem formulation
Level | Traffic light visibility description | Threshold
TL1 | Light visibility is good; all the objects in the navigation are clearly visible | ≤1
TL2 | All the objects in the navigation are visible | ≤2
TL3 | Driving visibility is fair | ≤3
TL4 | Uneven driving visibility | ≤4
TL5 | No light visibility, uneven traffic conditions | ≤5

TL1: very good traffic light condition, TL2: good traffic light condition, TL3: fair traffic light condition, TL4: poor traffic light condition, TL5: very poor traffic light condition.
also a key element in determining whether or not to travel on a road, and these conditions may be taken into account to avoid catastrophic accidents and to offer a smooth driving experience for visitors who travel on that route. Potholes create distinct circumstances on the road, where they disrupt the smooth driving element, and these are further factors that are required for determining the road conditions and evaluating the distance.

3.1 Designing and development

First, calculate the optimized distance by consulting the effecting pavement condition (EPV) factor from equation (1). These factors help to analyze the proper navigation. We will convert the required map into a graph in which each place with potholes is depicted as a node; this helps to detect the potholes in an accurate manner.
A car has to enter the area or the place where it wants to start the journey and set the destination area. If the car has to go from one place to another, then all the routes possible according to the map are depicted in the form of Figure 2. What is added to each distance here is the factor effecting pavement condition (FEPV) value, which will help us find the best fuel-efficient path in the final route.

Fepv = Dp + (Pr + Cf + Tl + Rf)/4,  (1)

where Dp = distance, Pr = pavement roughness, Cf = congestion factor, Tl = traffic light factor, and Rf = road factor.
After this, we have to design an algorithm for the aforementioned problem; this project also aims to choose the best optimized algorithm, which will be used to find the shortest path between two points entered by the user (Table 6).

Table 6: Finding out the navigation path through the proposed approach

Algorithm: Finding out the navigation path through the proposed approach
Input: Enter the distance node values
Output: Path navigator and observed value
Begin
  Dij = 0 if i = j; Cij = length(ni, nj) if a segment exists; otherwise NULL
  for K = 0 to A − 1
    for J = 0 to A − 1
      nij(K + 1) = min(nij(K), Epv(nij(K)) + dij(K))
    End for
  End for
End
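The recurrence in Table 6 relaxes pairwise path costs in the manner of a Floyd–Warshall pass, with the EPV factor of equation (1) folded into each edge weight. Below is a minimal Python sketch of that idea, assuming a matrix representation of the road graph; the helper names (fepv, navigation_costs) and the edge-factor encoding are illustrative, not from the paper.

    INF = float("inf")

    def fepv(dp, pr, cf, tl, rf):
        # Equation (1): distance plus the averaged pavement penalties.
        return dp + (pr + cf + tl + rf) / 4

    def navigation_costs(num_nodes, edges):
        # edges maps (i, j) to the tuple (Dp, Pr, Cf, Tl, Rf) of a road segment.
        n = [[0 if i == j else INF for j in range(num_nodes)]
             for i in range(num_nodes)]
        for (i, j), factors in edges.items():
            n[i][j] = fepv(*factors)          # EPV-weighted segment cost
        # Relax every pair (i, j) through each intermediate node k.
        for k in range(num_nodes):
            for i in range(num_nodes):
                for j in range(num_nodes):
                    n[i][j] = min(n[i][j], n[i][k] + n[k][j])
        return n                              # n[i][j] = cheapest EPV cost i -> j

For instance, a segment of raw length 10 with penalty factors (0.5, 0.25, 0.3, 0.4) receives the weight 10 + 1.45/4 ≈ 10.36, so smoother and less congested segments are preferred when the relaxation takes the minimum.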
4 Methodology

First, we need to do the setup of the car as shown in Figure 2 with the required hardware. Take all four motors and connect jumper wires to them. Since our H-bridge can only handle two motors at a time, connect two motors with each other at one time. Assemble all the motors on the plastic plates. Remember to cross-couple them to let the motors move in the same direction.

Figure 2: These two figures contain the frame for the region of interest and the actual region of interest calculated.

The open-source computer vision library (OpenCV) stores the image in the BGR color format; however, we need to change it to the RGB color format, which is important for adjusting the settings of the view we have. We will use a setup function for our camera to stabilize, and then we will take the region of interest around four corners.

Figure 3: Calculation of the right and left position of the lane and the gray scale image.

We will take a sample region of interest as shown in Figure 3. For this, we will define the region we want to identify so that the camera focuses on the lane and moves the car in the forward direction. In the implementation process, we will convert our RGB image into gray scale to get a clear vision through the camera.
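As a concrete illustration of this camera step, the following OpenCV sketch grabs a frame, converts it from BGR to RGB, and crops an assumed region of interest; the camera index and the corner coordinates are placeholders rather than values from the paper.

    import cv2

    cap = cv2.VideoCapture(0)                      # Pi/USB camera, index assumed
    ret, frame = cap.read()                        # OpenCV returns frames in BGR order
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # switch to the RGB format we need

    # Region of interest bounded by four assumed corners (rows y1:y2, cols x1:x2).
    y1, y2, x1, x2 = 120, 240, 0, 320
    roi = rgb[y1:y2, x1:x2]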
We define the threshold manually, initially setting a specific value and creating a histogram; all values above this threshold are converted into white pixels, while all remaining values become black pixels. The next step involves identifying all edges and corners within the lane using the Canny edge detection technique. This process facilitates easier object identification for our autonomous car.

Prior to image processing, we convert our RGB image in Figure 4 into gray scale for easier manipulation. We define the threshold manually by setting a specific value. Pixels greater than the threshold are turned into white, while all remaining pixels are turned into black.
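Continuing the sketch above, the manual thresholding can be written as follows; the cutoff of 100 is an assumed example that would in practice be read off the intensity histogram.

    gray = cv2.cvtColor(roi, cv2.COLOR_RGB2GRAY)   # RGB frame to gray scale
    # Pixels above the threshold become white (255); the rest become black (0).
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY)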
The next step is finding all the edges and corners coming up in the lane so that our car can identify objects easily, with the help of Canny edge detection. Canny edge detection basically detects sudden changes in the image gradient. For getting the Canny edges, we will apply the Sobel operator on our thresholded image. In the Sobel operator, suppose Gx is an image in which each pixel contains the horizontal derivative and Gy is an image in which each pixel contains the vertical derivative; then G = sqrt(Gx^2 + Gy^2), where G represents the image gradient.
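The gradient step can be sketched as below, continuing from the binary image of the previous step; the Sobel kernel size and the Canny hysteresis thresholds are illustrative choices.

    import numpy as np

    gx = cv2.Sobel(binary, cv2.CV_64F, 1, 0, ksize=3)   # horizontal derivative Gx
    gy = cv2.Sobel(binary, cv2.CV_64F, 0, 1, ksize=3)   # vertical derivative Gy
    g = np.sqrt(gx ** 2 + gy ** 2)                      # G = sqrt(Gx^2 + Gy^2)
    edges = cv2.Canny(binary, 50, 150)                  # edges from sudden gradient changes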
The next step is finding the exact position of the lane, i.e., the left position and the right position, between which our autonomous car will move during its journey.

Figure 4: Calculation of the center of the lane and calibrating lane center with frame center.

Green lines depict the lane finder. In the next step, we will find the lane center using the left lane position and the right lane position. The blue color in Figure 5 shows the lane center.

Figure 5: Difference between lane center and frame center.

In the next step, we will calibrate our lane center with the frame center (Figure 5). The green line depicts the lane center and the blue line depicts the frame center; we will shift the frame center towards the left so that it calibrates with the lane center. In the next step, we will move our autonomous car in different directions and check the resulting difference between the lane center and the frame center.
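One common way to realize this step, sketched here under the assumption that lane pixels dominate the lower half of the edge image from the previous sketch, is a column histogram; the variable names are illustrative.

    # Sum edge pixels per column over the lower half of the frame.
    histogram = np.sum(edges[edges.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_pos = int(np.argmax(histogram[:midpoint]))               # left lane x-position
    right_pos = int(np.argmax(histogram[midpoint:])) + midpoint   # right lane x-position

    lane_center = (left_pos + right_pos) // 2
    frame_center = edges.shape[1] // 2
    offset = lane_center - frame_center   # positive: steer right; negative: steer left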
In the following stage, we will use CNNs (Figure 6). CNNs are a type of neural network that have proven to be particularly effective in picture recognition and categorization. The CNN design is primarily utilized for categorization and character recognition jobs and is built from components such as convolution, filters, the non-linearity (ReLU) activation function, pooling or sub-sampling, and a fully connected layer.

Since we are dealing with a CNN, we will work here with the Conv2D and MaxPool2D layers present in Keras. After we have imported the sequential model, we will first add some convolution layers. We will first add convolution layers with 32 filters for the first two convolution layers, each having a 5x5 kernel matrix filter, which can be convolved with the original image to extract the important features from the image. The kernel matrix is applied on the complete image matrix. We have now incorporated a down-sampling filter, specifically MaxPool2D, which reduces the image's dimensions. This process effectively shrinks the size of the image, simplifying further manipulation and analysis.

Next, we must decide upon the pooling size; it is critical to select the pooling dimension as well. Also, we are using convolution and pooling in this layer to allow our model to learn more information. Next, we will add two more convolution layers, with 64 filters and down-sampling, at the conclusion. After we have finished pooling, we will go on to the dropout layer, which is a regularization approach that randomly sets the weights of a section of the nodes in the layer to zero, causing certain nodes to randomly disconnect from the network. This necessitates the remaining network to reach a distributed solution. When it comes to increasing generalization and controlling overfitting, this strategy works well. ReLU is an abbreviation for the rectified linear unit, which computes the maximal activation function max(0, x); the rectifier activation function is used to introduce non-linearity into the system.

The flatten layer is used to convert the final feature maps into a single 1D vector representation. It is necessary to flatten the layers once they have been convolved and max pooled in order to use fully connected layers. Essentially, this combines all of the previously learnt local properties of the convolution layers. As an alternative to digging deeper, we built an ANN classifier based on the properties of the previous layer. The final layer produces a distribution of the likelihood of each class, which is displayed on the screen.
5 Result and analysis

The process has been thoroughly evaluated on the crucial aspects of the path by testing various parameters. While updating the results, these parameters were analyzed in relation to the condition of the pavement.

Table 7 contains the optimal path efficiency considering the EPV factor at the various pavement levels; the table contains the level, which defines the five constraints from 0 to 1. This table defines the optimized path by checking the notation of the noise value of the road, smoothness, roughness, localization, and finally, the EPV factor.

Table 7: Analysis of pavement by considering the EPV factor

Pavement | Level | Noise | SS | RG | LC | EPV | Optimal path
PV1 | 0 | 0 | 0 | 0 | 0 | NULL | 0
PV2 | 0.25 | 1 | 1 | 0 | 0 | Marginal | 0.0055
PV3 | 0.5 | 1 | 1 | 0 | 0 | Good | 0.0066
PV4 | 0.75 | 1 | 1 | 1 | 0 | Pleasant | 0.0074
PV5 | 1 | 1 | 1 | 1 | 1 | Best | 0.0084

PV: pavement level, SS: smoothness, RG: roughness, LC: localization.

Figure 7: Optimal path by consideration of EPV factor and pavement level with different parameters.
Table 8: Analysis of the congestion condition by considering the EPV factor

Congestion condition | Traffic system | Object identification | Noise | EPV calculation | Optimal path
CC1 | 0 | 0 | 0 | 0 | 0
CC2 | 23 | 22 | 33.4 | 26.1 | 0.66
CC3 | 22 | 28 | 38 | 29.3 | 0.79
CC4 | 25 | 29 | 39 | 31 | 0.81
CC5 | 46 | 45 | 50 | 47 | 0.88
Figure 7 contains the detailed analysis of the optimal path's relation with the EPV and pavement condition and describes the feasible nature of the optimal decision-making by the system.

Table 8 contains the analysis of the congestion condition considering the EPV factor; there was a feasible observation at levels 5 and 4, where the driving condition is best when considering the multifactor analysis (Figure 8).
Table 9 analyzes the traffic condition and the pothole condition and contains the major feature balance for the optimal detection of the path, identifying the feasibility for driving. The table describes the conditions and readings obtained during the testing phase on the road.

Figure 9 describes the functional analysis of the optimal travel path for deciding the final route on the basis of the pothole condition and the traffic light condition by using the EPV factor. The figure discusses the setup model; this model was tested on different road structures and domains, and it shows promising observations under the strategic road conditions.

Table 10 contains the comparison with the existing systems in terms of the following parameters: sensor fusion, perception, localization, mapping, and efficiency. The proposed system shows better competency in deciding the path compared with the existing systems.

Table 9: Analysis of pothole condition and traffic light condition by considering the EPV factor

Pothole | TL | TS | OI | Noise | EPV | Optimal path
S.No | Hardware name | INR | USD
1 | Arduino UNO R3 | 700 | 9.43
2 | Robot wheel | 20 | 0.27
3 | Wooden board | 50 | 0.67
4 | Raspberry Pi | 2,348 | 31.64
5 | Connecting wires | 10 | 0.13
6 | Artificial lanes | 200 | 2.69
Total | | 3,328 | 44.83

Conflict of interest: The authors state no conflict of interest.

Informed consent: Informed consent was obtained from all individuals included in this study.

Ethical approval: The conducted research is not related to either human or animal use.
References

[2] Y. Wang, E. K. Teoh, and D. Shen, "Lane detection and tracking using B-Snake," Image Vis. Comput., vol. 22, no. 4, pp. 269–280, 2004.
[3] N. P. Pawar and M. M. Patil, "Driver assistance system based on Raspberry Pi," Int. J. Comput. Appl., vol. 95, no. 16, p. 16, 2014.
[4] T. D. Do, M. T. Duong, Q. V. Dang, and M. H. Le, "Real-time self-driving car navigation using deep neural network," In 2018 4th International Conference on Green Technology and Sustainable Development (GTSD), 2018, pp. 7–12.
[5] M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, et al., "End to end learning for self-driving cars," arXiv preprint arXiv:1604.07316, 2016.
[6] Y. Ioannou, D. Robertson, R. Cipolla, and A. Criminisi, "Deep roots: Improving CNN efficiency with hierarchical filter groups," CoRR, abs/1605.06489, 2016.
[7] J. del Egio, L. M. Bergasa, E. Romera, C. Gómez Huélamo, J. Araluce, and R. Barea, "Self-driving a car in simulation through a CNN," In Workshop of Physical Agents, 2018, pp. 31–43.
[8] A. Agnihotri, P. Saraf, and K. R. Bapnad, "A convolutional neural network approach towards self-driving cars," In 2019 IEEE 16th India Council International Conference (INDICON), 2019, pp. 1–4.
[9] K. O'Shea and R. Nash, "An introduction to convolutional neural networks," arXiv preprint arXiv:1511.08458, 2015.
[10] A. Millard-Ball, "The autonomous vehicle parking problem," Transp. Policy, vol. 75, pp. 99–108, 2019.
[11] K. Mahadevan, S. Somanath, and E. Sharlin, Communicating awareness and intent in autonomous vehicle-pedestrian interaction, New York, NY, USA, Association for Computing Machinery, 2018.
[12] S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, "A survey of deep learning applications to autonomous vehicle control," IEEE Trans. Intell. Transportation Syst., vol. 22, no. 2, pp. 712–733, 2021.
[13] T. Ort, L. Paull, and D. Rus, "Autonomous vehicle navigation in rural environments without detailed prior maps," In 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 2040–2047.
[14] N. Adnan, S. M. Nordin, M. A. bin Bahruddin, and M. Ali, "How trust can drive forward the user acceptance to the technology? In-vehicle technology for autonomous vehicle," Transportation Res. Part A: Policy Pract., vol. 118, pp. 819–836, 2018.
[15] J. Jiang and A. Astolfi, "Lateral control of an autonomous vehicle," IEEE Trans. Intell. Veh., vol. 3, no. 2, pp. 228–237, 2018.
[16] J. Fayyad, M. A. Jaradat, D. Gruyer, and H. Najjaran, "Deep learning sensor fusion for autonomous vehicle perception and localization: A review," Sensors, vol. 20, no. 15, p. 4220, 2020.
[17] H. Gao, B. Cheng, J. Wang, K. Li, J. Zhao, and D. Li, "Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment," IEEE Trans. Ind. Inform., vol. 14, no. 9, pp. 4224–4231, 2018.
[18] M. Khayyat, A. Alshahrani, S. Alharbi, I. Elgendy, A. Paramonov, and A. Koucheryavy, "Multilevel service-provisioning-based autonomous vehicle applications," Sustainability, vol. 12, no. 6, p. 2497, 2020.
[19] R. McCall, F. McGee, A. Mirnig, A. Meschtscherjakov, N. Louveton, T. Engel, et al., "A taxonomy of autonomous vehicle handover situations," Transportation Res. Part A: Policy Pract., vol. 124, pp. 507–522, 2019.
[20] S. A. Fayazi and A. Vahidi, "Mixed-integer linear programming for optimal scheduling of autonomous vehicle intersection crossing," IEEE Trans. Intell. Veh., vol. 3, no. 3, pp. 287–299, 2018.
[21] D. Stanek, R. T. Milam, E. Huang, and Y. A. Wang, Measuring autonomous vehicle impacts on congested networks using simulation, Transportation Research Board 97th Annual Meeting, 2018.
[22] A. Best, S. Narang, L. Pasqualin, D. Barber, and D. Manocha, "Autonovi-sim: Autonomous vehicle simulation platform with weather, sensing, and traffic control," In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 1048–1056.
[23] X. He, Y. Liu, C. Lv, X. Ji, and Y. Liu, "Emergency steering control of autonomous vehicle for collision avoidance and stabilisation," Veh. Syst. Dyn., vol. 57, no. 8, pp. 1163–1187, 2019.
[24] C. Sun, X. Zhang, Q. Zhou, and Y. Tian, "A model predictive controller with switched tracking error for autonomous vehicle path tracking," IEEE Access, vol. 7, pp. 53103–53114, 2019.
[25] Y. Jeong, S. Son, E. Jeong, and B. Lee, "An integrated self-diagnosis system for an autonomous vehicle based on an IoT gateway and deep learning," Appl. Sci., vol. 8, no. 7, p. 1164, 2018.
[26] K. Lee, S. Jeon, H. Kim, and D. Kum, "Optimal path tracking control of autonomous vehicle: Adaptive full-state linear quadratic Gaussian (LQG) control," IEEE Access, vol. 7, pp. 109120–109133, 2019.
[27] H. Fazlollahtabar and S. Hassanli, "Hybrid cost and time path planning for multiple autonomous guided vehicles," Appl. Intell., vol. 48, no. 2, pp. 482–498, 2018.
[28] B. Sebastian and P. Ben-Tzvi, "Physics based path planning for autonomous tracked vehicle in challenging terrain," J. Intell. Robotic Syst., vol. 95, no. 2, pp. 511–526, 2019.
[29] Q. Wang, B. Ayalew, and T. Weiskircher, "Predictive maneuver planning for an autonomous vehicle in public highway traffic," IEEE Trans. Intell. Transportation Syst., vol. 20, no. 4, pp. 1303–1315, 2019.