
Platform Technologies and Applications for Industrial Internet of Things (IIoT) (PTA) - Research Article

International Journal of Distributed Sensor Networks
2019, Vol. 15(11)
© The Author(s) 2019
DOI: 10.1177/1550147719883133
journals.sagepub.com/home/dsn

A deep learning platooning-based video information-sharing Internet of Things framework for autonomous driving systems

Zishuo Zhou1, Zahid Akhtar2, Ka Lok Man3,4 and Kamran Siddique1

1 Xiamen University Malaysia, Sepang, Malaysia
2 The University of Memphis, Memphis, TN, USA
3 AI University Research Centre (AI-URC), Xi'an Jiaotong-Liverpool University, Suzhou, China
4 Swinburne University of Technology, Sarawak, Malaysia

Corresponding author: Kamran Siddique, Xiamen University Malaysia, Jalan Sunsuria, 43900 Sepang, Selangor, Malaysia. Email: [email protected]

Abstract
To enhance the safety and stability of autonomous vehicles, we present a deep learning platooning-based video information-sharing Internet of Things framework in this study. The proposed Internet of Things framework incorporates concepts and mechanisms from several domains of computer science, such as computer vision, artificial intelligence, sensor technology, and communication technology. The information captured by cameras, such as road edges, traffic lights, and zebra crossings, is highlighted using computer vision, and the semantics of the highlighted information are recognized by artificial intelligence. Sensors provide the direction and distance of obstacles, as well as their speed and moving direction. Communication technology is applied to share this information among the vehicles. Since vehicles have a high probability of encountering accidents in congested locations, the proposed system enables vehicles to perform self-positioning with other vehicles within a certain range to reinforce their safety and stability. The empirical evaluation shows the viability and efficacy of the proposed system in such situations. Moreover, the collision time is decreased considerably compared with that of traditional systems.

Keywords
Autonomous vehicle, information sharing, platooning-based, IoT, autonomous driving, convolutional neural networks

Date received: 30 June 2019; accepted: 17 September 2019

Handling Editor: SooKyun Kim

Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (http://www.creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Introduction

Autonomous control technologies have gradually entered the vehicle market, including adaptive cruise control (acceleration/deceleration), automated emergency braking, and lane-changing and lane-keeping systems for locking onto a path, resulting in the full autonomy of a self-driving car. The sensory perception system is composed of many subsystems responsible for tasks such as car localization, static obstacle mapping, moving object detection and tracking, road mapping, and traffic signalization detection and recognition, among others.

As a matter of fact, autonomous driving began in the 1980s when Navlab vehicles, which functioned in structured environments, were presented by Carnegie Mellon University (Pittsburgh, PA).1 Subsequently, the University of the Bundeswehr Munich (UniBw Munich, Neubiberg, Germany) reported early results of high-speed motorway driving.2

In the Eureka PROMETHEUS project in 1994, UniBw Munich and Daimler-Benz showed that autonomous driving reached a speed of 130 km/h in three-lane French Autoroute traffic, which included tracking other vehicles and lane markings. The system determined by itself when the vehicle should change lanes, although a human driver was required to approve the decisions for safety reasons.3

Autonomous or self-driving vehicles are typically trained offline before they are allowed to perform in the real world.4,5 However large the training dataset might be, in real-world driving, a vehicle is bound to come across unexpected situations (e.g., an accident) where it needs to act (steer, brake, etc.) quickly. Moreover, a detecting instrument without a penetration function cannot detect objects that are occluded by other vehicles. Consequently, blind areas where objects cannot be detected remain. This limitation leaves autonomous vehicles incapable of handling emergency situations. If future information can be obtained by the current vehicle from a reliable source, such as another vehicle traversing the same road in front of the current vehicle, a drone, or a satellite, the vehicle will get more time to act. This ''future'' data presented to the vehicle is a salient data point, as it would have been unexpected given the model learned by the vehicle. The question ''how can this data point be used by the vehicle to safely mitigate the unexpected situation?'' is the main objective of this work.

If platooning-based information-sharing technology is used in autonomous driving, then vehicles can share such situations among themselves. Obstacles occluded from vehicle A can still be detected by vehicle B: B sends A the positions of the obstacles relative to B together with its own position, and A can then calculate the positions of the obstacles relative to itself.

In this study, a camera, radar, and lidar instrument are used to detect obstacles rather than just relying on high-definition mapping and localization techniques. Furthermore, convolutional neural networks (CNNs) are applied to realize classification and recognition. The proposed system enables vehicles to communicate, share detected information, and perform self-positioning with one another in a platoon via wireless devices. We select the WiMAX (Worldwide Interoperability for Microwave Access) technique as the wireless communication method due to its long transmission distance and high transmission rate. After receiving information from other vehicles in a platoon, a vehicle reconstructs the circumstances of the surrounding obstacles by analyzing that information and becomes capable of abating blind areas. Finally, the dynamic window approach (DWA) algorithm is adopted to plan the path, while proportional–integral–derivative (PID) control and model predictive control (MPC) are applied in vehicle control.

The rest of this article is organized as follows: Section ''Related work'' presents work related to this study. The proposed approach is detailed in section ''Methodology.'' Experiments are described and discussed in section ''Experimental evaluation,'' and conclusions are drawn in section ''Conclusion and future directions.''

Related work

In 2012, lane detection was used to facilitate lane departure warnings6 for drivers and reinforce the driver's heading control in lane-keeping assist systems. The detection and tracking of vehicles driving ahead was utilized in adaptive cruise control systems7 to keep a safe and comfortable distance. Precrash systems, which trigger full braking power to reduce damage if a driver reacts slowly, also emerged.8

In 2014, Mercedes-Benz9 successfully demonstrated an S 500 S-Class vehicle equipped with close-to-production sensor hardware that relied solely on vision and radar sensors combined with accurate digital maps to gain a comprehensive understanding of complex traffic circumstances.

In 2016, a 4WIS4WID10,11 vehicle was proposed, in which vision judgments were integrated with fuzzy control methods to ensure the correct motion of the vehicle. The vehicle was able to change its velocity in a timely manner under any condition and was able to move successfully in a curved and narrow lane.

Two inner loops9 of simultaneous localization and mapping helped improve perception and planning performance. An algorithm was presented that adds an inner loop to the perception system to expand the detection range of the sensors; the other inner loop obtained practical feedback to restrain mutations between two adjacent planning periods.

Google's autonomous vehicle project Waymo12 has been shown to distinguish obstructions such as pedestrians and cars. It calculates their velocities and predicts their motion paths for the next moment. Waymo's software determines the trajectory, speed, lane, and steering maneuvers needed to progress along the route safely. Despite significant contributions to autonomous driving, the project still needs more testing to become fully mature.

With the development of integration and miniaturization techniques, the additional instruments installed on the outside of vehicles have become much smaller than those installed on earlier autonomous vehicles.

Methodology

Installation of cameras and sensors

The methodology for installing cameras and sensors is adopted from the architecture of Baidu's Apollo autonomous vehicle. Figure 1(a) shows the structure of Baidu's Apollo, which uses multisensor fusion to improve perception performance and is a basic example of modern autonomous vehicles, whereas Figure 1(b) shows an image of a real autonomous vehicle with almost the same sensors installed. This vehicle was used in the DARPA Urban Challenge 2007.

Figure 1. The autonomous vehicles and the major sensors: (a) a structure of Baidu's Apollo autonomous vehicle13 and (b) an autonomous vehicle with sensors installed: cameras, lidars, and GPS antennas.8

Lidar sensors are used to detect obstacles in the distance. These sensors are usually installed on the top of the vehicle to acquire optimum vision. The detection result is classified via point cloud segmentation, and then the type, distance, and velocity of the obstacles are determined.

Cameras mainly have three tasks to perform. The first is to recognize objects; unlike lidar, the camera recognizes objects via CNNs. The second is to recognize traffic lights. With the classification of red and green lights and the GPS location on the map, vehicles can decide to go straight, turn left/right, or wait for traffic lights. The third task is to track lanes, as shown in Figure 2. This feature is necessary for vehicles to drive on modern streets.

Figure 2. Lane tracking on a marked highway.8

Radar sensors mainly detect nearby objects. These sensors are usually installed around the margin of the vehicle to ensure that the vehicle does not contact other objects.

GPS is used with the help of a high-resolution map to determine the location of vehicles at the centimeter level. A wireless device is also installed to accomplish information sharing. We calculate the relative positions of objects that are occluded by other connected vehicles.

Vehicles detect surrounding obstacles using camera, lidar, radar, GPS, and inertial sensors that are commonly available on the market. Four cameras, a 32-layer lidar, a 4-layer lidar, a millimeter-wave radar, and a GPS + inertial sensor are utilized.14 The distribution method is similar to that of Google's autonomous driving framework, and the statistics are shown in Figure 3. The main advantage of this distribution is that it can cover most of the areas surrounding the vehicle and adapt to various traffic situations and weather conditions. However, the expenditure of this distribution is high, and it is not very suitable for parking tasks because distances of less than 1 m are undetectable.

Classification, tracking, and segmentation using CNNs

Figure 3. The detection distribution statistics of sensors for an autonomous vehicle.

Figure 4. Example of the typical layered structure of a CNN.18

CNN, as the most important algorithm used in advanced driver-assistance systems and automotive automatic vision systems, is expected to play an important role in fully automatic driving. A CNN is efficient in analyzing scenes. This algorithm divides scenes into recognizable objects until the objects, pedestrians, cars, trucks, shoulders, and landmarks in the scenes can be recognized by the camera system.15–17 CNNs can learn how to recognize and extract information from the scenes when driving in real time by using a large amount of training data. For example, corners/bends can be found through the various layers of a CNN, and the next objects are loops, road signs, and the meaning of road signs. This information is transmitted to the sensor and fused with data from other sensors, such as lidar or radar. Flash warnings, or control of the brakes or steering through a multimedia interactive system, can then be issued to understand the situation and respond to the scenes.

CNNs include multiple categories of layers through which all information is fed. These layers are stacked in a hierarchical pattern and consist of convolutional layers, pooling layers, fully connected (FC) layers, and a loss layer. Each type of layer has its own concentration and objective in the procedure of analyzing data. With each consecutive layer, the analysis turns to a more abstract form. In the field of image recognition, this means that the first layers react to stimuli such as light intensity changes or oriented fields, whereas the later layers decide the identification of objects and make an intelligent evaluation of their importance. This behaves as a large generalization of the patterns the layers ''search for'' in an image. These layers are based on the mathematical functions contained by their neurons to process the pixels of the image. While all layers are composed of neurons, not all of them serve the same objective.

Figure 4 shows a typical CNN structure beginning with convolutional layers of increasing complexity and ending in an FC layer to extract the data. With the help of the FC layer arranged at the end, the network can collect in-depth data in varied dimensions through its convolutional layers and extract these data to a readable output included in the final FC layer.18 We select VGG16-Places365 as the basic model for location recognition because it shows the best performance on multiple datasets. Beginning with LeNet-5,19 CNNs usually have standard stacked convolution layers (optionally followed by batch normalization and maximum pooling), followed by one or more FC layers.
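To make this layered structure concrete, here is a minimal PyTorch sketch of a VGG-style convolutional feature extractor. It borrows torchvision's VGG16 graph purely for its architecture (the Places365 weights relied on in this paper are assumed to be loaded separately), and the extra pooling step and output size are our own illustrative stand-ins, not the exact modified model of this work.

```python
import torch
import torchvision.models as models

# VGG16 architecture: 13 conv layers in five blocks, each block followed by
# 2x2 max pooling with stride 2, then three FC layers at the end.
vgg = models.vgg16(weights=None)   # Places365 weights would be loaded here

# Drop the FC classifier and keep only the convolutional stack, in the
# spirit of the modified model described below for place recognition.
conv_stack = vgg.features

with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)                          # one RGB frame
    fmap = conv_stack(img)                                     # (1, 512, 7, 7)
    pooled = torch.nn.functional.adaptive_max_pool2d(fmap, 4)  # extra pooling
    f_cnn = pooled.flatten(1)                                  # 1-D descriptor
print(f_cnn.shape)                                             # torch.Size([1, 8192])
```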

VGG16-Places365 has the same structure as that of VGG (Visual Geometry Group), which consists of 16 weight layers, including 13 convolution layers and 3 FC layers. The Places dataset contains more than 10 million images with 365 unique scene categories; therefore, the size of the last FC layer should be modified to 365. The 13 convolution layers are divided into five parts, each with the same data dimension. Behind each part, a maximum pooling layer exists, which is executed through a 2 × 2 pixel window with a stride of 2. Following the stack of convolution layers are three FC layers: the first two layers have 4096 channels, and the third layer performs the 365-channel location classification, thereby comprising 365 channels (one for each category). In addition to these layers, the last layer is the soft-max layer, and all hidden layers are equipped with rectified linear unit nonlinearity.

CNNs can learn advanced semantic features at various levels of abstraction through their deep architecture. However, the spatial information of images is lost through FC layers, which may not be ideal in applications such as visual location recognition. The experimental results in Chen et al.20 and Bai et al.21 show that the deep CNN features generated in the convolution layers perform better than those of the FC layers in loop closure detection. We modify the CNN model by adding several pooling layers and deleting the FC layers to reduce the feature size and save image-processing time. After adjusting the features of the three layers to one dimension, we use the connection operation22 to fuse them.

Visual features are among the most important factors that affect the accuracy of image matching. Our method uses the CNN features extracted from the given CNN model rather than traditional handmade features to calculate the similarity among images. Floating point is the type of CNN feature that we ultimately acquire from the module. We name this feature $F_{cnn}$, which has a dimension of $1 \times 100{,}352$. A practical way to reduce the cost of image matching is to convert the feature vectors into binary codes, which can be compared quickly with the Hamming distance. We first standardize each element into an 8-bit integer to obtain the integer feature $F^{std}_{cnn}$, as shown in equation (1). It can then be easily converted into the binary feature $F^{bin}_{cnn}$

$$F^{std}_{cnn} = \frac{F_{cnn} - \min(F_{cnn})}{\max(F_{cnn}) - \min(F_{cnn})} \times 255 \quad (1)$$

The use of a binary descriptor matched with the Hamming distance is fast and effective and is adopted to calculate the distance among images. Many studies have determined that the similarity of two frames can be calculated by matching a single image; therefore, we can calculate their Hamming distance $HmmDistance_{ij}$ to represent similarity. The calculation process is

$$HmmDistance_{ij} = HmmDistance_{ji} = \mathrm{bitsum}\left(F^{bin}_{cnn(i)} \oplus F^{bin}_{cnn(j)}\right) \quad (2)$$

where $F^{bin}_{cnn(i)}$ and $F^{bin}_{cnn(j)}$ are the feature descriptors of the two images. A location is considered an image sequence rather than a single image because this performs better in long-term and large-scale environments, as described in works such as Thorpe et al.1 and Zong et al.9 In our method, we define $S_{length}$ as the matching image sequence length of the current frame. Therefore, the image sequence of the $i$th frame consists of the continuous images in the range $(i - S_{length} + 1,\, i)$, and we connect $F^{bin}_{cnn(i-k+1)}, F^{bin}_{cnn(i-k+2)}, \ldots, F^{bin}_{cnn(i)}$ into the final feature for matching. In this case, we can use the sequence information of equation (3) to obtain the distance among images. The distance is the similarity score of different places, and we keep it in the similarity matrix $M$. If we find that the distance between two frames is less than the given threshold, then these positions are successfully identified23–25

$$Distance_{ij} = Distance_{ji} = \frac{\sum_{k=0}^{S_{length}-1} HmmDistance_{i-k,\, j-k}}{S_{length}} \quad (3)$$
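As a concrete illustration of equations (1)–(3), the following Python sketch (NumPy only; the shapes and helper names are our own assumptions rather than the authors' code) binarizes a floating-point descriptor and computes the sequence-averaged Hamming distance between two places.

```python
import numpy as np

def binarize(f_cnn: np.ndarray) -> np.ndarray:
    """Equation (1): scale a float feature vector to 8-bit integers,
    then unpack each integer into bits to obtain F_cnn^bin."""
    f_std = (f_cnn - f_cnn.min()) / (f_cnn.max() - f_cnn.min()) * 255
    return np.unpackbits(f_std.astype(np.uint8))

def hamming_distance(f_i: np.ndarray, f_j: np.ndarray) -> int:
    """Equation (2): bitsum of the XOR of two binary descriptors."""
    return int(np.count_nonzero(np.bitwise_xor(f_i, f_j)))

def sequence_distance(seq_i, seq_j) -> float:
    """Equation (3): Hamming distance averaged over the last
    S_length aligned frames of two image sequences."""
    total = sum(hamming_distance(a, b) for a, b in zip(seq_i, seq_j))
    return total / len(seq_i)

# toy usage: two sequences of 3 frames with 1 x 100,352 float features
rng = np.random.default_rng(0)
seq_a = [binarize(rng.standard_normal(100_352)) for _ in range(3)]
seq_b = [binarize(rng.standard_normal(100_352)) for _ in range(3)]
print(sequence_distance(seq_a, seq_b))  # smaller score = more similar place
```

The similarity matrix M would then be filled with these scores, and two frames whose score falls below the chosen threshold are declared the same place.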
WiMAX data transportation

WiMAX features. IEEE 802.16 is a set of telecommunications technology standards for providing wireless access over long distances in various ways, covering point-to-point links to full-mobile cellular-type access. WiMAX is a broadband wireless access technology for wireless metropolitan area networks.26 This technology supports not only fixed terminals but also portable and mobile terminals.

1. High transmission rate

The access speed of WiMAX can reach 70 Mbit/s.26 A high transmission rate helps vehicles exchange enough information to depict the circumstances around them.

2. Long transmission distance

The transmission distance can theoretically exceed 50 km. WiMAX can effectively resist attenuation and multipath effects and select different encoding technologies according to the channel state and transmission rate to improve coverage and capacity. The support of spatial multiplexing, multiuser detection, self-adaptive power control, and other technologies enables WiMAX to have large coverage and capacity.

3. Standardization and compatibility

Reconciling standards and promoting interoperability can be achieved on the basis of unified technical standards. Standardization and compatibility are the guarantee of the popularization of a technique.

Figure 5. Platooning-based information sharing among autonomous vehicles.

Sharing GPS data in a vehicle platoon. Vehicles can considerably reduce the blind areas near themselves by sharing detection information in the vehicle platoon, as depicted in Figure 5. Moreover, vehicles can obtain road condition information at a further distance by combining GPS and detection information; for example, car A receives information that a traffic accident has occurred in the direction of 30° south to east of car B at N 36°18′51.743″, E 28°05′15.4″.

Calculation of the position of the occluded vehicles. By using the shared detection information in a platoon, we can calculate one relative position from another relative position.

As shown in Figure 6, we are driving in vehicle A and obtain the angle $\alpha$ and distance $S_{AB}$ to car B through detection. At this moment, object C cannot be detected by A but can be detected by B; thus, the angle $\beta$ and distance $S_{BC}$ can be transmitted to A.

Figure 6. Illustration of vehicles' operation to calculate the position of invisible vehicles.

We calculate the relative position of C to A through the information sent to A by B. According to the Pythagorean theorem

$$S_{AC} = \sqrt{(S_{AB}\sin\alpha + S_{BC}\sin\beta)^2 + (S_{AB}\cos\alpha + S_{BC}\cos\beta)^2} = \sqrt{S_{AB}^2 + S_{BC}^2 + 2S_{AB}S_{BC}\cos(\alpha - \beta)} \quad (4)$$

where $S_{AB}$, $S_{BC}$, and $S_{AC}$ are the distances between A and B, B and C, and A and C, respectively; $\alpha$ is the angle between the moving direction of A and the vector $\overrightarrow{AB}$; and $\beta$ is the angle between the moving direction of B and the vector $\overrightarrow{BC}$.

According to the cosine theorem

$$\theta = \alpha + \cos^{-1}\left(\frac{S_{AB}^2 + S_{AC}^2 - S_{BC}^2}{2S_{AB}S_{AC}}\right) \quad (5)$$

where $\theta$ is the angle between the moving direction of A and the vector $\overrightarrow{AC}$. The magnitude of the velocity can be calculated over a very short time interval $\Delta T$ (Figure 7(a)). The length B moves in this short time duration is

$$\Delta S = \Delta\theta \times \frac{L + L'}{2} \quad (6)$$

where $\Delta\theta$ is the change of the angle between the connecting line and the direction perpendicular to the motion. Then, the magnitude of the velocity of B relative to A ($V_{BA}$) is

$$|V_{BA}| = \frac{\Delta S}{\Delta T} = \frac{\Delta\theta\,\frac{L + L'}{2}}{\Delta T} = \frac{\Delta\theta (L + L')}{2\Delta T} = \frac{(\theta - \theta')(L + L')}{2\Delta T} \quad (7)$$

where $\Delta T$ is the time duration. The velocity of C relative to B ($V_{CB}$) can be calculated in the same way.

Figure 7. Calculation of the velocities VCB and VBA: (a) magnitude of velocity and (b) direction of velocity.

In a very short time interval, the direction can be taken as approximately the same as that of segment m, that is

$$m = \sqrt{L^2 + L'^2 - 2LL'\cos\Delta\theta} \quad (8)$$

where $\alpha$ is the angle of the driving direction between A and B, which is expressed as

$$\alpha = \cos^{-1}\left(\frac{L^2 + m^2 - L'^2}{2Lm}\right) - \left(\frac{\pi}{2} - \theta\right) \quad (9)$$

Figure 7 illustrates the process of calculating the magnitude and the direction of the velocities $V_{CB}$ and $V_{CA}$. The velocity of C relative to B ($V_{CB}$) has been transmitted to A simultaneously, and the velocity of B relative to A ($V_{BA}$) is detected by A. We then calculate the velocity of C relative to A using the relative velocity formula. Here, we do not take the theory of relativity into account because a vehicle's velocity is far lower than the velocity of light, that is

$$V_{CA} = V_{CB} + V_{BA} \quad (10)$$

where $V_{CA}$ is the velocity of C relative to A, $V_{CB}$ is the velocity of C relative to B, and $V_{BA}$ is the velocity of B relative to A.

Once vehicle A obtains the relative position $(S_{AC}, \theta)$ and velocity $V_{CA}$ of C, it can speculate on the motion of C even though A cannot detect C directly.
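The velocity estimate can be sketched the same way. The Python functions below are our own arrangement of equations (7) and (10), with 2D velocity vectors standing in for the figure's geometry; the names and the sampling interval are illustrative assumptions.

```python
import math

def relative_speed(theta0: float, theta1: float,
                   l0: float, l1: float, dt: float) -> float:
    """Equation (7): |V_BA| from two bearings (rad) and two ranges (m)
    measured dt seconds apart; the swept arc approximates the path."""
    return abs(theta1 - theta0) * (l0 + l1) / (2.0 * dt)

def compose_velocity(v_cb: tuple[float, float],
                     v_ba: tuple[float, float]) -> tuple[float, float]:
    """Equation (10): classical (non-relativistic) velocity addition,
    V_CA = V_CB + V_BA, on 2D vectors."""
    return (v_cb[0] + v_ba[0], v_cb[1] + v_ba[1])

# B's bearing moved 2 degrees in 0.1 s while its range shrank 20 m -> 19.5 m
print(relative_speed(0.0, math.radians(2.0), 20.0, 19.5, 0.1))  # ~6.9 m/s
print(compose_velocity((1.0, -2.0), (0.5, 3.0)))                # (1.5, 1.0)
```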
Planning realization using the DWA algorithm

The DWA algorithm can be divided into two parts: search space and optimization. The two parts can be further divided into three steps.

Search space. The search space of the possible velocities is built in three steps:

1. Circular trajectories: Circular trajectories (curvatures) are uniquely determined by pairs (v, w) of translational and rotational velocities and are the main factors considered by the DWA. In this step, the output is a two-dimensional (2D) velocity search space.
2. Admissible velocities: Only safe trajectories are considered because of the restriction to admissible velocities. A pair (v, w) is considered admissible if the vehicle can stop before it reaches the closest obstacle on the corresponding curvature.
3. Dynamic window: The dynamic window limits the admissible velocities to those that can be reached within a short time interval given the restricted accelerations of the vehicle.

Optimization. The objective of the optimization is to maximize the following function

$$G(v, w) = \sigma\big(\alpha \cdot \mathrm{heading}(v, w) + \beta \cdot \mathrm{dist}(v, w) + \gamma \cdot \mathrm{vel}(v, w)\big) \quad (11)$$

This function trades off the following three aspects in accordance with the current position and orientation of the vehicle:

1. Target heading: heading is a measure of progress toward the destination. This aspect is maximal if the vehicle moves directly toward its target.
2. Clearance: dist indicates the distance from the vehicle to the closest obstacle on the trajectory. The smaller the distance to an obstacle, the higher is the vehicle's desire to move around it.
3. Velocity: vel represents the forward velocity of the vehicle.

The function $\sigma$ smooths the weighted sum of the three aspects and leads to considerable side clearance from obstacles.27,28
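Putting the search space and the objective together, one DWA cycle might look like the Python sketch below. The sampling resolution, the acceleration limits, and the weights α, β, γ are illustrative assumptions rather than values from this paper, and the smoothing σ of equation (11) is omitted for brevity.

```python
import itertools, math

def dwa_step(v, w, dist_fn, heading_fn, dt=0.1,
             a_max=2.0, aw_max=1.0, v_max=15.0, w_max=1.0,
             alpha=0.8, beta=0.1, gamma=0.1):
    """One DWA cycle: sample (v, w) pairs from the dynamic window,
    keep the admissible ones, and maximize G(v, w) of equation (11)."""
    best, best_score = (0.0, 0.0), -math.inf
    v_samples = [v + a_max * dt * k / 5 for k in range(-5, 6)]
    w_samples = [w + aw_max * dt * k / 5 for k in range(-5, 6)]
    for vc, wc in itertools.product(v_samples, w_samples):  # dynamic window
        if not (0.0 <= vc <= v_max and abs(wc) <= w_max):
            continue
        d = dist_fn(vc, wc)                # clearance on this curvature
        if vc > math.sqrt(2 * a_max * d):  # admissible: can stop in time
            continue
        score = (alpha * heading_fn(vc, wc)  # progress toward the target
                 + beta * d                  # clearance
                 + gamma * vc)               # forward velocity
        if score > best_score:
            best, best_score = (vc, wc), score
    return best

# toy usage: free space everywhere, target roughly straight ahead
v, w = dwa_step(5.0, 0.0,
                dist_fn=lambda v_, w_: 10.0,
                heading_fn=lambda v_, w_: 1.0 - abs(w_))
print(v, w)
```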

Fuzzy adaptive PID and MPC

Fuzzy adaptive PID. The fuzzy adaptive PID control is devised on the basis of PID control. Its general form can be expressed as

$$u(t) = k_p e(t) + k_i \int_0^t e(t)\,dt + k_d \frac{de(t)}{dt} \quad (12)$$

where $k_p$ is the proportional coefficient, $k_i$ is the integral coefficient, and $k_d$ is the differential coefficient. In actual control, the integral and the differential are usually replaced with a summation and a difference quotient, respectively. The discretized equation can be expressed as

$$u(k) = k_p e(k) + k_i \sum_{i=0}^{k-1} e(i) + k_d \left[e(k) - e(k-1)\right] \quad (13)$$

Figure 8. Structure of fuzzy adaptive PID controller.25

The 2D fuzzy inference controller has two inputs and three outputs, as shown in Figure 8. The inputs of the fuzzy inference controller are the deviation e and the deviation change rate ec between the expected and actual front wheel angle. The outputs are the deviations in the proportional, integral, and differential coefficients, expressed as $\Delta k_p$, $\Delta k_i$, and $\Delta k_d$, respectively.

In the steering process, the deviation and the deviation change rate feed the fuzzy inference controller constantly. The fuzzy controller can adjust the three parameters $k_p$, $k_d$, and $k_i$ to meet the various requirements of e and ec. The vehicle can thus respond appropriately and enhance its steering stability.29–31
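Equation (13) with online gain updates can be sketched as follows in Python. The discrete controller is exactly equation (13); fuzzy_adjust is a deliberately simplified stand-in for the fuzzy rule base of Figure 8, so its update rules are our own toy assumptions.

```python
class FuzzyAdaptivePID:
    """Discrete PID of equation (13) whose gains are nudged online
    by (dkp, dki, dkd) from a fuzzy inference step."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.err_sum = 0.0   # running sum replacing the integral
        self.prev_err = 0.0  # previous error for the difference quotient

    def fuzzy_adjust(self, e, ec):
        # Toy surrogate for the rule base: grow kp/ki with the deviation,
        # damp kd when the deviation changes quickly.
        self.kp += 0.01 * e
        self.ki += 0.001 * e
        self.kd += -0.01 * ec

    def update(self, e):
        ec = e - self.prev_err         # deviation change rate
        self.fuzzy_adjust(e, ec)
        self.err_sum += self.prev_err  # sum over i = 0 .. k-1
        u = self.kp * e + self.ki * self.err_sum + self.kd * ec
        self.prev_err = e
        return u

pid = FuzzyAdaptivePID(kp=1.2, ki=0.05, kd=0.3)
for e in (0.5, 0.4, 0.25, 0.1):        # shrinking front-wheel-angle error
    print(round(pid.update(e), 4))
```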

MPC. MPC is an advanced process control method that is used to realize process control under certain constraints. Its implementation depends on the dynamic model of the process (usually a linear model). In the control time domain (a limited period of time), it mainly optimizes the current time while considering the future time to obtain the optimal control solution for the current time, and it optimizes repeatedly to approach the optimal solution over the entire time domain.32,33

MPC is a time-dependent method that uses the current state of the system and the current control quantity to realize the control of the future state of the system. The future state of the system is uncertain; thus, the future control quantity should be adjusted continuously according to the system state during the control process.

Compared with classical PID control, MPC has the capabilities of optimization and prediction. MPC is an optimization control problem that aims to decompose a long, even infinite, time span into several shorter or finite time spans for optimal control and still pursues the optimal solution to a certain extent.

Three steps are performed in MPC, as made concrete in the sketch after this list:

1. Predictive modeling is the basis of MPC, which is used to predict the future output of the system.
2. Rolling optimization, an online optimization, is used to optimize the control inputs over a short time to minimize the gap between the predictive model output and the reference value.
3. Feedback correction, which is based on the actual output of the controlled object at the new sampling time, corrects the output of the predictive model and then optimizes it to prevent a large gap between the control output and the expectation caused by model mismatch or external interference.33
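The three steps read like the following receding-horizon loop. This Python sketch (a scalar linear model and a grid search over a short horizon, both our own simplifications) is only meant to make the predict/optimize/correct cycle concrete.

```python
import itertools

def mpc_step(x, ref, a=1.0, b=0.5, horizon=3,
             candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon cycle for the toy model x' = a*x + b*u:
    1) predict the trajectory for each candidate control sequence,
    2) pick the sequence minimizing the gap to the reference,
    3) apply only its first move (feedback correction happens when
       the next measured state re-enters this function)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        xp, cost = x, 0.0
        for u in seq:                 # step 1: predictive model
            xp = a * xp + b * u
            cost += (xp - ref) ** 2   # step 2: rolling optimization
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

x = 0.0
for _ in range(5):              # step 3: only the first input is applied,
    u = mpc_step(x, ref=1.0)    # then we re-plan from the newly
    x = 1.0 * x + 0.5 * u       # observed state
    print(round(x, 3))
```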
Experimental evaluation

Here, we provide an experimental evaluation of the proposed Internet of Things (IoT) framework.

Experimental setup

We implemented the experiments in simulation software and achieved significant results; however, we advise that the results be used only as a reference. In order to simulate driving in residential districts, on urban roads, and on highways, we considered speeds of 20, 40, 60, and 80 km/h wherever appropriate:
1. We release three to five autonomous vehicles with the platooning-based information-sharing function disabled on a simulated road at 20, 40, 60, and 80 km/h. We then make dynamic obstacles move along different directions. The obstacles must include both visible and invisible objects. After the vehicles have traveled 5000 km, we count the occurrences of collisions and record them as a1, a2, a3, and a4.
2. We release three to five autonomous vehicles with the platooning-based information-sharing function enabled on a simulated road at 20, 40, 60, and 80 km/h. We then make dynamic obstacles move along different directions. The obstacles must include both visible and invisible objects. After the vehicles have traveled 5000 km, we count the occurrences of collisions and record them as A1, A2, A3, and A4.
3. We release only one autonomous vehicle, with its detectors completely covered, on a simulated road at 20, 40, 60, and 80 km/h. We then make dynamic obstacles move along different directions. The obstacles must include both visible and invisible objects. After the vehicle has traveled 5000 km, we count the occurrences of collisions and record them as b1, b2, b3, and b4.
4. We release three to five autonomous vehicles with the platooning-based information-sharing function turned on, on a simulated road at 20, 40, 60, and 80 km/h, and completely cover the detectors of one of the vehicles. We then make dynamic obstacles move along different directions. The obstacles must include both visible and invisible objects. After the vehicles have traveled 5000 km, we count the collisions that occur on the vehicle with completely covered detectors and record them as B1, B2, B3, and B4. After performing several experiments, we record the average results and report them in the following section.

Figure 9. Collision times in four different tests at various simulation speeds.

Results and discussion

Figure 9 shows the result statistics for all test groups with respect to speed and collision times. For Test 1, we switched off the wireless device in the proposed platooning-based information-sharing framework and obtained collision times of 21.4, 35.8, 51.2, and 84.4 against the speeds of 20, 40, 60, and 80 km/h, respectively. This scenario and performance are almost the same as those of most common autonomous vehicles available today. As we predicted, collisions increase as velocity increases, because the reaction distance decreases as velocity increases. When invisible, moving obstacles emerge suddenly, collisions are usually inevitable.

In contrast to Test 1, Test 2 is conducted by switching on the wireless device. After the vehicles are connected with each other, autonomous driving becomes safe and predictable. An obvious decrease in collision times occurs, especially when vehicles are moving at high speeds. The problem of insufficient reaction distance is alleviated by the platooning-based information-sharing function because the vehicles obtain information on invisible moving obstacles in advance.

Test 3 shows the worst results, which indicate that if the detectors do not function, then the vehicle loses all safety. We set this test mainly to simulate the most dangerous case that autonomous vehicles may meet, namely, invalid sensors.

We switch off the wireless device to perform a contrast experiment. Numerous collisions occur. In this case, autonomous vehicles cannot guarantee the security of passengers.

We then switch on the wireless device to conduct Test 4. Collision times are still high but improved, as vehicles can obtain information indirectly through the other vehicles connected to them. In sum, the four tests prove that platooning-based information sharing has a positive effect on autonomous vehicles.

Conclusion and future directions

We improved the performance of autonomous vehicles by adding a platooning-based information-sharing function to decrease the risk of crashing when detectors are covered and invisible moving obstacles suddenly appear. In other words, when the platooning-based information-sharing mode is enabled, the autonomous vehicles can notice and predict obstacles in advance. The vehicles can plan around and avoid the obstacles accurately and enhance safety, in contrast to vehicles without the platooning-based information-sharing mode or vehicles that do not switch on this functionality.

In the future, we aim to optimize the path planning at a higher level, that is, by designing an algorithm that enables vehicles to move cooperatively to save fuel and time. We also intend to separate the different statuses of vehicles and utilize them to improve accuracy and security. For instance, separating high-speed vehicles from low-speed vehicles can save time and enhance safety. Such improvements should be considered only after the safety of autonomous vehicles is sufficiently high and stable; at any time, the safety of autonomous vehicles is always the most important consideration. In addition, with the application of 5G in the future, there will be potential to significantly reduce the latency of information transmission.34,35

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by the Office of Research and Innovation, Xiamen University Malaysia, under the XMUM Research Program Cycle 3 (Grant No: XMUMRF/2019-C3/IECE/0006). Ka Lok Man thanks the AI University Research Centre (AI-URC), Xi'an Jiaotong-Liverpool University, Suzhou, China, for supporting his related research contributions to this article through the XJTLU Key Programme Special Fund (KSF-P-02).

ORCID iD

Kamran Siddique https://orcid.org/0000-0003-2286-1728

References

1. Thorpe C, Hebert M, Kanade T, et al. Toward autonomous driving: the CMU Navlab. Part II: system and architecture. IEEE Expert 1991; 6(1): 44–52.
2. Dickmanns ED and Zapp A. Autonomous high speed road vehicle guidance by computer vision. In: Proceedings of the 10th triennial world congress, Munich, 27–31 July 1987, vol. 4, pp.221–226. Amsterdam: Elsevier.
3. Behringer R. Road recognition from multifocal vision. In: Proceedings of the intelligent vehicles '94 symposium, Paris, 24–26 October 1994, pp.302–307. New York: IEEE.
4. Kim J, Lim G, Kim Y, et al. Deep learning algorithm using virtual environment data for self-driving car. In: Proceedings of the 2019 international conference on artificial intelligence in information and communication (ICAIIC), Okinawa, Japan, 11–13 February 2019, pp.444–448. New York: IEEE.
5. Oh CS and Yoon JM. Hardware acceleration technology for deep-learning in autonomous vehicles. In: Proceedings of the 2019 IEEE international conference on big data and smart computing (BigComp), Kyoto, Japan, 27 February–2 March 2019, pp.1–3. New York: IEEE.
6. Suzuki K and Jansson H. An analysis of driver's steering behaviour during auditory or haptic warnings for the designing of lane departure warning system. JSAE Rev 2003; 24(1): 65–70.
7. van Arem B, van Driel CJG and Visser R. The impact of cooperative adaptive cruise control on traffic-flow characteristics. IEEE T Intell Transp 2006; 7(4): 429–436.
8. Luettel T, Himmelsbach M and Wuensche H. Autonomous ground vehicles—concepts and a path to the future. P IEEE 2012; 100: 1831–1839.
9. Zong W, Zhang C, Wang Z, et al. Architecture design and implementation of an autonomous vehicle. IEEE Access 2018; 6: 2169–3536.
10. Li TS, Lee M, Lin C, et al. Design of autonomous and manual driving system for 4WIS4WID vehicle. IEEE Access 2016; 4: 2169–3536.
11. Lee M and Li TS. Kinematics, dynamics and control design of 4WIS4WID mobile robots. J Eng 2015; 2015(1): 6–16.
12. Saunders W, Sastry G, Stuhlmüller A, et al. Trial without error: towards safe reinforcement learning via human intervention. In: Proceedings of the 17th international conference on autonomous agents and multi agent systems, Stockholm, 10–15 July 2018, pp.2067–2069. Richland, SC: AAMAS.
13. Baidu Apollo, http://apollo.auto/platform/perception.html
14. Ranges A, Yuen K, Satzoda RK, et al. A multimodal, full-surround vehicular testbed for naturalistic studies and benchmarking: design, calibration and deployment, 2019, https://arxiv.org/pdf/1709.07502.pdf
15. Gao H, Cheng B, Wang J, et al. Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment. IEEE T Ind Inform 2018; 14(9): 4224–4231.
16. Kim H, Hong S, Son H, et al. High speed road boundary detection on the images for autonomous vehicle with the multi-layer CNN. In: Proceedings of the 2003 international symposium on circuits and systems (ISCAS '03), Bangkok, Thailand, 25–28 May 2003. New York: IEEE.
17. Ishibushi S, Taniguchi A, Takano T, et al. Statistical localization exploiting convolutional neural network for an autonomous vehicle. In: Proceedings of the IECON 2015 – 41st annual conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015, pp.1369–1375. New York: IEEE.
18. Welling E and Oppelt M. Convolutional neural networks in autonomous vehicle control systems, 2017, https://pdfs.semanticscholar.org/545b/2ce4bc5ed7b1c1089020b3e53c1d67186370.pdf
19. LeCun Y, Bottou L, Bengio Y, et al. Gradient-based learning applied to document recognition. P IEEE 1998; 86(11): 2278–2324.
20. Chen Z, Lam O, Jacobson A, et al. Convolutional neural network-based place recognition, 2014, https://arxiv.org/ftp/arxiv/papers/1411/1411.1509.pdf
21. Bai D, Wang C, Zhang B, et al. CNN feature boosted SeqSLAM for real-time loop closure detection, 2017, https://arxiv.org/pdf/1704.05016.pdf
22. Arroyo R, Alcantarilla PF, Bergasa LM, et al. Fusion and binarization of CNN features for robust topological localization across seasons. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS), Daejeon, South Korea, 9–14 October 2016, vol. 1, pp.4656–4663. New York: IEEE.
23. Milford M and Wyeth GF. SeqSLAM: visual route-based navigation for sunny summer days and stormy winter nights. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Saint Paul, MN, 14–18 May 2012, pp.1643–1649. New York: IEEE.
24. Arroyo R, Alcantarilla PF, Bergasa LM, et al. Towards life-long visual localization using an efficient matching of binary sequences from images. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), Seattle, WA, 26–30 May 2015, pp.6328–6335. New York: IEEE.
25. Zhu J, Ai Y, Tian B, et al. Visual place recognition in long-term and large-scale environment based on CNN feature. In: Proceedings of the 2018 IEEE intelligent vehicles symposium (IV), Changshu, China, 26–30 June 2018, pp.1679–1685. New York: IEEE.
26. So-In C, Jain R and Tamimi A. Scheduling in IEEE 802.16e mobile WiMAX networks: key issues and a survey. IEEE J Sel Area Comm 2009; 27(2): 156–171.
27. Fox D, Burgard W and Thrun S. The dynamic window approach to collision avoidance. IEEE Robot Autom Mag 1997; 4(1): 23–33.
28. Seder M and Petrovic I. Dynamic window based approach to mobile robot motion control in the presence of moving obstacles. In: Proceedings of the 2007 IEEE international conference on robotics and automation, Rome, 10–14 April 2007, pp.1986–1991. New York: IEEE.
29. Wang C, Zhou D, Zhao W, et al. Front wheel angle control of steering by wire system based on fuzzy adaptive PID algorithm. WSEAS Trans Syst Control 2015; 10: 577–583.
30. Wu Z and Mizumoto M. PID type fuzzy controller and parameters adaptive method. Fuzzy Set Syst 1996; 78(1): 23–35.
31. Halin H, Haris H, Razlan ZM, et al. Simulation studies—path tracking of an autonomous electric vehicle (AEV) by using fuzzy information of speed and steering angle. In: Proceedings of the 2018 international conference on computational approach in smart systems design and applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018, pp.1–4. New York: IEEE.
32. Frasch JV, Gray A, Zanon M, et al. An auto-generated nonlinear MPC algorithm for real-time obstacle avoidance of ground vehicles. In: Proceedings of the 2013 European control conference (ECC), Zurich, 17–19 July 2013, pp.4136–4141. New York: IEEE.
33. Holka KS and Waghmare LM. An overview of model predictive control. Int J Control Autom 2010; 3(4): 47–64.
34. Molina-Masegosa R and Gozalvez J. LTE-V for sidelink 5G V2X vehicular communications: a new 5G technology for short-range vehicle-to-everything communications. IEEE Veh Technol Mag 2017; 12(4): 30–39.
35. Campolo C, Molinaro A, Iera A, et al. 5G network slicing for vehicle-to-everything services. IEEE Wirel Commun 2017; 24(6): 38–45.
