
An Efficient and Scalable Simulation Model for Autonomous Vehicles With Economical Hardware

Muhammad Sajjad, Muhammad Irfan, Khan Muhammad, Member, IEEE, Javier Del Ser, Senior Member, IEEE, Javier Sanchez-Medina, Member, IEEE, Sergey Andreev, Senior Member, IEEE, Weiping Ding, Senior Member, IEEE, and Jong Weon Lee

Abstract—Autonomous vehicles rely on sophisticated hardware and software technologies for acquiring holistic awareness of their immediate surroundings. Deep learning methods have effectively equipped modern self-driving cars with high levels of such awareness. However, their application requires high-end computational hardware, which makes utilization infeasible for the legacy vehicles that constitute most of today's automotive industry. Hence, it becomes inherently challenging to achieve high performance while at the same time maintaining adequate computational complexity. In this paper, a monocular vision and scalar sensor-based model car is designed and implemented to accomplish autonomous driving on a specified track by employing a lightweight deep learning model. It can identify various traffic signs based on a vision sensor as well as avoid obstacles by using an ultrasonic sensor. The developed car utilizes a single Raspberry Pi as its computational unit. In addition, our work investigates the behavior of economical hardware used to deploy deep learning models. In particular, we herein propose a novel, computationally efficient, and cost-effective approach. The designed system can serve as a platform to facilitate the development of economical technologies for autonomous vehicles that can be used as part of intelligent transportation or advanced driver assistance systems. The experimental results indicate that this model can achieve real-time response on a resource-constrained device without significant overheads, thus making it a suitable candidate for autonomous driving in current intelligent transportation systems.

Index Terms—Autonomous driving, Raspberry Pi, scalar-visual sensor, intelligent transportation systems.

Manuscript received August 15, 2018; revised June 29, 2019, September 25, 2019, and December 16, 2019; accepted January 24, 2020. This research was supported by the MSIT (Ministry of Science, ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Promotion), the Natural Science Foundation of Jiangsu Province under Grant BK20191445, the Six Talent Peaks Project of Jiangsu Province under Grant XYDXXJS-048, and in part by the RADIANT Project, Academy of Finland. The Associate Editor for this article was E. I. Vlahogianni. (Corresponding author: Khan Muhammad.)

Muhammad Sajjad is with the Digital Image Processing Laboratory, Islamia College University, Peshawar 25000, Pakistan (e-mail: [email protected]).
Muhammad Irfan, Khan Muhammad, and Jong Weon Lee are with the Department of Software, Sejong University, Seoul 143-747, South Korea (e-mail: [email protected]; [email protected]; [email protected]).
Javier Del Ser is with TECNALIA, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain, and also with the University of the Basque Country, 48013 Bilbao, Spain (e-mail: [email protected]).
Javier Sanchez-Medina is with the Centro de Innovación para la Sociedad de la Información, University of Las Palmas de Gran Canaria, 35001 Las Palmas, Spain (e-mail: [email protected]).
Sergey Andreev is with the Unit of Electrical Engineering, Tampere University, 33720 Tampere, Finland (e-mail: [email protected]).
Weiping Ding is with the School of Information Science and Technology, Nantong University, Nantong 226019, China (e-mail: [email protected]).
Digital Object Identifier 10.1109/TITS.2020.2980855

I. INTRODUCTION

A VEHICLE capable of perceiving its surrounding environment and driving by itself safely without human intervention is known as an autonomous vehicle (also referred to as a self-driving, driverless, or robotic vehicle) [1], [2]. Autonomous cars have constantly made headline news over the last few years. Different manufacturing companies and startups are aiming to develop safer, more responsive, and more reliable cars for consumers of the next generation [3]. There is growing competition among the biggest car manufacturing companies, each making their own version of a self-driving car.

Companies like Google, Apple, Honda, Porsche, and Tesla have established labs for developing self-driving cars. Baidu [4], a Chinese web services corporation, has also focused its attention on improving different factors involved in self-driving cars. Other companies and research labs are working on the individual layers involved in autonomous vehicles, such as the sensors, communication, operating system, infotainment system, and computational hardware, to enhance their performance. As autonomous vehicles rely on several capabilities, these individual factors can be further fused in them to reliably resolve different challenges in the field of self-driving cars.

More than 1.25 million people die in car accidents around the globe each year. According to a report of the World Health Organization [5], over 50 million people suffer non-fatal injuries, while many acquire a disability. Car crashes cause extensive financial losses to individuals, their relatives, and nations. These losses include the cost of treatment, time taken off from jobs, and the effort to care for the injured. A major cause of these road crashes and accidents is distracted driving, which takes the lives of innocent people. Considering these losses of precious human lives, a system is needed which is totally free of human intervention, or which partially assists humans, to minimize these fatalities, thus advancing the autonomous driving industry.

Researchers from different parts of the globe are contributing to different aspects of autonomous vehicles [6]–[11]. To motivate the research community toward autonomous driving technology, the Defense Advanced Research Projects Agency (DARPA) arranged the Grand and Urban Challenge competitions in the USA [12], [13].

The purpose of the challenge was the development of autonomous vehicles that can traverse off-road terrain by themselves [14]. Urban Challenge competitors focused on the improvement of autonomous vehicles with urban driving technology. As a result of these competitions, several IT companies and automakers, including General Motors, Volkswagen, Google, and Toyota, have invested into commercializing the concept of autonomous cars. Similarly, another competition in autonomous vehicles was held by the Hyundai Motor Group in the years 2010 and 2012 in South Korea, establishing the foundation for autonomous driving technology.

IntelliDrive, also known as Connected Vehicle (CV), enables [15] two-way wireless communication for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. Within the CV environment, vehicles with communication devices and roadside infrastructure share previously exclusive traffic information, including vehicle maneuvers, trajectories, and origins/destinations. Hence, the CV environment will allow for better cooperative control of urban intersections for other vehicles and infrastructure. This CV environment for collaboration among vehicles has attracted significant attention because of its potential benefits. A prominent development of cooperation among CV vehicles is the Cooperative Adaptive Cruise Control system [16], [17], intended to optimally operate vehicle tactics in various situations. Furthermore, the vision of a Cooperative Vehicle Intersection Control (CVIC) system is that an intersection controller and a vehicle may work together to expand traffic maneuvers for fully automated cars.

In this paper, we outline an efficient and economical solution for autonomous driving in the form of a lightweight deep learning model running over a resource-constrained device in real time. The model car is equipped with scalar and vision sensors using a Raspberry Pi as the ground processing unit for autonomous driving on a pre-defined track. The proposed solution can be used as part of intelligent transportation or advanced driver assistance systems to improve traffic management. A Raspberry Pi camera sensor is utilized for capturing a video stream in real time. Preprocessing is applied to the data collected from the vision and scalar sensors. A computationally efficient deep neural network is then trained for making decisions to go straight, stop, and turn left or right autonomously. Ultrasonic sensors are needed for detecting and avoiding obstacles along the path of the car. The following are the major contributions of this work:
1. A Raspberry Pi based framework for a self-driving model car is proposed, which can handle four tasks:
a. The self-driving car model moves on a pre-defined track autonomously, being capable of driving in three directions: straight, left, or right.
b. The model car detects and recognizes various traffic signs and takes action accordingly.
c. The distance to a traffic sign is calculated by using a vision sensor. Obstacles are detected by processing ultrasonic data and are then avoided by the self-driving model car.
d. Different Haar cascade based classifiers for the traffic signs and a deep learning model are trained for the track; they are loaded by the Raspberry Pi in real time.
2. The Raspberry Pi is used as an independent processing unit to handle visual and scalar data in real time, without reliance on a centralized server for model loading and processing.
3. Deep Neural Network (DNN) models require a high-end processing unit for execution in real time. Therefore, a lightweight deep model has been proposed for resource-constrained devices, which is executed in real time for autonomous maneuvering.

The rest of the paper is organized as follows: Section II conducts a literature review. In Section III, the proposed framework and the methodology offered for the development of the autonomous car are discussed. Experimental results and challenges faced during the development of this project are summarized in Section IV. Section V concludes the paper with future directions of work.

II. RELATED WORK

In this section, we discuss some of the existing research already completed on autonomous driving in general or on the individual factors involved in autonomous cars, either as a safety tool for the public or as a financial source for industries. This section further covers the traditional and deep learning approaches, the limitations of existing systems, and the current challenges encountered in the domain of autonomous driving.

A. Conventional Approaches

Various car manufacturing and IT-related companies, including General Motors, Waymo, Daimler-Bosch, Volkswagen Group, and Groupe Peugeot S.A. (Groupe PSA), are aiming to contribute to the field of autonomous cars. Human error can be minimized by making the concept commercial, which will provide a means of safe transport for the public. As autonomous cars are equipped with many sensors, they produce a lot of data, which is analyzed by researchers and various companies, including Google and Facebook, for many purposes and applications. For instance, Chen et al. [18] constructed high-quality 3D objects for autonomous cars. Objects from high-fidelity imagery are constructed in the form of 3D bounding boxes. The problem has been formulated using Markov field encoding, the ground plane, and various depth structures. Their technique performs well on the KITTI training set, leading to 25% higher recall than other existing techniques. Similarly, Guidolini et al. [19] proposed an automatic obstacle avoidance mechanism for autonomous cars using the IARA dataset. The technology works effectively by avoiding obstacles that appear suddenly in the frame of view as well as when they appear normally as expected. The response time for obstacle avoidance is nearly 3 milliseconds. Choi et al. [20] studied obstacle detection using LIDARs. The algorithm designed for obstacle detection generates an obstacle position map from LIDAR data. With the help of six LIDARs fitted on a passenger car, it can successfully detect obstacles and reach its targets.

Leonard et al. [21] developed a software architecture for a perception-driven autonomous car.


In the proposed work, they feed all the sensor data into a kinodynamic motion planning algorithm to accomplish the autonomous tactics. They achieved autonomous driving of 55 miles over 6 hours without a mishap. In contrast with the current trends in autonomous cars, Kalra et al. [22] carried out a study on how safe the journey of an autonomous car will be when driving up to hundreds of billions of miles. Their study suggests that autonomous cars must be driven for billions of miles to check the reliability of autonomous driving, and claims that, due to safety reasons, it may not be possible to make it available for public use. Another study, conducted by Petrovskaya et al. [23], developed a module to detect a moving vehicle and track the detected vehicle for their autonomous robot named "Junior". To estimate the position of the tracked vehicle, they used geometric and dynamic properties of the detected vehicles. For position tracking, a Bayes filter is used for each vehicle, and a Rao-Blackwellised Particle Filter (RBPF) is applied to eliminate separate data segmentation. "Junior" can find the position, shape, and velocity of the tracked vehicles.

B. Deep Learning Approaches

DNNs, which are a part of artificial intelligence, are widely used in different fields including computer vision [24], [25], natural language processing [26], speech recognition [27], machine translation [28], social network filtering [29], and bioinformatics [30]. In computer vision, DNNs are utilized for image classification [31], [32] and object detection [33]. Deep learning is also employed for object segmentation and several other applications [34]. For instance, Behrendt et al. [35] carried out a study on the detection of traffic lights for autonomous cars in real time using DNNs. The information for the traffic lights in the early system was map-based. A neural network is trained on a thousand images to achieve high accuracy. In the experimental analysis, a video sequence of more than 8,000 frames was used. The contributions of the proposed system are traffic light detection, tracking, and classification of the light (red, yellow, green, or off). The proposed approach achieved high accuracy in challenging environments. Using artificial intelligence for vehicle and lane detection in real time is another important task, to which several works are dedicated. For example, Huval et al. [36] studied the problem of vehicle and lane detection with the help of DNNs. This study mainly focused on vehicle detection in real-time scenarios. A large amount of data is usually required for training a neural network, which includes data on different road scenarios and highways. The proposed algorithm is reported to produce high accuracy in practical scenarios in challenging environments. Instead of mediated perception and behavior reflex approaches, Chen et al. [37] proposed another paradigm: a direct perception approach for the estimation of driving affordances. In the proposed method, an input image is resolved into small key points. This representation offers a description of the scene for autonomous driving. A deep Convolutional Neural Network was trained for this purpose, whose details are given in Section III.

C. Limitations and Major Challenges

One of the biggest problems for autonomous cars is finding the track and following it for the rest of the journey. Sun et al. [38] carried out a brief study on lane detection for autonomous cars. In their work, images are converted into binary form by adaptive thresholding, and then the edges of the road are extracted. Lanes are extracted from the edges, followed by their detection. They used different road images of various weather conditions, basic thresholds, and proportional coefficients. The same problem is also discussed by Saha et al. [39], who suggest a similar algorithm. An RGB image is taken from the autonomous car and converted into a grayscale image. A flood-fill algorithm labels the largest connected components in the grayscale image. After applying the flood-fill algorithm, extraction of the largest connected component is completed. Unwanted regions in the image are skipped, and the ROI is filtered for lane and road edge detection. Hong et al. [40] studied the problem of detecting solid and dashed lanes in the road. Existing techniques only detect the central lanes in the road and are unable to detect the solid and dashed lanes. The proposed algorithm overcomes these limitations and can differentiate between dashed and solid lanes.

The privacy of autonomous cars in vehicular networks is paramount in all aspects, the same as the privacy of other domains, such as industrial and surveillance environments [41]. For self-driving cars, a secure channel is needed for exchanging data. M. Ali Alheeti et al. [42] proposed a scheme for intrusion detection based on the Integrated Circuit Metric (ICM). The authors claim that the features extracted by the ICM can be used for many purposes, including security and identification, with efficient use of time, speed, and memory. The study focused on enhancing authentication in autonomous driving and building an IDS, as well as making them intelligent by using the features of autonomous vehicles. The main theme of the study was proposing a scheme primarily based on a MEMS gyroscope and constructing a system for identifying the car as an ICM vehicle. The authors argue for satisfactory results while using FFNN-IDS and k-NN-IDS in blocking malevolent vehicles. M. Ali et al. [43] proposed a scheme for detecting a malicious car in an urban transport scenario. The detection system was mainly based on a Fuzzy Petri Net (FPN). The FPN was used for detecting dropped packets in vehicular ad-hoc networks. By finding the number of received and dropped packets, the IDS showed satisfactory results in terms of vehicular network security.

Self-driving cars use V2V or V2I technology for exchanging information in a vehicular network. Gora et al. [44] built a microscopic module to arrange the traffic and exchange information with other autonomous cars in vehicular networks. They developed simulation software to reduce the chances of collision of autonomous cars while envisioning the flow of traffic. They included traffic lights, road lane junctions, and a specific route for the car to reach a random point. The speed and other properties of autonomous cars are adjusted according to the information received from other vehicles for safer driving. Path planning, vehicle sensing, and navigation of autonomous cars are challenging tasks, and it is crucial for an autonomous car to safely maneuver in different environments. This problem is discussed by Huy et al. [45], who propose a path-planning algorithm.


The latter follows a probabilistic method while filtering particles for dynamic obstacles. Obstacle points are detected by using a support vector machine, and a smooth path is generated via a Bezier curve algorithm. They tested the algorithm in simulation software and claimed that it obtained acceptable results in complicated scenarios with multiple moving objects. Road, vehicle, and obstacle recognition using a vision sensor is another challenging task in the field of mobile robotics. Birdal et al. [46] proposed algorithms which help overcome the aforementioned problems while driving in vehicular networks. Images from the vision sensor are converted into gray values, and then different techniques, including texture analysis, geometric transformation, background modeling, and contour algorithms, are applied for extracting different ROIs. The latter are tested using different road images, achieving significant results.

D. Advancement of State-of-the-Art

The perspectives of autonomous vehicles cannot be easily predicted. However, such prediction enables planning for possible future conditions. Many practitioners and analysts are worried about the future of self-driving cars in terms of their effect on parking, traffic problems, and public transportation [47]. In this line of reasoning, the Society of Automotive Engineers (SAE) claims that by 2030 autonomous cars will be safe and reliable enough to replace human driving, thereby minimizing driver stress and tediousness [48]. Advancement in technologies for autonomous vehicles will also reduce accidents, congestion, and pollution problems. Some analysts also foresee that the total cost of shared passenger travel in autonomous vehicles will be lower than in human-operated vehicles [48].

Despite the many benefits of autonomous cars, they do impose extra cost by adopting different equipment and tools for processing and sensing the external environment. For instance, the introduction of a simple electronic device, such as an adaptive cruise control, an active lane assist, a high beam assist, or a top-view camera, will cost thousands of dollars. Similarly, the processing unit of autonomous vehicles is even more expensive, thus increasing the overall price as well as requiring extra costs [48]. By analyzing the current trends in autonomous cars, we conclude that their focus is mainly on autonomous decision-making technologies. Even though there exist variations in such technologies, their common attribute is an autonomous driver system for the vehicle.

III. PROPOSED METHODOLOGY

For a better understanding of the proposed system, this section is broadly divided into two subsections: A) System Design and B) Methodology. Subsection A is structured as: i) input units, which relate to vision and ultrasonic sensors, ii) camera calibration and distance measurements, and iii) output units, which include DC motors, a buzzer, and an LCD. Subsection B is further divided into: i) a preprocessing module, where morphological operations are applied on the input frame, ii) a classification module, which includes the DNN unit, traffic sign detection, and distance calculation using a vision sensor, and iii) a decision module for driving the autonomous car.

A. System Design

This subsection provides a detailed description of how the input units (vision and ultrasonic sensors) are connected, and elaborates how the information is inferred. The system design for our proposed method consists of three main parts: input units, camera calibration for distance measurements, and output units.

1) Input Units: The input components are composed of a vision sensor and an ultrasonic sensor working together to collect data from the environment in real time. A stream of video frames is supplied to the Raspberry Pi from the camera and processed by the processing unit. An ultrasonic HC-SR04 unit is used for obstacle detection and measurement of the distance to the obstacle. Table I shows the specification of the hardware modules used in this prototype car. The specified ultrasonic unit has four pins: a trigger pulse unit (TRIG), an echo unit (ECHO), a ground unit (GND), and a power supply unit (VCC). Power is supplied to the unit from the 5-volt General Purpose Input Output (GPIO) pins of the Raspberry Pi. Fig. 1 shows the assembly of all hardware sensors in detail.

Fig. 1. Hardware assembly of our self-driving car. The Raspberry Pi GPIO pins are extended through an extension board, to which the physical units are connected.

The ultrasonic sensor works by the principle that a pulse is sent from the sensor using TRIG. This pulse is bounced back by nearby objects and received on ECHO. Distance is calculated from the difference between the transmitted pulse and the received pulse using a speed formula. The speed of sound in air is 343 meters per second (34,300 centimeters per second). The time is divided by 2 because the pulse travels to the object and back again:

d = speed × time,   (1)
d = (34300 × echo) / 2,   (2)
d = 17150 × echo.   (3)

In equation (3), echo is the return pulse time and d is the distance to the object. A threshold value of 15 inches is set for the distance. The obtained distance is passed through the threshold for controlling the motors' Pulse Width Modulation (PWM). For instance, if the distance is greater than the threshold value, the PWM is set to 100 Hertz, and if the distance is less than the threshold value, the PWM is set to zero.


Fig. 2. Self-driving car on the pre-designed track with traffic signs. To assess the efficiency of the system, several experiments are performed for each traffic sign.

TABLE I
SPECIFICATION OF HARDWARE MODULES

The returned value from the ultrasonic sensor is also used for the buzzer as a horn. When the car reaches an obstacle, the buzzer is turned on after the car stops and warns of the obstacle next to it. A message "Obstacle Detected" is displayed on the LCD. Fig. 2 captures images of the self-driving car on a pre-defined track.
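To make the trigger/echo timing and the threshold rule concrete, a minimal Python sketch with the RPi.GPIO library is shown below. This is a sketch rather than the authors' code: the BCM pin numbers, the polling loops, and the 50% duty cycle are assumptions; only the d = 17150 × echo formula, the 15-inch threshold, and the 100 Hz PWM value come from the text.

```python
# Minimal HC-SR04 + PWM sketch of the rule described above (assumed wiring).
import time
import RPi.GPIO as GPIO

TRIG, ECHO, MOTOR = 23, 24, 18        # hypothetical BCM pin assignment
THRESHOLD_CM = 15 * 2.54              # the paper's 15-inch stopping threshold

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(MOTOR, GPIO.OUT)
pwm = GPIO.PWM(MOTOR, 100)            # 100 Hz drive signal, as in the text
pwm.start(0)

def distance_cm():
    """Fire the standard 10 us trigger pulse and time the echo (Eq. (3))."""
    GPIO.output(TRIG, True)
    time.sleep(10e-6)
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:      # wait for the echo pulse to begin
        start = time.time()
    while GPIO.input(ECHO) == 1:      # ... and to end
        end = time.time()
    return 17150 * (end - start)      # d = 17150 x echo time, in cm

try:
    while True:
        # Drive while the track is clear; cut the motors inside the threshold.
        pwm.ChangeDutyCycle(50 if distance_cm() > THRESHOLD_CM else 0)
        time.sleep(0.1)
finally:
    pwm.stop()
    GPIO.cleanup()
```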
Fig. 3. A checkered board is used for camera calibration using OpenCV.

2) Camera Calibration and Distance Measurements: Radial and tangential types of distortion are commonly introduced into the images by a camera, due to which some of the meaningful information is lost. Under radial distortion, straight lines in the images appear curved. This effect can be easily observed while moving away from the center of the image. In Fig. 3, a checkered board is marked with red lines at the edges. It can be seen that the border is not a straight line and does not match the red line.

The method proposed by [49] is used for the calibration of the Raspberry Pi camera. The radial distortion is resolved using equations (4) and (5):

U_dist = U (1 + k_1 r^2 + k_2 r^4 + k_3 r^6),   (4)
V_dist = V (1 + k_1 r^2 + k_2 r^4 + k_3 r^6),   (5)

where U_dist and V_dist are the undistorted pixel locations and U and V are the normalized image coordinates. Tangential distortion occurs because the lenses that capture images are not aligned perfectly parallel to the image plane. Due to this, some regions in the image appear closer than expected. This can be resolved with the help of equations (6) and (7); the OpenCV library has built-in tools to calibrate the camera and reduce the distortions based on them:

U_dist = U + [2 P_1 U V + P_2 (r^2 + 2 U^2)],   (6)
V_dist = V + [P_1 (r^2 + 2 V^2) + 2 P_2 U V].   (7)

There are five parameters, known as the distortion coefficients, given below:

Distortion Coefficients = (k_1, k_2, P_1, P_2, k_3),   (8)

where k_1, k_2, and k_3 are the radial distortion coefficients, while P_1 and P_2 are the tangential distortion coefficients of the lens. Along with these parameters, information such as the intrinsic and extrinsic parameters of the camera is also required. Intrinsic information includes the focal lengths (f_x, f_y) and the optical centers (c_x, c_y). All these parameters are camera-specific and depend only on the model and focal length of the camera. Expression (9) is used for the camera calibration:

Camera Matrix = [ f_x  0  c_x ;  0  f_y  c_y ;  0  0  1 ].   (9)

Extrinsic parameters are related to the translation and rotation vectors, which translate into the coordinates of a three-dimensional point. All these distortions must be corrected to obtain optimized results. Sample images, in which the patterns are well defined, are provided for correction; checkerboard images are suitable in this case.

OpenCV provides convenient functions to further facilitate the calibration process. The OpenCV method findChessboardCorners() returns corners when a 7 × 6 grid image is passed to it, as shown in Fig. 4.

Fig. 4. Calibrated output image of the Raspberry Pi camera using OpenCV.


The accuracy of the corner points is improved by using the cornerSubPix() method, and the drawChessboardCorners() function is used for drawing the detected patterns.

The calibration of the camera is then a one-step process with calibrateCamera(), given the image and object points. This returns the camera matrix, the distortion coefficients, as well as the rotation and translation vectors.
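Putting the functions named above together, the whole calibration might look like the sketch below; the image file names are hypothetical, while the 7 × 6 corner grid and the OpenCV calls follow the text.

```python
# Sketch of the described calibration pipeline (assumed file names).
import glob
import cv2
import numpy as np

PATTERN = (7, 6)                                   # inner corners of the board
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []                    # 3D board points, 2D corners
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for name in glob.glob('calib_*.jpg'):              # hypothetical checkerboard shots
    gray = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# One call yields the camera matrix of Eq. (9), the distortion
# coefficients of Eq. (8), and the rotation/translation vectors.
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Applying the result: remove the distortion of Eqs. (4)-(7) from a frame.
undistorted = cv2.undistort(cv2.imread('frame.jpg'), mtx, dist)
```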
The measurement of the distance to a detected sign using a monocular vision sensor is one of the most challenging tasks in this system. The method proposed by [50] to calculate the distance to traffic signs in real time is employed.

From Fig. 5, we suppose that an object P is at distance D from the optical center. H is the height of the optical center, f is the focal length of the camera, and (x0, y0) is the point of intersection of the image plane and the optical axis, while the projection of point P is given by (x, y). Consider further parameters, such as (u0, v0) for the camera coordinates; then the physical dimensions of a pixel corresponding to the x-axis and y-axis on the image plane are D_x and D_y:

D = H / tan(α + arctan((y1 − y0) / f)),   (10)

u = x / D_x + u0,   v = y / D_y + v0,   (11)

D = H / tan(α + arctan((v1 − v0) / a_y)),   a_y = f / D_y.   (12)

Fig. 5. Monocular-vision based distance measurement.
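Equation (12) translates directly into a small helper function, sketched below; every numeric value in the example call is invented for illustration and would in practice come from the calibration step and the physical mounting of the camera.

```python
# Direct transcription of Eq. (12); example values are made up.
import math

def distance_to_sign(v1, v0, f, d_y, H, alpha):
    """D = H / tan(alpha + arctan((v1 - v0) / a_y)), with a_y = f / D_y.

    v1    -- image row of the sign's base (pixels)
    v0    -- optical-center row from the camera matrix (pixels)
    f     -- focal length; d_y -- physical pixel height (same unit as f)
    H     -- height of the optical center above the ground
    alpha -- downward pitch of the camera (radians)
    """
    a_y = f / d_y                      # focal length expressed in pixels
    return H / math.tan(alpha + math.atan((v1 - v0) / a_y))

# Hypothetical numbers: camera 10 cm above the track, pitched 0.1 rad down.
print(distance_to_sign(v1=300, v0=240, f=3.6, d_y=0.0014, H=10.0, alpha=0.1))
```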
3) Output Units: The output components used in the development of the system include DC motors (wheels), a 2 × 16 LCD for showing the necessary information (i.e., the IP address of the Raspberry Pi, "Ready", "Obstacle detected", "Left turn", "Right turn", "Going straight", and "Stop"), and a buzzer as a horn. In the first stage, an ultrasonic sensor measures the distance to the obstacle (if any). If there is no obstacle on the track ahead, the car travels smoothly on it. If there is an obstacle on the track, the ultrasonic sensor measures the distance to this obstacle. When the distance reaches a minimum of 15 inches, the voltage supply to the wheels is cut off and the car stops at a distance of 15 inches from the obstacle. In the second stage, the wheels are controlled from the vision sensor. Each frame is scanned for the detection of traffic signs. After that, the ROI is passed through the respective Haar cascade classifier for further action. Four Haar cascade classifiers are used, and upon the identification of each traffic sign, the wheels are controlled according to the identified sign.

B. Methodology

This subsection offers a closer look at the proposed system, where each module combines different units. The first module is composed of various morphological operations carried out on the input frames for enhancement. The classification module consists of two units: the DNN and traffic sign detection with distance calculation. The last module is an array where the different predictions from the second module are stored. The operation on this array is carried out from left to right, where the last column is the final decision for the given input conditions. The overall view of our methodology is shown in Fig. 6.
1) Preprocessing Module: Input frames from the vision sensor are passed through several steps in order to remove noise and crop the track region from the frame. In the first step of the preprocessing, the image is passed through a Gaussian blur to smooth it. Due to different lighting conditions and the motion of the car, input frames contain noise, which is removed by a noise removal step based on morphological operations, namely opening followed by closing. In the last step of the preprocessing, the extra regions of the frame are cropped away, leaving only the track region, and the result is resized to 640 × 480 resolution. This frame is supplied to the classification module for further operations.
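A plausible OpenCV rendering of this chain is sketched below; the kernel sizes and the lower-half crop are assumptions, while the Gaussian blur, opening/closing, and the 640 × 480 target come from the text.

```python
# Assumed implementation of the preprocessing chain described above.
import cv2
import numpy as np

def preprocess(frame):
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)                 # smooth the frame
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(blurred, cv2.MORPH_OPEN, kernel)   # remove specks
    cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill small holes
    track = cleaned[cleaned.shape[0] // 2:, :]                   # crop to the track region
    return cv2.resize(track, (640, 480))
```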
2) Classification Module: This module is composed of two parts: 1) a neural network and 2) traffic sign detection with distance calculation units. The input frame from the preprocessing module is passed through each unit of the classification module for further predictions.

a) Neural network for prototype model car: The performance of a DNN architecture is directly dependent upon the total number of hidden layers inside the model. These hidden layers and their nodes do not interact with the outside environment directly, but they affect the output result. A DNN model with very few neurons in the hidden layers will cause underfitting; in this case, the number of neurons is insufficient to detect the signal accurately and transfer the output to the next hidden layer. On the other hand, using a larger number of neurons in the hidden layers will cause overfitting, which requires more information and training data. To overcome these problems, there should be a tradeoff among the numbers of hidden layers, neurons, and training data. For this reason, we started with a simple DNN containing 128 nodes in the first hidden layer and 16 nodes in the second hidden layer. The complete details of these nodes are given in Table II, where the first two columns represent the numbers of nodes in the first and the second hidden layer, with their corresponding accuracy in the third column. The performance of the model is observed carefully, and the numbers of nodes are adjusted accordingly.

After node adjustment, hidden layers 1 and 2 contain 4800 and 64 nodes, respectively, while the output layer contains four nodes that are responsible for controlling the wheels for driving. The output of the model includes straight, left, right, and an optional "reverse" output that is not used in the current system. The number of images used for training is ten thousand, where 80% of the images are used for training and the remaining 20% serve for validation.


Fig. 6. Detailed overview of the proposed framework. In the Preprocessing module, the input frame is passed through morphological operations to enhance the frame. In the Classification module, the input frame is further processed for traffic sign detection, i.e., Go, Stop, Slow, "No Right Turn", and "No Left Turn", and for the classification of the frame, i.e., straight, left, or right. The Decision module stores the information received from the second step in an array. The decision is taken on the array from left to right, in which the last column is the final decision for the given input conditions.

TABLE II
NUMBER OF NODES AND CORRESPONDING ACCURACY

Fig. 7. Neural network for self-driving car maneuvering, where Hidden layer 1, Hidden layer 2, and the final Output contain 4800, 64, and 4 nodes, respectively.

The images used for training are cropped, and only the track portions of these images are supplied to the network for classification. The size of these images is fixed to 80 × 60 (4800 nodes) for performance optimization of the network. During the training process, the model is trained for more than six hundred epochs with a starting learning rate of 0.01, which is adjusted by the gradient descent optimizer according to the performance of the model. Fig. 7 shows a brief description of the neural network for the self-driving car.
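The resulting 4800-64-4 network is small enough to sketch in a few lines of TensorFlow's Keras API. The layer sizes, gradient descent optimizer, 0.01 learning rate, epoch count, and 80/20 split follow the text; the ReLU/softmax activations, cross-entropy loss, batch size, and file name are assumptions.

```python
# Sketch of the described network, assuming TF 1.x with tf.keras available.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4800, activation='relu', input_shape=(4800,)),  # hidden layer 1
    tf.keras.layers.Dense(64, activation='relu'),                         # hidden layer 2
    tf.keras.layers.Dense(4, activation='softmax'),   # straight/left/right/(unused) reverse
])
model.compile(optimizer=tf.keras.optimizers.SGD(lr=0.01),
              loss='categorical_crossentropy', metrics=['accuracy'])

# x: N x 4800 flattened 80x60 track crops, y: one-hot direction labels.
# model.fit(x, y, epochs=600, batch_size=32, validation_split=0.2)
# model.save('track_model.h5')                      # hypothetical file name
```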


Fig. 8. Training a DNN-based classifier for a self-driving car. In the first step, a multi-dimensional frame is converted into a 1 × 4800 array. In the next step, the associated labels are concatenated at the end of each frame array. After training the DNN, the trained model is saved for future use.

The processing of each frame is presented in Fig. 8. In the first step, an image is cropped and converted into a Numpy array. In the second step, the labels for the training images are supplied in a separate labels file. After the DNN training, the trained model is saved to the local directory for future use.

b) Traffic sign detection and distance calculation: The extraction of multiple ROIs from the supplied frames covers traffic sign detection and distance measurement to the detected signs. For an intelligent transportation system, the detection and recognition of traffic signs is an essential capability. Taking advantage of the "Haar based classifier" method by P. Viola [51], we conduct traffic sign detection and recognition. This algorithm requires a large number of positive and negative images to train the cascade function. A separate Haar cascade classifier is trained for each traffic sign. OpenCV provides libraries for both training and detecting the Haar cascades. 2,000 negative images (other than the traffic sign images) and 50 positive images (traffic sign images) were used for training each Haar cascade. Only regions of interest were supplied in the positive case. Five Haar cascades, including the "Go", "Stop", "No Left Turn", "No Right Turn", and "Slow" signs, are used in this system. Fig. 9 shows samples of positive and negative images, and Table III demonstrates the traffic signs, the numbers of positive and negative images, and the sizes of the images. Each frame is given to a function for detecting the ROI, while only the contour region is extracted and passed to the distance calculation module. Decisions are made after recognizing a specific sign.

Fig. 9. Training of Haar cascades with positive (left) and negative (right) images for traffic sign detection.

TABLE III
DIMENSIONS AND NUMBERS OF POSITIVE/NEGATIVE IMAGES FOR HAAR CASCADE TRAINING
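For illustration, loading the five trained cascades and scanning a frame could look as follows; the .xml file names and the detectMultiScale parameters are placeholders, not the authors' settings.

```python
# Hypothetical detection loop over the five cascades named in the text.
import cv2

CASCADES = {name: cv2.CascadeClassifier(name + '.xml')   # placeholder paths
            for name in ('go', 'stop', 'no_left', 'no_right', 'slow')}

def detect_signs(frame):
    """Return (sign name, bounding box) pairs found in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = []
    for name, cascade in CASCADES.items():
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            hits.append((name, (x, y, w, h)))   # ROI goes to the distance module
    return hits
```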
3) Output decision module: Processed data from the processing module are passed to the output decision module for maneuvering the model car on the track. The return data from the processing module are put into an array. The returned array contains information about the frame classification (straight, right, left), traffic sign detection (go, stop, no left turn, no right turn, slow), the distance to the detected traffic sign, the distance to the obstacle detected by the ultrasonic sensor, and the decision, i.e., stop, go, turn left or turn right. For instance, if the frame is classified as "right", the detected sign is Go, the distance to the traffic sign is greater than the threshold, and there is no obstacle on the track, then the final decision for this type of information is "turn right". These decisions are forwarded to the output units in the system design, where the voltage to certain motors is controlled depending upon the decision.
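One possible reading of that array logic is sketched below. The precedence order and the sign-distance threshold are illustrative assumptions; only the 15-inch obstacle rule and the worked "turn right" example come from the text.

```python
# Illustrative decision function over the fields stored in the array.
def decide(frame_class, sign, sign_dist, obstacle_dist,
           sign_thresh=30.0, obstacle_thresh=15 * 2.54):   # cm, assumed values
    if obstacle_dist is not None and obstacle_dist <= obstacle_thresh:
        return 'stop'                      # an obstacle overrides everything
    if sign == 'stop' and sign_dist <= sign_thresh:
        return 'stop'
    if frame_class == 'right' and sign != 'no_right':
        return 'turn right'
    if frame_class == 'left' and sign != 'no_left':
        return 'turn left'
    return 'go straight'

# The paper's example: frame 'right', sign 'go', sign far away, track clear.
print(decide('right', 'go', sign_dist=100.0, obstacle_dist=200.0))  # 'turn right'
```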

IV. EXPERIMENTAL RESULTS

All the experiments are carried out in optimized OpenCV version 3.3.0 compiled on a Raspberry Pi 3 Model B+ (4x ARM Cortex-A53 1.2 GHz processor) using Python version 3.6 and TensorFlow version 1.4.0. Other dependencies include Numpy, Scipy, and Matplotlib for visualizing and processing the output data. The Raspberry Pi has limited resources in terms of computational capacity and memory [52]. The ARM processor comes with the ARM NEON optimization architecture and the VFPV3 extension for the purposes of faster image, video, and speech processing, machine learning techniques, and floating-point optimization. ARM NEON supports the use of Single Instruction Multiple Data (SIMD), where multiple processing elements in the pipeline operate on multiple data points, all executed with a single instruction. VFPV3 comes with configurable rounding modes and a customizable default Not a Number (NaN) mode. Enabling all these special modes of the Raspberry Pi while compiling OpenCV results in our neural network running faster; OpenCV compiled this way is referred to as Optimized OpenCV. Further, TensorFlow provides the possibility to use a given number of processor cores for a task. Leveraging this feature, experiments with the deep model are set up on multiple cores and different versions of OpenCV.
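As a sketch of how the number of cores can be pinned for such experiments in the TensorFlow 1.x API (the four-thread values below simply match the Pi's four cores and are not the authors' configuration):

```python
# Restricting TensorFlow 1.x to a chosen number of CPU threads (assumed values).
import tensorflow as tf

config = tf.ConfigProto(intra_op_parallelism_threads=4,   # threads within an op
                        inter_op_parallelism_threads=4)   # ops run in parallel
session = tf.Session(config=config)
```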


Fig. 10. Time complexity: number of CPU cores vs. different versions of OpenCV, i.e., Normal, Optimized, and Optimized + Movidius, confirming that the number of available resources and the use of the Movidius stick improve the performance of the system.

Fig. 10 shows the average execution time for normal OpenCV, optimized OpenCV, and optimized OpenCV with the support of the Movidius Intel Compute Stick. Fig. 11 demonstrates the average temperature of the cores during frame processing. The high increase in temperature on cores 3 and 4 occurs because the probability of load shifting among the CPU cores is decreased.

Fig. 11. Average temperature of different CPU cores on various versions of OpenCV, i.e., Normal, Optimized, and Optimized + Movidius. The average temperature of the Raspberry Pi increases as the DNN process is distributed among several cores.

In order to achieve the maximum possible accuracy and to reduce the computational cost, the number of frames per second was decreased, generating predictions at 5 frames/s. The parameters used in this work are described in Table IV.

TABLE IV
DESCRIPTION OF PARAMETERS

For evaluating the model car, different types of tests were conducted using various track scenarios. A total of 158 frames (a 31.6-second video) were evaluated under different track scenarios. Two versions of videos from the model car were obtained: the first version was converted into frames and categorized into three groups: straight, left, and right. Each group is further divided into two classes: (straight and non-straight), (right and non-right), and (left and non-left). Each frame obtained from the first version is placed in its respective class as ground truth for prediction and training. The second version of the video was predicted by the model car and compared with the ground truth. The overall accuracy of the model is presented in Table V. The results are also color coded for ease of interpretation. Green columns represent tests for the class "straight"; the ground truth for this class includes 60 "straight" frames and 98 "non-straight" frames. During the validation, the model car predicted all 60 frames as "straight" and 98 as "non-straight" frames, thus reaching its destination without any error. Yellow columns show the ground truth for the class "right". In this class, 55 are "right" frames and 103 are "non-right" frames. The model car identified 52 as "right" frames and 106 frames as "non-right" frames. The fifth and sixth columns of Table V are colored blue; they are the ground truth for the class "left". There are 103 frames for "left" and 55 frames for "non-left". The model car identified 107 frames as "left" and 51 frames as "non-left", thus attaining an overall accuracy of 98.5%.

Distance calculation with a monocular vision sensor proved to be a challenging task. A shorter distance to the sign gives nearly the actual distance, but when the distance from the sign was increased, the error in the distance calculations also increased, as shown in Fig. 12.

Fig. 12. Distance calculated by a vision sensor. As the model car moves toward the detected traffic sign, the difference between the actual and the predicted values decreases.

Fig. 13 shows a sample image of the distance calculated by a vision sensor. The difference between the actual distance and the distance calculated by the vision sensor may be due to the following reasons:
1) Error in the measurements of the actual values.
2) Error in the camera calibration.
3) Variation in the object bounding box while detecting the signs.
4) Non-linear relationship between distance and camera coordinates. At greater distances, the camera coordinates change rapidly, resulting in higher error.
5) The Raspberry Pi camera is a general-purpose camera and has average image quality.


TABLE V
OVERALL ACCURACY OF THE MODEL CAR

Fig. 13. Sample images showing the distance to the traffic sign by using a vision sensor. Each sign is detected (green rectangle), recognized, and the distance is calculated between the traffic sign and the model car.

Ultrasonic sensors have only been used for detecting objects and their distances from the car. An ultrasonic sensor uses sound waves to calculate the distance; due to this, some errors were experienced in calculating the distance during our demonstrations. Fig. 14 shows the actual distance and the distance calculated by the ultrasonic sensor. The difference between the actual and the measured values may be due to the following reasons:
1) Sound waves strike larger objects more easily than smaller objects. The farther the distance, the greater the error, since fewer pulses are returned from the object.
2) Ultrasonic waves are greatly influenced by the air temperature. The sensor calculates the distance to the object using the speed of sound, and the speed of ultrasonic waves alters as the air temperature changes [53].
3) Ultrasonic waves are also influenced by air pressure.

Fig. 14. Distance measurements: actual vs. predicted by an ultrasonic sensor. The actual and predicted distances are nearly equal as the model car moves towards the obstacle.

A. Energy Consumption

The total expenditure used by a system while completing a specific task is known as energy consumption. To evaluate the total energy consumption of our system with the deep learning model, 5 frames have been processed in one second. The parameters have been calculated using a Keweisi device while estimating the total power consumption of our system.
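As a sanity check, the per-frame energy follows from E = Pt; plugging in the Raspberry Pi averages reported below in Section IV-B (about 6 W and 0.23 s per frame) reproduces the 1.38 Joule/frame figure:

```latex
E = P \cdot t = (V I)\,t \approx 6\,\mathrm{W} \times 0.23\,\mathrm{s} \approx 1.38\,\mathrm{J\ per\ frame}
```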


Fig. 15. Power consumption for on-board execution of the deep learning model on the Raspberry Pi. The upper value shows the current, while the second value shows the voltage consumed by the Raspberry Pi during execution of the DNN model.

TABLE VI
POWER AND ENERGY CONSUMPTION OF RASPBERRY PI IN IDLE STATE

TABLE VII
ENERGY AND POWER CONSUMPTION DURING NEURAL NETWORK EXECUTION

TABLE VIII
SPECIFICATION OF COMPUTING SYSTEMS

Fig. 15 shows sample images of power consumption. The unit power and the energy consumption of the Raspberry Pi when the system is idle and no task is in progress can be observed in Table VI. The total amperage, voltage, time, power, and energy drain of the Raspberry Pi while processing a single frame with deep learning can be seen in Table VII.
B. Comparison With Other Computing Platforms

To compare the performance of the Raspberry Pi with other computing platforms in terms of the energy, power, and average processing time of a frame, we used an Intel CPU and a GPU with an NVIDIA graphics card. The specification of each system is presented in Table VIII. In order to evaluate the performance of the deep model on the CPU and GPU, a multi-threaded TCP server is used for receiving the video stream and the ultrasonic data from the Raspberry Pi on the computer. Data from the Raspberry Pi is processed on the computer, and only the decisions reached by the deep model are sent back to the Raspberry Pi to take the necessary actions.
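The shape of that offloading server can be pictured with Python's socket and threading modules, as below; the port and the wire format are assumptions, and the deep model is stubbed out.

```python
# Rough sketch of a multi-threaded TCP decision server (assumed protocol).
import socket
import threading

def handle(conn):
    """Receive encoded frames/ultrasonic readings, reply with a decision."""
    with conn:
        while True:
            data = conn.recv(65536)       # one encoded frame or sensor reading
            if not data:
                break
            decision = b'go straight'     # placeholder for the deep model's output
            conn.sendall(decision)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('0.0.0.0', 8000))            # hypothetical port
server.listen(5)
while True:
    conn, _ = server.accept()             # one thread per Raspberry Pi stream
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```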
Fig. 16. Average power consumption of different multicore computing platforms.

Fig. 17. Average energy consumption on different platforms.

In Fig. 16, the red bar shows the average power consumption of the GPU during processing of the deep model. The total average power consumed by the GPU on frame processing is 440 Watts, as the normal current increases from 1.5 to 2 amps on the execution of the neural network. The power consumption of the GPU is 330 Watts in the idle state. The yellow bar shows the average power consumption of the CPU on the execution of the deep model. The total average power consumed by the CPU on executing the deep model is 224.4 Watts. A small amount of power, 6 Watts, is consumed by the Raspberry Pi, as shown in green color in Fig. 16, while attaining the same accuracy as obtained by the GPU and CPU.

The energy consumption of the different computing platforms is presented in Fig. 17. The average energy consumption of the GPU is 4.4 Joules on a single frame execution. The CPU consumes more energy than the GPU since the former is not graphics-card enabled. The Raspberry Pi, in contrast to the GPU and CPU, consumes less energy, which is on average 1.38 Joule/frame.

Fig. 18 shows the time complexity of a single frame on different hardware platforms.


TABLE X
P ERFORMANCE OF H AAR -BASED C LASSIFIER

TABLE XI
C OMPARISON OF T RAINED H AAR C ASCADE C LASSIFIER
W ITH E XISTING T ECHNIQUES

Fig. 18. Average time complexity of frame/sec execution on different


platforms.
by the motors of the model car, and 15% is consumed by
TABLE IX Raspberry Pi.
V OLTAGE D ROP OF M ODEL C AR A FTER E ACH T EST

D. Comparison With Other State-of-the-Art Methods


With advancements in technology, the performance of
resource-constrained devices, such as Raspberry Pi, has been
left behind. Today’s automotive industry mainly focuses on
the development of autonomous decision-making capabilities
by deferring as the second design driver the optimization of
efficiency in terms of the hardware costs. This subsection
elaborates on the performance and accuracy of our prototype
executed on each platform. The time complexity of the GPU with respect to the related more expensive technologies.
on a single frame is 0.01 seconds. The average time consumed For comparison with other methods, several experiments
by the CPU to execute a single frame is 0.06 seconds. were carried out on traffic sign detection using Raspberry Pi.
Raspberry Pi did not perform well as it had limited resources A total of 100 images of different traffic signs are used for
and took an average of 0.23 seconds to execute a single testing. Table X presents the traffic sign, the number of test
frame. Summarizing all the conducted experiments, we con- images correctly classified, and the accuracy of our system.
clude that Raspberry Pi, an economical computing platform, In the first test, a total of 25 images of the stop sign are
is powerful enough and capable of running a DNN for real- passed from the trained Haar based classifier. The system
time applications with low power and energy consumption. recognized all the 25 images correctly. In the second test,
GPU and CPU perform well in terms of the execution time, a total of 25 images of the go sign are tested on the trained
but as evident from our experimental results, these platforms Haar classifier, which detects 24 signs correctly. Results for
consume excessive energy and power while attaining a similar other traffic signs can be seen in Table X.
accuracy with Raspberry Pi. These results are compared with the existing techniques to
assess the performance of the trained Haar-based classifier.
Table XI compares the proposed system with other two exist-
C. Speed Test
D. Comparison With Other State-of-the-Art Methods

With advancements in technology, the performance of resource-constrained devices such as Raspberry Pi has been left behind. Today's automotive industry mainly focuses on the development of autonomous decision-making capabilities, relegating the optimization of efficiency in terms of hardware costs to a secondary design driver. This subsection elaborates on the performance and accuracy of our prototype with respect to related, more expensive technologies.

For comparison with other methods, several experiments were carried out on traffic sign detection using Raspberry Pi. A total of 100 images of different traffic signs are used for testing. Table X presents each traffic sign, the number of test images correctly classified, and the accuracy of our system. In the first test, a total of 25 images of the stop sign are passed through the trained Haar-based classifier. The system recognized all 25 images correctly. In the second test, a total of 25 images of the go sign are tested on the trained Haar classifier, which detects 24 signs correctly. Results for the other traffic signs can be seen in Table X.

TABLE X
PERFORMANCE OF HAAR-BASED CLASSIFIER

These results are compared with existing techniques to assess the performance of the trained Haar-based classifier. Table XI compares the proposed system with two other existing methods in terms of accuracy and the execution time for frame processing. The state-of-the-art methods comprise several RC-car-based self-driving test-beds; for instance, a study conducted at MIT [54] is based on the NVIDIA Jetson computing platform, and the system of Shim et al. [55], like the former, relies on LIDAR and many other sensors. Such systems consume excessive energy and power, and the employed sensors cost more than $4,000, thus requiring a substantial investment. Compared to these solutions, we execute a CNN-based workload in real time on a resource-constrained, low-cost computing platform, thus providing a cheaper solution for real-time applications.

TABLE XI
COMPARISON OF TRAINED HAAR CASCADE CLASSIFIER WITH EXISTING TECHNIQUES
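A minimal sketch of this per-class evaluation loop is given below, assuming the OpenCV implementation of Haar cascades; the cascade file path, the image list, and the detectMultiScale parameters are illustrative rather than the exact values used in our experiments.

    import cv2

    # Illustrative path: one trained cascade per traffic-sign class.
    stop_cascade = cv2.CascadeClassifier("cascades/stop_sign.xml")

    def contains_stop_sign(image_path):
        """True if the trained cascade fires at least once on the image."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        detections = stop_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
        return len(detections) > 0

    # Per-class accuracy over a test set, e.g., 25 stop-sign images.
    test_images = [f"test/stop_{i:02d}.jpg" for i in range(25)]  # illustrative
    hits = sum(contains_stop_sign(p) for p in test_images)
    print(f"accuracy: {100.0 * hits / len(test_images):.1f}%")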
V. CONCLUSION AND FUTURE WORK

This paper presents a cost-effective and computationally efficient solution for autonomous maneuvering based on resource-constrained devices and a lightweight deep learning
model that can be used to facilitate vehicular perception and autonomous guidance in intelligent transportation systems. The proposed system achieves attractive performance scores in terms of the detection and avoidance of obstacles, traffic sign recognition, and intelligently following a smooth trajectory. The assembly of different hardware components, scalar sensors, and vision sensors all play their role in the overall output of the system. When compared to other, more expensive solutions, our economical and computationally efficient prototype car is capable of autonomously driving on a specified track by avoiding obstacles as well as detecting and recognizing five different traffic signs based on artificial intelligence methods. An ultrasonic sensor is used for obstacle detection, which helps avoid collisions, thereby preventing the car from accidents. A Haar cascade classifier is used for traffic sign detection; the car can identify a traffic sign and adjust the speed of its wheel motors by using these cascades. An algorithm capable of calculating the distance by using only a monocular vision sensor is used to detect and measure the distance to the traffic signs. Rich experimental results show that our prototype model car achieves autonomous driving with an overall accuracy of 95.5%.
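For illustration, the pinhole (similar-triangles) relation commonly used for monocular distance estimation is sketched below. This is the standard formulation rather than necessarily the exact algorithm of our prototype, and the focal length and sign width are placeholder values that would come from camera calibration [49].

    def distance_to_sign(focal_length_px, real_width_cm, pixel_width_px):
        """Similar-triangles estimate: distance = f * W / w (units of W)."""
        return focal_length_px * real_width_cm / pixel_width_px

    # Placeholder values: a calibrated focal length of 700 px and a 10 cm-wide
    # sign imaged 50 px across yield an estimated distance of 140 cm.
    print(distance_to_sign(700.0, 10.0, 50.0))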
Future research will be devoted to enhancing our algorithm further in order to increase the admissible number of frames per second and accommodate higher car speeds. Specifically, studies in the short term will elaborate on the distance calculation, the consideration of the vehicle sensing capability and overtaking other vehicles, the detection and recognition of traffic lights, and ultimately the optimization of the overall perception results. Moreover, instead of a single Raspberry Pi node, we will scale up the number and the heterogeneity of sensing devices to handle more realistic scenarios of inherently higher complexity. To this end, other input devices, such as LIDAR sensors, will be considered in order to scan the surrounding environment for other obstacles. Similarly, the addition of vision sensors to the back of the car model may equip the vehicle with a reverse-driving capability and, eventually, allow it to turn around to avoid detected obstacles by harnessing artificial intelligence and computer vision capabilities similar to the ones presented in this work.

REFERENCES

[1] S. K. Gehrig and F. J. Stein, "Dead reckoning and cartography using stereo vision for an autonomous car," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. Hum. Environ. Friendly Robots High Intell. Emotional Quotients, Oct. 1999, pp. 1507–1512.
[2] S. Thrun, "Toward robotic cars," Commun. ACM, vol. 53, no. 4, pp. 99–106, Apr. 2010.
[3] B. Schoettle and M. Sivak, "A survey of public opinion about autonomous and self-driving vehicles in the US, the UK, and Australia," Transp. Res. Inst., Univ. Michigan, Ann Arbor, MI, USA, Tech. Rep. UMTRI-2014-21, 2014.
[4] Baidu Just Made Its 100th Autonomous Bus Ahead of Commercial Launch in China, CW Company, Burbank, CA, USA, Jul. 2018.
[5] H. Woo et al., "Lane-change detection based on vehicle-trajectory prediction," IEEE Robot. Autom. Lett., vol. 2, no. 2, pp. 1109–1116, Apr. 2017.
[6] K. Jo and M. Sunwoo, "Generation of a precise roadway map for autonomous cars," IEEE Trans. Intell. Transp. Syst., vol. 15, no. 3, pp. 925–937, Jun. 2014.
[7] M. Althoff and A. Mergel, "Comparison of Markov chain abstraction and Monte Carlo simulation for the safety assessment of autonomous cars," IEEE Trans. Intell. Transp. Syst., vol. 12, no. 4, pp. 1237–1247, Dec. 2011.
[8] Q. Li, L. Chen, M. Li, S.-L. Shaw, and A. Nuchter, "A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios," IEEE Trans. Veh. Technol., vol. 63, no. 2, pp. 540–555, Feb. 2014.
[9] A. Borkar, M. Hayes, and M. T. Smith, "A novel lane detection system with efficient ground truth generation," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 1, pp. 365–374, Mar. 2012.
[10] H. Yoo, U. Yang, and K. Sohn, "Gradient-enhancing conversion for illumination-robust lane detection," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 3, pp. 1083–1094, Sep. 2013.
[11] S. Sivaraman and M. M. Trivedi, "Integrated lane and vehicle detection, localization, and tracking: A synergistic approach," IEEE Trans. Intell. Transp. Syst., vol. 14, no. 2, pp. 906–917, Jun. 2013.
[12] M. Buehler, K. Iagnemma, and S. Singh, The DARPA Urban Challenge: Autonomous Vehicles in City Traffic (Springer Tracts in Advanced Robotics), vol. 56. Berlin, Germany: Springer-Verlag, 2009.
[13] J. Levinson et al., "Towards fully autonomous driving: Systems and algorithms," in Proc. IEEE Intell. Vehicles Symp. (IV), Jun. 2011, pp. 163–168.
[14] S. Thrun et al., "Stanley: The robot that won the DARPA grand challenge," J. Field Robot., vol. 23, no. 9, pp. 661–692, 2006.
[15] C.-Y. Lee, C.-T. Lin, C.-T. Hong, and M.-T. Su, "Smoke detection using spatial and temporal analyses," Int. J. Innov. Comput., Inf. Control, vol. 8, no. 7, pp. 4749–4770, 2012.
[16] W. J. Schakel, B. van Arem, and B. D. Netten, "Effects of cooperative adaptive cruise control on traffic flow stability," in Proc. 13th Int. IEEE Conf. Intell. Transp. Syst., Sep. 2010, pp. 759–764.
[17] B. van Arem, C. J. G. van Driel, and R. Visser, "The impact of cooperative adaptive cruise control on traffic-flow characteristics," IEEE Trans. Intell. Transp. Syst., vol. 7, no. 4, pp. 429–436, Dec. 2006.
[18] X. Chen et al., "3D object proposals for accurate object class detection," in Proc. Adv. Neural Inf. Process. Syst., 2015, pp. 424–432.
[19] R. Guidolini, C. Badue, M. Berger, L. D. P. Veronese, and A. F. De Souza, "A simple yet effective obstacle avoider for the IARA autonomous car," in Proc. IEEE 19th Int. Conf. Intell. Transp. Syst. (ITSC), Nov. 2016, pp. 1914–1919.
[20] J. Choi et al., "Environment-detection-and-mapping algorithm for autonomous driving in rural or off-road environment," IEEE Trans. Intell. Transp. Syst., vol. 13, no. 2, pp. 974–982, Jun. 2012.
[21] J. Leonard et al., "A perception-driven autonomous urban vehicle," J. Field Robot., vol. 25, no. 10, pp. 727–774, 2008.
[22] N. Kalra and S. M. Paddock, "Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?" Transp. Res. A, Policy Pract., vol. 94, pp. 182–193, Dec. 2016.
[23] A. Petrovskaya and S. Thrun, "Model based vehicle detection and tracking for autonomous urban driving," Auto. Robots, vol. 26, nos. 2–3, pp. 123–139, Apr. 2009.
[24] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, "CNN features off-the-shelf: An astounding baseline for recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops, Jun. 2014, pp. 806–813.
[25] S. Khan, K. Muhammad, S. Mumtaz, S. W. Baik, and V. H. C. de Albuquerque, "Energy-efficient deep CNN for smoke detection in foggy IoT environment," IEEE Internet Things J., vol. 6, no. 6, pp. 9237–9245, Dec. 2019.
[26] Y. Kim, "Convolutional neural networks for sentence classification," 2014, arXiv:1408.5882. [Online]. Available: http://arxiv.org/abs/1408.5882
[27] O. Abdel-Hamid, A.-R. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, "Convolutional neural networks for speech recognition," IEEE/ACM Trans. Audio, Speech Language Process., vol. 22, no. 10, pp. 1533–1545, Oct. 2014.
[28] Y. Wu et al., "Google's neural machine translation system: Bridging the gap between human and machine translation," 2016, arXiv:1609.08144. [Online]. Available: http://arxiv.org/abs/1609.08144
[29] M. Defferrard, X. Bresson, and P. Vandergheynst, "Convolutional neural networks on graphs with fast localized spectral filtering," in Proc. Adv. Neural Inf. Process. Syst., 2016, pp. 3844–3852.
[30] Y. S. Wong, N. K. Lee, and N. Omar, "GMFR-CNN: An integration of gapped motif feature representation and deep learning approach for enhancer prediction," in Proc. 7th Int. Conf. Comput. Syst.-Biol. Bioinf. (CSBio), 2016, pp. 41–47.
[31] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. Adv. Neural Inf. Process. Syst., 2012, pp. 1097–1105.

[32] M. Sajjad, S. Khan, K. Muhammad, W. Wu, A. Ullah, and S. W. Baik, "Multi-grade brain tumor classification using deep CNN with extensive data augmentation," J. Comput. Sci., vol. 30, pp. 174–182, Jan. 2019.
[33] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2016, pp. 779–788.
[34] E. Shelhamer, J. Long, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, pp. 640–651, 2017.
[35] K. Behrendt, L. Novak, and R. Botros, "A deep learning approach to traffic lights: Detection, tracking, and classification," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May 2017, pp. 1370–1377.
[36] B. Huval et al., "An empirical evaluation of deep learning on highway driving," 2015, arXiv:1504.01716. [Online]. Available: http://arxiv.org/abs/1504.01716
[37] C. Chen, A. Seff, A. Kornhauser, and J. Xiao, "DeepDriving: Learning affordance for direct perception in autonomous driving," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2015, pp. 2722–2730.
[38] T. Sun, S. Tang, J. Wang, and W. Zhang, "A robust lane detection method for autonomous car-like robot," in Proc. 4th Int. Conf. Intell. Control Inf. Process. (ICICIP), Jun. 2013, pp. 373–378.
[39] A. Saha, D. D. Roy, T. Alam, and K. Deb, "Automated road lane detection for intelligent vehicles," Global J. Comput. Sci. Technol., vol. 12, no. 6, Mar. 2012.
[40] T. Hoang, H. Hong, H. Vokhidov, and K. Park, "Road lane detection by discriminating dashed and solid road lanes using a visible light camera sensor," Sensors, vol. 16, no. 8, p. 1313, 2016.
[41] M. Sajjad, I. U. Haq, J. Lloret, W. Ding, and K. Muhammad, "Robust image hashing based efficient authentication for smart industrial environment," IEEE Trans. Ind. Informat., vol. 15, no. 12, pp. 6541–6550, Dec. 2019.
[42] K. M. A. Alheeti, R. Al-Zaidi, J. Woods, and K. McDonald-Maier, "An intrusion detection scheme for driverless vehicles based gyroscope sensor profiling," in Proc. IEEE Int. Conf. Consum. Electron. (ICCE), 2017, pp. 448–449.
[43] K. M. A. Alheeti, A. Gruebler, K. D. McDonald-Maier, and A. Fernando, "Prediction of DoS attacks in external communication for self-driving vehicles using a fuzzy Petri net model," in Proc. IEEE Int. Conf. Consum. Electron. (ICCE), Jan. 2016, pp. 502–503.
[44] P. Gora and I. Rüb, "Traffic models for self-driving connected cars," Transp. Res. Procedia, vol. 14, pp. 2207–2216, 2016.
[45] Q. H. Do, S. Mita, H. T. N. Nejad, and L. Han, "Dynamic and safe path planning based on support vector machine among multi moving obstacles for autonomous vehicles," IEICE Trans. Inf. Syst., vol. E96.D, no. 2, pp. 314–328, 2013.
[46] T. Birdal and A. Erçil, "Real-time automated road, lane and car detection for autonomous driving," Sabanci Univ., Istanbul, Turkey, Tech. Rep., 2007.
[47] B. Grush and J. Niles, The End of Driving: Transportation Systems and Public Policy Planning for Autonomous Vehicles. Amsterdam, The Netherlands: Elsevier, 2018.
[48] T. Litman, Autonomous Vehicle Implementation Predictions. Victoria, BC, Canada: Victoria Transport Policy Institute, 2017.
[49] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330–1334, Nov. 2000.
[50] C. Jiangwei, J. Lisheng, G. Lie, Libibing, and W. Rongben, "Study on method of detecting preceding vehicle based on monocular camera," in Proc. IEEE Intell. Vehicles Symp., Jun. 2004, pp. 750–755.
[51] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. (CVPR), Dec. 2001, p. 1.
[52] M. Sajjad et al., "Raspberry Pi assisted face recognition framework for enhanced law-enforcement services in smart cities," Future Gener. Comput. Syst., to be published.
[53] A. Vladišauskas and L. Jakevičius, "Absorption of ultrasonic waves in air," Ultragarsas, vol. 50, no. 1, pp. 46–49, 2004.
[54] S. Karaman et al., "Project-based, collaborative, algorithmic robotics for high school students: Programming self-driving race cars at MIT," in Proc. IEEE Integr. STEM Edu. Conf. (ISEC), Mar. 2017, pp. 195–203.
[55] I. Shim et al., "An autonomous driving system for unknown environments using a unified map," IEEE Trans. Intell. Transp. Syst., vol. 16, no. 4, pp. 1999–2013, Aug. 2015.
[56] Y. Yang, H. Luo, H. Xu, and F. Wu, "Towards real-time traffic sign detection and classification," IEEE Trans. Intell. Transp. Syst., vol. 17, no. 7, pp. 2022–2031, Jul. 2016.
[57] F. Zaklouta, B. Stanciulescu, and O. Hamdoun, "Traffic sign classification using K-d trees and random forests," in Proc. Int. Joint Conf. Neural Netw., Jul. 2011, pp. 2151–2155.

Muhammad Sajjad received the master's degree from the Department of Computer Science, College of Signals, National University of Sciences and Technology, Rawalpindi, Pakistan, and the Ph.D. degree in digital contents from Sejong University, Seoul, South Korea. He is currently working as an Associate Professor with the Department of Computer Science, Islamia College Peshawar, Pakistan, where he is also the Head of the Digital Image Processing Laboratory (DIP Lab). His research interests include digital image super-resolution and reconstruction, medical image analysis, video summarization and prioritization, image/video quality assessment, computer vision, and image/video retrieval.

Muhammad Irfan received the B.S. degree in computer science from Islamia College University, Peshawar, Pakistan. He is currently pursuing the master's degree with the Department of Software, Sejong University, Seoul, South Korea. His research interests include image and video processing, intelligent transportation systems, computer vision, machine learning, and deep learning.

Khan Muhammad (Member, IEEE) received the Ph.D. degree in digital contents from Sejong University, South Korea. He is currently working as an Assistant Professor with the Department of Software and a Lead Researcher of the Intelligent Media Laboratory, Sejong University, Seoul. His research interests include intelligent video surveillance (fire/smoke scene analysis, transportation systems, and disaster management), medical image analysis (brain MRI, diagnostic hysteroscopy, and wireless capsule endoscopy), information security (steganography, encryption, watermarking, and image hashing), video summarization (single-view and multi-view), multimedia, computer vision, the IoT, and smart cities. He has registered over 7 patents and published more than 100 articles in peer-reviewed international journals and conferences in these research areas, with target venues such as IEEE Communications Magazine, NETWORK, TII, TIE, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, IoTJ, IEEE ACCESS, TSC, Elsevier INS, Neurocomputing, PRL, FGCS, ASOC, IJIM, SWEVO, COMCOM, COMIND, JPDC, PMC, BSPC, CAEE, Springer NCAA, MTAP, JOMS, and RTIP. He is also serving as a professional reviewer for more than 70 well-reputed journals and conferences. He is currently involved in editing several special issues as a GE/LGE.

Javier (Javi) Del Ser (Senior Member, IEEE) received the Ph.D. degree (cum laude) in telecommunication engineering from the University of Navarra, Spain, in 2006, and the Ph.D. degree (summa cum laude) in computational intelligence from the University of Alcala, Spain, in 2013. He has held several positions as a Professor and a Researcher at different institutions of the Basque Research Network (including the University of Mondragon, CEIT, and Robotiker). He is currently a Research Professor of data analytics and optimization with TECNALIA, Spain, and also an Adjunct Professor with the University of the Basque Country (UPV/EHU). He is also a Senior AI Advisor at the technological startup Sherpa. His research interests gravitate around the use of descriptive, predictive, and prescriptive algorithms for data mining and optimization in a diverse range of application fields, such as energy, transport, telecommunications, health, and industry, among others. In these fields, he has published more than 300 scientific articles, co-supervised ten Ph.D. theses, edited seven books, coauthored nine patents, and participated in/led more than 43 research projects. He serves as an Associate Editor for a number of indexed journals, including Information Fusion, Swarm and Evolutionary Computation, Cognitive Computation, and the IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS.

Javier Sanchez-Medina (Member, IEEE) received the master's degree in engineering from the Telecommunications Faculty in 2002 and the Ph.D. degree from the Computer Science Department in 2008. His Ph.D. dissertation addressed the use of genetic algorithms, parallel computing, and cellular automata-based traffic microsimulation to optimize the traffic lights programming within an urban traffic network. He is currently an Associate Professor with the Computer Science Department, University of Las Palmas de Gran Canaria (ULPGC), Spain. He has been volunteering for several years at many international conferences related to intelligent transportation, computer science, and evolutionary computation. His research interests mainly include the application of data mining, evolutionary computation, and parallel computing to intelligent transportation systems, in particular to traffic modeling and prediction. Since 2010, he has served for the IEEE ITS Society, organizing the TBMO 2010 Workshop at ITSC2010, co-organizing the Travel Behavior Research: Bounded Rationality and Behavioral Response Special Session at ITSC2011, and being a Publications Chair at the IEEE FISTS2011, a Registration Chair at the IEEE ITSC2012 and Workshops, a Tutorials Chair for IEEE ITSC 2013, a Panels Chair for IEEE VTC-Fall 2013, a Program Co-Chair for IEEE ITSC2014 and IEEE ITSC2016, a Publicity Chair for IEEE IV2017, a Program Chair for IEEE ICVES 2017, and a Program Co-Chair for IEEE ITSC2018. He served as a General Chair for the IEEE ITSC2015, a record-beating edition of IEEE ITSCs, hosted at Las Palmas de Gran Canaria, Spain, in September 2015. He has also contributed to the IEEE ITS Society as a Founding Editor-in-Chief of the IEEE ITS Podcast from May 2013 to December 2016, and the Editor-in-Chief of the IEEE ITS Newsletter in 2015 and 2016. He is a reviewer for several transportation-related journals, such as the IEEE ITSS TRANSACTIONS and the IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY. He has recently been appointed as the Vice President of Technical Activities for the IEEE ITS Society and was the President of its Spanish Chapter from 2017 to 2018. Before that, he served as the Vice President of that Spanish Chapter in 2015 and 2016. He has widely published his research, with more than 30 international conference papers, more than 20 international journal articles, and three research chapters, being the main author of more than half of them. He has also been a keynote speaker at two conferences (ANT2017, SOLI2017) and a Distinguished Lecturer at the IEEE ITSS Tunisian Chapter, in November 2016.

Sergey Andreev (Senior Member, IEEE) received the Ph.D. degree from TUT in 2012 and the Specialist and Cand.Sc. degrees from SUAI in 2006 and 2009, respectively. Since 2018, he has also been a Visiting Senior Research Fellow with the Centre for Telecommunications Research, King's College London, U.K. He is currently an Assistant Professor of communications engineering and an Academy Research Fellow with Tampere University, Finland. He coauthored more than 200 published research works on intelligent IoT, mobile communications, and heterogeneous networking. He has been serving as an Editor for the IEEE WIRELESS COMMUNICATIONS LETTERS since 2016 and as a Lead Series Editor of the IoT Series for the IEEE Communications Magazine since 2018.

Weiping Ding (Senior Member, IEEE) received the Ph.D. degree in computation application from the Nanjing University of Aeronautics and Astronautics (NUAA), Nanjing, China, in 2013. He was awarded the Outstanding Doctoral Dissertation by NUAA. He was a Visiting Scholar with the University of Lethbridge (UL), Alberta, Canada, in 2011. From 2014 to 2015, he was a Postdoctoral Researcher with the Brain Research Center, National Chiao Tung University (NCTU), Hsinchu, Taiwan. In 2016, he was a Visiting Scholar with the National University of Singapore (NUS), Singapore. From 2017 to 2018, he was a Visiting Professor with the University of Technology Sydney (UTS), Ultimo, NSW, Australia. He has published more than 60 articles in flagship journals and conference proceedings as the first author, including the IEEE TRANSACTIONS ON FUZZY SYSTEMS, the IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, the IEEE TRANSACTIONS ON CYBERNETICS, the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, the IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, and CIKM. To date, he has held 15 approved invention patents (more than 20 issued patents in total). His main research directions involve deep learning, data mining, evolutionary computing, granular computing, machine learning, and big data analytics.
He is a member of IEEE-CIS, ACM, and IAENG, and a senior member of CCF. He is a member of the Technical Committee on Soft Computing of IEEE SMCS, a member of the Technical Committee on Granular Computing of IEEE SMCS, and a member of the Data Mining and Big Data Analytics Technical Committee of IEEE CIS. He is also a member of the IEEE CIS Task Force on Adaptive and Evolving Fuzzy Systems. He serves/served as a program committee member for several international conferences and workshops. He was a recipient of the Computer Education Excellent Paper Award (First Prize) from the National Computer Education Committee of China, in 2009. He was an Excellent-Young Teacher (Qing Lan Project) of Jiangsu Province in 2014, and a High-Level Talent (Six Talent Peak) of Jiangsu Province in 2016. He was awarded the Best Paper of ICDMA'15 and was named an Outstanding Teacher of the Software Design and Entrepreneurship Competition by the Ministry of Industry and Information Technology, China, in 2017. He was a recipient of the Medical Science and Technology Award (Second Prize) of Jiangsu Province, China, in 2017, and the Education Teaching and Research Achievement Award (Third Prize) of Jiangsu Province, China, in 2018. He was a recipient of the Natural Science Outstanding Academic Paper Award (First Prize), Nantong, China, in 2017, the Science and Technology Progress Award (Second Prize), Nantong, China, in 2018, and the Outstanding Associate Editor of 2018 for IEEE ACCESS. He was awarded two Chinese Government Scholarships for Overseas Studies in 2011 and 2016. He is the Chair of the IEEE CIS Task Force on Granular Data Mining for Big Data. He currently serves on the Editorial Advisory Board of Knowledge-Based Systems and the Editorial Boards of Information Fusion and Applied Soft Computing. He serves/served as an Associate Editor of several prestigious journals, including the IEEE TRANSACTIONS ON FUZZY SYSTEMS, Information Sciences, Swarm and Evolutionary Computation, IEEE ACCESS, and the Journal of Intelligent and Fuzzy Systems, as the Co-Editor-in-Chief of the Journal of Artificial Intelligence and Systems, and as the lead guest editor in several prestigious international journals.

Jong Weon Lee received the M.S. degree in electrical and computer engineering from the University of Wisconsin–Madison in 1991 and the Ph.D. degree from the University of Southern California in 2002. He is currently a Professor with the Department of Software, Sejong University. His research interests include augmented reality, computer vision, machine learning, human–computer interaction, and serious games.
