Embedded System Vehicle Based On Multi-Sensor Fusion
ABSTRACT As intelligent driving vehicles move from concept into people's lives, the combination of
safe driving and artificial intelligence has become the new direction of future transportation development.
Autonomous driving technology is developing on the basis of control algorithms and model recognition. In this
paper, a cloud-based interconnected multi-sensor fusion autonomous vehicle system is proposed that uses
deep learning (YOLOv4) and an improved ORB algorithm to identify pedestrians, vehicles, and various traffic
signs. A cloud-based interactive system is built to enable vehicle owners to monitor the state of their
vehicles at any time. To meet the multiple applications of automatic driving vehicles, the environment
perception technology of multi-sensor fusion processing broadens their uses
by equipping them with automatic speech recognition (ASR), a vehicle-following mode and a road-patrol mode.
These functions enable automatic driving to be used in applications such as agricultural irrigation, road
firefighting and contactless delivery during the novel coronavirus outbreak. Finally, using embedded system
equipment, an intelligent car was built for experimental verification, and the overall recognition accuracy of
the system was over 96%.
INDEX TERMS Automatic driving, multi-sensor fusion, Internet of Things (IoT), edge intelligence, multi-
object detection, YOLOv4.
The associate editor coordinating the review of this manuscript and approving it for publication was Alessandro Pozzebon.

I. INTRODUCTION
Nowadays, automobiles have become an indispensable tool for people to travel. Moreover, the level of manufacturing and the popularity of automobiles play an important role in measuring a country's level of modernization and technology. Even with the continuous progress of science and technology, there are various potential safety hazards in traditional manually driven vehicles. The continuous growth of car ownership has also caused road traffic congestion. In addition to increasing the safety awareness of traffic participants, all sectors of society hope to reduce the occurrence of traffic accidents and traffic congestion through technological progress, among which intelligent driving vehicles attract the most attention [1]. Compared with traditional manual driving, automatic driving has great advantages in safety, convenience and efficiency. Currently, autonomous vehicles have been put into trial operation in China and have achieved some success on specific roads. In recent years, the construction of smart cities and smart transportation has made intelligent driving a new direction of urban transportation development [2], [3], [4]. Automatic driving technology has become the main research direction of road traffic [5]. The paper [6] improves road traffic congestion by optimizing traffic signal lights. The paper [7] controls and predicts vehicle routes by optimizing and predicting traffic signals to alleviate traffic congestion. Intelligent transportation systems have been used in different countries to help solve traffic problems; the paper [8] has studied many countries' approaches to solving traffic problems and examines their different tools, technologies and applications. The paper [9] optimized the route through deep learning. The paper [10] reviews the latest technology of autonomous artificial intelligence. It pays special attention to data preparation,
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/ (VOLUME 11, 2023).
feature engineering and automatic hyperparameter optimization. With more and more research around intelligent transportation, the deep integration of intelligent driving technology, machine vision, environment awareness technology, and edge intelligent processing has become a research hotspot. Unmanned driving technology has developed rapidly.

With the upgrading of the global industrial chain, the division and cooperation of global factories have promoted the rapid development of the automobile industry [11]. As of December 2019, China's car ownership had exceeded 250 million. The increase in car ownership not only causes traffic congestion, air pollution, and other problems but also brings new challenges to urban planning. Traffic accidents caused by automobiles also cause loss of life, health and property. The use of new energy vehicles reduces the dependence on fossil fuels [12] and reduces air pollution. Road traffic safety has become a serious issue. In 2018, about 60,000 people died in traffic accidents in China, while about 1.24 million people die in traffic accidents each year worldwide. According to the China Highway, 86% of motor vehicle accidents were caused by illegal driving, and the top five causes were failing to give way, driving without a license, speeding, drunk driving, and driving in the opposite direction. In the current environment, in addition to improving driver safety awareness and regulating civilized driving, all sectors of society hope to reduce the occurrence of traffic accidents and traffic congestion through technological progress. The paper [13] surveys and summarizes that most current traffic accidents are caused by human error, and the most prominent cause is fatigue driving. Automatic and auxiliary driving technologies can help reduce the occurrence of traffic accidents. In the 1950s, some developed countries began studying automatic driving.

Since the beginning of the 21st century, Internet of Things (IoT) technology, 5G applications, physical computing ability, dynamic vision technology, artificial intelligence technology, and automatic driving technology have developed rapidly. The paper [14] completely abandons the concept of ``driver'' and introduces an autonomous vehicle based on deep learning and autonomous navigation. It uses a DRL-based urban environment automatic driving strategy, performs sensor fusion on two types of input sensor data, and trains the network to test the vehicle's automatic driving ability. This provides new possibilities for solving complex control and navigation related tasks. Compared with human-driven vehicles, intelligent driving vehicles offer higher reliability and faster reaction times, which can effectively reduce the number of traffic collisions. Based on automatic driving, the paper [15] proposed two DNN-based deep learning models for the first time, used to measure the reliability of the driverless vehicle and its main on-board unit (OBU) components; however, this model increases the complexity of the control system and increases the system costs. In [16], the new challenges facing environmental awareness technology are discussed from three perspectives: technology, the external environment, and application. The paper also points out the future development trends of environmental awareness technology. Environment sensing mainly uses fusion sensing to perceive the driving environment around the vehicle (infrared sensing, radar systems). The different sensors take full advantage of each other to form a sensor system with complete information coverage. Deep learning is a new subject in the field of machine learning. At present, deep learning networks [17] and radio frequency technology [18] are used for target detection and image processing. As one of the cores of intelligent transportation, pedestrian detection is highly relevant to the industry and has important application value. Road sign recognition is diverse, and specific instruction sign recognition enables driverless vehicles to be used in night road patrols. Multi-sensor fusion technology expands the application of driverless vehicles to agricultural irrigation, intelligent transportation, road fire protection and other fields.

The Internet of Things (IoT) and the fifth-generation mobile network (5G) provide a large number of interfaces for edge intelligent devices through low-cost radio modules [19]. A large amount of local data that is difficult to store locally can be stored in the ECS through IoT technology. IoT technology allows intelligent edge devices to access and provide feedback to users. At the same time, client data are collected from the client and the remote-control command [20] is executed. The paper [21] uses a combination of the Internet and machine vision to remotely control a tracked vehicle. The user function of an intelligent driving system is designed based on the development of IoT technology. The system significantly enhances the security of a vehicle through the combination of cloud-based unlocking and local fingerprint unlocking through applets. The paper [22] proposed and implemented an integrated automatic login platform based on mobile fingerprint identification using blockchain theory, and constructed a convenient and integrated automatic login platform through smartphones, which has strong security.

Our contributions are mainly as follows:

1) We also considered the safety of driverless vehicles from another perspective. As there is no driver, the instrumentalization attributes of the vehicles become increasingly obvious. How can owners keep absolute control of their vehicles? Through IoT and wireless communication technologies, we provide a control scheme based on cloud remote data interactions. The owner's permission is obtained through the cloud, and the user of the vehicle is confirmed through fingerprint verification, which provides a reference for the accountability of driverless vehicles after traffic accidents.
2) We designed a multi-device management platform through Internet of Things technology to realize the information connection between embedded devices, mobile WeChat applets and the database platform. The intelligent recognition system and motion control system use different master chips, which allows the
points. For an image area, the element expression of the corresponding moment matrix is:

m_pq = Σ_{x,y} x^p y^q I(x, y)    (2)

x and y are the pixel coordinates, and I(x, y) is the pixel value. The centroid of the image area is:

C = (m_10/m_00, m_01/m_00)    (3)

The direction of the feature point can then be obtained, thus achieving rotation invariance:

θ = arctan(m_01/m_10)    (4)

1) Step 1: the image is smoothed by a Gaussian kernel to enhance the noise resistance of the image.
2) Step 2: in the neighborhood of each feature point, the pixel values of a pair of random points are compared for a binary assignment:

τ(p; x, y) = 1 if p(x) < p(y); 0 if p(x) ≥ p(y)    (5)

In the formula, p(x) and p(y) are the pixel values of random points x and y respectively. In this way, the binary assignments of each feature point form a binary code, which constitutes a binary feature point descriptor.
3) Step 3: to satisfy scale invariance, the BRIEF algorithm constructs an image pyramid.

2) MODEL TRAINING
Early target detection mostly addressed simply shaped objects, or used template matching technology to recognize simple objects. Traditional target detection technology had low accuracy and cumbersome feature extraction steps, and different algorithms were needed for different objects. With the development of image processing technology, deep learning has been applied to image classification [28], [29].

The convolutional neural network (CNN) is one of the most widely used neural networks in image recognition at present. Some researchers have applied CNNs to target detection and obtained good results. Target detection methods based on deep learning have developed rapidly. There are two kinds: two-stage and single-stage target object detection. The two-stage object detection method generates a large number of candidate regions in the image, and then classifies and regresses each region. Girshick et al. [30] proposed R-CNN, whose performance is significantly superior to other traditional target detection models, and initiated a new era of target detection based on deep learning. The paper [31] proposed a fast region-based convolution network method (Fast R-CNN) for object detection. The paper [32] uses a pedestrian detection method based on a genetic algorithm to optimize XGBoost training parameters for pedestrian recognition. The single-stage target detection method transforms the target detection problem into a regression problem. The YOLO (you only look once) network divides the input image into grids of equal size, and each grid predicts candidate boxes and category information at two scales to complete the detection, which can quickly complete the target detection. Figure 2 describes the process of AI development and training.

The detection rate of the plain CNN approach is low and cannot meet the real-time detection of road conditions. The characteristic of the YOLO algorithm is that only one CNN pass is required per image, so that the target position can be selected in the output layer. YOLOv4's backbone feature extraction network is CSPDarkNet53, which makes a slight improvement on Darknet53 and draws on CSPNet (cross stage partial networks). CSPNet solves the problem of repeated gradient information in network optimization of other large convolutional neural network backbones, and integrates the gradient changes into the feature map from beginning to end, thus reducing the parameter count and FLOPs of the model, ensuring the inference speed and accuracy, and reducing the model size.

The prediction box position is given by:

b_x = σ(t_x) + c_x    (6)
b_y = σ(t_y) + c_y    (7)
b_w = p_w e^{t_w}    (8)
b_h = p_h e^{t_h}    (9)

Here, b_x and b_y are the coordinates of the center point of the prediction frame; b_w and b_h are the width and height of the prediction frame; c_x, c_y are the coordinates of the grid cell containing the prediction; p_w, p_h are the width and height of the a priori (anchor) frame; (t_x, t_y, t_w, t_h) is the location information output for the prediction box; σ is the sigmoid activation function, mapping to a probability in [0, 1]. The calculation formula of confidence is:

confidence = Pr(object) × IOU_pred^truth    (10)

Pr(object) indicates whether the target exists, and IOU_pred^truth is the intersection over union between the prediction frame and the ground-truth frame.

B. FINGERPRINT VERIFICATION
Fingerprint identification is controlled by the EAIDK-310's retrieval of the permission data from the cloud. When the core processor obtains the permission signal from the cloud, it sends a signal to the fingerprint module. After receiving the signal, the fingerprint module enters the detection mode and compares the input with the stored fingerprints. At this stage, the fingerprint is verified using the AS608, simulating the door in the actual use of the vehicle. The paper [33] considers the uniqueness of fingerprint information and applies fingerprint verification to transactions in the financial market. Article [34] applies the AS608 to the data acquisition end of an access control system.

This design uses the template matching method to extract the fingerprint image on the basis of the thinned image. The template
vehicle. The identity of the driverless vehicle user is confirmed through the communication between the fingerprint identification system and the cloud data. The sharing of driverless vehicles is promoted through the interconnection with the WeChat applet on the mobile phone.

1) Motion control system: the motion control system is designed to complete the processing instructions of the intelligent platform. It includes the STM32F103C8T6, a TA6586 motor drive board, TCRT5000 infrared sensors and DC motors. The motion control system integrates the data of the various sensors, receives the signals of the intelligent platform, executes the processing results of the EAIDK-310, and completes the mechanical functions of the vehicle. The whole motion control system is the execution system for multi-sensor fusion of the road surface.

2) Intelligent detection system: the whole intelligent control system includes the EAIDK-310, an LCD display screen, a camera and other modules. As the core of the whole system control, the EAIDK-310 realizes real-time detection of road information, interacts directly with the cloud through WiFi, and detects the cloud control signal online in real time. The EAIDK-310 is equipped with a high-performance ARM processor with a main frequency of 1.3 GHz, and its on-board memory is 1 GB. Its rich peripheral interfaces include WiFi, USB, Ethernet and HDMI, providing good convenience for the interaction between the entire intelligent system and users. The EAIDK-310 comes preloaded with the embedded deep learning framework Tengine, which supports the direct deployment of Caffe / TensorFlow / PyTorch / MXNet / ONNX / Darknet and other training framework models, supports network performance optimization strategies such as layer fusion and quantization, provides a unified API (C / Python / JNI), and provides self-defined operators for extended interfaces. The EAIDK-310 can be used in a variety of intelligent detection tasks and has wide practicability.

3) Voice interaction system: the whole voice control system is based on a dialogue between the LD3320 and VG009, an interaction mode designed around the user experience.

4) Fingerprint identification system: the ATK-AS608 fingerprint module is used for fingerprint identification and collection. The chip has a built-in DSP arithmetic unit and an integrated fingerprint identification algorithm, which can efficiently collect fingerprint images. The data collected by the fingerprint module are stored in the CPU. The ATK-AS608 has a fingerprint wake-up function: when a finger is detected on the fingerprint detection module, the wake-up pin on the module sends a high level back to the IO port of the CPU. After receiving the high level, the CPU starts to receive the fingerprint data.

B. MECHANICAL STRUCTURE
In Figure 8, the purpose of the mechanical design is to flexibly integrate all parts so as to facilitate the realization of the various functions, the measurement angles of the sensors, the sensor layout of the whole vehicle, and the convenience of human-computer interaction. The integrated platform retains the original intelligent vehicle model, which has been transformed and innovated. The design of the mechanical structure of the automatic driving vehicle includes the design of the body, the layout of the hardware structure, the design of the sensors and the human-computer interaction.

1) INTELLIGENT VEHICLE BODY DESIGN
In the design of the whole system, the control system of the chassis mobile platform is the most important; it is the core of the stable operation of the whole system. The level of the control system is directly related to the intelligence level of the platform. The design strategy of the control system also determines the functional characteristics, application scope and expansibility of the whole design system. There are many kinds of operating structures of mobile
FIGURE 13. Identification process.
FIGURE 14. Pedestrians and cars are marked in the figure. As an image annotation tool, labelimg can save the generated annotations as an XML file in Pascal VOC format without secondary conversion.
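As the caption notes, labelimg writes annotations as Pascal VOC XML. A minimal sketch of reading such a file back (the field layout follows the standard VOC schema; the function name is our own, not from the paper):

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    """Parse a Pascal VOC annotation file into (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        label = obj.findtext("name")
        bb = obj.find("bndbox")
        boxes.append((label,
                      int(float(bb.findtext("xmin"))),
                      int(float(bb.findtext("ymin"))),
                      int(float(bb.findtext("xmax"))),
                      int(float(bb.findtext("ymax")))))
    return boxes
```

Because the VOC format needs no secondary conversion, the same tuples can feed the training pipeline directly.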
transformation of images in computer vision. In this paper, a rectangular box is selected for image annotation, and labelimg (a labeling tool for image preprocessing) is selected as the annotation tool. The annotation of the data set is shown in Figure 14.

In order to obtain good recognition accuracy, a large data set is needed. We constructed a data set for omni-directional pedestrian and vehicle detection. The collected images need to cover pedestrian and vehicle detection under various road conditions. We collected a total of 5607 images of pavement conditions, divided into training and test sets at a ratio of 9:1. In this experiment, the input picture size is 800 × 800 for training, and the initial learning rate is 0.001. The learning rate is a parameter set by the programmer. A high learning rate means that each weight update is larger, so the model may take less time to converge to the optimal weights. However, if the learning rate is too high, the jumps will be too large and not precise enough to reach the optimum. This value is usually between 0.1 and 0.001. The data set is shown in Figure 15.
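The 9:1 training/test split described above can be reproduced with a short script (a sketch; the shuffling seed and use of integer ids are illustrative, not taken from the paper):

```python
import random

def split_dataset(image_ids, train_ratio=0.9, seed=0):
    """Shuffle the image ids and split them into training and test sets (here 9:1)."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# For the 5607 collected images this yields 5046 training and 561 test samples.
train_ids, test_ids = split_dataset(range(5607))
```

Fixing the seed keeps the split reproducible between training runs.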
FIGURE 18. WeChat applet login interface, cloud unlock, weather view.
FIGURE 19. Parking position viewing and navigation through an external API.
We have illustrated the flow chart of pedestrian recognition in Figure 16.
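The prediction-box decoding of Eqs. (6)–(9), on which the recognition flow relies, can be sketched as follows: the centre offsets pass through a sigmoid and are added to the grid-cell origin, while the prior width and height are scaled exponentially. Variable names mirror the equations; the helper itself is our own illustration, not the paper's code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw outputs (t_x, t_y, t_w, t_h) into a box, following
    b_x = sigma(t_x) + c_x, b_y = sigma(t_y) + c_y,
    b_w = p_w * exp(t_w),  b_h = p_h * exp(t_h)."""
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)
    bh = ph * math.exp(th)
    return bx, by, bw, bh

# With zero raw outputs the box sits half a cell past the grid-cell origin
# and keeps the prior size: decode_box(0, 0, 0, 0, 3, 4, 2, 5) -> (3.5, 4.5, 2.0, 5.0)
```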
2) MULTI-PLATFORM INTERCONNECTION
Before planning to build the multi-platform interconnection, we need to consider the overall design of the traditional software and hardware integration platform. The design platform divides the data flow transmission into three corresponding parts: the user-facing control end, the edge intelligent device end, and the data storage transfer station. The user control terminal is the channel for direct interaction between the intelligent system and the user, and needs to be convenient to use, complete in function and easy to control. The edge intelligent device is the execution carrier of the task. Due to the restriction of local storage, the data supplier needs to upload the data generated in the process of use. The data storage transfer station is used to analyze, classify and store the uploaded data.

The self-driving vehicle designed in this paper uses the network module of the EAIDK-310 as the channel for uploading the application information of the edge intelligent processing equipment, and the cloud provides the possibility for remote data interaction and storage. In Figure 20, at the remote user end, we design an interactive page through the user's mobile WeChat applet to provide convenience for remote control and to help the user understand the use of the vehicle.

The WeChat applet, developed by Tencent in China on top of the WeChat ecosystem, is an application-level program that does not need to be downloaded. The combination of the WeChat applet and the unmanned-vehicle user terminal realizes the visibility of data on the mobile terminal and greatly improves the user experience.

Before planning and designing each page, we draw on the early user login mode to identify users and distinguish the vehicles that need to be controlled. As a cloud platform connection station facing actual user needs, the WeChat applet collects and uploads the user's control data flow during the interaction with the cloud, extracts the corresponding user data flow from the cloud for display, helps the user remotely control the vehicle and view the parking position, and reminds the user of the weather conditions through the external API in Figure 19.

OneNET is a PaaS (Platform as a Service) Internet of Things open platform created by China Mobile. It can greatly facilitate developers to connect devices, quickly complete the
is based on the cloud interconnection, and the entire system is simulated on embedded devices. We carried out a comprehensive experiment on smart cars in Yangpu District, Shanghai, focusing on the accuracy verification of the interactive devices with many human factors. We tested cloud access using scripts. The communication between the EAIDK-310 and the motion control system was tested through automatic data transmission. Under the condition of ensuring the accuracy of image recognition, voice interaction and fingerprint verification, the whole system rarely had problems when the signal was good.

We made the following assumptions about the experimental environment:
1) We use the embedded intelligent car to simulate the real vehicle, and use the motion control system to simulate the motion of the real vehicle.
2) The signal coverage of the whole experimental environment is good.

A. FINGERPRINT IDENTIFICATION
The fingerprint module is debugged through upper-computer input detection. We input the fingerprint templates of the three main authors of this design and store them at the local end of the vehicle. The fingerprint template matching detection at the local end and the cloud cooperate to realize the double verification of the unmanned vehicle, as shown in Table 3. In order to verify the accuracy of fingerprint recognition, we entered three user templates, and each user conducted 300 tests.

B. VOICE INTERACTION SYSTEM
The wake-up waiting time is set to 5 s. After wake-up, the indicator stays on and waits for the input of voice commands. The setting of wake-up words greatly improves the efficiency of voice recognition. In the voice interaction system, we use the serial port to output commands on demand. For the voice interaction files, we use iFLYTEK's online voice synthesis tool to synthesize voice. There are two types of wake-up: voice wake-up and serial wake-up. When the correct fingerprint
TABLE 6. Evaluation index of vehicle and pedestrian detection.
Finally, the specific functions were tested through physical construction to complete the implementation of the whole cloud networking control system.

The feasibility of the entire system was verified in this study. In subsequent experiments, we will consider connecting with a physical vehicle. It is necessary to further consider the accuracy of the various sensors to avoid traffic accidents, and to create a more complete cloud and local storage system to cope with the subsequent large number of visits.
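The AP and mAP indices reported in Table 6 can be computed with a short sketch: AP integrates precision over recall (here by a step-wise sum), and mAP averages the per-class APs and reports a percentage. The sample values below are illustrative, not the paper's measurements:

```python
def average_precision(recalls, precisions):
    """AP as the integral of P(R) dR, approximated by a step-wise sum
    over recall increments (recalls assumed sorted ascending)."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_average_precision(ap_per_class):
    """mAP: the mean of the per-class APs, expressed as a percentage."""
    return 100.0 * sum(ap_per_class) / len(ap_per_class)
```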
AP = ∫_0^1 P(R) dR    (17)

mAP = (Σ_{i=1}^{c} AP_i / c) × 100%    (18)

FIGURE 27. Pedestrian and vehicle detection.
In Figure 27, the recognition results show a good recognition effect on pedestrians and vehicles under general road conditions. However, the recognition accuracy for vehicles with severe occlusion is reduced. For our 800 × 800 pixel images, the time for checking each frame in visual processing is 0.09 s. In the actual test, the feedback time of the whole communication equipment is within 1 s.

V. CONCLUSION
In this study, a set of auto-drive systems was designed based on cloud interconnection through the EAIDK-310 and STM32F103C8T6. Through the interaction between the WeChat applet and the cloud data, double verification at the remote and local ends of the car is realized to ensure the control rights and absolute control of the car owner. The sharing concept of unmanned driving is promoted through the interaction with the cloud location data flow. This paper introduces the overall design scheme, the workflow of the system, the design of the embedded system, the identification algorithm adopted, the model deployment, the construction of the cloud platform and the page design of the WeChat applet.

REFERENCES
[1] L. Forlano, ``Cars and contemporary communications | stabilizing/destabilizing the driverless city: Speculative futures and autonomous vehicles,'' Int. J. Commun., vol. 13, p. 28, Jun. 2019.
[2] C. Legacy, D. Ashmore, J. Scheurer, J. Stone, and C. Curtis, ``Planning the driverless city,'' Transp. Rev., vol. 39, no. 1, pp. 84–102, Jan. 2019.
[3] E. F. Z. Santana, G. Covas, F. Duarte, P. Santi, C. Ratti, and F. Kon, ``Transitioning to a driverless city: Evaluating a hybrid system for autonomous and non-autonomous vehicles,'' Simul. Model. Pract. Theory, vol. 107, Feb. 2021, Art. no. 102210.
[4] F. Shatu and M. Kamruzzaman, ``Planning for active transport in driverless cities: A conceptual framework and research agenda,'' J. Transp. Health, vol. 25, Jun. 2022, Art. no. 101364.
[5] Y. Wiseman, ``Driverless cars will make passenger rail obsolete [opinion],'' IEEE Technol. Soc. Mag., vol. 38, no. 2, pp. 22–27, Jun. 2019.
[6] I. Tomar, I. Sreedevi, and N. Pandey, ``State-of-art review of traffic light synchronization for intelligent vehicles: Current status, challenges, and emerging trends,'' Electronics, vol. 11, Feb. 2022.
[7] L. Zhang, M. Khalgui, and Z. Li, ``Predictive intelligent transportation: Alleviating traffic congestion in the Internet of Vehicles,'' Sensors, vol. 21, no. 21, p. 7330, Nov. 2021.
[8] U. Makhloga, ``Improving India's traffic management using intelligent transportation systems,'' Tech. Rep., 2022.
[9] Á. Fehér, S. Aradi, and T. Bécsi, ``Online trajectory planning with reinforcement learning for pedestrian avoidance,'' Electronics, vol. 11, Jul. 2022.
[10] P. Radanliev and D. De Roure, ``Review of the state of the art in autonomous artificial intelligence,'' AI Ethics, vol. 18, p. 62, Jun. 2022.
[11] N. Shigeta and S. E. Hosseini, ``Sustainable development of the automobile industry in the United States, Europe, and Japan with special focus on the vehicles' power sources,'' Energies, vol. 14, no. 1, p. 78, Dec. 2020.
[12] K. Rajagopalan, B. Ramasubramanian, S. Velusamy, S. Ramakrishna, A. M. Kannan, M. Kaliyannan, and S. Kulandaivel, ``Examining the economic and energy aspects of manganese oxide in Li-ion batteries,'' Mater. Circular Economy, vol. 4, no. 1, pp. 1–22, Dec. 2022.
[13] D. Prasad, A. Anand, V. A. Sateesh, S. K. Surshetty, and V. Nath, ``Accident avoidance and detection on highways,'' in Microelectronics, Communication Systems, Machine Learning and Internet of Things, 2022, pp. 513–528.
[14] A. R. Fayjie, S. Hossain, D. Oualid, and D. Lee, ``Driverless car: Autonomous driving using deep reinforcement learning in urban environment,'' in Proc. 15th Int. Conf. Ubiquitous Robots (UR), Jun. 2018, pp. 896–901.
[15] G. Karmakar, A. Chowdhury, R. Das, J. Kamruzzaman, and S. Islam, ``Assessing trust level of a driverless car using deep learning,'' IEEE Trans. Intell. Transp. Syst., vol. 22, no. 7, pp. 4457–4466, Jul. 2021.
[16] Q. Chen, Y. Xie, S. Guo, J. Bai, and Q. Shu, ``Sensing system of environmental perception technologies for driverless vehicle: A review of state of the art and challenges,'' Sens. Actuators A, Phys., vol. 319, Mar. 2021, Art. no. 112566.
[17] P. Fergus and C. Chalmers, ``Deep reinforcement learning applied deep learning,'' Tech. Rep., 2022, pp. 255–264.
[18] S. Sigg, M. Scholz, S. Shi, Y. Ji, and M. Beigl, ``RF-sensing of activities from non-cooperative subjects in device-free recognition systems using ambient and local signals,'' IEEE Trans. Mobile Comput., vol. 13, no. 4, pp. 907–920, Apr. 2014.
[19] S. Kumar, P. Tiwari, and M. Zymbler, ``Internet of Things is a revolutionary approach for future technology enhancement: A review,'' J. Big Data, vol. 6, no. 1, pp. 1–21, Dec. 2019.
[20] C. Suppatvech, J. Godsell, and S. Day, ``The roles of Internet of Things technology in enabling servitized business models: A systematic literature review,'' Ind. Marketing Manage., vol. 82, pp. 70–86, Oct. 2019.
[21] S. Wang, S. Zhang, R. Ma, E. Jin, X. Liu, H. Tian, and R. Yang, ``Remote control system based on the Internet and machine vision for tracked vehicles,'' J. Mech. Sci. Technol., vol. 32, no. 3, pp. 1317–1331, Mar. 2018.
[22] J.-H. Huh and K. Seo, ``Blockchain-based mobile fingerprint verification and automatic log-in platform for future computing,'' J. Supercomput., vol. 75, no. 6, pp. 3123–3139, Jun. 2019.
[23] T. U. Rehman, M. S. Mahmud, Y. K. Chang, J. Jin, and J. Shin, ``Current and future applications of statistical machine learning algorithms for agricultural machine vision systems,'' Comput. Electron. Agricult., vol. 156, pp. 585–605, Jan. 2019.
[24] M. L. Smith, L. N. Smith, and M. F. Hansen, ``The quiet revolution in machine vision—A state-of-the-art survey paper, including historical review, perspectives, and future directions,'' Comput. Ind., vol. 130, Sep. 2021, Art. no. 103472.
[25] J. Zhao, B. Liang, and Q. Chen, ``The key technology toward the self-driving car,'' Int. J. Intell. Unmanned Syst., vol. 6, no. 1, pp. 2–20, Jan. 2018.
[26] C. Luo, W. Yang, P. Huang, and J. Zhou, ``Overview of image matching based on ORB algorithm,'' J. Phys., Conf. Ser., vol. 1237, no. 3, Jun. 2019, Art. no. 032020.
[27] M. Bansal, M. Kumar, and M. Kumar, ``2D object recognition: A comparative analysis of SIFT, SURF and ORB feature descriptors,'' Multimedia Tools Appl., vol. 80, no. 12, pp. 18839–18857, May 2021.
[28] H. Fujiyoshi, T. Hirakawa, and T. Yamashita, ``Deep learning-based image recognition for autonomous driving,'' IATSS Res., vol. 43, no. 4, pp. 244–252, Dec. 2019.

RUI TONG was born in China, in 1998. He received the bachelor's degree in electrical engineering and automation from Shanghai Dianji University, in 2020. He is currently pursuing the master's degree in electrical engineering with the University of Shanghai for Science and Technology, Yangpu, Shanghai. His research interests include predictive control and parameter identification.

QUAN JIANG was born in China, in 1963. He received the B.Eng. degree in electrical engineering from the Hefei University of Technology, China, in 1983, and the M.Eng. and Ph.D. degrees in electrical engineering from Southeast University, China, in 1986 and 1991, respectively. He is currently the Head and a Professor with the Department of Electrical Engineering, University of Shanghai for Science and Technology, Shanghai, China. He has rich research experience in PM motors and drives, switched reluctance motors and drives, and dc motors. He has published more than 110 academic articles. He is the coauthor of two books. He was granted 20 patents in the USA, China, Singapore, and Japan. His current research interests include the design, control, and testing of electric machines, electric drives, power electronics, finite element analysis
[29] K. Xia, H. Fan, J. Huang, H. Wang, J. Ren, Q. Jian, and D. Wei, ‘‘An intelli- of electromagnetic fields, and applications of micro-controller, DSP, and
gent self-service vending system for smart retail,’’ Sensors, vol. 21, no. 10, FPGA devices.
p. 3560, May 2021.
[30] R. Girshick, J. Donahue, T. Darrell, and J. Malik, ‘‘Rich feature hierarchies
for accurate object detection and semantic segmentation,’’ in Proc. IEEE
Conf. Comput. Vis. Pattern Recognit., Jun. 2014, pp. 580–587. ZUQI ZOU was born in China, in 1997.
[31] R. Girshick, ‘‘Fast R-CNN,’’ in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), He received the bachelor’s degree in electrical
Dec. 2015, pp. 1440–1448. engineering and automation from Shanghai Dianji
[32] Y. Jiang, G. Tong, H. Yin, and N. Xiong, ‘‘A pedestrian detection method University, in 2020. He is currently pursuing the
based on genetic algorithm for optimize XGBoost training parameters,’’ master’s degree in electrical engineering with the
IEEE Access, vol. 7, pp. 118310–118321, 2019. University of Shanghai for Science and Tech-
[33] T. Sangeetha, M. Kumaraguru, S. Akshay, and M. Kanishka, ‘‘Biometric nology, Yangpu, Shanghai. His research interest
based fingerprint verification system for ATM machines,’’ J. Phys., Conf. includes motor control.
Ser., vol. 1916, no. 1, May 2021, Art. no. 012033.
[34] Z. Changhong and X. Reneng, ‘‘Intelligent laboratory access control sys-
tem based on ZigBee technology,’’ in Proc. Int. Conf. Virtual Reality Intell.
Syst. (ICVRIS), Jul. 2020, pp. 688–690.
[35] C. Xiong, ‘‘Design of intelligent garbage classification bin based on TAO HU was born in China, in 1997. He received
LD3320,’’ in Proc. Int. Conf. Signal Process. Mach. Learn. (CONF- the bachelor’s degree in electrical engineering and
SPML), Nov. 2021, pp. 7–10. automation from Changzhou University, in 2020.
[36] J. Kowalski, A. Jaskulska, K. Skorupska, K. Abramczuk, C. Biele, He is currently pursuing the master’s degree
W. Kopeć, and K. Marasek, ‘‘Older adults and voice interaction: A pilot
in electrical engineering with the University of
study with Google home,’’ in Proc. Extended Abstr. CHI Conf. Hum.
Shanghai for Science and Technology, Yangpu,
Factors Comput. Syst., May 2019, pp. 1–6.
Shanghai. His research interests include embedded
[37] A. Lee, K. Oura, and K. Tokuda, ‘‘MMDAgent—A fully open-source
toolkit for voice interaction systems,’’ in Proc. IEEE Int. Conf. Acoust., technology and the control method of permanent
Speech Signal Process., May 2013, pp. 8382–8385. magnet synchronous motors.
[38] D. Wang, X. Wang, and S. Lv, ‘‘An overview of end-to-end automatic
speech recognition,’’ Symmetry, vol. 11, no. 8, p. 1018, Aug. 2019.
[39] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk,
and Q. V. Le, ‘‘SpecAugment: A simple data augmentation method for
TIANHAO LI was born in China, in 1998.
automatic speech recognition,’’ 2019, arXiv:1904.08779.
He received the bachelor’s degree in electrical
[40] K. Xia, X. Xie, H. Fan, and H. Liu, ‘‘An intelligent hybrid–integrated
system using speech recognition and a 3D display for early childhood engineering and automation from the Henan Uni-
education,’’ Electronics, vol. 10, no. 15, p. 1862, Aug. 2021. versity of Science and Technology, in 2021. He is
[41] P. Fu, D. Liu, and H. Yang, ‘‘LAS-transformer: An enhanced transformer currently pursuing the master’s degree in electrical
based on the local attention mechanism for speech recognition,’’ Informa- engineering with the University of Shanghai for
tion, vol. 13, no. 5, p. 250, May 2022. Science and Technology, Yangpu, Shanghai. His
[42] J.-W. Hu, B.-Y. Zheng, C. Wang, C.-H. Zhao, X.-L. Hou, Q. Pan, and Z. Xu, research interests include embedded technology,
‘‘A survey on multi-sensor fusion based obstacle detection for intelligent the application of power electronics technology in
ground vehicles in off-road environments,’’ Frontiers Inf. Technol. Elec- power systems, the modulation and control strate-
tron. Eng., vol. 21, no. 5, pp. 675–692, May 2020. gies of multilevel converters, and the development of high-power power
[43] Z. Wang, Y. Wu, and Q. Niu, ‘‘Multi-sensor fusion in automated driving: electronics devices.
A survey,’’ IEEE Access, vol. 8, pp. 2847–2868, 2020.