Key Technologies of Intelligent Transportation
M. Huang et al.
Abstract
With the development of digital image processing technology, the application scope of image recognition has become wider and wider, touching many aspects of daily life. In particular, rapid urbanization and the popularization of automobiles in recent years have led to a sharp increase in traffic problems in many countries, so intelligent transportation technology based on image processing and optimized control has become an important research field of intelligent systems. Starting from an analysis of the application requirements of intelligent transportation systems, this paper designs a high-definition bayonet (checkpoint) system for intelligent transportation. It combines data mining technology with the distributed parallel Hadoop framework to design the architecture of an intelligent traffic operation state data analysis system and to analyze the mining algorithms suitable for that system. Experiments on real traffic big data demonstrate the feasibility of the intelligent traffic operation state data analysis system, which aims to provide decision-making support on traffic states. Using the deployed Hadoop server cluster and an improved AdaBoost algorithm under the MapReduce programming model, the example runs large-scale traffic data, performs traffic flow analysis and speed and overspeed analysis, and extracts information conducive to traffic control. This demonstrates the feasibility and effectiveness of using the Hadoop platform to mine massive traffic information.
1. Introduction
For a long time, traffic problems have been a common public concern. Rapid economic growth has increased the number of motor vehicles, and the improvement of people's living standards has led to an increase in the number of private cars. Land resources, however, are limited, so the existing road network falls short of what the growing vehicle population requires, producing a series of traffic problems such as congestion, traffic-related pollution and low transportation efficiency. In the initial stage of traffic reform and development, the construction of comprehensive urban roads eased the contradiction between the rapid growth of vehicles and road development. Ma [13] proposed that intelligent traffic construction can effectively integrate urban transportation resources and solve urban traffic problems. Based on the development goals and connotation of intelligent transportation, an evaluation index system for the development level of urban intelligent transportation was constructed; eight cities were selected as research samples, and their intelligent traffic level was evaluated using the entropy weight method and the grey correlation method.
Domestic research teams have also begun to study this field intensively. Yu [23] pointed out that the current evaluation system of intelligent transportation is in its infancy and that there is a great lack of concepts and technologies, so it is imperative to establish an evaluation system. Haitao [6] processed the collected images, obtained image information reflecting the intensity of vehicle flow, confirmed potential traffic congestion, further assisted the evaluation with dynamic image detection, and accurately determined the traffic congestion situation in dynamic mode. Changqing [3], based on the extreme learning machine (ELM) and an improved principal component analysis (PCA) method, proposed a PCA-ELM image recognition algorithm for traffic sign recognition systems: HOG features are extracted, their dimensionality is reduced with the improved PCA method, and the ELM model is trained on the features to identify traffic sign images. Yufu [24] can effectively identify the license plate number of the vehicle under the lens through image segmentation and image recognition methods, and integrates the method into the camera hardware to realize real-time recognition. Zhe [25] and others integrated image processing and image recognition technology into a vehicle flow monitoring system, which greatly improves the efficiency of detection and the quality of management; their work mainly studies image processing and image recognition technology in combination with intelligent transportation.
Yixin [21] designed an intelligent traffic light control system based on an FPGA chip and deep learning, implemented in the Quartus II environment using the VHDL language combined with hardware circuits; the results show that adding an image recognition module enables the system to manage traffic intersections quickly and effectively in an emergency, thus realizing an intelligent traffic light control system. Hongping [8] applied a series of processing steps to the captured images, including image conversion, morphological dilation, image noise processing, threshold segmentation and edge detection, extracting image feature values to detect and identify traffic sign information and complete the identification of traffic signs, an essential part of the ITS. Mingsheng [16], with traffic management as the core, analyzed the application of image recognition technology in ITS, studied license plate recognition technology in urban traffic management and image recognition technology in pedestrian traffic safety management, promoting the development of ITS and providing a reference for other researchers. The experimental results of Weibin [19] show that the proposed method achieves higher recognition accuracy and better time efficiency than traditional recognition methods, which greatly improves the work of the relevant departments and is crucial for the seizure of illegal and criminal vehicles. Mengshang [14], on the basis of ensuring the recognition efficiency and accuracy of license plates, combined with network interconnection and data sharing, can completely replace the traditional IC card lane mode, shortening the time vehicles spend entering and leaving the toll station.
Experts and scholars at home and abroad have thus achieved fruitful results. Related work has also optimized visual image recognition systems to provide a better human-computer interaction experience and achieve the ideal effect of visual image recognition in multi-channel display control systems. Jiangshan [10], for the CT image reconstruction problem, first deduced the relationship between the attenuation coefficient and the pixel value, and then used various filters for filtered back-projection reconstruction. In another study, the detection error of a proposed vision sensor was within 4%, and the authors verified the rationality and applicability of the vision sensor, its detection algorithm and the image processing flow.
Liping [12], in order to effectively improve the quality of acoustic-resolution photoacoustic images, addressed the blurred boundaries and loss of detail caused by interpolation and proposed an image enhancement method combining fuzzy sets with fractional differential wavelet enhancement. Mingjun [15] and others, to obtain the maximum contrast, set constraints on the gain factor and searched for its optimal value; at the same time, in order to remove the blocking effect introduced by block-wise enhancement, the gain factor is optimized with a guided filter. The experimental results show that the method can effectively enhance degraded teaching video images, with better enhancement effect and efficiency.
In order to solve some of the technical problems in smart transportation, this paper combines data mining technology with the distributed parallel Hadoop framework, designs and analyzes mining algorithms suitable for the system, and verifies the feasibility of the key technologies of intelligent transportation through experiments on real traffic big data. Using the deployed Hadoop server cluster and the improved algorithms, the example runs traffic big data, carries out traffic flow analysis and speed and overspeed analysis, and extracts information that is beneficial to traffic control, which proves the feasibility and effectiveness of using the Hadoop platform to mine massive traffic information. The face recognition results show that when the number of test samples is small, the recognition rate of the SURF algorithm is relatively high; when the number of samples increases, the number of non-frontal faces in the samples also increases, so fewer feature points of the test samples are matched to those of the training samples and the recognition rate drops. The vehicle logo recognition results show that the recognition accuracy for complex vehicle logos declines to a certain degree compared with simple vehicle logos, but the decline is smallest for the SIFT feature extraction method, so SIFT also gives the best recognition effect. In the overspeed analysis, a very large support indicates that the station has a serious overspeed phenomenon in the corresponding direction, and the higher the confidence, the more likely vehicles at that station location are to overspeed in that direction.
2. Method
2.1. Image preprocessing
The purpose of image preprocessing is to suppress features that are irrelevant for a given application, enhance features of interest, and prepare for subsequent processing. Commonly used image preprocessing techniques include image filtering, image enhancement and image sharpening.
(1) Image filtering

Image filtering is one of the important means of eliminating noise and is an indispensable step in image preprocessing. The simplest form is neighborhood (mean) filtering, in which each output pixel is the average of the pixels in a template neighborhood:

g(x, y) = (1/m) Σ_{(i, j) ∈ S} f(i, j),

where f(i, j) ranges over the m pixels of the template neighborhood S centered at (x, y), and g(x, y) is the resulting mean value assigned to the target pixel.
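As a brief illustration (a sketch in Python with OpenCV, not the system's own code; the 3 × 3 kernel size is an assumption), the neighborhood mean filter can be applied as follows:

import cv2

def mean_filter(image, ksize=3):
    # Box filter: each output pixel g(x, y) is the average of the
    # m = ksize * ksize pixels f(i, j) in the template neighborhood.
    return cv2.blur(image, (ksize, ksize))

if __name__ == "__main__":
    img = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    if img is not None:
        cv2.imwrite("frame_smoothed.jpg", mean_filter(img, ksize=3))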
(2) Image sharpening

An image to be sharpened requires a relatively high signal-to-noise ratio; otherwise the signal-to-noise ratio of the sharpened image becomes even lower, which reduces the image quality. The image is therefore usually smoothed first and then sharpened. A commonly used differential sharpening method is Laplacian sharpening, based on the Laplacian operator

∇²f = ∂²f/∂x² + ∂²f/∂y².

For a discrete digital image f(i, j), the Laplacian is approximated by the difference form

∇²f(i, j) = f(i+1, j) + f(i−1, j) + f(i, j+1) + f(i, j−1) − 4f(i, j),

and the sharpened image is obtained by subtracting this Laplacian response from the original image.
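A minimal illustrative sketch of Laplacian sharpening in Python/OpenCV (not the authors' implementation; the smoothing step and the weighting factor are assumptions):

import cv2
import numpy as np

def laplacian_sharpen(image, blur_ksize=3, weight=1.0):
    # Smooth first so that sharpening does not amplify noise.
    smoothed = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    # Discrete Laplacian: f(i+1,j) + f(i-1,j) + f(i,j+1) + f(i,j-1) - 4f(i,j).
    lap = cv2.Laplacian(smoothed, cv2.CV_64F)
    # Subtract the Laplacian response to enhance edges: g = f - w * Laplacian(f).
    sharpened = smoothed.astype(np.float64) - weight * lap
    return np.clip(sharpened, 0, 255).astype(np.uint8)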
The face detection stage is based on the Haar-like rectangle feature. Each feature template consists of two kinds of rectangles, white and black, and the feature value is equal to the sum of all pixels in the white rectangles of the template minus the sum of all pixels in the black rectangles. Five basic Haar features are shown in Fig. 1. In Fig. 1(a), 1 × 2 indicates that the template is a pixel block of 1 row and 2 columns, and the others are similar. A rectangular template can be placed over any pixel area of the input image and its size can be changed; as these factors vary, a single detection window contains a very large number of rectangular features. In order to compute the feature values quickly, the concept of the integral image is introduced. The integral image comes from the integral operation, and its expression is
f(x, y) = ∫₀^y ∫₀^x g(x′, y′) dx′ dy′.    (4)
In the above formula, f(x, y) represents the integral image and g(x′, y′) represents the original image: the value of the integral image at (x, y) equals the sum of all pixel values of the original image above and to the left of (x, y).
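For illustration only (a sketch, not the paper's code), the integral image and a constant-time rectangle sum can be computed in Python as follows; with it, any Haar feature value reduces to a handful of array lookups:

import numpy as np

def integral_image(img):
    # ii(x, y) = sum of all original pixels above and to the left of (x, y), inclusive.
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum over a rectangle using four integral-image lookups (O(1) per rectangle),
    # which is what makes Haar feature evaluation fast.
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

A 1 × 2 Haar feature value, for example, is rect_sum over the white rectangle minus rect_sum over the black rectangle.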
The relationship used for image template matching can be expressed as follows: given two images I1(x, y) and I2(x, y) of sizes m1 × n1 and m2 × n2, respectively, the mapping between the two images is

I2(x, y) = g(I1(s(x, y))),    (5)

where s represents the geometric coordinate transformation and g represents the image luminance transformation.
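As an illustrative sketch of Eq. (5) in Python/OpenCV (not the authors' code; the rotation angle, gain and offset below are arbitrary assumptions), s can be an affine transform and g a linear gray-level change:

import cv2
import numpy as np

def apply_mapping(i1, angle_deg=5.0, gain=1.2, offset=-10.0):
    h, w = i1.shape[:2]
    # Geometric coordinate transformation s: a small rotation about the image center.
    s = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    warped = cv2.warpAffine(i1, s, (w, h))
    # Luminance transformation g: linear gain and offset on the gray levels.
    return np.clip(gain * warped.astype(np.float32) + offset, 0, 255).astype(np.uint8)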
The Gaussian scale space of an image I(x, y) is constructed as L(x, y, σ) = G(x, y, σ) ∗ I(x, y), where G(x, y, σ) represents a Gaussian kernel function and σ represents the scale-space factor. The difference-of-Gaussian (DOG) scale images are then constructed from adjacent Gaussian scale images,

D(x, y, σ) = L(x, y, kσ) − L(x, y, σ).

Candidate feature points are located by searching for extrema in the Gaussian scale space and the DOG scale space. The search neighborhood of a pixel consists of the eight pixels adjacent to it at its own scale and the 9 × 2 points at the corresponding positions in the upper and lower adjacent scales. Searching for extreme points over this neighborhood ensures that extrema are detected in both the scale space and the 2D image space. These extreme points serve as the initial feature points, and the second-order Taylor expansion of the DOG function is then used to locate the feature points accurately.
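A brief illustrative sketch of this DOG-based keypoint detection in Python (assuming OpenCV >= 4.4, where SIFT is available as cv2.SIFT_create; not the paper's code):

import cv2

def extract_keypoints(gray):
    # SIFT internally builds the Gaussian and DOG scale spaces, searches the
    # 8 + 9 x 2 neighborhood for extrema, and refines keypoint locations with
    # a second-order Taylor expansion of the DOG function.
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors

if __name__ == "__main__":
    img = cv2.imread("logo.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image path
    if img is not None:
        kps, desc = extract_keypoints(img)
        print(len(kps), "keypoints")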
For image enhancement, histogram equalization is applied. The normalized histogram of the image is

p(r_k) = h(r_k)/n = n_k/n,   k = 0, 1, 2, …, L − 1,    (9)

where n_k is the number of pixels with gray level r_k, n is the total number of pixels in the image and L is the number of gray levels. After equalization based on Eq. (9), the histogram is spread over the full range [0, 1], and the brightness and contrast of the image are obviously enhanced.
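An illustrative sketch in Python (not the paper's code): OpenCV's built-in routine implements this normalized-histogram mapping, and the manual version shows the intermediate steps:

import cv2
import numpy as np

def equalize(gray):
    # Computes p(r_k) = n_k / n, accumulates it into a cumulative distribution
    # and remaps the gray levels so the histogram covers the full range.
    return cv2.equalizeHist(gray)

def equalize_manual(gray, levels=256):
    hist = np.bincount(gray.ravel(), minlength=levels)
    p = hist / gray.size                      # p(r_k) = n_k / n, Eq. (9)
    cdf = np.cumsum(p)                        # cumulative distribution in [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[gray]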
In the system architecture, each layer provides data and services to the upper layer. The data processing and analysis layer in this architecture combines data mining technology with Hadoop technology to analyze traffic data and traffic status.

Hadoop is a framework for applications running on large server clusters and excels at complex analysis of large datasets. As an open-source distributed technology, Hadoop has many advantages: (1) low construction cost: it runs on Linux and only requires inexpensive commodity hardware; (2) reliability: it provides fault-tolerance and replica mechanisms; (3) scalability: storage and computing resources can easily be added or removed, so the scale of computation is essentially unlimited; (4) it is implemented in native Java. Hadoop includes two key technologies: reliable data storage with HDFS and high-performance parallel data processing with MapReduce. Simply put, Hadoop is an open-source implementation of the MapReduce programming model. In addition, the Hadoop ecosystem also includes the distributed database HBase, the cluster coordinator ZooKeeper, the serialization module Avro and other projects, as shown in Fig. 3.
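To make the data processing layer concrete, the following is an illustrative MapReduce sketch in Python (a local simulation in the style of a Hadoop Streaming job, not the authors' Java implementation; the record format "station,direction,speed,limit" is an assumption):

import sys
from collections import defaultdict

def map_record(line):
    # Map step: parse "station,direction,speed,limit" and emit
    # ((station, direction), overspeed_flag), or None for malformed records.
    fields = line.strip().split(",")
    if len(fields) < 4:
        return None
    station, direction, speed, limit = fields[:4]
    try:
        return (station, direction), 1 if float(speed) > float(limit) else 0
    except ValueError:
        return None

def reduce_pairs(pairs):
    # Reduce step: per (station, direction), count total and overspeed records.
    counts = defaultdict(lambda: [0, 0])
    for key, flag in pairs:
        counts[key][0] += 1
        counts[key][1] += flag
    return counts

if __name__ == "__main__":
    pairs = filter(None, (map_record(line) for line in sys.stdin))
    for (station, direction), (total, over) in reduce_pairs(pairs).items():
        print(f"{station}\t{direction}\t{over}\t{total}\t{over / total:.4f}")

On the cluster, the map and reduce functions would be split into separate scripts and submitted through the standard Hadoop Streaming jar over HDFS data; the resulting per-station counts are the kind of information used later in the traffic flow and overspeed analysis.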
3. Experiment
3.1. System face detection algorithm implementation
The face detection implementation follows the Haar-feature face detection routine in OpenCV, directly using the pretrained OpenCV Haar cascade classifier haarcascade_frontalface_alt.xml, which is loaded locally in the program. A human eye classifier is then added on this basis; the face detection flow chart is shown in Fig. 4. After an image is input, each image to be tested is first preprocessed. The original images come either from a camera or from the capture machine. If the original image comes from the camera, the program performs skin color detection on each frame of the video and filters out the images containing facial skin color; if the original image comes from the capture machine, the program performs skin color detection on each captured image to filter the images containing face color. The face classifier is then loaded and Haar-feature faces are detected with the AdaBoost algorithm. The filtered image is passed along with the cascade classifier to the OpenCV function cvHaarDetectObjects(), which returns the detected face variable. If the variable is 0, the program returns to load the next filtered image; if it is not 0, it continues to the positioning detection of both eyes.
The cvHaarDetectObjects function used in the program takes the corresponding face classifier and returns the detected faces as a sequence of rectangles. The prototype of the cvHaarDetectObjects function (legacy OpenCV C API) is as follows:

CVAPI(CvSeq*) cvHaarDetectObjects(
    const CvArr* image,
    CvHaarClassifierCascade* cascade,
    CvMemStorage* storage,
    double scale_factor CV_DEFAULT(1.1),
    int min_neighbors CV_DEFAULT(3),
    int flags CV_DEFAULT(0),
    CvSize min_size CV_DEFAULT(cvSize(0, 0)),
    CvSize max_size CV_DEFAULT(cvSize(0, 0)));
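For reference, a rough sketch of the same detection flow in the modern OpenCV Python API (not the system's actual code; the cascade file locations and parameter values are assumptions):

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_alt.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_faces(gray):
    # Equivalent of cvHaarDetectObjects: returns (x, y, w, h) face rectangles,
    # each paired with the eye rectangles found inside it.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi)  # two-eye positioning step
        results.append(((x, y, w, h), eyes))
    return results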
3.2. License plate recognition

(1) License plate color recognition

For the standard blue license plate with white characters, a pair of colored points is defined as follows:

- Point A, with a blue area on its left and a white area on its right.
- Point B, with a white area on its left and a blue area on its right.
- The distance between A and B lies within a certain threshold.

According to the above definition, the license plate region contains a large number of such colored point pairs.
In order to reduce the amount of computation, note that the colored point pairs of the license plate are concentrated in areas where the color changes sharply. Therefore, before color recognition, edge extraction is generally performed in the preprocessing stage to improve precision and efficiency. Figure 5 shows the resulting effect.
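An illustrative sketch of this preprocessing in Python/OpenCV (not the system's code; the Canny thresholds and the HSV range for the blue plate background are assumptions):

import cv2

def plate_color_edge_mask(bgr):
    # Edge extraction first, so colored-point pairs are only searched where
    # the color changes sharply (white characters against the blue background).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Rough HSV range for the blue plate background (threshold values assumed).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 80, 60), (130, 255, 255))
    # Candidate colored-point regions: strong edges lying on blue areas.
    return cv2.bitwise_and(edges, blue)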
(2) License plate shape recognition

On the basis of finding the colored point pairs of candidate license plate areas, morphological processing is required to find dense areas of colored pairs and highlight the license plate position. A morphological closing operation and repeated erosion operations are applied to the image obtained after searching for the colored points, giving the morphologically processed regions; the shape of each region is then judged against the license plate shape proportion coefficient to determine whether it is the license plate region. As shown in Fig. 6, after the colored-point search and morphological processing, the candidate region conforms to the license plate shape features and can therefore be initially identified as the license plate area.
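A sketch of this morphology-and-shape check in Python/OpenCV (the kernel sizes, erosion count and aspect-ratio limits are assumptions, not values from the paper):

import cv2

def find_plate_candidates(point_mask, min_ratio=2.0, max_ratio=6.0):
    # Closing merges dense colored-point responses into solid blobs;
    # erosion then removes thin spurious connections.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))
    closed = cv2.morphologyEx(point_mask, cv2.MORPH_CLOSE, kernel)
    eroded = cv2.erode(closed, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)),
                       iterations=2)
    candidates = []
    contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h == 0:
            continue
        # Keep regions whose width/height proportion matches a license plate.
        if min_ratio <= w / h <= max_ratio:
            candidates.append((x, y, w, h))
    return candidates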
In the Qt Creator development environment, the face and license plate recognition system was developed using OpenCV. A local image to be tested is imported, the license plate is then identified using the steps described in this section, and the recognition result is finally output in text form on the interface. The system supports importing and recognizing a single local image as well as batch recognition; when 100 local license plate images were imported automatically, the recognition effect was ideal, reaching 90%. Images whose angular deviation from the normal license plate orientation is large tend to be recognized incorrectly.
4. Results
4.1. Analysis of face recognition results
In the test, the program reads the ORL face database images in sequence; a total of 400 pictures are used for face detection, of which six faces are not detected, so the detection rate reaches 98.5%. As shown in Fig. 7, Fig. 7(a) is the original input image and Fig. 7(b) shows the detected face and human eyes. Since the original images have a resolution of only 92 × 112 pixels, the eye region is sometimes too unclear to be detected; therefore, a high-definition camera is used in the HD bayonet system. When the program obtains face images from the video surveillance area through the camera and runs for one hour, detecting every frame of the video and outputting face images, the face detection rate is relatively low, mainly because of hardware limitations of the camera: the obtained image resolution is too low.
The comparison algorithms are based on those contained in the OpenCV library and are tested on the ORL face database. The ORL face database consists of 40 people; the first 2, 4 and 6 images of each person are taken as the training set, i.e. training sets of 80, 160 and 240 face images, respectively. The recognition rate and time efficiency of the PCA and SURF algorithms are compared; Table 3 shows the recognition rates of the algorithm tests.

It can be seen from the above results that when the number of test samples is small, the recognition rate of the SURF algorithm is relatively high; when the number of samples increases, the number of non-frontal faces in the samples also increases, so fewer feature points of the test samples are matched to those of the training samples and the recognition rate drops. It is therefore necessary to extract frontal faces in the detection phase.
4.2. Analysis of vehicle logo recognition results
Through the experimental results, it can be found that for several simple car logo images (Mazda, Honda, Lexus, Citroen, Chevrolet, Mercedes-Benz), the correct recognition rate of the vehicle logos can reach more than 85% with all of the feature extraction methods. The advantages of SIFT are not obvious when the vehicle logo features are relatively simple to extract, although its overall performance is still better than that of the other feature vector extraction methods. In order to make an overall performance evaluation, the performance on vehicle logos with relatively complex structures was also compared. The experimental results are shown in Fig. 10.
From the above experimental results, it can be clearly seen that when identifying vehicle logos with complex structures, the efficiency of SIFT is obviously better than that of the other feature extraction methods. In the experiment, Rolls-Royce, Maserati, Koenigsegg, Saab, Aston Martin and Jaguar logos were selected for comparison. Although the final recognition rates of all the feature extraction methods reach more than 80%, the recognition rate for complex vehicle logos declines to a certain degree compared with simple vehicle logos; the decline is smallest for the SIFT feature extraction method, so the recognition effect of SIFT is also the best.
4.3. Overspeed analysis

In the overspeed analysis, support and confidence are computed for the overspeed pattern at each station and direction. A very large support indicates that the station has a serious overspeed phenomenon in that direction; the higher the confidence, the more likely vehicles at that station location are to overspeed in that direction.
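For illustration (a sketch with an assumed record format, not the paper's implementation), the support and confidence of the pattern 'station S, direction D => overspeed' can be computed as:

from collections import Counter

def overspeed_rules(records):
    # records: iterable of (station, direction, overspeed) tuples, overspeed being True/False.
    # Returns {(station, direction): (support, confidence)}.
    total = 0
    key_count = Counter()    # occurrences of (station, direction)
    over_count = Counter()   # occurrences of (station, direction) with overspeed
    for station, direction, overspeed in records:
        total += 1
        key_count[(station, direction)] += 1
        if overspeed:
            over_count[(station, direction)] += 1
    rules = {}
    for key, n in key_count.items():
        support = over_count[key] / total   # share of all records that are overspeed here
        confidence = over_count[key] / n    # P(overspeed | station, direction)
        rules[key] = (support, confidence)
    return rules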
5. Discussions
This paper mainly introduces the key technologies of intelligent transportation based on image recognition optimization and control. First, several commonly used classical face detection algorithms are introduced, including feature-based methods and methods based on statistical learning, and their principles and advantages are described in detail. Building on these ideas, our face detection algorithm first uses skin color detection to discard images without skin color information; the AdaBoost algorithm is then used to detect faces in the remaining images; finally, binocular (eye) positioning detection is performed to obtain an accurate frontal face image. The algorithm is implemented and evaluated in simulation on the ORL face database. Based on this work, rapid detection of frontal faces can be realized, which benefits the recognition rate in the subsequent recognition work.
The image preprocessing function is called automatically during the recognition process. When there are multiple faces in the original image, all of the extracted faces can be viewed after detection. In the HD bayonet system, the development requirement is to use a high-definition camera combined with a smart fill-light device so that the captured high-definition images contain clear faces of drivers and other traffic participants. During recognition, the image containing the face is preprocessed with the face detection algorithm of this paper, face location detection is performed, the face region image is obtained, and the final features are extracted and compared against the trained XML file to identify the face and output the result. Owing to limitations in code optimization and computer hardware, reading the local pictures in sequence makes recognition take a long time, and the subsequent data processing and code optimization still need to be improved.
The license plate number is displayed in a control on the interface to realize license plate recognition; a license plate character library must be established before identification and is called directly in the program. During an internship at the company, the license plate recognition subsystem was embedded in the HD bayonet system. Since the HD bayonet system was developed as a Web application, its interface differs from the small system written in the laboratory; in the bayonet system, the license plate recognition results are displayed directly on the picture in text form.
6. Conclusions
This paper studies the application of face recognition and license plate recognition in the bayonet system. In intelligent traffic, recognizing the faces and license plates of fugitives, criminals and wanted vehicles through the bayonet system helps to realize the informatization and intelligence of traffic safety management and to improve social harmony. The high-definition bayonet system uses a five-megapixel (1-inch CCD) capture unit to record multiple high-definition images of pedestrians, vehicles and other objects passing through the detection area. The recorded images clearly reflect the characteristics of pedestrians and vehicles, including the front-seat occupants of the vehicle, their facial features and clothing, the driving lane and the surroundings. The server acquires the images captured by the front-end capture machine and performs processing and analysis, mainly including face recognition, license plate recognition, color recognition and the like. The analysis results are transmitted to the central management platform through the network and displayed by that platform. The HD bayonet system realizes functions such as real-time vehicle detection, license plate recognition and face recognition, and the client developed in Web form makes it convenient for users to log in to the system and monitor remotely at any time.
(1) This paper designs a high-definition bayonet system for intelligent transportation. Data mining technology and the distributed parallel Hadoop framework are used to design the architecture of intelligent traffic operation state data analysis and to analyze the mining algorithms suitable for the system. Experiments on traffic big data prove the feasibility of the intelligent traffic operation state data analysis system, which aims to provide decision-making support on traffic states. The face recognition results show that when the number of test samples is small, the recognition rate of the SURF algorithm is relatively high; when the number of samples increases, the number of non-frontal faces in the samples also increases, so fewer feature points of the test samples are matched to those of the training samples and the recognition rate decreases.
(2) The vehicle logo recognition results show that the recognition accuracy for complex vehicle logos declines to a certain degree compared with simple vehicle logos, but the decline is smallest for the SIFT feature extraction method, so the recognition effect of SIFT is also the best. In the overspeed analysis, a very large support indicates that the station has a serious overspeed phenomenon in the corresponding direction, and the higher the confidence, the more likely vehicles at that station location are to overspeed in that direction. Using the deployed Hadoop server cluster and the improved AdaBoost algorithm under the MapReduce programming model, the example runs traffic big data, carries out traffic flow analysis and speed and overspeed analysis, and extracts information favorable to traffic control, which proves the feasibility and effectiveness of using the Hadoop platform to mine massive traffic information.
Acknowledgments
This work was supported by the Liaoning Provincial Social Science Planning Fund Project (L18BGL026) and the Basic Scientific Research Project of Institutions of Higher Education.
References
1. W. Bingxin, Q. Meiyan, Z. Qingkai, Z. Xunqing, Z. Wei and Z. Haisen, Identification and width calculation optimization method for shield tunnel segment cracks, Shanxi Architecture 45(1) (2019) 155–156.
2. S. Bowen, Z. Zhiming, G. Jichang and Z. Tianyi, Vision sensor detection algorithm based on combined laser structured light and image processing flow optimization, J. Tsinghua Univ. (Science and Technology) 6 (2019) 445–452.
3. Z. Changqing and Y. Nan, An image recognition algorithm for traffic sign recognition system, Electron. Technol. 7 (2019) 60–64.
4. W. Feiyan and Z. Hongyan, Optimization design and implementation of driver visual
image recognition system, Comput. Simul. 33(8) (2016) 158–162.
5. L. Fugui and L. Mingzhen, Structural optimization of deep CNN model based on convolution kernel decomposition and its application in small image recognition, J. Jinggangshan Univ. (Natural Science) 39(2) (2018) 31–39.
6. G. Haitao, Research on regional traffic congestion state discrimination based on image recognition technology, Inform. Record. Mater. 20(1) (2019) 74–76.
7. X. Haopeng and M. Hui, Research and design of deck truck identification system based on image processing technology, Inform. Syst. Eng. 2017(9) (2017) 77–79.
8. S. Hongping, Image recognition—detection and recognition of traffic signs, Technol. Innov. Appl. 1 (2018) 130–131.
9. S. Jagatheeswaran, Achieving smart transportation system for Chennai: Problems and issues, Proc. 2018 World Transportation Conf. (China Association for Science and Technology, Ministry of Transport, Chinese Academy of Engineering, China Highway Society, 2018), p. 11.
10. L. Jiangshan, Z. Yihua, Y. Zhu and W. Zhiyong, Optimization of CT system calibration and filter back projection image reconstruction based on optimization, Exp. Sci. Technol. 17(1) (2019) 12–16.
11. N. Liao, Research on the design of agricultural transportation system based on smart city
internet of things, Proc. 2018 5th Int. Conf. Electrical & Electronics Engineering and
Computer Science (ICEEECS 2018) (Institute of Management Science and Industrial
Engineering: Computer Science and Electronic Technology International Society, 2018),
p. 5.
12. F. Liping, W. Cheng, C. Zhaoxue, Z. Gang, C. Minghui, X. Huazhong and Z. Dawei,
Optimization of acoustic resolution scanning photoacoustic image, Opt. Tech. 45(1)
(2019) 95–101.
13. M. Ma, Study on evaluation model of smart transportation development level based on
entropy method and grey relational analysis, Proc. 4th Int. Symp. Social Science (ISSS
2018) (Wuhan Zhicheng Times Culture Development Co., Ltd., 2018), p. 5.
14. Y. Mengshang, Discussion on the No-cartoon line model of expressway based on license
plate recognition technology, J. Highway Transport. Technol. (Application Technology
Edition) 13(1) (2017) 54–55.
15. Z. Mingjun, Y. Wenjing and W. Ying, The teaching video image enhancement method
based on local contrast optimization, Mod. Electron. Tech. 42(2) (2019) 75–79.
18. Y. Tang, Research on the development of intelligent transportation based on smart city, Proc. 2018 3rd Int. Conf. Control, Automation and Artificial Intelligence (CAAI 2018) (Advanced Science and Industry Research Center: Science and Engineering Research Center, 2018), p. 4.
19. D. Weibin and C. Rui, Study on 3D gesture image recognition simulation in traffic command, Comput. Simul. 34(11) (2017) 95–98.
20. L. Wenxuan and S. Jifeng, A road sign text recognition algorithm based on composite optimization for deep Boltzmann machine, Comput. Eng. Sci. 40(1) (2018) 79–85.
21. H. Yixin, Design and implementation of intelligent traffic light system based on FPGA and deep learning, China Collective Econ. 573(25) (2018) 73–74.
22. X. Miao, W. Chen, F. Bi et al., Mine fire image recognition based on improved FOA optimized SVM, Comput. Eng. 45(4) (2019) 267–274.
23. W. Yu, Research on urban intelligent transportation system, Light Ind. Sci. Technol.
35(4) (2019) 93–94.
24. Z. Yufu, The application of Skynet system based on big data in intelligent transportation,
Inside and Outside Lantai 234(10) (2018) 82.
25. Z. Zhe and L. Ting, Application of image recognition in the field of intelligent transportation, Wireless Interconn. Technol. 15(16) (2018) 139–140.