
Review of Computer Engineering Research

2022 Vol. 9, No. 3, pp. 135-149.


ISSN(e): 2410-9142
ISSN(p): 2412-4281
DOI: 10.18488/76.v9i3.3109
© 2022 Conscientia Beam. All Rights Reserved.

DETECTION AND RECOGNITION OF TRAFFIC SIGN BOARDS USING RANDOM FOREST CLASSIFIER

B. Jagadeesh1+
D. V. Vidhya Sree2

1,2 Gayatri Vidya Parishad College of Engineering (A), Visakhapatnam, A.P., India.
1 Email: [email protected] Tel: +919440043096
2 Email: [email protected] Tel: +919866199532
(+ Corresponding author)
ABSTRACT

Article History
Received: 16 May 2022
Revised: 12 July 2022
Accepted: 29 July 2022
Published: 26 August 2022

Keywords: Advanced driver assistance system, Traffic sign detection, Color thresholding, HOG, Random forest classifier, Traffic sign recognition.

A traffic sign recognition system is a vital part of an intelligent transportation system, since it provides information that helps drivers drive more safely and effectively. This paper addresses the problem in two steps. The first is the detection of traffic signs, which itself has two stages: after an image is preprocessed to emphasize relevant information, signs are segmented using color thresholding and shape-based detection. The second task is the recognition of traffic signs, which also involves two steps: the Histogram of Oriented Gradients (HOG) is used as a feature extractor, and a Random Forest Classifier is used in the recognition stage. Experimental results show that the Random Forest Classifier achieved an accuracy of 95.59%, precision of 97.55%, and recall of 95.37% in the recognition process, and 90.34% accuracy in the detection process.

Contribution/Originality: In this research work, a research question on the detection and recognition of traffic sign boards is formulated and addressed using the Random Forest Classifier. The authors created the models, implemented the code, conducted the experiments for evidence collection, prepared and presented the published work, and coordinated the research activities leading to this publication.

1. INTRODUCTION
Traffic signboards are essential for regulating traffic and preventing accidents. They are made to be distinctive, sturdy, and easily noticeable to drivers, with little variation in design. As a result, automatic traffic sign detection and identification have become crucial features for adaptive cruise control. Road sign detection and identification systems are important not just for advanced driver assistance systems but also for other practical systems such as urban scene interpretation, self-driving vehicles, and sign surveillance for maintenance. Detecting and recognizing traffic signs, on the other hand, is difficult due to a number of issues, including:
• A wide range of illumination conditions.
• Obstacles, like trees, pedestrians, and vehicles.
• Motion blur caused by vehicle speed and friction.
• Vandalism.
• The current meteorological conditions.
• Background clutter (for example, the presence of skyscrapers, roadways, and other signboards).


The purpose of signboard detection is to figure out where potential traffic signs are located, whereas the goal of
sign recognition is to figure out what class the traffic sign belongs to. The visual features of traffic signs encode the
information they provide: color, shape, and pictogram. Figure 1 shows different traffic signs.

Figure 1. Different traffic signs.

The color and shape are essential traits which can aid drivers to understand road conditions; traffic signs have
some consistent attributes that can be used to detect and classify them. The color of traffic signs in each country is
essentially identical, with basic colors like red, blue, yellow, etc., as well as standardized shapes like circles,
triangles, rectangles, etc. External elements, such as weather, frequently impact the appearance of traffic signs. As a
result, traffic-sign recognition is both a challenging and crucial topic in traffic engineering research. Based on the
detection, feature extraction, and machine learning approach, this study proposes a traffic sign detection and
recognition system [1-3]. Detection and recognition are the two main stages. The first stage is detection, which is split into two steps: after an image is preprocessed to emphasize relevant information, signs are segmented using color thresholding and shape-based detection. The recognition stage also involves two steps. The first is feature extraction, in which HOG is used as a feature extractor. The extracted HOG features are then fed to a Random Forest Classifier model that categorizes traffic signs into 43 classes.
This paper is organized into five sections. Section 2 reviews related work. Section 3 presents the proposed method. Section 4 describes the experiments and findings. Section 5 sets out the conclusion and issues for further research.

2. RELATED WORK
Various approaches to traffic sign recognition have been presented, and because they are based on different
data, it is difficult to compare them. Different approaches to the detection and recognition challenges have been
offered.
Ying-Chi, et al. [4] devised a detecting technique that uses different resolutions. In this method, the network
training and testing are evaluated by comparing different input image resolutions and ROIs. Hemadri and Kulkarni
[5] presented traffic sign identification with a 98 percent accuracy utilizing the dense scale-invariant feature
transform (DSIFT) approach for feature extraction and SVM for classification. Cahya, et al. [6] proposed a
technique that divides traffic sign detection into two stages: after segmenting the image using RGBN (Normalized RGB), the blobs are processed to recognize traffic signs. The recognition of traffic signs is likewise broken into two parts.


In their study, a combination of feature extraction methods such as HOG, Gabor, LBP, and the HSV color space is used, and classifiers such as SVM, KNN, Random Forest, and Naïve Bayes are compared in the recognition stage. According to the experimental results, the RGBN method achieved precision and recall of about 98.7% and 95.1% in detecting traffic signs, and 100% precision and 86.7% recall in the recognition process using the SVM classifier.
Tarequl [7] suggested a method that is not limited to a small set of signs: 28 sign classes are used for classification with two independent classifiers. The images are analyzed to identify the region of interest, which is subsequently classified using two CNN classifiers. Pranjali and Ramesh [8] proposed a TSDR system for recognizing traffic signs positioned alongside the road using the template-matching approach, which was used to identify signs accurately, to eliminate false positives and missed detections, and to recognize road signs in low-light conditions. Adonis, et al. [9] proposed a method in which bilateral filtering is applied as a pre-processing step before detection, followed by color thresholding in HSV color space and segmentation of the region of interest using the Hough transform. A Histogram of Oriented Gradients is extracted from candidate traffic signs as the essential feature in the recognition phase, and an MLP is employed as the classifier; the Multilayer Perceptron Classifier gave the best accuracy for traffic sign recognition.
Immawan, et al. [10] proposed using the Gabor Wavelet and Principal Component Analysis to recognize traffic signs.
On the traffic sign picture database, Gabor wavelet feature extractors paired with PCA showed higher recognition
performance. Wang [11] provided a brief explanation of TSDR utilizing deep learning, stating that the model can
learn the deep characteristics within the image independently from the training samples. It emphasizes the accuracy
and efficiency of detection and recognition. Deep Learning dramatically reduces the time and cost of training
negative samples, as well as improves the accuracy of the SoftMax classifier. Ivona, et al. [12] used Support Vector
Machines (SVMs) with Gaussian and median filters to eliminate noise, Hough Transformation in the detection
phase, and a Histogram of the Oriented Gradient descriptor for feature extraction in the classification phase.
Finally, traditional methods, such as machine learning, are seen to be more robust than neural networks [13,
14]. This study describes the classification and recognition of German traffic signs by combining numerous
methodologies and features, which are explained in the next section.

3. PROPOSED METHOD
The proposed method is accomplished in three major steps, as shown in Figure 2. First, a pre-processing approach based on RGB-to-HSV color space conversion and median filtering is applied [15-18]. Next, a detection technique based on color thresholding and shape detection is used. Lastly, a recognition strategy based on the Histogram of Oriented Gradients (HOG) and a Random Forest Classifier is applied: HOG is used for feature extraction, and the Random Forest is used for recognition.

Figure 2. Proposed algorithm flow diagram.

3.1. Pre-Processing Stage


The goal of pre-processing is to remove useless data from the images and retain useful information. This step improves detection accuracy. For image pre-processing, this work primarily employs image enhancement and color space conversion [19].


3.1.1. Image Enhancement


The fundamental aspect of image pre-processing is image enhancement. The goal of image enhancement is to make unclear information in an image distinct so that the relevant information can be extracted. Median filtering is one of the most widely used image enhancement methods for removing noise; it is a nonlinear filtering technique. A sliding window with an odd number of points is moved over the image, and the value of the window's center pixel is replaced with the median value of the window. Median filtering preserves the image's details. Figure 3 shows a comparison before and after median filtering.

Figure 3. Before and after median filtering comparison.
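As an illustration, a minimal median-filtering sketch in Python is given below. OpenCV is assumed as the implementation library and 5 × 5 as the window size, since the paper does not state which implementation or kernel size was used.

```python
import cv2

def enhance_image(bgr_image, kernel_size=5):
    """Suppress noise with a median filter (Section 3.1.1).

    kernel_size is an assumed value (it must be an odd integer);
    the paper does not specify the sliding-window size it used.
    """
    # Each pixel is replaced by the median of its kernel_size x kernel_size
    # neighbourhood, which removes impulse noise while preserving edges.
    return cv2.medianBlur(bgr_image, kernel_size)

# Hypothetical usage:
# denoised = enhance_image(cv2.imread("scene.jpg"))
```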

3.1.2. Color Space Conversion


The RGB color space is widely used [20]. The color values of an image are represented by its red, green, and blue components. RGB space is described in terms of three fundamental colors; the components (R, G, B) are highly correlated with one another, so any change in one component changes the perceived pixel color, which makes RGB ineffective for distinguishing road signs. Assessing the image in RGB also does not aid later operations, because traffic signs are exposed to the elements year after year and can fade or be damaged by the weather. The HSV color space is therefore used in this study; its three components are Hue, Saturation, and Value.
Equation 1 gives the formula for converting RGB to HSV. First, divide the R, G, and B values by 255 to shift the range from 0...255 to 0...1:

$$R' = \frac{R}{255}, \qquad G' = \frac{G}{255}, \qquad B' = \frac{B}{255}$$

$$C_{max} = \max(R', G', B'), \qquad C_{min} = \min(R', G', B'), \qquad \Delta = C_{max} - C_{min}$$

Hue calculation:

$$H = \begin{cases} 60^{\circ} \times \left(\dfrac{G' - B'}{\Delta} \bmod 6\right), & C_{max} = R' \\ 60^{\circ} \times \left(\dfrac{B' - R'}{\Delta} + 2\right), & C_{max} = G' \\ 60^{\circ} \times \left(\dfrac{R' - G'}{\Delta} + 4\right), & C_{max} = B' \end{cases}$$

Saturation calculation:

$$S = \begin{cases} 0, & \Delta = 0 \\ \dfrac{\Delta}{C_{max}}, & \Delta \neq 0 \end{cases}$$

Value calculation:

$$V = C_{max} \tag{1}$$

Because it is less susceptible to fluctuations in external illumination, HSV is closer to how humans perceive an image and covers a wider usable color range than the RGB color model. As a result, the HSV color space outperforms the RGB color space for traffic sign detection, so the image is converted from RGB to HSV during the pre-processing phase, which makes it more suitable for the next stage. The RGB and HSV color spaces are depicted in Figure 4.

Figure 4. RGB and HSV Image.
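To make Equation 1 concrete, the sketch below implements the RGB-to-HSV conversion directly in NumPy; it is illustrative only, and in practice a library routine such as cv2.cvtColor performs the same mapping (with H scaled to [0, 179] and S, V to [0, 255] for 8-bit images).

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Equation 1 applied to an H x W x 3 RGB image with 8-bit channels.

    Returns Hue in degrees [0, 360), Saturation and Value in [0, 1].
    Illustrative sketch; cv2.cvtColor(img, cv2.COLOR_RGB2HSV) is the
    practical alternative.
    """
    rgb = rgb.astype(np.float64) / 255.0                 # R', G', B'
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cmax, cmin = rgb.max(axis=-1), rgb.min(axis=-1)
    delta = cmax - cmin

    h = np.zeros_like(cmax)
    nz = delta > 0
    rmax = nz & (cmax == r)                              # Cmax = R'
    gmax = nz & (cmax == g) & ~rmax                      # Cmax = G'
    bmax = nz & ~rmax & ~gmax                            # Cmax = B'
    h[rmax] = 60.0 * (((g[rmax] - b[rmax]) / delta[rmax]) % 6)
    h[gmax] = 60.0 * ((b[gmax] - r[gmax]) / delta[gmax] + 2)
    h[bmax] = 60.0 * ((r[bmax] - g[bmax]) / delta[bmax] + 4)

    s = np.where(cmax > 0, delta / np.maximum(cmax, 1e-12), 0.0)
    v = cmax                                             # Value = Cmax
    return np.stack([h, s, v], axis=-1)
```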

3.2. Traffic Sign Detection


The proposed detection technique consists of two major components [21, 22]. The first is ROI segmentation using a color threshold. In the second stage, the image is labeled, the candidate areas are identified, and traffic signs are detected by applying shape detection and region-properties analysis to each candidate region.

3.2.1. Color Thresholding


Color thresholding is a method of establishing a color range and producing a black and white image. All colors
between the start and stop colors (inclusively) become white, while the remaining image pixels become black.
Let I be the color image. Because the image is represented as an [M × N × 3] array, array slicing is used to extract and threshold the different color channels. Table 1 lists the threshold values used to extract the red, blue, and green components, and Equation 2 gives the formula for color thresholding.


Table 1. Threshold values for color thresholding.

ThR = 210, ThB = 70, ThG = 100

$$\text{red}(i,j) = \begin{cases} \text{true}, & \text{if } r(i,j) \ge Th_R \text{ and } g(i,j) \le Th_G \\ \text{false}, & \text{otherwise} \end{cases}$$

$$\text{blue}(i,j) = \begin{cases} \text{true}, & \text{if } b(i,j) \ge Th_B \\ \text{false}, & \text{otherwise} \end{cases}$$

$$\text{green}(i,j) = \begin{cases} \text{true}, & \text{if } g(i,j) \ge Th_G \\ \text{false}, & \text{otherwise} \end{cases} \tag{2}$$

Color-based approaches exploit the fact that traffic signs are designed to be easily recognized against their surroundings and are frequently painted in highly visible, contrasting colors. Figure 5 shows the binary image after color thresholding. These colors are extracted from the input image and used as the basis for detection. Signs have distinct colors, but they also have distinct shapes that can be searched for.

Figure 5. Binary image after color thresholding.
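A minimal sketch of the color-thresholding step is given below, applying Equation 2 with the Table 1 thresholds. It assumes 8-bit channel values and that the rule operates on the r, g, b planes exactly as written, since the paper does not spell out any further details.

```python
import numpy as np

# Threshold values from Table 1 (assumed to apply to 8-bit channels).
TH_R, TH_B, TH_G = 210, 70, 100

def color_threshold(rgb):
    """Return the combined binary mask of Equation 2 for an H x W x 3 image."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    red_mask = (r >= TH_R) & (g <= TH_G)    # red(i, j)
    blue_mask = b >= TH_B                   # blue(i, j)
    green_mask = g >= TH_G                  # green(i, j)
    # The union of the masks gives the black-and-white image of Figure 5.
    return red_mask | blue_mask | green_mask
```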

3.2.2. Shape Detection


Road signs come in distinct colors and shapes such as circles, octagons, and rectangles. To detect a signboard based on shape, the smaller objects in the binary image must first be removed. Smaller objects are deleted based on a threshold level, i.e., objects with fewer than the threshold number of pixels are removed (their pixel values become 0), and holes inside regions of the binary image are filled (their pixel values become 1). The objects in the binary image are then labeled, and their attributes, such as area and perimeter, are computed.

a) Using Region Props to Label Images and Classify Shapes


To begin, the relevant objects within the processed bi-level image must be identified in order to locate probable traffic sign areas. The area of each identified region is estimated using the regionprops() function. After the area is computed, each blob (ROI) is evaluated to determine whether it could be a traffic sign board. However, there are often numerous objects in the image that could not be traffic signs.


Evaluating such objects is pointless and strenuous, so a circularity parameter is used to determine the shape. Circularity is a measure of an object's roundness and is typically given by Equation 3:

$$\text{circularity} = \frac{4\pi \times \text{Area}}{\text{Perimeter}^2} \tag{3}$$

After determining the area of each blob, regions that are too small or too large are deleted. Equation 4 is used to remove the superfluous objects; Figure 6 shows the resulting logical image after they have been deleted.

$$\text{blob} = \begin{cases} \text{true}, & \text{for circles } (\text{circularity} = 1) \text{ and non-circles with circularity} > 1 \\ \text{false}, & \text{otherwise} \end{cases} \tag{4}$$

Figure 6. After deleting the superfluous objects, this is the logical image.

b) Evaluating Each Candidate Region and Setting Bounding Boxes


When the picture frame contains one or more candidate regions (i.e., the number of blobs is greater than 0), the program uses a "for" loop to determine whether a road sign exists and to draw the bounding box on the input image for each candidate region that is regarded as a road sign. Figure 7 illustrates the detected traffic sign.

Figure 7. Detected traffic sign.
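The labeling, circularity filtering, and bounding-box steps can be sketched with scikit-image's regionprops, a Python counterpart of the regionprops() function referenced above. The minimum object size and the circularity band are assumed values, since the paper does not give numeric limits.

```python
import numpy as np
from scipy import ndimage
from skimage import measure, morphology

def detect_candidates(binary, min_size=200, max_circularity=1.6):
    """Filter blobs by area and circularity (Equations 3-4) and return
    the bounding boxes of the surviving candidate regions.

    min_size and max_circularity are assumptions; the paper only states
    that too-small, too-large, or non-compact blobs are removed.
    """
    filled = ndimage.binary_fill_holes(binary)                  # fill holes
    cleaned = morphology.remove_small_objects(filled, min_size=min_size)

    boxes = []
    for region in measure.regionprops(measure.label(cleaned)):
        if region.perimeter == 0:
            continue
        circularity = 4.0 * np.pi * region.area / region.perimeter ** 2
        # Keep compact blobs; ideal circles give circularity close to 1.
        if 1.0 / max_circularity <= circularity <= max_circularity:
            boxes.append(region.bbox)   # (min_row, min_col, max_row, max_col)
    return boxes
```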

3.3. Traffic Sign Recognition


There are two major phases at this stage of recognition [23-25].


3.3.1. Feature Extraction


HOG is one of the most efficient feature extraction techniques available today. Because it concentrates on the geometry and structure of objects in the image, it is well suited to detection algorithms, and it is a feature-rich extractor for datasets involving characters, numbers, and people. The HOG descriptor is based on the idea that the shape and appearance of an object can be described by the distribution of intensity gradients. The histogram represents the gradient orientations and magnitudes, and this gradient distribution characterizes each image. The descriptor is obtained by dividing the image into small regions known as cells, computing a gradient histogram for each cell, and concatenating these histograms into a description of the object. Figure 8 depicts the overall HOG algorithm.
According to Figure 8, the first phase of HOG is to calculate the gradient of the input image using Equations 5 and 6; gradients are small changes in the x and y directions.

$$H_X = X_2 - X_1 \tag{5}$$

$$H_Y = Y_2 - Y_1 \tag{6}$$

Using the gradients generated in the previous step, magnitude and direction are calculated for each pixel using
Equations 7 – 8.

Figure 8. The overall HOG algorithm.

$$M = \sqrt{H_x^2 + H_y^2} \tag{7}$$

$$\theta = \tan^{-1}\left(\frac{H_x}{H_y}\right) \tag{8}$$

Histograms of the calculated gradients are then generated, and the histograms can be normalized to improve accuracy.

Figure 9. The input image and HOG features of the input image.


To compute the HOG features, the previously detected window is normalized to 200 × 200. The normalized window is divided into 8 × 8 overlapping blocks, with each block divided into 2 × 2 cells. Figure 9 shows the input image and the HOG features extracted from it.
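A minimal HOG-extraction sketch using scikit-image is shown below, with the window normalized to 200 × 200 as described above. The parameters are interpreted as 8 × 8-pixel cells grouped into overlapping 2 × 2-cell blocks, and the 9 orientation bins are an assumption, since the paper does not report the bin count.

```python
import cv2
from skimage.feature import hog

def hog_features(roi_bgr, window=(200, 200)):
    """Extract a HOG descriptor from a detected sign region.

    Assumed parameterization: 8 x 8-pixel cells, overlapping 2 x 2-cell
    blocks, and 9 orientation bins.
    """
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, window)               # normalize the window size
    return hog(gray,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")               # per-block normalization
```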

3.3.2. Classifier
Random Forests are gaining popularity because they can outperform single classifiers in terms of accuracy and robustness to noise. A Random Forest is a collection of classification trees in which each tree casts a single vote for the class to be assigned to the input data, and the most frequent class wins. It adds a further level of randomization to bagging. Another attraction of Random Forests is their simplicity, with only two parameters: the number of variables in the random subset considered at each node and the number of trees in the forest [23-26]. Random Forests are relatively insensitive to the values of these two parameters.
A random tree is grown as follows:
• A subset I ⊂ IN of training samples is chosen at random with replacement. This subset is used to grow the tree, which is not pruned.
• At each node, a subset F of features is chosen at random. Using the feature f ∈ F and threshold t ∈ [min(f), max(f)] with the maximum information gain Δ, the current data subset is divided into Il and Ir. The entropy E is computed over the label frequencies in I, as given in Equations 9 – 13.

$$\forall f \in F,\ \forall t \in [\min(f), \max(f)] \tag{9}$$

$$I_l(t, f) = \{\, d \in I \mid f(d) < t \,\} \tag{10}$$

$$I_r(t, f) = I \setminus I_l(t, f) \tag{11}$$

$$\Delta(t, f) = -\frac{|I_l(t, f)|}{|I|}\, E\!\left(I_l(t, f)\right) - \frac{|I_r(t, f)|}{|I|}\, E\!\left(I_r(t, f)\right) \tag{12}$$

$$f_{opt} = \underset{f}{\arg\max}\ \Delta(t, f) \tag{13}$$


The size of F is determined empirically. Too many features slow down training and increase the risk of
overfitting. If F is small, randomization is stronger and training is faster, but the danger of underfitting increases.
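To make the split criterion of Equations 9 – 13 concrete, the sketch below evaluates the entropy-based gain for a single candidate (feature, threshold) pair; it illustrates the formulas rather than reproducing the authors' implementation.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy E over the label frequencies in a node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def split_gain(feature_values, labels, threshold):
    """Gain for splitting node data I into I_l (f(d) < t) and I_r,
    i.e. the negated weighted child entropies of Equation 12."""
    left = feature_values < threshold
    right = ~left
    n = len(labels)
    if left.sum() == 0 or right.sum() == 0:
        return -np.inf                       # degenerate split, never chosen
    return (-(left.sum() / n) * entropy(labels[left])
            - (right.sum() / n) * entropy(labels[right]))

# The best split maximizes split_gain over all f in F and all
# t in [min(f), max(f)], as in Equations 9 and 13.
```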

Figure 10. Classification of sample x in a random forest.

Completely randomized trees with |F| = 1 give greater classification accuracy on some data sets. Figure 10 depicts the classification of a sample x in a Random Forest [27-31]. Each of the random trees Tj ∈ {T1, T2, ..., TT} is traversed, and the leaf reached in Tj gives the posterior probability Pj(l|x) that x belongs to class l ∈ {1, 2, ..., L}.


The aggregated decision of the ensemble of trees is used to determine the class l* of x, as given in Equations 14 and 15:

$$P(l \mid x) = \frac{1}{T} \sum_{j=1}^{T} P_j(l \mid x) \tag{14}$$

$$l^{*} = \underset{l \in \{1, \ldots, L\}}{\arg\max}\ P(l \mid x) \tag{15}$$
In many multiclass classification applications, Random Forests outperform single decision trees and achieve state-of-the-art performance [2], [5], [7]. To identify the detected signs, a machine learning classifier model is used. The model is based on the Random Forest method, which combines the outputs of several decision trees and uses a majority vote to determine the final output. The extracted features, together with the class labels of the images, are given as input to the classifier, and the trained model is used for the recognition of road signs.
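A minimal training-and-prediction sketch using scikit-learn's RandomForestClassifier is given below, assuming the HOG descriptors and class labels have already been assembled into arrays; the number of trees and the 80/20 split are assumptions, since the paper does not report its training configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_sign_classifier(hog_descriptors, labels, n_trees=200, seed=0):
    """Train a Random Forest on HOG descriptors of the 43 GTSRB classes.

    n_trees and the 80/20 hold-out split are assumed values. Each tree
    votes and predict() returns the majority class, matching
    Equations 14-15.
    """
    X = np.asarray(hog_descriptors)
    y = np.asarray(labels)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)

    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(X_train, y_train)
    print("Held-out accuracy:", clf.score(X_test, y_test))
    return clf, X_test, y_test
```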

4. EXPERIMENTAL RESULTS AND FINDINGS


A detection is considered correct if the predicted bounding box covers at least half of the ground-truth traffic sign. The following are the outcomes of traffic sign detection:

Figure 11. Traffic sign detection results.


Figure 11 shows detected traffic signs. Detections (i), (ii), (iii), (iv), and (v) are accurate, since the bounding boxes lie exactly on the traffic signs.
The GTSDB and GTSRB datasets were used to evaluate the proposed algorithm. The results demonstrate that the detection and recognition accuracy is very good: the detection accuracy on the GTSDB dataset was 90.34%, while the recognition accuracy on the GTSRB dataset was 95.59%. The recognition stage was evaluated using the precision-recall curve and the ROC curve, with the recall, precision, and F1-score values computed using Equations 16 – 18 [32]:

$$\text{Recall} = \frac{\text{True positive}}{\text{True positive} + \text{False negative}} \tag{16}$$

$$\text{Precision} = \frac{\text{True positive}}{\text{True positive} + \text{False positive}} \tag{17}$$

$$F_1\text{-score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{18}$$
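For completeness, the metrics of Equations 16 – 18 can be computed on held-out predictions as sketched below; macro averaging over the 43 classes is an assumption, since the paper does not state the averaging mode used.

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

def evaluate(clf, X_test, y_test):
    """Accuracy, precision, recall, and F1-score as in Equations 16-18."""
    y_pred = clf.predict(X_test)
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        # Macro averaging over classes is an assumption.
        "precision": precision_score(y_test, y_pred, average="macro"),
        "recall": recall_score(y_test, y_pred, average="macro"),
        "f1": f1_score(y_test, y_pred, average="macro"),
    }
```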

The precision-recall curve and Receiver Operating Characteristic (ROC) curve of the proposed approach on the data set are shown below. The precision-recall curve depicts the trade-off between precision and recall for various threshold values; high scores for both indicate that the classifier returns accurate results (high precision) and captures the majority of all positive samples (high recall), as illustrated in Figure 12.

Figure 12. Precision-recall curve.

Figure 13. Receiver operator characteristic curve.


The ROC curve (Receiver Operating Characteristic curve) is a probability curve, and the AUC (Area Under the Curve) is a measure of separability. The ROC curve shows the trade-off between sensitivity (TPR) and specificity (1 – FPR), as depicted in Figure 13.
Figure 14 (i), (ii), (iii), and (iv) depict the outcome of recognizing various traffic sign boards.

Figure 14. Results of traffic sign recognition.

Table 2. Recall, precision, AUC, and F1-score of TSR.

Method          Precision (%)   Recall (%)   AUC (%)   F1-Score
Random Forest   97.55           95.37        99.61     0.9645

Based on the test results on the dataset shown in Table 2, the Random Forest classification produces more true positive values. The classification approach employing the Random Forest has an AUC of 99.61%, a precision of 97.55%, and a recall of 95.37%.
Comparison with existing methodologies: Table 3 shows the area under the curve (AUC), precision, and recall. It is observed that the proposed method obtains better AUC, precision, and recall.

Table 3. Comparison of AUC, precision, and recall between the proposed method and Cahya, et al. [6].

Method                               AUC     Precision   Recall
Cahya, et al. [6]    Random Forest   99.4    96.4        90.0
Proposed method      Random Forest   99.61   97.55       95.37

Table 4. Comparison of accuracy between the proposed method and Ivona, et al. [12].

Method                                 Dataset   Accuracy
Ivona, et al. [12]   Multi-class SVM   GTSRB     93.54
Proposed method      Random Forest     GTSRB     95.59


Table 4 shows the accuracy of the multiclass SVM and the Random Forest Classifier. Better accuracy is obtained for the Random Forest than for the multiclass SVM. The proposed methodology was compared with other methods, namely the multiclass SVM of Ivona, et al. [12] and the Random Forest of Cahya, et al. [6]; the accuracy of the proposed method is higher than that of Ivona, et al. [12], and its AUC, precision, and recall are also greater.

5. CONCLUSION
The purpose of this research work was to create an efficient TSDR system based on a traffic sign dataset [26-30]. In the proposed method, images captured by an onboard camera under various weather conditions were pre-processed using median filtering and RGB-to-HSV color space conversion. A detection algorithm based on color thresholding and shape detection was proposed. The recognition process used HOG together with a machine learning algorithm, a Random Forest (a bagged ensemble of decision trees), for traffic sign classification: HOG was employed for feature extraction, and the Random Forest was used for classification. The algorithm was applied to the GTSDB and GTSRB datasets. The proposed system produced promising results: a recognition accuracy of 95.59%, a detection accuracy of 90.34%, a precision of 97.55%, a recall of 95.37%, and an F1-score of 96.45%. ROC curve analysis was used to assess recognition performance.

6. FUTURE SCOPE
The proposed approach will be expanded in the future to handle live video sequences, as used in advanced driver assistance systems. The system can also be improved with low-cost solutions that notify the driver of the distance between the road sign and the vehicle's current position.

Funding: This study received no specific financial support.


Competing Interests: The authors declare that they have no competing interests.
Authors’ Contributions: Both authors contributed equally to the conception and design of the study.

REFERENCES
[1] M. Felix, F. Cristian, A. Manuel, and P. Daniel, "Detection and recognition of traffic signs using gabor filters," in
Proceedings of IEEE 34th International Conference on Telecommunications and Signal processing (TSP), Budapest, 2011, pp.
554-558.
[2] B. Anass, B. Abdrrahim, B. Mohammed, and T. Ahmed, "An enhanced approach in detecting object applied to
automotive traffic roads signs," in Proceedings of IEEE 6th International Conference on Optimization and Applications
(ICOA), Beni Mellal, 2020.
[3] S. Ying, G. Pingshu, and L. Dequan, "Traffic sign detection and recognition based on convolutional neural network,"
in Proceedings of IEEE 2019 Chinese Automation Congress (CAC), Hangzhou, 2019.
[4] C. Ying-Chi, L. Huei-Yung, and T. Wen-Lung, "Implementation and evaluation of CNN based traffic sign detection
with different resolutions," in Proceedings of IEEE International Symposium on Intelligent Signal Processing and
Communication Systems (ISPACS), Taipei, 2019.
[5] V. B. Hemadri and U. P. Kulkarni, Recognition of traffic sign based on support vector machine and creation of the indian traffic
sign recognition benchmark. In: Nagabhushan T., Aradhya V., Jagadeesh P., Shukla S., M.L. C. (Eds.), Cognitive Computing and
Information Processing. CCIP 2017 vol. 801. Singapore: Communications in Computer and Information Science,
Springer, 2018.
[6] R. Cahya, F. R. Isna, A. A. Rosa, and A. Supriatna, "Indonesian traffic sign detection and recognition using color and
texture feature extraction and SVM classifier," in Proceedings of IEEE International Conference on Information and
Communications Technology (ICOIACT), Yogyakarta, 2018, pp. 50-55.


[7] M. I. Tarequl, "Traffic sign detection and recognition based on convolutional neural networks," in Proceedings of IEEE
International Conference on Advances in Computing, Communication and Control (ICAC3), Mumbai, 2019.
[8] P. Pranjali and K. Ramesh, "Traffic Sign Detection Using Template Matching Technique," in Proceedings of IEEE
Fourth International Conference on Computing Communication Control and Automation (ICCUBEA), Pune 2018.
[9] S. Adonis, A. A. Patricia, O. Carlos, and R. Rosula, "Traffic sign detection and recognition for assistive driving," in
Proceedings of IEEE 2019 International Symposium on Multimedia and Communication Technology (ISMAC), Quezon City,
2019.
[10] W. Immawan, K. Hendra, and A. S. Tri, "Traffic sign image recognition using gabor wavelet and principle component
analysis," in Proceedings of IEEE 2019 International Conference of Artificial Intelligence and Information Technology
(ICAIIT), Yogyakarta, 2019.
[11] C. Wang, "Research and application of traffic sign detection and recognition based on deep learning," in Proceedings of
IEEE International Conference on Robots & Intelligent System (ICRIS), Changsha, 2018.
[12] M. Ivona, K. Zdravko, and R. Krešimir, "The speed limit road signs recognition using hough transformation and
multi-class svm," in Proceedings of International Conference on Systems, Signals and Image Processing, IWSSIP, Osijek,
Croatia 2019.
[13] W. Liu, R. Lu, and I. Liu, "Traffic sign detection and recognition via transfer learning," in Proceedings of IEEE The 30th
Chinese Control and Decision Conference (2018 CCDC), Shenyang, 2018.
[14] T. George-Zamfir and P. Marian-Silviu, "Neural network based traffic sign recognition for autonomous driving," in
Proceedings of International Conference on Electrochemical and power Systems (SIELMEN), Craiova, Romania, 2019.
[15] D. K. Amara, R. Karthika, and P. Latha, "Novel deep learning model for traffic sign detection using capsule networks,"
arXiv preprint arXiv:1805.04424, 2018.
[16] A. Jain, A. Mishra, A. Shukla, and R. Tiwari, "A novel genetically optimized convolutional neural network for traffic
sign recognition: A new benchmark on Belgium and Chinese traffic sign datasets," Neural Processing Letters, vol. 50, pp.
3019-3043, 2019. Available at: https://doi.org/10.1007/s11063-019-09991-x.
[17] K. N. Sruthi and R. P. Aneesh, "Recognition of speed limit from traffic signs using naive bayes classifier," in Proceedings
of International Conference on Circuits and Systems in Digital Enterprise Technology (ICCSDET), Kottayam, India 2018.
[18] B. Ilya, T. Sergey, and Y. Dmitry, "Traffic sign recognition on video sequence using deep neural networks and
matching algorithm," in Proceedings of International Conference on Artificial Intelligence Applications and Innovations (IC-
AIAI), Belgrade, Serbia, 2019.
[19] H. T. Manjunatha, A. Danti, and K. L. ArunKumar, A novel approach for detection and recognition of traffic signs for
automatic driver assistance system under cluttered background. In: Santosh K., Hegadi R. (Eds.), Recent Trends in Image
Processing and Pattern Recognition. RTIP2R 2018 vol. 1035. Communications in Computer and Information Science,
Springer, Singapore, 2019.
[20] K. Bayoudh, F. Hamdaoui, and A. Mtibaa, "Transfer learning based hybrid 2D-3D CNN for traffic sign recognition
and semantic road detection applied in advanced driver assistance systems," Applied Intelligence, vol. 51, pp. 124-142,
2021. Available at: https://doi.org/10.1007/s10489-020-01801-5.
[21] S. Santaji, S. Santaji, and S. Hallur, "Detection and classification of occluded traffic sign boards," in 2018 International
Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT). IEEE, 2018, pp.
157-160.
[22] K. H. Moh, M. Shahjalal, Z. C. Mostafa, T. L. Nam, and M. J. Yeong, "Simultaneous traffic sign recognition and real-
time communication using dual camera in ITS," in Proceedings of International Conference on Artificial Intelligence in
Information and Communication (ICAIIC), Okinawa, Japan, 2019.
[23] T. Dogancan, C. Min-Hung, and A. Ghassan, "Traffic sign detection under challenging conditions: A deeper look into
performance variations and spectral characteristics," in Proceedings of IEEE Transactions on Intelligent Transportation
Systems, 2019, pp. 3663 - 3673.


[24] W. Jingyu, W. Weiran, and Z. Anliang, "The faster detection and recognition of traffic signs based on CNN," in
Proceedings of IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Dalian, China
2019.
[25] N. D. Gabriel, A.-K. Abdullah, B. Jorge, G. F. Fernando, and F. L. Gerardo, "Traffic sign detection and 3D localization
via deep convolutional neural networks and stereo vision," in Proceedings of IEEE Intelligent Transportation Systems
Conference (ITSC), Auckland, New Zealand, 2019.
[26] L. Xi, Z. Jing, T. Qi, L. Jiafeng, and Z. Li, "A saliency guided shallow convolutional neural network for traffic signs
retrieval," in Proceedings of IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL,
USA, 2018.
[27] A. R. Edison, N. A. Joshua, R. P. V. Ryan, P. D. Elmer, and A. B. Argel, "Vision-based traffic sign compliance
evaluation using convolutional neural network," in Proceedings of IEEE International Conference on Applied System
Invention (ICASI), Chiba, Japan, 2018.
[28] S. Liu, "A traffic sign image recognition and classification approach based on convolutional neural network," in
Proceedings of 11th International Conference on Measuring Technology and Mechatronics Automation (ICMTMA), Qiqihar,
China, 2019.
[29] S. S. P. Sivagnana and V. S. Ganesh, "Automatic traffic sign identification system for real time operation," in
Proceedings of 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), Palladam, India, 2018.
[30] P. R. Shehan, S. Linu, R. Pradeep, and V. Sajith, "Fast and accurate traffic sign recognition for self driving cars using
retinanet based detector," in Proceedings of 2019 International Conference on Communication and Electronics Systems
(ICCES), Coimbatore, India, 2019.
[31] A. J. Prakash and S. Ari, "A system for automatic cardiac arrhythmia recognition using electrocardiogram signal. In
Bioelectronics and Medical Devices," ed: Woodhead Publishing, 2019, pp. 891-911.
[32] K. K. Patro, A. Jaya Prakash, M. Jayamanmadha Rao, and P. Rajesh Kumar, "An efficient optimized feature selection
with machine learning approach for ECG biometric recognition," IETE Journal of Research, pp. 1-12, 2020.

Views and opinions expressed in this article are the views and opinions of the author(s), Review of Computer Engineering Research shall not be responsible or
answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.

