Detection Algorithm for Overlapping LEDs in Vehicular Visible Light Communication System
ABSTRACT In a vehicular visible light communication system, the visible light signals are obtained from a series of image frames, captured by a high-speed camera acting as the receiver, that show the transmitting LEDs. However, when two LEDs overlap in the captured image, serious data transmission problems arise, such as high data loss and a high bit error rate. To resolve these problems, in this paper,
a method comprising three main steps for separating the overlapping LEDs is proposed. First, according
to LED luminance, the edges of the LEDs are detected by applying an improved Canny edge detector
algorithm. Then these edges are used to extract all the contours of the two overlapping LEDs. Finally,
a derivative of the generalized Hough transform algorithm is utilized to distinguish each LED from the
overlapping region according to the obtained LED contours. The performance of the proposed algorithm
under different parametric conditions in real situations is analyzed according to the results of experiments
conducted in an outdoor environment.
INDEX TERMS Edge detection, image segmentation, light emitting diode, overlapping objects, visible
light communication.
it must be assumed that the objects can be approximately fitted as an elliptical or circular shape. Therefore, these approaches will probably fail when the objects in the image have multiple or arbitrary shapes. In addition, it is important to note that the authors of these approaches used only plain backgrounds in their research contexts, and therefore, their approaches may be sensitive to noise, specifically when the background is complicated. Because of these disadvantages, these approaches cannot be applied to the case of overlapping LEDs. In the overlapping LED context, three obstacles must be addressed when separating overlapping LEDs. First, the background in the captured image contains a considerable amount of noise, which has to be removed as much as possible. The second obstacle is the deformation of the captured LED shapes due to the change of the camera's viewpoint as the vehicles move. Third, the variety of LED shapes used by different vehicle brands must be considered.

In this paper, a method to resolve the problem of overlapping LEDs is proposed, which can be divided into three main steps: LED edge detection, LED contour extraction, and LED separation. The contribution of this paper is three-fold. First, we propose an adaptive double-threshold technique for LED edge detection. Second, to deal with the multiple shapes of LEDs and their deformation in the LED separation step, a compound model of the conventional generalized Hough transform incorporated with prior knowledge about the LED shape is proposed. Finally, experiments are conducted in an outdoor environment using two LEDs to analyze the performance of our method in real situations. The results also provide insight into the effects of different parameters, including working distance, ambient illuminance (environmental illuminance), and level of occlusion, on the recognition rate.

The remainder of the paper is organized into three sections. Section II describes the proposed method in detail. In Section III, the proposed method is evaluated by analyzing the results of outdoor experiments. Finally, a conclusion is drawn in Section IV.

II. PROPOSED METHOD

In a vehicular visible light communication system, LED detection is the first step that the receiver has to execute. After the positions of the LEDs are determined, the receiver needs to distinguish whether the LEDs of a vehicle are used to transmit data. This type of LED can be recognized by identifying a blinking pattern transmitted from the front vehicles through a series of frames. Then, the receiver starts tracking the vehicular LEDs. However, the receiver may sometimes lose track of an LED in several scenarios, including overlapping LEDs, LED occlusion, or LEDs moving out of the camera's field of view. In this paper, we assume that the system has lost the LED positions because of overlapping LEDs. To separate the overlapping LEDs, the proposed method uses a series of consecutive frames as input. The first frame is the frame in which the two LEDs have not yet overlapped, and the last frame is the frame in which the LED overlap happens. Moreover, in the first frame, all the LEDs are still being tracked, and therefore, we suppose that the positions of the LEDs are already known. In this study, we address only the separation of the overlapping LEDs in the image; our method is not related to the LED detection process.

An overview of the proposed method is shown in Fig. 1. First, in the first frame, a list of LED R-tables is extracted to store the LED shape information, and these tables are used to track the LED centroids in the remaining frames. Then, the LED edge detection, LED contour extraction, and LED centroid updating steps are executed to update the LED centroids in each frame. Finally, in the last frame, the contours of the LEDs are completely drawn to separate the overlapping LEDs. LED edge detection and LED contour extraction are described in Section II-A and Section II-B, respectively. The proposed method also includes the technique for extracting the list of R-tables and updating the LED centroids, as presented in Section II-C. It should be noted that the primary goal of this study is LED detection in cases where LEDs overlap, and we therefore assume that the input series of frames runs from the LEDs' first appearance until they overlap.

A. LED EDGE DETECTION

In general, an edge forms the outline of an object. Edges in an image are defined as the points (pixels) at which the image brightness or intensity changes sharply and shows discontinuities. In other words, they indicate the boundaries between objects and the background. Because the performance of the Canny edge detector [12] is outstanding compared with that of several popular edge detection algorithms [13], we decided to apply it in this study. The algorithm consists of five sequential stages: applying a Gaussian filter to reduce noise, determining the intensity gradient of the image, applying non-maximum suppression to reduce the risk of detecting false edges, utilizing a double threshold to identify potential edge points, and finally tracking edges by hysteresis to remove all weak or disconnected candidates. These stages are illustrated in Fig. 2. An adaptive thresholding technique is contributed in the double-thresholding stage to deal with the LED intensity feature. The whole process of the Canny edge detector applied in this study is described in the following sections.

1) Gaussian filtering

The Canny edge detector convolves the input image with a Gaussian filter to reduce image noise and thereby lower the probability of false detections. In this study, a Gaussian filter with kernel size 5×5 was used. Denoting the input image by A and the smoothed image by B:

B = \frac{1}{159} \begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix} * A \quad (1)
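To make the smoothing step concrete, the following is a minimal sketch of Eq. (1) in Python (the implementation language reported in Section III). The function name is ours; OpenCV's filter2D is used for the filtering, and because the kernel is symmetric, correlation and convolution coincide here.

import numpy as np
import cv2

# 5x5 Gaussian kernel from Eq. (1), normalized by 1/159.
GAUSSIAN_KERNEL = np.array([
    [2,  4,  5,  4, 2],
    [4,  9, 12,  9, 4],
    [5, 12, 15, 12, 5],
    [4,  9, 12,  9, 4],
    [2,  4,  5,  4, 2],
], dtype=np.float64) / 159.0

def smooth(image_a: np.ndarray) -> np.ndarray:
    # Convolve the grayscale input image A with the kernel to obtain B.
    return cv2.filter2D(image_a.astype(np.float64), -1, GAUSSIAN_KERNEL)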
FIGURE 1. Overview of the proposed method: the list of LED R-tables is built from the first frame; for every subsequent frame, LED edge detection, LED contour extraction, and LED centroid updating are executed, and the loop repeats until the last frame is reached.
2) Intensity gradient determination

In computer vision, the image gradient is defined as a directional change in the intensity or color of the image, and it is one of the essential factors for finding edges. To attain our objective, we calculate the intensity gradient magnitude and direction at each point in the image. The direction determines the edge orientation, and the magnitude determines whether the point lies on an edge. If the magnitude is high, there is a rapid change in the intensity, likely implying that the point is on an edge, whereas when there is no substantial change in the intensity, the point may not be on any edge.

Assume that g_x is the derivative with respect to the x-axis, which represents the horizontal direction, and g_y is the derivative with respect to the y-axis, which represents the vertical direction. For the smoothed image B, g_x and g_y are calculated as:

g_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} * B, \qquad g_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} * B \quad (2)

After g_x and g_y are obtained, the gradient magnitude g and direction \Phi are identified, respectively, as:

g = \sqrt{g_x^2 + g_y^2} \quad (3)

\Phi = \arctan\left(\frac{g_y}{g_x}\right) \quad (4)

In Fig. 3, each vector represents the gradient of its respective point. The vector length corresponds to the gradient magnitude, and its direction corresponds to the gradient direction.

FIGURE 3. An example of the LED gradient vector field.

3) Non-maximum suppression

The gradient values produced by the previous stage result in rather blurred edges. The final result ideally should have thin edges; that is, only the points whose gradient values indicate the sharpest change of intensity should be preserved. This reduces the calculation complexity and supports the next step so that it yields considerably better results. For this reason, non-maximum suppression must be performed to reduce the risk of false responses to edge detection by suppressing unwanted points that may not constitute a real edge.

Non-maximum suppression compares the gradient magnitude of each candidate edge point with those of its 8-connected neighbors in the positive and negative gradient directions. If its value is the largest, the candidate edge point is preserved; otherwise, it is suppressed. For example, if the gradient direction is horizontal, a point is considered a possible edge point if its gradient magnitude is greater than those of the points to its left and right.
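As an illustration, the gradient and non-maximum suppression stages (Eqs. (2)-(4)) could be sketched as follows. The helper names are ours; cv2.Sobel uses kernels equivalent to Eq. (2) up to sign, which affects only the direction convention, and the four-sector direction quantization is the standard simplification of the 8-neighbor comparison.

import numpy as np
import cv2

def gradients(b: np.ndarray):
    # Sobel derivatives (Eq. (2)), magnitude (Eq. (3)) and direction (Eq. (4)).
    gx = cv2.Sobel(b, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(b, cv2.CV_64F, 0, 1, ksize=3)
    g = np.hypot(gx, gy)
    phi = np.arctan2(gy, gx)
    return g, phi

def non_maximum_suppression(g: np.ndarray, phi: np.ndarray) -> np.ndarray:
    # Keep a point only if its magnitude is the largest along its gradient direction.
    h, w = g.shape
    out = np.zeros_like(g)
    # Quantize the direction into 4 sectors: 0, 45, 90, 135 degrees.
    angle = (np.rad2deg(phi) + 180.0) % 180.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:      # horizontal gradient: compare left/right
                n1, n2 = g[y, x - 1], g[y, x + 1]
            elif a < 67.5:                  # 45-degree diagonal
                n1, n2 = g[y - 1, x + 1], g[y + 1, x - 1]
            elif a < 112.5:                 # vertical gradient: compare up/down
                n1, n2 = g[y - 1, x], g[y + 1, x]
            else:                           # 135-degree diagonal
                n1, n2 = g[y - 1, x - 1], g[y + 1, x + 1]
            if g[y, x] >= n1 and g[y, x] >= n2:
                out[y, x] = g[y, x]
    return out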
4) Double thresholding

After non-maximum suppression has been applied, some points caused by noise or color variation may still exist and could create false edge detection results. To solve this problem, the Canny edge detector uses two different values, a high and a low threshold, to categorize all the image points into three classes: "possible-edge" C_0, "sure-edge" C_1, and "not-edge" C_2. All points with a gradient magnitude smaller than the low threshold are marked as "not-edge" instances and filtered out as background, whereas those with gradient values higher than the high threshold are identified as "sure-edge" instances; the remaining points, with gradient magnitudes between the low and high thresholds, are "possible-edge" instances, each of which may or may not be an edge point.

a: Limitation of the original algorithm

In the original algorithm, the threshold values were chosen manually and empirically, and hence the algorithm lacks the flexibility and robustness required to handle a very large number of different images with complex contents. Moreover, the chosen thresholds must reasonably reflect the image features.

b: Adaptive threshold selection

Suppose that the gradient magnitude range of the entire image is [0, m], with m the maximum value; the low threshold lo is experimentally determined as half of m:

lo = \frac{m}{2} \quad (5)

Let p_i denote the probability corresponding to gradient magnitude value i. The high threshold hi separates the gradient magnitude ranges of C_0 and C_1 into [lo + 1, hi] and [hi + 1, m], respectively. Therefore, the corresponding weights of the "possible-edge" class C_0 and the "sure-edge" class C_1 are:

\begin{cases} \omega_0 = \sum_{i=lo+1}^{hi} p_i \\ \omega_1 = \sum_{i=hi+1}^{m} p_i \end{cases} \quad (6)

Their corresponding class means are:

\begin{cases} \mu_0 = \frac{\sum_{i=lo+1}^{hi} i\,p_i}{\omega_0} \\ \mu_1 = \frac{\sum_{i=hi+1}^{m} i\,p_i}{\omega_1} \end{cases} \quad (7)

Their corresponding variances are:

\begin{cases} \sigma_0^2 = \sum_{i=lo+1}^{hi} (i - \mu_0)^2 p_i \\ \sigma_1^2 = \sum_{i=hi+1}^{m} (i - \mu_1)^2 p_i \end{cases} \quad (8)

The combined mean of the two classes is calculated as:

\mu = \mu_0 \omega_0 + \mu_1 \omega_1 \quad (9)

The within-class variance (intra-class variance) \sigma_{within}^2 of these two classes is defined as the sum of the two class variances weighted by their associated weights; the classes and their statistics are illustrated in Fig. 4:

\sigma_{within}^2 = \omega_0 \sigma_0^2 + \omega_1 \sigma_1^2 \quad (10)

As in Otsu's method [17], minimizing the within-class variance is equivalent to maximizing the between-class variance:

\sigma_{between}^2 = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 = \omega_0 \omega_1 (\mu_0 - \mu_1)^2 \quad (11)

Among the possible values for the high threshold hi between lo + 1 and m, we determine the optimal value of hi that maximizes the between-class variance \sigma_{between}^2 or, equivalently, minimizes the within-class variance \sigma_{within}^2.
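A sketch of this adaptive double-threshold selection under the assumptions above (the function and variable names are ours): the low threshold is fixed at m/2 by Eq. (5), and the high threshold is chosen by scanning all candidates and keeping the one maximizing the between-class variance of Eq. (11), in the spirit of Otsu's method [17].

import numpy as np

def adaptive_thresholds(magnitude: np.ndarray):
    # Return (lo, hi) for the double-thresholding stage.
    # magnitude: gradient magnitudes after non-maximum suppression.
    mag = np.round(magnitude).astype(np.int64)
    m = int(mag.max())
    lo = m // 2                                   # Eq. (5)

    hist = np.bincount(mag.ravel(), minlength=m + 1)
    p = hist / hist.sum()                         # probability of each magnitude value

    best_hi, max_var = lo + 1, -1.0
    for hi in range(lo + 1, m):                   # candidate high thresholds
        p0, p1 = p[lo + 1:hi + 1], p[hi + 1:]
        w0, w1 = p0.sum(), p1.sum()               # class weights, Eq. (6)
        if w0 == 0 or w1 == 0:
            continue
        i0 = np.arange(lo + 1, hi + 1)
        i1 = np.arange(hi + 1, m + 1)
        mu0 = (i0 * p0).sum() / w0                # class means, Eq. (7)
        mu1 = (i1 * p1).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2          # between-class variance, Eq. (11)
        if var > max_var:                         # keep the best candidate (cf. the flowchart)
            max_var, best_hi = var, hi
    return lo, best_hi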
[Flowchart residue: the adaptive high-threshold search iterates i over the candidate thresholds; whenever var > max_var, it sets max_var := var and hi := i, then increments i, and finishes when all candidates have been examined.]
TABLE 1. R-table format.

j     φ_j     R_{φ_j}
1     φ_1     {(r_b, α_b) | B(x_b, y_b) ∈ P, Φ_b = φ_1}
2     φ_2     {(r_b, α_b) | B(x_b, y_b) ∈ P, Φ_b = φ_2}
...   ...     ...
m     φ_m     {(r_b, α_b) | B(x_b, y_b) ∈ P, Φ_b = φ_m}

1) Conventional generalized Hough transform

The generalized Hough transform [22] is a method for detecting arbitrary shapes. It uses the gradient direction of every point on the object boundary as a key feature. To store the shape information of the object, a template table called the R-table is built. First, a reference point C(x_c, y_c) of the object shape is chosen, typically a point inside the object region. Then, supposing that the set of boundary points is P and \vec{g}_b is the gradient vector, the following parameters of each point B(x_b, y_b) in P are computed and stored in a row of the R-table (Table 1): Φ_b denotes its gradient direction, r_b denotes the length of the line segment BC, and α_b is the angle between BC and the x-axis. r_b is also called the radial distance and is computed by Eq. (13):

r_b = \sqrt{(x_c - x_b)^2 + (y_c - y_b)^2} \quad (13)

The geometry of an object's shape for the generalized Hough transform is shown in Fig. 8.

FIGURE 8. The geometry of an object's shape for the generalized Hough transform: reference point C(x_c, y_c), boundary point B(x_b, y_b), gradient vector \vec{g}_b, radial distance r_b, gradient direction Φ_b, and angle α_b.

When the gradient direction of every boundary point is considered, it may be found that some points have the same gradient direction. In Table 1, φ_j represents a unique gradient direction value (j = 1, 2, ..., m), and m is the number of unique gradient direction values. Each index φ_j may contain multiple tuples (r_b, α_b), i.e., multiple boundary points. Given a boundary point and one of its tuples, the reference point is recovered as:

\begin{cases} x_c = x_b + r_b \cos α_b \\ y_c = y_b + r_b \sin α_b \end{cases} \quad (14)

After the R-table has been built, it is utilized to detect an object by locating its reference point in the image. To find the exact reference point, we build a 2D accumulator array with size equal to the image size, with each array element initially set to zero. For each boundary point in the image, its gradient direction Φ_b is looked up in the R-table. The search results may include many tuples (r, α). For each tuple (r_b, α_b), the possible reference point (x_c, y_c) is calculated based on Eq. (14), and the value of the corresponding element in the accumulator array is increased by 1. This process of searching the R-table and updating the accumulator array is called the voting process. If the object is not overlapped, the element with the highest vote in the accumulator array indicates the most probable reference point of the object, whereas when the object is overlapped, not the highest but the element with the second- or third-highest vote may correspond to the reference point.
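For concreteness, a minimal sketch of the R-table construction and the voting process might look as follows. The data layout, function names, and the number of direction bins are our own assumptions; gradient directions are quantized into discrete bins so that points with equal directions share a table row, as in Table 1.

import math
from collections import defaultdict

import numpy as np

N_BINS = 64  # number of quantized gradient-direction values (an assumption)

def direction_bin(phi: float) -> int:
    # Map a gradient direction in radians to a discrete R-table index.
    return int((phi % (2 * math.pi)) / (2 * math.pi) * N_BINS) % N_BINS

def build_r_table(boundary, directions, ref):
    # Build the R-table (Table 1) for one shape.
    # boundary:   list of boundary points (x_b, y_b) in P
    # directions: gradient direction Phi_b of each boundary point, in radians
    # ref:        reference point C(x_c, y_c), e.g. the LED centroid
    xc, yc = ref
    r_table = defaultdict(list)
    for (xb, yb), phi in zip(boundary, directions):
        rb = math.hypot(xc - xb, yc - yb)        # radial distance, Eq. (13)
        alpha = math.atan2(yc - yb, xc - xb)     # angle of segment BC with the x-axis
        r_table[direction_bin(phi)].append((rb, alpha))
    return r_table

def vote(boundary, directions, r_table, shape):
    # Accumulate votes for possible reference points (the voting process).
    acc = np.zeros(shape, dtype=np.int32)        # 2D accumulator, image-sized
    h, w = shape
    for (xb, yb), phi in zip(boundary, directions):
        for rb, alpha in r_table.get(direction_bin(phi), ()):
            xc = int(round(xb + rb * math.cos(alpha)))   # Eq. (14)
            yc = int(round(yb + rb * math.sin(alpha)))
            if 0 <= xc < w and 0 <= yc < h:
                acc[yc, xc] += 1                 # one vote per matching tuple
    return acc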
2) Proposed overlapping LED separation algorithm

As mentioned in Section I, some previous works resolving the overlapping object problem deal only with a given object shape. In the overlapping LED context, however, the LED shape is varying and deformable. Therefore, based on the main idea of the conventional generalized Hough transform, which can detect objects of arbitrary shape, we develop an algorithm that can separate each LED from the overlapping LED region. The input of the algorithm is a series of n consecutive frames, all of which have been converted into grayscale images.

We assume that the time interval in which the LEDs overlap is very short and that no vehicle enters or leaves the camera's field of view during it. Moreover, in the last frame, the receiver may lose track of some LEDs because of the overlap, and the number of these LEDs can be calculated by subtracting the number of LEDs tracked in the last frame from the number tracked in the first frame.

In the first frame, for each LED whose track is lost, the corresponding R-table is built, and the LED's centroid is chosen as the reference point. Let n_{bp} denote the number of boundary points of an LED; the coordinates of the LED centroid are then calculated as:

\begin{cases} x_c = \frac{1}{n_{bp}} \sum_{i=1}^{n_{bp}} x_i \\ y_c = \frac{1}{n_{bp}} \sum_{i=1}^{n_{bp}} y_i \end{cases} \quad (15)
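In code, Eq. (15) is simply the mean of the contour points; a small sketch, assuming contours are stored as N×2 arrays of (x, y) coordinates:

import numpy as np

def led_centroid(contour: np.ndarray) -> tuple:
    # Eq. (15): centroid of the n_bp boundary points of one LED contour.
    xc = contour[:, 0].mean()   # mean of the x coordinates
    yc = contour[:, 1].mean()   # mean of the y coordinates
    return float(xc), float(yc)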
From the next frame to the last frame, the R-tables are used to match objects that have the same shapes as the LEDs in the first frame. In the last frame, the contour of each LED is completely drawn using the R-tables, and the overlapping LEDs are thereby separated.

The procedure for separating the overlapping LEDs is shown in Algorithm 1. In the first frame, the position of each LED, pos, represented by a bounding box (the rectangular border surrounding the LED), is already known. From those bounding boxes, the LED edges LED_edges are extracted. Next, all the LED contours LED_contours are extracted and used to build the list of R-tables R. The list of R-tables is then applied to update the LED centroids frame by frame. For each frame, three main steps, namely LED edge detection, LED contour extraction, and LED centroid updating, are executed sequentially.

The procedure for updating the LED centroids in each frame is described in Algorithm 2. First, all the edges of the brightest objects in the image, edges, are extracted using the Canny edge detector integrated with the adaptive threshold technique. In this step, some noise that may have a shape similar to the LEDs is discarded. The output of the edge detection step is a binary image. Next, all the contours contours are extracted and prepared for the LED centroid updating step. In this step, the gradient direction of every boundary point in the image, grad_directs, is calculated using Eq. (2) and Eq. (4). For each boundary point in the image, the R-table of each LED is used to vote for its possible centroids pc in the corresponding accumulator array ar. The current centroid of each LED is the possible centroid that is nearest to the centroid of this LED in the previous frame. After the centroid of each LED has been found in the last frame, its corresponding R-table is used to draw the LED contour.

Algorithm 1 Procedure Overlapping_LED_Separation
Input: series of n frames f[1], f[2], ..., f[n]; positions of the LEDs in the first frame pos
Output: overlapping LEDs are separated
1: Calculate the LED edges in the first frame LED_edges using edge detection and pos
2: Extract the LED contours in the first frame LED_contours using contour extraction and LED_edges
3: Calculate the list of LED centroids LED_centrs from LED_contours
4: Build the list of R-tables R from LED_contours and LED_centrs
5: for i ← 2 to n do
6:   LED_centrs ← Update_LED_Centroids(R, f[i], LED_centrs)
7: end for
8: Draw the LED contours to separate the overlapping LEDs

Algorithm 2 Procedure Update_LED_Centroids
Input: list of R-tables R; current frame f; list of nLED LED centroids in the previous frame centrs_prev
Output: list of LED centroids in the current frame
1: Calculate edges by applying edge detection to f
2: Extract contours by applying contour extraction to edges
3: Calculate grad_directs of every contour point
4: LED_centrs ← []    ▹ initialize the list of current LED centroids
5: for i ← 1 to nLED do
6:   Initialize the accumulator array ar
7:   Vote for the possible centroids pc of the i-th LED in ar using R[i]
8:   Find LED_centr based on centrs_prev[i] and pc
9:   Add LED_centr to LED_centrs
10: end for
11: return LED_centrs
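A rough Python rendering of the core of Algorithm 2, reusing the vote() sketch above: the boundary points and gradient directions are assumed to come from the edge-detection and contour-extraction steps, and the 0.9 vote fraction used to shortlist possible centroids is our assumption (the paper only states that, under overlap, the correct centroid may not be the top-voted cell).

import numpy as np

def update_led_centroids(r_tables, boundary, directions, shape, centrs_prev):
    # Algorithm 2 core: re-locate each LED centroid in the current frame.
    # boundary/directions come from the edge detection and contour extraction
    # steps; shape is the image size (h, w).
    led_centrs = []
    for r_table, (px, py) in zip(r_tables, centrs_prev):
        acc = vote(boundary, directions, r_table, shape)   # accumulator array ar
        # Treat the strongest-voted cells as possible centroids pc ...
        candidates = np.argwhere(acc >= 0.9 * acc.max())   # rows of (y, x)
        # ... and keep the one nearest to the previous centroid.
        d = np.hypot(candidates[:, 1] - px, candidates[:, 0] - py)
        y, x = candidates[np.argmin(d)]
        led_centrs.append((float(x), float(y)))
    return led_centrs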
III. EXPERIMENTS

A. EXPERIMENTAL SETUP

FIGURE 9. Experimental setup in an outdoor environment.

FIGURE 10. Simple illustration of the captured video.

TABLE 2. Experimental parameters.

Parameter                                   Value
Shutter speed                               1/6400 s
ISO speed                                   800
Resolution                                  1920 × 1080
Lens aperture                               11
White balance                               Auto
Focus mode                                  AF-C
LED diameter                                17 cm
LED rated luminous flux                     3200 lm
Initial distance between the 2 LEDs         27 cm
LED frequency                               60 Hz
LED height                                  25 cm
Height difference between camera and LEDs   50 cm to 70 cm
Working distance                            10 m to 70 m, interval 10 m
Level of occlusion                          30% to 60%
Ambient illuminance                         0 lux, 6000 lux, 10000 lux, 30000 lux
Acceptance threshold τ                      0.15 to 0.25, interval 0.025

The experimental parameters are listed in Table 2. Let t denote the exposure time (shutter speed) and f_{LED} denote the frequency of the LED. In the experiments, the exposure time of the camera is always shorter than the LED pulse duration so that the camera can record the signal during the data transmission process [23]:

t < \frac{1}{f_{LED}} \quad (16)

After obtaining each series of frames, the centroid of each LED in the overlapping frame was estimated using the proposed method. The final result of this experiment was that the overlapping LEDs were separated, as shown in Fig. 11.

FIGURE 11. Overlapping LED separation result.

Let G(x_g, y_g) denote the actual centroid (ground-truth centroid), R(x_r, y_r) the estimated centroid (detected centroid), r_{LED} the measured radius of the LED, d(R, G) the position error, and Θ the ratio between d(R, G) and r_{LED}; these parameters are illustrated in Fig. 12.

FIGURE 12. Ground-truth and detected centroid of an LED.

A two-step procedure was used to check whether the LEDs were successfully separated. First, d(R, G) was calculated by Eq. (17). Second, Θ was computed by Eq. (18). An LED was recognized as successfully separated if the value of Θ was below a certain acceptance threshold τ.

d(R, G) = \sqrt{(x_r - x_g)^2 + (y_r - y_g)^2} \quad (17)

Θ = \frac{d(R, G)}{r_{LED}} \quad (18)

To evaluate the performance of the proposed method on the entire dataset, comprising multiple series of frames, we computed the recognition rate γ, that is, the number of series in which both LEDs were successfully separated, N_{succeed}, as a percentage of the total number of series, N_{series}:

γ = \frac{N_{succeed}}{N_{series}} \times 100 \,(\%) \quad (19)

In addition, different acceptance thresholds were used to examine the changes in the recognition rate of the proposed method. The acceptance threshold value is chosen according to the type of application. For example, in the communication process between vehicles, the acceptance threshold must be as small as possible to avoid data loss, whereas if the application is focused on tracking vehicles to determine their positions, the acceptance threshold can be relatively large.

The proposed algorithm was implemented in the Python programming language. All the experiments were conducted on a computer with 8 GB RAM and an Intel Core i3-3230 3.30 GHz processor. The average computation time was about 54 ms per frame, equivalent to roughly 19 fps.

B. RESULTS AND DISCUSSION

The dataset used in the evaluation of the performance of our method contained 400 series of frames recorded with different parameters, as provided in Table 2. Some of the camera exposure settings in Table 2 would be set to auto mode in a real scenario; however, to examine the effect of each individual parameter on the performance of the algorithm, we fixed the ISO, aperture, and shutter speed. The performance was analyzed according to the working distance, level of occlusion, ambient illuminance, and acceptance threshold. When the recognition rate corresponding to one parameter was considered, the other parameters were fixed.
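A sketch of this evaluation criterion (Eqs. (17)-(19)); the data structures and function names are our own assumptions:

import math

def led_separated(detected, truth, r_led, tau):
    # Eqs. (17)-(18): an LED counts as separated if Theta is below tau.
    d = math.hypot(detected[0] - truth[0], detected[1] - truth[1])  # position error, Eq. (17)
    theta = d / r_led                                               # Eq. (18)
    return theta < tau

def recognition_rate(series_results):
    # Eq. (19): series_results holds one bool per series,
    # True when both LEDs of that series were successfully separated.
    n_succeed = sum(1 for ok in series_results if ok)
    return 100.0 * n_succeed / len(series_results)   # percentage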
1) Performance corresponding to different working distances

To show the effect of the distance between the camera (the receiver) and the vehicular LEDs (the transmitter), i.e., the working distance, on the recognition rate of the proposed method, we conducted an experiment in an outdoor environment in which the working distance was varied. The maximum distance was 70 m and the minimum distance was 10 m. The recognition rates corresponding to the different working distances are shown in Fig. 13. In this experimental setup, the ambient illuminance and the level of occlusion were fixed at 30000 lux and 30%, respectively.

FIGURE 13. Recognition rates corresponding to different working distances (y-axis: Recognition rate (%)).

Fig. 13 shows that the recognition rate of the proposed method decreases as the working distance increases. Furthermore, the algorithm's performance degrades slowly in the distance range from 10 to 50 m and at a faster rate when the distance exceeds 50 m. There are two reasons for this performance degradation.

First, as the working distance increases, the number of LED boundary points decreases, in particular in the case of an occluded LED. The degradation of the occluded LED boundary points is caused by the increase in the working distance together with the LED occlusion. When the occlusion level is fixed and the working distance is small, the number of occluded LED boundary points is reduced, but the algorithm can still vote for the correct LED centroid because the number of LED boundary points remains large. However, when the working distance becomes larger, more of the occluded LED's boundary points are lost, and the algorithm is prone to voting for an incorrect LED centroid.

Second, as can be expected, the recognition rate of the algorithm increases when the acceptance threshold is increased: the maximum allowable position error also increases, and therefore the recognition rate is better.

2) Performance corresponding to different occlusion levels

FIGURE 14. Recognition rates corresponding to different occlusion levels (y-axis: Recognition rate (%)).
3) Performance corresponding to different ambient illuminances

The ambient illuminance was varied among 0 lux, 6000 lux, 10000 lux, and 30000 lux. The effect of the different ambient illuminances on the performance of the algorithm is shown in Fig. 15. The figure shows that the recognition rate drops slightly when the ambient illuminance increases.

FIGURE 15. Recognition rates corresponding to different ambient illuminances (y-axis: Recognition rate (%)).

FIGURE 16. The blooming effect corresponding to different ambient illuminances. (a) 0 lux. (b) 30000 lux.

Fig. 16 shows the blooming effect corresponding to the different ambient illuminances. As introduced in Section II-C, the information of every boundary point is the key feature of the R-table. When the ambient illuminance increases, the LED boundary points are prone to changes in position and intensity. This occurs because of the blooming effect in the captured image. The blooming effect is the phenomenon in which the photodiodes of the image sensor receive an excessive number of photons and the redundant photons overflow to the surrounding photodiodes. We can observe that the blooming effect in Fig. 16(a) is weaker than that in Fig. 16(b), because the image sensor received a different amount of light from the environment. Under nighttime conditions, the image sensor of the camera receives light only from the LED, whereas under full daylight conditions, the image sensor receives additional light from the environment, such as sunlight and reflected light. This increases the intensity of the LED boundary points, and the positions of the LED boundary points change because the redundant photons overflow the surrounding LED boundaries. Consequently, the position error increases and the recognition rate decreases.

IV. CONCLUSION

In this paper, a challenging problem in vehicular visible light communication, overlapping vehicular LEDs, was considered. The overlapping LED phenomenon causes many serious difficulties in real traffic systems, such as interruptions in vehicle tracking processes.

REFERENCES
[1] C. Premachandra, T. Yendo, M. P. Tehrani, T. Yamazato, H. Okada, T. Fujii, and M. Tanimoto, "Outdoor road-to-vehicle visible light communication using on-vehicle high-speed camera," International Journal of Intelligent Transportation Systems Research, Springer US, pp. 1-9, 2014.
[2] H. C. N. Premachandra, T. Yendo, T. Yamazato, T. Fujii, M. Tanimoto, and Y. Kimura, "Detection of LED traffic light by image processing for visible light communication system," in Proc. 2009 IEEE Intelligent Vehicles Symposium, pp. 179-184, June 2009.
[3] H. C. N. Premachandra, T. Yendo, M. P. Tehrani, T. Yamazato, H. Okada, T. Fujii, and M. Tanimoto, "High-speed-camera image processing based LED traffic light detection for road-to-vehicle visible light communication," in Proc. 2010 IEEE Intelligent Vehicles Symposium, pp. 793-798, June 2010.
[4] A.-M. Cailean and M. Dimian, "Current challenges for visible light communications usage in vehicle applications: A survey," IEEE Commun. Surveys Tuts., vol. 19, no. 4, pp. 2681-2703, Fourthquarter 2017.
[5] L. Cheng, W. Viriyasitavat, M. Boban, and H.-M. Tsai, "Comparison of radio frequency and visible light propagation channels for vehicular communications," IEEE Access, vol. 6, pp. 2634-2644, 2018.
[6] C. Park, J. Z. Huang, J. X. Ji, and Y. Ding, "Segmentation, inference and classification of partially overlapping nanoparticles," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, pp. 669-681, Mar. 2013.
[7] W.-H. Zhang, X. Jiang, and Y.-M. Liu, "A method for recognizing overlapping elliptical bubbles in bubble image," Pattern Recognit. Lett., vol. 33, pp. 1543-1548, Sep. 2012.
[8] S. Kothari, Q. Chaudry, and M. D. Wang, "Automated cell counting and cluster segmentation using concavity detection and ellipse fitting techniques," in Proc. IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2009, pp. 795-798.
[9] J. Ni, Z. Khan, S. Wang, K. Wang, and S. K. Haider, "Automatic detection and counting of circular shaped overlapped objects using circular Hough transform and contour detection," in Proc. 12th World Congress on Intelligent Control and Automation (WCICA), 2016, pp. 2902-2906.
[10] S. Zafari, T. Eerola, J. Sampo, H. Kälviäinen, and H. Haario, "Segmentation of overlapping elliptical objects in silhouette images," IEEE Trans. Image Process., vol. 24, pp. 5942-5952, Dec. 2015.
[11] K. Abhinav, J. S. Chauhan, and D. Sarkar, "Image segmentation of multi-shaped overlapping objects," in Proc. International Conference on Computer Vision Theory and Applications, 2018.
[12] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, pp. 679-698, Nov. 1986.
[13] S. Singh and R. Singh, "Comparison of various edge detection techniques," in Proc. 2nd International Conference on Computing for Sustainable Global Development (INDIACom), 2015, pp. 393-396.
[14] J. Zhang and J. Hu, "Image segmentation based on 2D Otsu method with histogram analysis," in Proc. International Conference on Computer Science and Software Engineering, vol. 6, 2008, pp. 105-108.
[15] N. Zhu, G. Wang, G. Yang, and W. Dai, "A fast 2D Otsu thresholding algorithm based on improved histogram," in Proc. Chinese Conference on Pattern Recognition, 2009, pp. 1-5.
[16] W. Rong, Z. Li, W. Zhang, and L. Sun, "An improved Canny edge detection algorithm," in Proc. IEEE International Conference on Mechatronics and Automation, 2014, pp. 577-582.
[17] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst., Man, Cybern., vol. SMC-9, no. 1, pp. 62-66, Jan. 1979.
[18] G. Bertasius, J. Shi, and L. Torresani, "DeepEdge: A multi-scale bifurcated deep network for top-down contour detection," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4380-4389.
[19] S. Xie and Z. Tu, "Holistically-nested edge detection," in Proc. IEEE International Conference on Computer Vision, 2015, pp. 1395-1403.
[20] L.-C. Chen, J. T. Barron, G. Papandreou, K. Murphy, and A. L. Yuille, "Semantic image segmentation with task-specific edge detection using CNNs and a discriminatively trained domain transform," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 4545-4554.
[21] S. Suzuki and K. Abe, "Topological structural analysis of digitized binary images by border following," Comput. Vision, Graph. Image Process., vol. 30, pp. 32-46, Apr. 1985.
[22] D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognit., vol. 13, pp. 111-122, 1981.
[23] T.-H. Do and M. Yoo, "Performance analysis of visible light communication using CMOS sensors," Sensors, vol. 16, no. 3, p. 309, 2016.

MYUNGSIK YOO received his B.S. and M.S. degrees in electrical engineering from Korea University, Seoul, in 1989 and 1991, respectively, and his Ph.D. in electrical engineering from the State University of New York at Buffalo, New York, in 2000. He was a senior research engineer at Nokia Research Center, Burlington, Massachusetts. He is currently a professor at the School of Electronic Engineering, Soongsil University, Seoul, Korea. His research interests include visible light communications, optical networks, sensor networks, and SDN/NFV.