A General Line Tracking Algorithm Based on Computer Vision
Abstract: This paper presents a general computer-vision-based algorithm for tracking a line on the ground, using a Raspberry Pi 2 to perform the image processing. The algorithm first processes the image, then recognizes the line by edge detection and a scanning model. The proposed algorithm can be used on a variety of platforms, including intelligent car systems, robot platforms, UAV platforms and other mobile platforms. In this paper, the algorithm is verified on a quadrotor platform and achieves a good line-tracking effect by tuning parameters. A series of experiments shows that the algorithm has great validity, robustness, and generality.
Key Words: Computer Vision, Mobile Platform, Image Processing, Edge Detection, Scanning Model
1 INTRODUCTION

With the development of the computer, computer vision has made tremendous progress in the fields of science and engineering. Computer vision concerns the simulation of biological vision by computers and related equipment, including the collection, processing, analysis and understanding of images. It has the following four advantages. (1) Image information is rich, containing distance, color and shape information of the object, and computer vision is consistent with the human visual angle. (2) Cameras are small, light, low in power consumption and cheap. (3) Image acquisition is fast and real-time. (4) Multiple cameras do not interfere with each other when they work together. At the moment, computer vision is used in various fields. L. H. Xuan [1] utilized a combination of eigenface and an adaptive skin-color model for face detection in video. T. W. Yang [2] realized moving target tracking and measurement with a binocular vision system. B. Benfold [3] studied stable multi-target tracking in real-time surveillance video. X. Zhao [4] utilized vision to track a ground target from a UAV. H. Hattori [5] studied stereo for 2D visual navigation. R. Chapuis [6] realized accurate road following and reconstruction by computer vision [7].

In the field of application, much work is based on line tracking. Mobile robots or unmanned aerial vehicles maintain power transmission lines based on line tracking. The lane-departure system of an intelligent car recognizes and tracks the lane line to realize early warning. Unmanned aerial vehicles complete cruise flights above the highway by tracking the lane line. Pipe robots track the pipeline for inspection. Robots in robot restaurants accomplish their tasks by following a magnetic wire. Many algorithms have been used for line recognition and line tracking. J. K. Dong [8] used the Hough transform to obtain the parameters of a straight-line model. B-Snake spline curves [9] and Catmull-Rom spline curves [10] were used to model the roadsides. Template matching [11], edge detection and threshold segmentation [12], among other methods, were used to recognize lanes. Corner detection and matching were used for visual tracking during power line inspection [13]. As a result, a general line tracking algorithm that can be applied to multiple platforms promotes the development of many industries.

This paper focuses on the realization of a general line-tracking algorithm based on computer vision. The proposed algorithm can be utilized on multiple platforms, including intelligent car systems, robot platforms, UAV platforms and so on. It can be used to track lane lines, pipelines, power transmission lines and so on. It acquires the video stream from a camera and processes the images on a Raspberry Pi 2. Image processing includes converting the color space and filtering. The algorithm combines edge detection and scanning models to recognize and track the line. The algorithm is applied to a quadrotor platform. A series of experiments shows that the algorithm is effective.

The contents are presented as follows. Section 2 introduces the Raspberry Pi 2 and the camera. Section 3 presents the line tracking algorithm in detail. Section 4 shows the experiments and results. Finally, the conclusion is made in Section 5.
*Corresponding author
This work is supported by the Natural Science Foundation of Zhejiang Province under grant LQ16F030005, and the National Natural Science Foundation (NNSF) of China under grant 61375072.
Figure 1: Raspberry Pi and camera

Figure 2: Flowchart of the algorithm (capture a frame image, convert color space, image filtering, line scan, elliptical scan, recognize the line)

2 RASPBERRY PI AND CAMERA

The algorithm utilizes a Raspberry Pi (RPI) 2 to process the images. The RPI 2 is initially available in only one configuration and features a Broadcom BCM2836 SoC with a quad-core ARM Cortex-A7 CPU and a VideoCore IV dual-core GPU, together with 1 GB of RAM; the remaining specifications, such as the MicroSD card slot that replaces the full-size Secure Digital slot, are similar to those of the previous-generation model. The RPI 2 runs at 900 MHz by default and can run a variety of operating systems, including Linux distributions such as Ubuntu and Raspbian, OpenELEC, RISC OS and Windows 10. With the increase of the CPU clock speed and the number of CPU cores compared with the previous generation, image processing based on OpenCV runs well. As a popular single-board computer with rich community resources, the platform has the versatility of the Linux operating system and good processing capability.

The algorithm gets the video stream from a 5-megapixel camera. The viewing angle of the camera is 75.7 degrees and the resolution is 640×480. Since the algorithm is verified on a quadrotor platform in the experiments, the camera is installed between two arms, and the angle between the ground and the camera is about 45 degrees. The Raspberry Pi and camera are shown in Figure 1. The image processing on the RPI can be completed in real time. The experiments visualize the image processing by using a VNC remote desktop.
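As a concrete illustration of this setup, the sketch below (not the authors' code) grabs 640×480 frames from the camera through OpenCV's VideoCapture interface. The device index 0 and the per-frame loop structure are assumptions made for illustration only.

    import cv2

    cap = cv2.VideoCapture(0)                   # camera assumed to be exposed as device 0
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)      # resolution used in the paper
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

    while True:
        ok, frame = cap.read()                  # one BGR frame per loop iteration
        if not ok:
            break
        # ... image processing and scanning models go here (Section 3) ...
        if cv2.waitKey(1) & 0xFF == ord('q'):   # stop on 'q' when a preview window is shown
            break

    cap.release()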
3 LINE TRACKING ALGORITHM

In this section, four parts are presented: image processing, the line scanning model, the elliptical scanning model and the judgement of the line-tracking status. The video stream is captured by the camera, then processed and analyzed. Image processing utilizes OpenCV library functions. Then the algorithm combines edge detection and scanning models to recognize and track the line. Each loop processes one frame of the image, and the frequency is about 60 Hz. The flowchart of the algorithm is shown in Figure 2.

3.1 Image Processing

The images of the video stream are RGB images, where R, G and B refer to red, green and blue. RGB space is the original color space of computer acquisition and display. Because this color space carries much information, which is complicated to deal with and relatively slow to process, it is necessary to convert the RGB color space to the grayscale space. The RGB image is shown in Figure 3 and the gray scale image is shown in Figure 4.

Figure 3: RGB image

Considering the noise in the actual environment, the gray scale image should be filtered. Morphological filtering [15] is utilized in this paper. Erosion and dilation are the two basic morphological operations. When the origin of the structuring element is aligned with a given pixel, all pixels that intersect with the structuring element form the current set of pixels. If erosion is adopted, the given pixel is replaced by the minimum pixel in this set; if dilation is adopted, the given pixel is replaced by the maximum pixel in this set. Erosion can remove some tiny objects and dilation can fill some small holes. Three eroding operations are done first, as shown in Figure 5; then five dilating operations are done, as shown in Figure 6.
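The preprocessing described above maps directly onto standard OpenCV calls. The sketch below is a minimal, hedged example of converting one frame to gray scale and applying three erosions followed by five dilations; the 3×3 rectangular structuring element is an assumption, since the paper does not state the element's shape or size.

    import cv2

    def preprocess(frame_bgr):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)    # color image -> gray scale
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))  # assumed 3x3 element
        eroded = cv2.erode(gray, kernel, iterations=3)         # remove tiny bright objects
        filtered = cv2.dilate(eroded, kernel, iterations=5)    # fill small holes
        return filtered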
Figure 7: Gray values of a row of pixels
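The pages describing the line scanning model itself are not reproduced in this excerpt, so the following sketch is only one plausible reading of Figure 7: walk along a single row of gray values and take the line position as the midpoint between the left and right edges of the dark segment that falls below a gray threshold. The function name, the fixed row index and the use of line_threshold here are illustrative assumptions.

    import numpy as np

    def scan_row_for_line(gray, row, line_threshold=53):
        values = gray[row, :].astype(np.int32)   # gray values of one row of pixels (cf. Figure 7)
        dark = values < line_threshold           # True where the pixel belongs to the dark line
        cols = np.flatnonzero(dark)
        if cols.size == 0:
            return None                          # no line found on this row
        left, right = cols[0], cols[-1]          # outer edges of the dark segment
        return (left + right) // 2               # line position: midpoint between the two edges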
and

    x_r = x_scan + cos(π × (−α)) × a
    y_r = y_scan − sin(π × (−α)) × a,                (2)
where (x_l, y_l) is the left endpoint of the ellipse, (x_r, y_r) is the right endpoint of the ellipse and (x_scan, y_scan) is the line position of the last scan, which is the center coordinate of the elliptical scan. The parameter α is the bending angle of the curve, i.e., the angle between the line connecting two adjacent ellipse centers and the Y axis. The parameter a is the semi-major axis of the ellipse. The parameters are shown in Figure 10.
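A minimal sketch of how Eq. (2) can be evaluated is given below. Only the right endpoint is given by Eq. (2) in this excerpt; Eq. (1) for the left endpoint is not reproduced above, so the mirrored form used here (the same expression with the angle rotated by π) is an assumption.

    import math

    def ellipse_endpoints(x_scan, y_scan, alpha, a):
        # Right endpoint, exactly as written in Eq. (2).
        x_r = x_scan + math.cos(math.pi * (-alpha)) * a
        y_r = y_scan - math.sin(math.pi * (-alpha)) * a
        # Left endpoint (assumed): same expression with the angle rotated by pi.
        x_l = x_scan + math.cos(math.pi * (-alpha) + math.pi) * a
        y_l = y_scan - math.sin(math.pi * (-alpha) + math.pi) * a
        return (x_l, y_l), (x_r, y_r)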
Figure 6: Dilation image

The elliptical scan starts from the left endpoint of the ellipse and stores the gray value of a pixel every 5 degrees until the right endpoint of the ellipse. Parametric equations
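Since the parametric equations referred to above fall outside this excerpt, the following sketch assumes the common rotated-ellipse parametrization centred on the last scan position, with the major axis oriented by the bending angle α. It steps from the left endpoint (180 degrees) to the right endpoint (0 degrees) in 5-degree increments and records the gray value of each sampled pixel; all names and the exact parametrization are illustrative assumptions.

    import math

    def elliptical_scan(gray, x_scan, y_scan, alpha, a, b, step_deg=5):
        samples = []
        phi = math.pi * (-alpha)                  # orientation of the major axis (assumed)
        for deg in range(180, -1, -step_deg):     # left endpoint -> right endpoint, every 5 degrees
            t = math.radians(deg)
            # Rotated-ellipse parametrization (assumed form).
            dx = a * math.cos(t) * math.cos(phi) - b * math.sin(t) * math.sin(phi)
            dy = a * math.cos(t) * math.sin(phi) + b * math.sin(t) * math.cos(phi)
            x = int(round(x_scan + dx))
            y = int(round(y_scan - dy))           # image y axis points down
            if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]:
                samples.append((x, y, int(gray[y, x])))   # store the gray value of the sampled pixel
        return samples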
(Figure: initial line scanning radius is 300; line scanning radius becomes 100; line tracking)
Figure 12: Trackbar window

edge information. Considering the widths of different lines, the semi-major axis and the semi-minor axis need to be adjusted. When we adjust the parameters, we use the trackbar in OpenCV to avoid recompiling. The parameters a = 40 and b = 20 are chosen by the experiments. A threshold value is set to judge whether a line has been recognized, i.e., whether the line exists. Since changes of illumination have an impact on the selected threshold value, we also use the trackbar to adjust the threshold value in order to avoid recompiling. The trackbar is shown in Figure 12. A series of experiments shows that line_threshold = 53 is selected, which achieves a great result of line tracking. The result is shown in Figure 13 and Figure 14.

At the beginning, the circle scanning model was chosen, but the elliptical scan has more advantages than the circle scan when tracking the curve. It is necessary to regulate the major axis and the minor axis when tracking the curve with the elliptical scan model: the semi-major axis becomes the semi-minor axis, and the semi-minor axis becomes the semi-major axis by tuning the parameters. The elliptical scan covers a longer stretch of line and is more efficient. If the circle scan is to obtain the same effect, it needs to increase the radius of the circle, and the whole circle becomes bigger, which is more easily affected by noise. The contrast between the circle scan and the elliptical scan is shown in Figure 15 and Figure 16. A series of experiments shows that the algorithm has great validity, robustness, and generality.

Figure 16: Elliptical scan
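The run-time tuning workflow described above can be sketched with OpenCV trackbars, as below. The window and trackbar names and the slider ranges are assumptions, while the defaults a = 40, b = 20 and line_threshold = 53 follow the text. Reading the sliders once per frame lets the parameters change without recompiling, which is exactly why the trackbar is used here.

    import cv2

    cv2.namedWindow("tuning")
    cv2.createTrackbar("a", "tuning", 40, 200, lambda v: None)               # semi-major axis
    cv2.createTrackbar("b", "tuning", 20, 200, lambda v: None)               # semi-minor axis
    cv2.createTrackbar("line_threshold", "tuning", 53, 255, lambda v: None)  # gray threshold

    def read_parameters():
        a = cv2.getTrackbarPos("a", "tuning")
        b = cv2.getTrackbarPos("b", "tuning")
        line_threshold = cv2.getTrackbarPos("line_threshold", "tuning")
        return a, b, line_threshold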
Figure 14: Curve tracking

5 CONCLUSION

This paper presents a general algorithm based on computer vision to track a line on the ground. A Raspberry Pi 2 is used for the image processing. The algorithm is realized by edge detection and scanning models. The algorithm is general and can be used on many platforms, including intelligent car systems, robot platforms, unmanned aerial vehicle platforms and other mobile platforms. In this paper, the algorithm is verified on a quadrotor platform and achieves a great line-tracking effect by tuning parameters. The image processing runs in real time on the Raspberry Pi 2. A series of experiments shows that the algorithm has great validity, robustness, and generality.

The algorithm mainly tracks a still line at present, and future work is to realize the tracking of a varying line. If this can be achieved, the algorithm will be more general. At present it remains a great challenge, due to the complexity of the environment and the difficulty of obtaining the information of a varying line.
REFERENCES
[1] L. H. Xuan, S. Nitsuwat, Face recognition in video, a combination of eigenface and adaptive skin-color model, Intelligent and Advanced Systems, 2007. ICIAS 2007. International Conference on, 742-747, 2007.
[2] T. W. Yang, K. Zhu, Q. Q. Ruan, J. D. Han, Moving target tracking and measurement with a binocular vision system, Mechatronics and Machine Vision in Practice, 2008. M2VIP 2008. 15th International Conference on, 85-91, 2008.
[3] B. Benfold, I. Reid, Stable multi-target tracking in real-time surveillance video, Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, 3457-3464, 2011.
[4] X. Zhao, Q. Fei, Q. Geng, Vision based ground target tracking for rotor UAV, Control and Automation (ICCA), 2013 10th IEEE International Conference on, 1907-1911, 2013.
[5] H. Hattori, Stereo for 2D visual navigation, Intelligent Vehicles Symposium, 2000. IV 2000. Proceedings of the IEEE, 31-38, 2000.
[6] R. Chapuis, R. Aufrere, F. Chausse, Accurate road following and reconstruction by computer vision, IEEE Transactions on Intelligent Transportation Systems, Vol.3, No.4, 261-270, 2003.
[7] S. Nedevschi, R. Schmidt, T. Graf, D. Frentiu, 3D lane detection system based on stereovision, Intelligent Transportation Systems, 2004. Proceedings. The 7th International IEEE Conference on, 161-166, 2004.
[8] J. K. Dong, J. W. Choi, I. S. Kweon, Finding and tracking road lanes using "line-snakes", Proceedings of the 1996 IEEE Intelligent Vehicles Symposium (Tokyo, Japan), 189-194, 1996.
[9] Y. Wang, E. K. Teoh, D. Shen, Lane detection and tracking using B-Snake, Image & Vision Computing, Vol.22, No.4, 269-280, 2004.
[10] Y. Wang, D. Shen, E. K. Teoh, Lane detection using Catmull-Rom spline, Proceedings of the 1998 IEEE International Conference on Intelligent Vehicles, Vol.1, 51-57, 1998.
[11] K. Kluge, S. Lakshmanan, A deformable-template approach to lane detection, Proceedings of the 1995 IEEE Intelligent Vehicles Symposium, 54-59, 1995.
[12] J. Huang, H. Liang, Z. Wang, Y. Song, Lane marking detection based on adaptive threshold segmentation and road classification, Robotics and Biomimetics (ROBIO), 2014 IEEE International Conference on, 291-296, 2014.
[13] I. Golightly, D. Jones, Corner detection and matching for visual tracking during power line inspection, Image & Vision Computing, Vol.21, No.9, 827-840, 2003.
[14] D. Marr, E. Hildreth, Theory of edge detection, Proceedings of the Royal Society of London, Series B, Vol.207, No.1167, 187-217, 1980.
[15] J. Serra, L. Vincent, An overview of morphological filtering, Circuits, Systems and Signal Processing, Vol.11, No.1, 47-108, 1992.