
Systems Science & Control Engineering

An Open Access Journal

Journal homepage: www.tandfonline.com/journals/tssc20

Camera calibration method based on circular array calibration board

Haifeng Chen, Jinlei Zhuang, Bingyou Liu, Lichao Wang & Luxian Zhang

To cite this article: Haifeng Chen, Jinlei Zhuang, Bingyou Liu, Lichao Wang & Luxian Zhang
(2023) Camera calibration method based on circular array calibration board, Systems Science &
Control Engineering, 11:1, 2233562, DOI: 10.1080/21642583.2023.2233562

To link to this article: https://doi.org/10.1080/21642583.2023.2233562

© 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.

Published online: 15 Jul 2023.


REVIEW ARTICLE

Camera calibration method based on circular array calibration board


Haifeng Chen^a, Jinlei Zhuang^b, Bingyou Liu^a, Lichao Wang^a and Luxian Zhang^c

a School of Electrical Engineering, Anhui Polytechnic University, Wuhu, People's Republic of China; b Wuhu Robot Industry Technology Research Institute of Harbin Institute of Technology, Wuhu, People's Republic of China; c Anhui Anjian Automobile Skylight Technology Co., Ltd, Wuhu, People's Republic of China

ABSTRACT

Camera calibration directly affects the accuracy and stability of the whole measurement system. According to the characteristics of the circular array calibration plate, a camera calibration method based on the circular array calibration plate is proposed in this paper. Firstly, a subpixel edge detection algorithm is used for image preprocessing. Then, according to cross-ratio invariance and geometric constraints, the position of the projection point of the circle's centre is obtained. Finally, a calibration experiment was carried out. Experimental results show that under any illumination conditions, the average reprojection error of the centre coordinates obtained by the improved calibration algorithm is less than 0.12 pixels, which is better than the traditional camera calibration algorithm.

ARTICLE HISTORY
Received 27 February 2023
Accepted 22 April 2023

KEYWORDS
Systems identification and signal processing; Image processing and vision; Theory-framework; Least squares methods; Optimal estimation

1. Introduction

With the continuous expansion of computer vision application fields, the application scenarios of 3D vision measurement are also expanding. Cameras are the most important sensors in machine vision and have a wide range of applications in artificial intelligence (Graves et al., 2009), vision measurement (Huang et al., 2017; Kanakam, 2017; Liu et al., 2017), and robotics technology (Kahn et al., 1990; Yang et al., 2007). As a key technology of visual measurement, camera calibration plays a key role in the fields of machine vision ranging, pose estimation, and three-dimensional (3D) reconstruction (Liu, 2001). The calibration process establishes the transformation relationship from the 2D image coordinate system to the 3D world coordinate system (Sang, 2021). The accuracy of the calibration parameters directly affects the accuracy of vision applications (Chen, 2020; Huang et al., 2020; Li et al., 2020). Presently, domestic and foreign scholars have carried out extensive research on camera calibration technology and have proposed many camera calibration algorithms (Qiu et al., 2000; Zhang et al., 2019).

On the basis of the number of vision sensors, existing camera calibration methods can be divided into monocular vision camera calibration, binocular vision camera calibration, and multi-vision camera calibration. On the basis of the calibration approach, camera calibration methods can usually be divided into three types, namely, calibration methods based on a calibration template (Tsai, 1986), calibration methods based on active vision (Maybank & Faugeras, 1992), and camera self-calibration methods (Zhang & Tang, 2016). The so-called calibration method based on a calibration template uses a calibration object with a known structure and high precision as a spatial reference, establishes the constraint relationships between the camera model parameters through the correspondence between spatial points, and solves these parameters on the basis of an optimization algorithm. Typical representative methods include the direct linear transformation (DLT) (Abdel-Aziz, 2015) and the Tsai two-step method (Tsai, 1986). The calibration method based on a calibration template can achieve relatively high accuracy, but the processing and maintenance of the calibration object are complicated, and setting up the calibration object in harsh and dangerous operating environments is difficult. The camera calibration method based on active vision refers to obtaining multiple images by actively controlling the camera to perform some special motions on a platform that can be precisely controlled, and using the images collected by the camera and the motion parameters of the controllable camera to determine the camera parameters. The representative method of this class is the linear method based on two sets of three orthogonal motions proposed by Massonde (Sang, 1996). Subsequently, Yang

CONTACT Bingyou Liu [email protected]


© 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting of the Accepted
Manuscript in a repository by the author(s) or with their consent.

et al. proposed an improved scheme, that is, based on four groups or five groups of plane orthogonal motion, the camera is linearly calibrated by using the pole information in the image (Li et al., 2000; Yang et al., 1998). This calibration method is simple to compute, can generally be solved linearly, and has good robustness, but the system cost is high, and it is not applicable when the camera motion parameters are unknown or the camera motion cannot be precisely controlled. In recent years, the camera self-calibration method proposed by many scholars can calibrate the camera without a reference object, using only the correspondence between multiple viewpoints and the surrounding environment during the natural movement of the camera. This method has strong flexibility and high applicability, and it is usually used for camera parameter fitting based on the absolute quadric or the dual absolute quadric (Wu & Hu, 2001). However, this method belongs to nonlinear calibration, and the accuracy and robustness of the calibration results are not high.

The camera calibration method of Zhang (Zhang, 1999) requires shooting checkerboard calibration board images from several angles. Because this method is simple and effective, it is often used in camera calibration processes. However, when calibrating with a checkerboard, the accuracy of corner extraction is greatly affected by noise and image quality (Wu et al., 2013), whereas circular features are not sensitive to segmentation thresholds, their recognition rate is relatively high, and circle projections are robust to image noise (Crombrugge et al., 2021), so circular features have good application prospects in vision systems (Rudakova & Monasse, 2014). In the perspective projection transformation, when the circular feature calibration plate is used for calibration, the captured circle is transformed into an ellipse (referred to as a projection ellipse). Presently, the positioning of the projected ellipse has become a research hotspot in machine vision (Zhang et al., 2017), and its positioning accuracy directly affects the camera calibration accuracy and object measurement accuracy. The commonly used algorithms for ellipse extraction at this stage are the Canny detection least-squares ellipse fitting method (Wang et al., 2016), the Hough transform method (Bozomitu et al., 2016; Ito et al., 2011), the gray centre of gravity method (Frosio & Borghese, 2008), the Hu invariant moment method (Hu, 1962), and others. The Hough transform method has good anti-noise performance and strong robustness, but requires a large amount of storage and has high computational complexity and poor pertinence; the gray-scale centroid method requires uniform gray levels, otherwise the error will be large (Zhang et al., 2017); the Canny detection least-squares ellipse fitting method has the smallest error and is fast and accurate (Wang et al., 2016). Zhu et al. (Zhu et al., 2014) used the asymmetric projection of the circle's centre to calculate the centre coordinates of the ellipse after projection, matching the theoretical value and the actual value in the image by least squares, but the internal parameters of the camera cannot be assumed in practical applications, and the scope of application is small. Wu et al. (Wu et al., 2018) proposed a circular mark projection eccentricity compensation algorithm based on three concentric circles, which is calculated according to the eccentricity model of three groups of ellipse-fitted centre coordinates, and the amount of calculation is large. Lu et al. (Lu et al., 2020) proposed a high-precision camera calibration method based on the calculation of the real image coordinates of the centre of the circle, which obtains the true centre of the circle through multiple iterative calculations; however, the projection process is relatively complex and requires a large amount of computation. Xie et al. (Xie & Wang, 2019) proposed a circle centre extraction algorithm based on the geometric features of dual quadratic curves, but the computational complexity is large. Peng et al. (Peng et al., 2022) proposed a plane transformation method, which uses front and back perspective projection to obtain the coordinates of the landmark points, but it requires more manual work when selecting corner points and adjusting parameters. Aiming at the characteristics of the circular calibration plate, this paper proposes a camera calibration method on the basis of the circular array calibration plate. First, the sub-pixel edge detection algorithm is used to detect the edges of the preprocessed image; then, according to the principle of finding the centre of the ellipse from the geometric constraints of the plane, an equation system is established to solve for the position of the projection point of the circle's centre; finally, Zhang's plane-based calibration method is used for camera calibration according to the coordinates of the circle's centre. The experimental results show that the combination of the ellipse contour extraction algorithm and Zhang's camera calibration method can obtain higher camera calibration accuracy.

2. Image acquisition and preprocessing

This article uses the thousand-eyed wolf 30 W pixel camera of Fuhuang Junda Hi-Tech to take pictures and collect images. First, the camera is fixed on the stand and kept still; then, the calibration plate with a 7 × 7 dot array is moved and rotated, and 14 pictures of the calibration plate are taken with different poses and directions, 12 of which are shown in Figure 1. First, the collected image is converted into a grayscale image; then, the grayscale image is edge-preserved and denoised through guided filtering; next, the image sharpening algorithm is used to

Figure 1. Twelve calibration board images with different poses and directions.

Figure 2. Preprocessed images.

highlight the circular markers in the image, and then the Canny operator is used to detect the image edges. Finally, the sub-pixel edge detection algorithm is used for edge detection, and the results are shown in Figure 2, which are (a) grayscale image, (b) denoised image, (c) enhanced image, and (d) extracted image.

In the perspective projection transformation, when the circular feature calibration plate is used for calibration, the captured circle is transformed into an ellipse because it is in a non-parallel state with the camera. In this paper, the sub-pixel edge detection algorithm is used to detect the edges of the collected image. Then, the circularity, eccentricity, and convexity conditions are applied according to the characteristics of the obtained closed edges to extract the ellipse contours that meet the requirements.

(1) Roundness

The roundness feature reflects the degree to which the figure is close to a perfect circle, and its range is (0, 1). The circularity C can be expressed as:

C = 4πS / P²   (1)

Among them, S and P represent the area and perimeter of the shape, respectively. When the circularity C is 1, the shape is a perfect circle, and when the circularity C approaches 0, the shape is a gradually elongated polygon. Therefore, the closer the feature points to be extracted in this paper are to a circle, the closer the value of circularity C is to 1.

(2) Eccentricity

Eccentricity is the degree to which a conic deviates from an ideal circle. The eccentricity of an ideal circle is 0, so the eccentricity represents how different the curve is from the circle: the greater the eccentricity, the less camber the curve has. A conic with an eccentricity between 0 and 1 is an ellipse, and one with an eccentricity equal to 1 is a parabola. Given that directly calculating the eccentricity of a figure is complicated, the concept of image moments can be used to calculate the inertia rate of the figure, and then the eccentricity can be calculated from the inertia rate. The relationship between the eccentricity E and the inertia rate I is:

E² + I² = 1   (2)
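The two criteria above can be sketched as small helper functions. The sketch below is a minimal pure-Python illustration; the function names are mine, not the paper's:

```python
import math

def circularity(area: float, perimeter: float) -> float:
    """Roundness C = 4*pi*S / P^2 of a closed contour (Eq. 1)."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def eccentricity_from_inertia(inertia_rate: float) -> float:
    """Eccentricity E from the inertia rate I via E^2 + I^2 = 1 (Eq. 2)."""
    return math.sqrt(max(0.0, 1.0 - inertia_rate ** 2))

# Sanity check on a circle of radius 10: S = pi*r^2, P = 2*pi*r,
# and the inertia rate of a circle is 1 (so E = 0).
r = 10.0
print(round(circularity(math.pi * r * r, 2.0 * math.pi * r), 6))  # -> 1.0
print(eccentricity_from_inertia(1.0))                             # -> 0.0
```

In the screening step described in the text, only closed contours whose C is close to 1 and whose inertia rate is close to 1 (i.e. E close to 0) would be kept as candidate projected ellipses.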



In the formula, the eccentricity of the circle is equal to 0, and the inertia rate is equal to 1. The closer the inertia rate is to 1, the closer the figure is to a circle.

(3) Convexity

For a figure F and any two points A and B located in the figure, if all the points on the line segment AB are always located in the figure, the figure is called a convex figure; otherwise, the figure is called a concave figure. Convexity is the degree to which a polygon is close to a convex figure, and the convexity V is defined as:

V = S/H   (3)

In the formula, H represents the convex hull area corresponding to the figure. The closer the convexity is to 1, the closer the figure is to a circle.

3. Precise positioning algorithm of the projection point of the circle's centre

3.1. Projection model of the space circle on the camera imaging plane

Let the position of the circle's centre be the origin of the world coordinate system, let the Z axis of the world coordinate system be perpendicular to the plane where the circular pattern is located, and let the plane where the pattern is located be the XOY plane of the world coordinate system. Suppose the radius of the circle is r, so the equation of the circle in the plane of the world coordinate system is x² + y² − r² = 0; written in matrix form, the matrix C is expressed as:

[x y 1] · [[1, 0, 0], [0, 1, 0], [0, 0, −r²]] · [x y 1]ᵀ = [x y 1] · C · [x y 1]ᵀ = 0   (4)

The general equation of an ellipse can be expressed as ax² + cxy + by² + dx + ey + f = 0, which can be organized into matrix form and represented by a matrix E:

E = [[a, c/2, d/2], [c/2, b, e/2], [d/2, e/2, f]]   (5)

3.2. Projective geometry theory

If the camera distortion is ignored, the camera imaging is a projective transformation of the calibration plate plane, and the properties of the projective transformation can be used to calculate the transformation relationship between the imaging plane and the calibration plate plane. Among them, the cross ratio is a basic invariant in projective geometry. If four collinear points A, B, C, and D exist in the plane, their cross ratio can be written as:

(A, B; C, D) = (A, B, C) / (A, B, D) = (||AC|| / ||BC||) / (||AD|| / ||BD||)   (6)

On the basis of the projective transformation diagram, the four collinear points a, b, c, and d on the straight line L1 are mapped to the four collinear points A, B, C, and D on the straight line L2, then:

(A, B; C, D) = (a, b; c, d) = λ1/λ2   (7)

A schematic of the transformation is shown in Figure 3.

Figure 3. Schematic diagram of projective transformation.

Particularly, when (A, B; C, D) = −1, the cross ratio is called the harmonic ratio, and the four collinear points A, B, C, and D are called harmonic conjugates. If point C is the midpoint of AB and point D among the four collinear points satisfying the harmonic ratio is the point at infinity in the direction of the straight line where points A and B are located, then the four collinear points A, B, C, and D are harmonically conjugated. Thus, C and D can be expressed as:

C = A + λ1·B,  D = A + λ2·B   (8)

If the four collinear points A, B, C, and D are harmonically conjugated, then λ1/λ2 = −1, that is, λ1 = −λ2.

The equation of the line at infinity in the plane where the circle of the world coordinate system is located is l∞ = [0, 0, 1]ᵀ, and the equation of the projected line on the imaging plane is l∞′ = H⁻ᵀ·l∞ = H⁻ᵀ·[0, 0, 1]ᵀ. The projection point of the centre O on the imaging plane is O′ = H·O. According to Formula (4), the product of the
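The cross-ratio invariance of Equations (6)-(7) can be checked numerically. The sketch below applies an arbitrary example homography (chosen by me, not taken from the paper) to four collinear points and confirms that the cross ratio is unchanged:

```python
import math

def apply_h(H, p):
    """Apply a 3x3 homography to a homogeneous 2D point (x, y, w)."""
    return tuple(sum(H[i][j] * p[j] for j in range(3)) for i in range(3))

def dehom(p):
    """Convert a homogeneous point to Euclidean coordinates."""
    return (p[0] / p[2], p[1] / p[2])

def cross_ratio(a, b, c, d):
    """(A, B; C, D) = (AC/BC) / (AD/BD) for collinear Euclidean points (Eq. 6)."""
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

# Four collinear points on the line y = x, in homogeneous coordinates.
pts = [(t, t, 1.0) for t in (0.0, 1.0, 3.0, 7.0)]
H = [[2.0, 0.1, 3.0],
     [0.3, 1.5, -1.0],
     [0.01, 0.02, 1.0]]  # an invertible projective transformation (example only)

before = cross_ratio(*[dehom(p) for p in pts])
after = cross_ratio(*[dehom(apply_h(H, p)) for p in pts])
print(abs(before - after) < 1e-9)  # -> True: the cross ratio is preserved
```

This is exactly the invariance that lets the method reason about chord midpoints and points at infinity through their images under the unknown homography.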

projected ellipse matrix E′ and the projection point O′ of the circle's centre is:

E′O′ = H⁻ᵀ·C·H⁻¹·H·O = H⁻ᵀ·C·O = H⁻ᵀ · [[1, 0, 0], [0, 1, 0], [0, 0, −r²]] · [0, 0, 1]ᵀ   (9)

3.3. Geometric constraints

An inscribed triangle of the circle is drawn on the perfect-circle image of the calibration plate. Its intersection points with the circle are points A, B and C. The tangents of the circle are drawn at the three points A, B and C, and their pairwise intersection points are T1, T2, and T3, respectively; M1, M2, and M3 are the midpoints of the chords AC, BC, and AB. According to geometric knowledge, the line connecting the intersection point of two tangents and the midpoint of the corresponding chord passes through the circle's centre. Under a projective transformation, this property does not change. In addition, the projection of a straight line remains a straight line, and a tangent of the circle remains tangent to the projected ellipse in the imaging plane (Wu & Hu, 2001). Therefore, in the projection ellipse, the lines connecting the tangent intersection points T1′, T2′, T3′ and the corresponding projected midpoints M1′, M2′, M3′ must intersect at a point O′, which is the projection point of the circle's centre on the imaging plane. A schematic is shown in Figure 4.

Figure 4. Geometric constraint relationship.

3.4. Calculation of the projection point of the real centre of the circle

After the grayscale processing of the image captured by the camera, the Otsu method is first used to obtain the binary segmentation threshold, and the image is binarized. Given that the pattern on the calibration plate is usually a black circle on a white background, for convenience, the binary image is inverted, and the pixels belonging to the projected ellipse area are marked as 1. After the connected domain is extracted, the boundary tracking algorithm is executed to obtain the boundary point set of the projected ellipse in the image. On the basis of the extracted boundary point set, the ellipse equation is fitted by direct least-squares ellipse fitting, and the general equation of the ellipse, ax² + by² + cxy + dx + ey + f = 0, is obtained.

Plane geometry demonstrates that for the general equation of ellipse E, ax² + by² + cxy + dx + ey + f = 0 (a, b, c, d, e, f are known quantities), any point Pi = (xi, yi) on ellipse E can be taken and the tangent of ellipse E drawn through this point; when the tangent slope exists, it is

ki = −(2axi + cyi + d) / (cxi + 2byi + e)   (i = 1, 2, 3, ...)

The coordinates of the feature points A′, B′ and C′ can be extracted from the collected images, and their homogeneous coordinates may be set as (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3), respectively. According to plane geometry, the equations of the tangent lines passing through the points A′, B′ and C′ can be expressed in point-slope form as:

l1: y − y1 = k1(x − x1)
l2: y − y2 = k2(x − x2)
l3: y − y3 = k3(x − x3)   (10)

From this, the coordinates of the intersection points T1′, T2′, and T3′ of the tangent lines can be obtained.

Let the homogeneous coordinates of M1′, M2′, M3′ be:

M1′: A′ + α·C′ = (x1 + αx3, y1 + αy3, 1 + α)
M2′: C′ + β·B′ = (x3 + βx2, y3 + βy2, 1 + β)
M3′: B′ + γ·A′ = (x2 + γx1, y2 + γy1, 1 + γ)   (11)

where α, β and γ are unknown quantities.

Let the points at infinity of the straight lines A′C′, C′B′ and B′A′ be V1′, V2′ and V3′, respectively. According to Formula (8), we have:

V1′: A′ − α·C′ = (x1 − αx3, y1 − αy3, 1 − α)
V2′: C′ − β·B′ = (x3 − βx2, y3 − βy2, 1 − β)
V3′: B′ − γ·A′ = (x2 − γx1, y2 − γy1, 1 − γ)   (12)

In the projective transformation, the projection points of the points at infinity in the directions of the straight lines where the three chords are located are collinear, then:

V2′ · [V1′ × V3′] = 0   (13)

Let the straight lines T1′M1′, T2′M2′, and T3′M3′ be the straight lines L1, L2, and L3, respectively. Because the coefficient vector of the line connecting two points in the projective plane is the cross product of the homogeneous coordinate vectors of the two points, and the homogeneous coordinates of the intersection of two straight lines are the cross product of the coefficient vectors of the
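The tangent-slope expression ki above can be sanity-checked on a circle, where the tangent at a point (x, y) has slope −x/y. This minimal sketch uses the paper's coefficient ordering (a on x², b on y², c on xy); the function name is mine:

```python
def ellipse_tangent_slope(coeffs, x, y):
    """Slope of the tangent to a*x^2 + b*y^2 + c*x*y + d*x + e*y + f = 0
    at a point (x, y) on the curve, obtained by implicit differentiation."""
    a, b, c, d, e, f = coeffs
    return -(2 * a * x + c * y + d) / (c * x + 2 * b * y + e)

# Circle x^2 + y^2 - 25 = 0 as a sanity check: at (3, 4) the tangent
# slope is -x/y = -3/4.
coeffs = (1.0, 1.0, 0.0, 0.0, 0.0, -25.0)
print(ellipse_tangent_slope(coeffs, 3.0, 4.0))  # -> -0.75
```

The slope is undefined where the denominator cxi + 2byi + e vanishes (a vertical tangent), which is why the text qualifies "when the tangent slope exists".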

straight line equations, so the coefficient vectors of the obtained through the transformation between these four
straight lines, L1 , L2 , and L3 are. coordinate systems.
⎧ ⎡ ⎤ ⎡ ⎤⎡ ⎤
 
⎨L1 = T1 × M1 u ku 0 u0 x
L = T2 × M2 
 (14) ⎣ v ⎦ = ⎣ 0 kv v 0 ⎦ ⎣ y ⎦ (18)
⎩ 2
L3 = T3  × M3  1 0 0 0 1
⎡ ⎤ ⎡ ⎤⎡ ⎤
x fx 0 0 Xc
From the geometric relationship, the three lines, L1 , L2 , ⎣y ⎦ = ⎣ 0 fy 0⎦ ⎣Yc ⎦ (19)
and L3 , have the same point, then there are:
1 0 0 1 Zc
L2 • [L1 × L3 ] = 0 (15) In the formula, H1 is the internal parameter matrix, fx , fy ,
u0 , v0 are the parameters of the internal parameter matrix,
The intersection of L2 and L3 is the centre projection point
and ku and kv are the scale factors on the X-axis and Y-axis,
O’, O = L2 × L3 . The projection of the infinite straight line
respectively.
of the plane where the circle is located on the imaging
In summary, the conversion relationship between the
plane is a finite straight line. Formula (5) demonstrates
world coordinate system and the pixel coordinate system
that the projection equation is:
can be expressed as
l∞  = EO = E • [L2 × L3 ] (16) ⎡ ⎤
⎡ ⎤ ⎡ ⎤ Xw
u fx 0 u0 0 ⎢Yw ⎥
R T ⎢ ⎥
Given that all infinity points are on this infinity straight Zc ⎣v ⎦ = ⎣ 0 fy v0 0⎦ T
0 1 ⎣Zw ⎦
line, the point V3 ’ is on l∞  , then we have: 1 0 0 1 0
1
⎡ ⎤ ⎡ ⎤
l∞  • V3  = 0 (17) Xw Xw
⎢Yw ⎥ ⎢Yw ⎥
Formulas (13), (15), and (17) are combined to obtain a = H1 H2 ⎢ ⎥ ⎢ ⎥
⎣Zw ⎦ = H ⎣Zw ⎦ (20)
nonlinear equation system containing three unknowns α, 1 1
β and γ , and this nonlinear equation system is solved to
obtain the values of α, β and γ , and α, β and γ into the In the formula: Zc is the Z-axis coordinate value in the
expressions of the straight lines L1 , L2 , and L3 to obtain the camera coordinate system, and R and T represent the
coordinates of the projection point of the circle’s centre. rigid body transformation. H is a homography matrix,
which contains the camera internal parameter matrix
H1 and the external parameter matrix H2 . The internal
4. Camera calibration parameter matrix is only related to the camera’s own
Camera calibration is to determine the correspondence attributes and internal structure; the external parameter
between a certain point in space and its position in a matrix is completely determined by the mapping rela-
2D image by calculating the camera’s internal parameters tionship between the world coordinate system and the
and external coordinate system position parameters. The camera coordinate system.  
calibration method used in this paper is Zhang’s plane Assuming that the ideal pixel coordinate is u v ,
calibration method. In the imaging geometry of the cam- because the camera is distorted during
 the
 shooting pro-
 
era, the linear imaging model of the camera describes cess, the real pixel coordinate is u v . Nonlinear dis-
the imaging process based on four coordinate systems, tortion is mainly divided into radial distortion, tangential
which are the world coordinate system, the camera coor- distortion, and centrifugal distortion, whereas Zhang’s
dinate system, the image physical coordinate system, and
the image pixel. Coordinate System. Let the homoge- Table 2. Average projection error of image.
neous coordinate of the world coordinate system of a
 T Circular Circular
point P in space is Xw Yw Zw 1 , the coordinate mean Checkerboard mean Checkerboard
of the rigid body transformed into the camera coordi- Image error Mean error Image error Mean error
 T number (pixels) (pixels) number (pixels) (pixels)
nate system is Xc Yc Zc , the coordinate of the per-
1 0.017408 0.136221 2 0.013896 0.14586
spective projection into the image imaging coordinate
 T 3 0.015124 0.124842 4 0.018201 0.129999
system is x y z , and the corresponding pixel 5 0.013081 0.133154 6 0.013234 0.124160
 coor-
 7 0.012524 0.124391 8 0.016687 0.133391
dinate of the final projection into the image is u v . 9 0.017558 0.120564 10 0.014313 0.150970
The transformation relationship between the world coor- 11 0.015641 0.140890 12 0.013821 0.124783
13 0.016153 0.152299 14 0.013468 0.148469
dinate system and the image coordinate system can be
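The cross-product machinery of Equations (14)-(15) can be illustrated on an undistorted circle, where the chord midpoints are known exactly. In this sketch the tangent intersections for the circle x² + y² = 25 are computed by hand (the paper instead solves the nonlinear system in α, β, γ on the projected ellipse); it recovers the centre as the meet of two tangent-midpoint lines:

```python
def cross(u, v):
    """Cross product of 3-vectors: joins two points into a line, or
    meets two lines in a point, in homogeneous coordinates (Eq. 14)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# Circle x^2 + y^2 = 25 with A = (5, 0), B = (-3, 4), C = (3, 4).
# Tangents: at A: x = 5; at B: -3x + 4y = 25; at C: 3x + 4y = 25.
T_ac, M_ac = (5.0, 2.5, 1.0), (4.0, 2.0, 1.0)   # tangents at A, C meet; midpoint of AC
T_ab, M_ab = (5.0, 10.0, 1.0), (1.0, 2.0, 1.0)  # tangents at A, B meet; midpoint of AB
L1 = cross(T_ac, M_ac)            # line through tangent intersection and chord midpoint
L2 = cross(T_ab, M_ab)
Ox, Oy, Ow = cross(L1, L2)        # their meet, cf. O' = L2 x L3 after Eq. (15)
print((Ox / Ow, Oy / Ow))         # -> (0.0, 0.0), the circle's centre
```

Under the homography, the same construction applied to T1′, T2′, T3′ and M1′, M2′, M3′ yields the projection point of the circle's centre rather than the ellipse's geometric centre.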

Table 1. Camera internal and external parameters (uneven illumination).

               fx (mm)       fy (mm)       K1         K2         K3         P1        P2
Checkerboard   4871.170691   4882.933311   −0.458001  25.274913  25.274913  0.003443  −386.538727
Circular array 4896.563707   4893.757057   0.188941   −4.354822  0.006908   0.009040  0.000000

plane calibration method only considers radial distortion. To improve the accuracy of camera calibration, this paper not only obtains the radial distortion coefficients k1, k2, and k3 during calibration, but also obtains the two tangential distortion coefficients p1 and p2. The nonlinear distortion model can be expressed as:

x′ = x + x·[k1(x² + y²) + k2(x² + y²)² + k3(x² + y²)³] + [p1(3x² + y²) + 2p2·xy]
y′ = y + y·[k1(x² + y²) + k2(x² + y²)² + k3(x² + y²)³] + [p2(x² + 3y²) + 2p1·xy]   (21)

In the formula, [x, y] and [x′, y′] represent the coordinates of [u, v] and [u′, v′] in the image coordinate system, respectively. The radial distortion coefficients k1, k2, k3 and the tangential distortion coefficients p1, p2 can be obtained by using the least-squares method.

5. Calibration results and analysis

The experimental platform used in this article is mainly a Lenovo computer with a 64-bit Windows 10 system and an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz. The resolution of the camera used in the experiment is 1920 × 1080 pixels. The checkerboard calibration board is a 12 × 9 grid, with each grid of size 14 mm × 14 mm. The circular array calibration plate is a 7 × 7 dot array with a dot diameter of 5.0 mm and a centre distance of 10.0 mm. First, the 14 images collected by the camera are preprocessed, and VS2019 and OpenCV4.5 are used to read the processed images into the written C++ program. By extracting the centres of all circles in the 14 images and comparing them with the 3D space coordinates of the centres on the calibration board, the corresponding values and calibration results are obtained. With the other experimental conditions unchanged, 14 chessboard calibration images are also collected for calibration under each of the three lighting conditions. The experimental results are as follows. The camera internal parameter matrices are:

(1) Uneven illumination (Tables 1 and 2):

H1, circular array = [[4896.563707, 0, 774.879906], [0, 4893.757057, 549.748346], [0, 0, 1]]

H1, checkerboard = [[4871.170691, 0, 950.355523], [0, 4882.933311, 440.277167], [0, 0, 1]]

(2) Strong illumination (Tables 3 and 4):

H1, circular array = [[4996.563707, 0, 704.562029], [0, 4993.757057, 588.646041], [0, 0, 1]]

Table 3. Camera internal and external parameters (strong illumination).

               fx (mm)       fy (mm)       K1         K2         K3         P1        P2
Checkerboard   4995.705161   4986.210450   −0.221745  21.891224  −0.003153  0.008866  −529.347014
Circular array 4996.563707   4993.757057   0.188941   −4.354822  0.006908   0.009040  0.000000

Table 4. Average reprojection error of each image (strong illumination).

Image number   Circular mean error (pixels)   Checkerboard mean error (pixels)
1              0.014948                       0.177772
2              0.014980                       0.150344
3              0.017036                       0.153539
4              0.016154                       0.154179
5              0.013988                       0.158444
6              0.013338                       0.153158
7              0.013955                       0.151452
8              0.015043                       0.141982
9              0.014111                       0.140072
10             0.013256                       0.165126
11             0.014871                       0.149872
12             0.013886                       0.162394
13             0.013313                       0.160741
14             0.013468                       0.173434
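The distortion model of Equation (21) maps an ideal normalized coordinate to its distorted position. The sketch below is a direct transcription; the y-equation's tangential term uses the symmetric reconstruction of the garbled source, and the coefficient values in the example are illustrative only, not the calibrated values from the tables:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion model of Eq. (21) to
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x + x * radial + (p1 * (3 * x * x + y * y) + 2 * p2 * x * y)
    yd = y + y * radial + (p2 * (x * x + 3 * y * y) + 2 * p1 * x * y)
    return xd, yd

# With all coefficients zero the point is unchanged; with a small positive
# k1 a point away from the optical centre moves outward along its radius.
print(distort(0.1, 0.2, 0, 0, 0, 0, 0))        # -> (0.1, 0.2)
xd, yd = distort(0.1, 0.2, 0.05, 0, 0, 0, 0)
print(xd > 0.1 and yd > 0.2)                    # -> True
```

In the calibration itself this mapping runs in the residual of the least-squares fit: k1, k2, k3, p1, p2 are chosen so that distorted model points best match the extracted centre coordinates.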

Table 5. Camera internal and external parameters.


fx (mm) fy (mm) K1 K2 K3 P1 P2
Checkerboard 4995.705161 4986.210450 −0.137448 0.864256 0.000626 −0.002753 −1246.029283
Circular array 4999.048416 4994.348380 0.211150 −3.114353 0.011188 −0.003241 0.000000

H1checkerboard = ⎡ 4995.705161  0            996.259745 ⎤
                 ⎢ 0            4986.210450  553.611204 ⎥
                 ⎣ 0            0            1          ⎦

(3) Weak illumination (Table 5 and Table 6):

H1circular array = ⎡ 4999.048416  0            629.362277 ⎤
                   ⎢ 0            4994.348380  584.669272 ⎥
                   ⎣ 0            0            1          ⎦

Table 6. Average projection error of image.

Image    Circular mean   Checkerboard mean   Image    Circular mean   Checkerboard mean
number   error (pixels)  error (pixels)      number   error (pixels)  error (pixels)
1        0.014367        0.059496            2        0.018606        0.077081
3        0.015429        0.071327            4        0.017141        0.055403
5        0.015045        0.052088            6        0.014020        0.048004
7        0.014336        0.049508            8        0.018846        0.061008
9        0.016542        0.067049            10       0.015396        0.061885
11       0.017989        0.047330            12       0.019875        0.069282
13       0.018618        0.055810            14       0.021017        0.06104

Table 7. Average error of 14 images under three types of illumination.

Illumination conditions   Circular mean error (pixels)   Checkerboard mean error (pixels)
Uneven illumination       0.108721                       0.134999
Strong illumination       0.101625                       0.156608
Weak illumination         0.119881                       0.159737
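The per-image means in Table 6 can be aggregated and compared directly. A sketch using the values transcribed from the table:

```python
# Compare the per-image mean reprojection errors (pixels) from Table 6.
from statistics import mean

circular = [0.014367, 0.018606, 0.015429, 0.017141, 0.015045, 0.014020,
            0.014336, 0.018846, 0.016542, 0.015396, 0.017989, 0.019875,
            0.018618, 0.021017]
checkerboard = [0.059496, 0.077081, 0.071327, 0.055403, 0.052088, 0.048004,
                0.049508, 0.061008, 0.067049, 0.061885, 0.047330, 0.069282,
                0.055810, 0.06104]

assert len(circular) == len(checkerboard) == 14
# The circular-array target gives the smaller error on every single image.
better_per_image = all(c < b for c, b in zip(circular, checkerboard))
print(f"circular mean:     {mean(circular):.6f}")
print(f"checkerboard mean: {mean(checkerboard):.6f}")
print("circular better on every image:", better_per_image)
```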
H1checkerboard = ⎡ 4995.705161  0            895.582206 ⎤
                 ⎢ 0            4986.210450  573.315153 ⎥
                 ⎣ 0            0            1          ⎦

The mean is calculated as follows:

\[
E_d = \frac{1}{n}\sum_{i=1}^{n}\sqrt{\left(x'_{pi} - x_{pi}\right)^{2} + \left(y'_{pi} - y_{pi}\right)^{2}} \tag{22}
\]
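Equation (22) translates directly into code; the function name and sample points below are illustrative, not from the paper:

```python
# Mean reprojection error of Eq. (22): the average Euclidean distance
# between extracted pixel coordinates (x, y) and reprojected ones (x', y').
from math import hypot

def mean_reprojection_error(extracted, reprojected):
    """extracted, reprojected: equal-length lists of (x, y) pixel tuples."""
    n = len(extracted)
    return sum(hypot(xp - x, yp - y)
               for (x, y), (xp, yp) in zip(extracted, reprojected)) / n

# Synthetic check: shifting every reprojected point by (0.3, 0.4) pixels
# yields a mean error of 0.5 pixels (a 3-4-5 triangle per point).
pts = [(10.0, 20.0), (30.5, 44.2), (100.0, 7.5)]
shifted = [(x + 0.3, y + 0.4) for x, y in pts]
err = mean_reprojection_error(pts, shifted)
```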
The difference between fx and fy is not significant, and the radial and tangential distortion coefficients are not large. To test the accuracy and feasibility of the calibration results, this paper adopts re-projection error calculation. First, the high-precision pixel coordinates of the circle centres are obtained with the algorithm proposed in this paper; then, using the correspondence between the pixel coordinate system and the world coordinate system described above, the coordinates are reprojected to obtain the corresponding reprojected pixel coordinate values. The circle coordinates extracted in this paper are sorted logically from left to right and from top to bottom, the coordinate system is established according to the right-handed convention, and the centre of the first circle in the upper left corner of the calibration board is defined as the coordinate origin, with z = 0, so that the coordinate value of each circle's centre in the world coordinate system is obtained. Assuming that the original pixel coordinates of a point P in space are (x_pi, y_pi), the re-projected pixel coordinates obtained by the above calibration method are (x'_pi, y'_pi).

The smaller the total residual mean, the more accurate the established mathematical model and the higher the calibration accuracy. The average error values of the 14 circular calibration images and the 14 checkerboard calibration images under the three illumination conditions are shown in Table 7. From the average data, the average residual error of the 14 circular array images is smaller than that of the 14 chessboard images under any lighting condition, which verifies the feasibility and effectiveness of the circular-array-based camera calibration method proposed in this article.

6. Conclusion

Aiming at the characteristics of the circular calibration plate, this paper proposes a camera calibration method based on the circular array calibration plate. By extracting the ellipse contours and the centres of the characteristic circles on the calibration plate in 14 different poses, the coordinates of the circle centres are obtained with high precision, and the internal and external parameters are solved together with small radial and tangential distortion coefficients. The experimental results show that
SYSTEMS SCIENCE & CONTROL ENGINEERING: AN OPEN ACCESS JOURNAL 9
the average re-projection error of the circle centre coordinates obtained by the improved calibration algorithm is less than 0.12 pixels under any illumination condition. Compared with calibration using a checkerboard as the calibration object and Zhang's plane-based camera calibration method, the calibration results are more accurate and meet the requirements of practical calibration applications.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Funding

This work was supported in part by the Academic support project for top-notch talents in disciplines (majors) in Colleges and universities (No. gxbjzd2021065), in part by the Anhui Polytechnic University – Jiujiang District Industrial collaborative innovation special fund project "Research on high precision collaborative control system of multi degree of freedom robot" (No. 2021cyxtb2), and in part by the Wuhu key R & D project "R & D and application of key technologies of robot intelligent detection system based on 3D vision" (No. 2021yf32).

Data availability statement

The data used to support the findings of this study are available from the corresponding author upon request.

References

Abdel-Aziz, Y. L. (2015). Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry. Photogrammetric Engineering & Remote Sensing. https://doi.org/10.14358/PERS.81.2.103
Bozomitu, R. G., Pasarica, A., & Cehan, V. (2016). Implementation of eye-tracking system based on circular Hough transform algorithm. Proc. 5th Conf. on E-Health and Bioe. (EHB).
Chen, W. (2020). Research on calibration method of industrial robot vision system based on halcon. Electronic Measurement Technology, 43(21). https://doi.org/10.19651/j.cnki.emt.2004852
Crombrugge, I. V., Penne, R., & Vanlanduit, S. (2021). Extrinsic camera calibration with line-laser projection. Sensors-Basel, 21(4). https://doi.org/10.3390/s21041091
Frosio, I., & Borghese, N. A. (2008). Real-time accurate circle fitting with occlusions. Pattern Recognition, 41(3), 1041–1055. https://doi.org/10.1016/j.patcog.2007.08.011
Graves, A., Liwicki, M., Fernandez, S., Bertolanmi, R., Bunke, H., & Schmidhuber, J. (2009). A novel connectionist system for unconstrained hand-writing recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5), 855–868. https://doi.org/10.1109/TPAMI.2008.137
Hu, M. K. (1962). Visual pattern recognition by moment invariants. IRE Transactions on Information Theory, 8(2), 179–187. https://doi.org/10.1109/TIT.1962.1057692
Huang, X., Zhang, F., Li, H., & Liu, X. (2017). An online technology for measuring icing shape on conductor based on vision and force sensors. IEEE Transactions on Instrumentation and Measurement, 66(32), 3180–3189. https://doi.org/10.1109/TIM.2017.2746438
Huang, Z., Su, Y., Wang, Q., & Zhang, C. G. (2020). Research on external parameter calibration method of two-dimensional lidar and visible light camera. Journal of Instrumentation. https://doi.org/10.19650/j.cnki.cjsi.J2006756
Ito, Y., Ogawa, K., & Nakano, K. (2011). Fast ellipse detection algorithm using Hough transform on the GPU. Proc. Int. Conf. on Net. & Comput. (ICNC).
Kahn, P., Kichen, L., & Riseman, E. M. (1990). A fast line finder for vision-guided robot navigation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(11), 1098–1102. https://doi.org/10.1109/34.61710
Kanakam, T. M. (2017). Adaptable ring for vision-based measurements and shape analysis. IEEE Transactions on Instrumentation and Measurement, 66(4), 746–756. https://doi.org/10.1109/TIM.2017.2650738
Li, H., Wu, F., & Hu, Z. (2000). A new self-calibration method for linear camera. Journal of Computer Science, 23(11), 9. https://doi.org/10.19650/j.cnki.cjsi.J2005999
Li, M., Ma, K., Xu, Y., & Wang, F. (2020). Research on error compensation method of morphology measurement based on monocular structured light. Journal of Instrumentation, 41(5). https://doi.org/10.19650/j.cnki.cjsi.J2005999
Liu, K., Wang, H., Chen, H., Qu, E., Tian, Y., & Sun, H. (2017). Steel surface defect detection using a new Haar-Weibull-variance model in unsupervised manner. IEEE Transactions on Instrumentation and Measurement, 66(10), 2585–2596. https://doi.org/10.1109/TIM.2017.2712838
Liu, Y. (2001). Accurate calibration of standard plenoptic cameras using corner features from raw images. Optics Express, 21(1), 158–169. https://doi.org/10.1364/OE.405168
Lu, X., Xue, J., & Zhang, Q. (2020). A high-precision camera calibration method based on the calculation of real image coordinates at the center of a circle. China Laser, 47(3). https://kns.cnki.net/kcms/detail/31.1339.
Maybank, S. J., & Faugeras, O. D. (1992). A theory of self-calibration of a moving camera. International Journal of Computer Vision, 8(2), 123–151. https://doi.org/10.1007/BF00127171
Peng, Y., Guo, J., Yu, C., & Ke, B. (2022). High precision camera calibration method based on plane transformation. Journal of Beijing University of Aeronautics and Astronautics. https://doi.org/10.13700/j.bh.1001-5965.2021.0015
Qiu, M., Ma, S., & Li, Y. (2000). Overview of camera calibration in computer vision. Journal of Automotive Technology, 26(1). https://doi.org/10.16383/j.aas.2000.01.006
Rudakova, V., & Monasse, P. (2014). Camera matrix calibration using circular control points and separate correction of the geometric distortion field. Proc. 11th Conf. on Comput. & Rob. Vis. (CRV).
Sang, D. M. (1996). A self-calibration technique for active vision systems. IEEE Transactions on Robotics and Automation, 12(1), 114–120. https://doi.org/10.1109/70.481755
Sang, J. (2021). Constrained multiple planar reconstruction for automatic camera calibration of intelligent vehicles. Sensors, 21(14), 4643. https://doi.org/10.3390/s21144643
Tsai, R. Y. (1986). An efficient and accurate camera calibration technique for 3D machine vision. Proc. of Comp. Vis. Patt. Recog. (CVPR).
Wang, W., Zhao, J., Hao, Y., & Zhang, X. L. (2016). Research on nonlinear least squares ellipse fitting based on Levenberg-Marquardt algorithm. Proc. 13th Int. Conf. on Ubiquit. Rob. and Amb. Intel. (URAI).
Wu, F., & Hu, Z. (2001). Linear theory and algorithm of camera self-calibration. Journal of Computer Science, 24(11), 1121–1135. https://doi.org/10.3321/j.issn:0254-4164.2001.11.001
Wu, F., Liu, J., & Ren, X. (2013). Calibration method of panoramic camera for deep space exploration based on circular marker points. Journal of Optics, 11. https://doi.org/CNKI:SUN:GXXB.0.2013-11-023
Wu, J., Jiang, L., Wang, A., & Yu, P. (2018). Offset compensation algorithm for circular sign projection. Chinese Journal of Image and Graphics, 23(10). https://doi.org/CNKI:SUN:ZGTB.0.2018-10-010
Xie, Z., & Wang, X. (2019). Center extraction of planar calibration target marker points. Optics and Precision Engineering, 27(2), 440–449. https://doi.org/10.3788/OPE.20192702.0440
Yang, C., Wang, W., & Hu, Z. (1998). A self-calibration method of camera internal parameters based on active vision. Journal of Computer Science, 21(5). https://doi.org/10.3321/j.issn:0254-4164.1998.05.006
Yang, J., Zhang, D., Yang, J. Y., & Niu, B. (2007). Globally maximizing, locally minimizing: Unsupervised discriminant projection with applications to face and palm biometrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4), 650–664. https://doi.org/10.1109/TPAMI.2007.1008
Zhang, M., Yang, Y., & Qin, R. (2017). Dynamic adaptive genetic algorithm camera calibration based on circular array template. Proc. 36th Chinese Control Conf. (CCC).
Zhang, M. Y., Zhang, Q., & Duan, H. (2019). Pose self-calibration method of monocular camera based on motion trajectory. Journal of Huazhong University of Science and Technology: Natural Science Edition. https://doi.org/10.13245/j.hust.190212
Zhang, Z. (1999). Flexible camera calibration by viewing a plane from unknown orientations. Proc. IEEE 7th Int. Conf. on Comput. Vis. (ICCV).
Zhang, Z., & Tang, Q. (2016). Camera self-calibration based on multiple view images. Proc. Nicograph International, Hangzhou, China.
Zhu, W., Cao, L., & Mei, B. (2014). Accurate calibration of industrial camera using asymmetric projection of circle center. Optical Precision Engineering, 22(8). https://doi.org/10.3788/OPE.20142208.2267