Boudine 2016
PII: S1047-3203(16)30061-X
DOI: http://dx.doi.org/10.1016/j.jvcir.2016.05.003
Reference: YJVCI 1754
Please cite this article as: B. Boudine, S. Kramm, N.E. Akkad, A. Bensrhair, A. Saaidi, K. Satori, A Flexible
Technique based on Fundamental matrix for Camera Self-Calibration with Variable Intrinsic Parameters from two
views, J. Vis. Commun. Image R. (2016), doi: http://dx.doi.org/10.1016/j.jvcir.2016.05.003
A Flexible Technique based on Fundamental matrix for Camera Self-Calibration
with Variable Intrinsic Parameters from two views
Bouchra Boudine a,b,∗, Sebastien Kramm b, Nabil El Akkad a, Abdelaziz Bensrhair b, Abderahim Saaidi a,c, Khalid Satori a
a LIIAN, Department of Mathematics and Computer Science, Faculty of Science Dhar El Mahraz, Sidi Mohamed Ben Abdellah University, P.O. Box 1796, Fez, Morocco
b LITIS, INSA Rouen, Avenue de l'Université, 76801 Saint-Étienne-du-Rouvray, Rouen, France
c LSI, Department of Mathematics, Physics and Informatics, Polydisciplinary Faculty, Sidi Mohamed Ben Abdellah University, Taza, Morocco
Abstract
We propose a new self-calibration technique for cameras with varying intrinsic parameters that can be computed using only the information contained in the images themselves. The method does not need any a priori knowledge of the orientations of the camera and is based on the use of a 3D scene containing an unknown isosceles right triangle. The importance of our approach resides in minimizing the constraints on the self-calibration system and in using only two images to estimate these parameters. The method formulates a nonlinear cost function from the relationship between two pairs of matched points, which are the projections of two vertices of an isosceles right triangle, and from the relationship between the images of the absolute conic. Solving this function enables us to estimate the cameras' intrinsic parameters. The algorithm is implemented and validated on several sets of synthetic and real image data.
Keywords: Self-calibration. Variable intrinsic parameters. Fundamental matrix. Absolute conic. Unknown 3D scene.
Hartley (1994b) presented a method of self-calibration using a rotating camera. When there is no translation of the camera between views, there is an image-to-image projective mapping which can be computed from point matches. Luong & Viéville (1994) showed that a camera can be calibrated using two or more rotated views of an affine structure. A stratification approach was proposed by Pollefeys & Van Gool (1997) and refined recently by Chandraker et al. (2010). Triggs (1998) developed a practical algorithm for the self-calibration of a moving projective camera from m ≥ 5 views of a planar scene. This technique is based on constraints involving the absolute quadric and the scene-plane to image-plane collineations. Sturm & Maybank (1999) and Zhang (2000) independently proposed to use planar patterns in 3D space to calibrate cameras. Triggs (1997) introduced the absolute (dual) conic as a numerical device for formulating the auto-calibration problem. These early works are, however, quite sensitive to noise and unreliable (Bougnoux, 1998; Hartley & Zisserman, 2003). Li & Hu (2002) proposed a linear camera self-calibration technique based on projective reconstruction that can compute the five intrinsic parameters; in this method, the camera undergoes at least a pure translation and two arbitrary motions. A numerical solution using interval analysis was presented in Fusiello et al. (2004), but this method is quite time consuming. Other techniques use different constraints, such as camera motion (Hartley, 1994b; Stein, 1995; Horaud & Csurka, 1998; De Agapito et al., 1999), plane constraints (Triggs, 1998; Sturm & Maybank, 1999; Knight et al., 2003), concentric circles (Kim et al., 2005) and others (Pollefeys et al., 1999; Liebowitz & Zisserman, 1999). Malis & Cipolla (2000b) presented a technique to self-calibrate cameras with varying focal length; this method does not need any a priori knowledge of the metric structure of the plane. Moreover, Malis & Cipolla (2002, 2000a) extended their work to allow the recovery of the varying focal length, and proposed to impose the constraints between collineations using a different iterative method. Jiang & Liu (2012) presented a self-calibration method for varying internal camera parameters based on quasi-affine reconstruction; this method requires neither special scene constraints nor special motion information to achieve the goal of self-calibration. El Akkad et al. (2014) proposed a self-calibration method for a camera with varying intrinsic parameters from a sequence of images of an unknown 3D object. Applicative domains are numerous, including robotics or autonomous driving, using efficient video encoding techniques (Yan et al., 2014b,a).

3. Camera Self-calibration tools

3.1. The Camera Model

A camera is modeled by the usual pinhole model, see Fig. 1. The mapping is a perspective projection from 3D projective space P3 (the world) to 2D projective space P2 (the image plane). The world point denoted by B = (X, Y, Z, 1)T is mapped to the image point denoted by b = (x, y, 1)T by a 3 × 4 matrix Mi in homogeneous coordinates:

b = Mi B    (1)

Mi is called the camera projection matrix, which mixes both intrinsic and extrinsic parameters. The matrix Mi may be decomposed as:

Mi = Ki (Ri  ti)    (2)

where (Ri ti) are the extrinsic parameters: Ri is the rotation and ti the translation which relate the world coordinate system to the camera coordinate system, and Ki is a 3 × 3 calibration matrix containing the intrinsic parameters of the camera, given by:

     ( fi   τi     u0i )
Ki = ( 0    εi fi  v0i )    (3)
     ( 0    0      1   )

where fi represents the focal length of the camera for the view i, εi is the scale factor, (u0i, v0i) are the coordinates of the principal point in the image i, and τi is the image skew.

3.2. Matching and Control Points

In this work, we used the Harris algorithm (Harris & Stephens, 1988) to detect interest points. The matching of interest points between each pair of images is performed with the ZNCC (Zero-mean Normalized Cross-Correlation) measure (Lhuillier & Quan, 2002; Di Stefano et al., 2005), and the false matches are then eliminated using the RANSAC algorithm (Fischler & Bolles, 1981; Wang & Zheng, 2008; Raguram et al., 2008).
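The pinhole model of Eqs. (1)–(3) can be sketched numerically. The sketch below is illustrative only: the focal length, principal point and pose are made-up values, not parameters from the paper's experiments.

```python
import numpy as np

def intrinsic_matrix(f, eps, tau, u0, v0):
    """Build the calibration matrix K of Eq. (3)."""
    return np.array([[f,  tau,     u0],
                     [0., eps * f, v0],
                     [0., 0.,      1.]])

# Illustrative intrinsic parameters (not from the paper).
K = intrinsic_matrix(f=1000.0, eps=1.0, tau=0.0, u0=320.0, v0=240.0)

# Extrinsic parameters: identity rotation, camera shifted along Z.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

# Projection matrix M = K (R | t), Eq. (2).
M = K @ np.hstack([R, t[:, None]])

# Project the homogeneous world point B = (X, Y, Z, 1)^T, Eq. (1).
B = np.array([1.0, 2.0, 10.0, 1.0])
b = M @ B
b = b / b[2]          # normalize so that b = (x, y, 1)^T
print(b)              # pixel coordinates of the projected point
```

Note that the image point is only defined up to scale; dividing by the third homogeneous coordinate yields the pixel coordinates.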
Fij ∼ [ej]∧ Hij    (4)

with the skew-symmetric matrix [ej]∧ defined as:

         (  0   −e3   e2 )
[ej]∧ =  (  e3   0   −e1 )
         ( −e2   e1    0 )

Once Fij is estimated, the epipole ej = (e1, e2, e3)T of the image j can be estimated from the formula FijT ej = 0.

3.4. Absolute Conic and its Image

One of the most important concepts for self-calibration is the absolute conic (AC) and its image (IAC). The absolute conic is an imaginary conic of the plane at infinity, Ω = I3. The projection of the absolute conic in the image plane i (ωi, the Image of the Absolute Conic (IAC)) is directly related to the camera internal matrix Ki by expression 5.

Figure 2: An isosceles right triangle.

4. Method description

4.1. Vision system

We consider two points B and C of the 3D scene and their projections (b1i, b2i) and (b1j, b2j) in the planes of the images i and j, respectively (Fig. 3). To estimate the projection matrices, we denote by Π the plane of this scene, which contains the three vertices A, B and C of the isosceles right triangle ABC, and we consider two references: an Affine reference ℜ(A, Xa, Ya, Za) and a Euclidean reference ℜ(A, Xe, Ye, Ze), both fixed on the planar scene and associated with the isosceles right triangle ABC, such that Za ⊥ Π and Ze ⊥ Π.

Table 1 presents the coordinates of the vertices A, B and C of an isosceles right triangle in the two references, Affine and Euclidean. In practice there is no automatic method to determine the triangle in the images, but we can always determine two of the vertices of the triangle ABC.

Points | Affine plane     | Euclidean plane
  C    | B1 = (0, 1, 1)T  | B′1 = (r, r, 1)T
  B    | B2 = (1, 0, 1)T  | B′2 = (2r, 0, 1)T
  A    | B3 = (0, 0, 1)T  | B′3 = (0, 0, 1)T

Table 1: Homogeneous coordinates of the vertices of the Triangle in the two references, Affine and Euclidean.

Figure 3: Projection of the triangle ABC in the two images i and j by the two matrices Pi and Pj.

4.2. Computing the Projection Matrices using the Fundamental matrix and the Epipoles

Considering two homographies Hi and Hj that project the plane Π into the images i and j, the projections of the two points B and C are given by the following expressions:

bni ∼ Hi B′n    for n = 1, 2    (7)

bnj ∼ Hj B′n    for n = 1, 2    (8)

where bni and bnj, respectively, represent the points in the images i and j that are the projections of the two vertices B and C of the 3D scene, and Hm, m = i, j, represent the homography matrices (Fig. 3) that project the plane of the scene into the images i and j (Saaidi et al., 2009). The homography matrices are expressed as follows:

            ( 1  0         )
Hm = Km Rm  ( 0  1  RmT tm )    for m = i, j    (9)
            ( 0  0         )

with Km the matrix of intrinsic parameters, Rm the rotation matrix, and tm the translation vector of the displacement between the frame of the 3D plane and the camera frame. Expressions (7) and (8) can be written as follows:

bnm ∼ Hm Q Bn    for m = i, j and n = 1, 2    (10)

where B1 = (0, 1, 1)T and B2 = (1, 0, 1)T, with:

    ( 2r  r  0 )
Q = (  0  r  0 )
    (  0  0  1 )

Q is the passage matrix between the Affine and Euclidean frames of the vertices of the triangle, such that:

B′n = Q Bn    (13)

From the equation (10), we can write the projection matrices as:

Pm ∼ Hm Q    for m = i, j    (14)

This leads to:

Pi ∼ Hi Q  and  Pj ∼ Hj Q    (15)

The relation (10) can then be written as:

bnm ∼ Pm Bn    for m = i, j and n = 1, 2    (21)

This leads to:

bni ∼ Pi Bn    for n = 1, 2    (22)

bnj ∼ Pj Bn    for n = 1, 2    (23)

Furthermore, from the equations (20) and (23), we can write:

bnj ∼ Hij Pi Bn    for n = 1, 2    (24)
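The epipole used in this construction satisfies FijT ej = 0 (Section 3.3), i.e. ej is the null vector of FijT. This can be sketched numerically with an SVD. The sketch below is illustrative only: the fundamental matrix is synthesized from a made-up epipole via the form Fij ∼ [ej]∧ Hij of Eq. (4), and the epipole is then recovered from F alone.

```python
import numpy as np

def skew(e):
    """Skew-symmetric matrix [e]_x such that [e]_x v = e x v."""
    return np.array([[0.,   -e[2],  e[1]],
                     [e[2],  0.,   -e[0]],
                     [-e[1], e[0],  0.]])

def epipole_from_F(F):
    """Epipole e_j solving F^T e_j = 0: the left singular vector of F
    associated with its (numerically) zero singular value."""
    U, S, Vt = np.linalg.svd(F)
    e = U[:, -1]
    return e / e[-1]   # normalize; assumes a finite epipole (e3 != 0)

# Synthetic example (illustrative numbers): F = [e]_x H as in Eq. (4).
rng = np.random.default_rng(0)
e_true = np.array([120.0, 80.0, 1.0])
H = rng.standard_normal((3, 3))
F = skew(e_true) @ H

print(epipole_from_F(F))   # recovers e_true up to scale
```

Since [ej]∧ has rank 2, F also has rank 2, so exactly one singular value vanishes and the corresponding left singular vector is the epipole.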
4.3. Camera self-calibration equations

In this part, we present our method for calculating the self-calibration equations based on the idea of an isosceles right triangle ABC. The main idea is to demonstrate the relationship between the images of the two absolute conics (ωi and ωj) and two points of the 3D scene and their projections (b1i, b2i) and (b1j, b2j) in the planes of the images i and j, respectively. A nonlinear cost function is then formulated from these relationships.

From the previous expression (31), we can write:

• If m = i:

             ( GT G       GT RiT ti )
PiT ωi Pi ∼  (                      )    (33)
             ( tiT Ri G   tiT ti    )

• If m = j:

             ( GT G       GT RjT tj )
PjT ωj Pj ∼  (                      )    (34)
             ( tjT Rj G   tjT tj    )

where ωm = (Km KmT)−1, for m = i, j, is the image of the absolute conic.

From the expression (22), we can write:

αni bni = Pi Bn    for n = 1, 2    (35)

where Bn = (s1, s2, 1)T.

The same holds for Pj; we can write:

Pj ∼ b′nj Vn−1    for n = 1, 2    (38)

From the relations (33) and (37), we have for m = i the following expression:
                              ( GT G       GT RiT ti )
(b′ni Vn−1)T ωi (b′ni Vn−1) ∼ (                      )    for n = 1, 2    (39)
                              ( tiT Ri G   tiT ti    )

Similarly, from expressions (34) and (38), we have for m = j the following expression:

                              ( GT G       GT RjT tj )
(b′nj Vn−1)T ωj (b′nj Vn−1) ∼ (                      )    for n = 1, 2    (40)
                              ( tjT Rj G   tjT tj    )

By replacing n in (39) by 1 and 2, we have:

                              ( GT G       GT RiT ti )
(b′1i V1−1)T ωi (b′1i V1−1) ∼ (                      )    (41)
                              ( tiT Ri G   tiT ti    )

And:

                              ( GT G       GT RiT ti )
(b′2i V2−1)T ωi (b′2i V2−1) ∼ (                      )    (42)
                              ( tiT Ri G   tiT ti    )

Similarly, (40) gives:

                              ( GT G       GT RjT tj )
(b′1j V1−1)T ωj (b′1j V1−1) ∼ (                      )    (43)
                              ( tjT Rj G   tjT tj    )

And:

                              ( GT G       GT RjT tj )
(b′2j V2−1)T ωj (b′2j V2−1) ∼ (                      )    (44)
                              ( tjT Rj G   tjT tj    )

According to the expressions (41) and (42), we deduce that the matrices are identical:

(b′1i V1−1)T ωi (b′1i V1−1)  ∼  (b′2i V2−1)T ωi (b′2i V2−1)    (45)
           [E]                              [D]

Let us note by E the matrix corresponding to (b′1i V1−1)T ωi (b′1i V1−1):

     ( e11i  e12i  e13i )
E =  ( e12i  e22i  e23i )
     ( e13i  e23i  e33i )

Similarly, let us note by D the matrix corresponding to (b′2i V2−1)T ωi (b′2i V2−1):

     ( d11i  d12i  d13i )
D =  ( d12i  d22i  d23i )
     ( d13i  d23i  d33i )

Therefore, we deduce from (45) that:

e11i / e12i = d11i / d12i  ⇔  e11i d12i − d11i e12i = 0
e12i / e13i = d12i / d13i  ⇔  e12i d13i − d12i e13i = 0
e13i / e23i = d13i / d23i  ⇔  e13i d23i − d13i e23i = 0      (46)
e23i / e33i = d23i / d33i  ⇔  e23i d33i − d23i e33i = 0

According to the expressions (43) and (44), we deduce that the matrices are identical:

(b′1j V1−1)T ωj (b′1j V1−1)  ∼  (b′2j V2−1)T ωj (b′2j V2−1)    (47)
           [L]                              [N]

Let us note by L the matrix corresponding to (b′1j V1−1)T ωj (b′1j V1−1):

     ( l11j  l12j  l13j )
L =  ( l12j  l22j  l23j )
     ( l13j  l23j  l33j )

Similarly, let us note by N the matrix corresponding to (b′2j V2−1)T ωj (b′2j V2−1):

     ( n11j  n12j  n13j )
N =  ( n12j  n22j  n23j )
     ( n13j  n23j  n33j )

Therefore, we deduce that:
l11j / l12j = n11j / n12j  ⇔  l11j n12j − n11j l12j = 0
l12j / l13j = n12j / n13j  ⇔  l12j n13j − n12j l13j = 0
l13j / l23j = n13j / n23j  ⇔  l13j n23j − n13j l23j = 0      (48)
l23j / l33j = n23j / n33j  ⇔  l23j n33j − n23j l33j = 0

Expressions (41) and (43) show that the first lines and columns of the matrices E and L are identical, which gives:

(b′1i V1−1)T ωi (b′1i V1−1)  ∼  (b′1j V1−1)T ωj (b′1j V1−1)    (49)
           [E]                              [L]

Therefore, we deduce that:

e11i / e12i = l11j / l12j  ⇔  e11i l12j − l11j e12i = 0    (50)

Expressions (42) and (44) show that the first lines and columns of the matrices D and N are identical, which gives:

(b′2i V2−1)T ωi (b′2i V2−1)  ∼  (b′2j V2−1)T ωj (b′2j V2−1)    (51)
           [D]                              [N]

The previous expression gives:

d11i / d12i = n11j / n12j  ⇔  d11i n12j − n11j d12i = 0    (52)

From the expressions (46), (48), (50) and (52), we obtain the following system of ten equations:

e11i d12i − d11i e12i = 0
e12i d13i − d12i e13i = 0
e13i d23i − d13i e23i = 0
e23i d33i − d23i e33i = 0
l11j n12j − n11j l12j = 0
l12j n13j − n12j l13j = 0      (53)
l13j n23j − n13j l23j = 0
l23j n33j − n23j l33j = 0
e11i l12j − l11j e12i = 0
d11i n12j − n11j d12i = 0

This system of equations holds ten unknowns (five for ωi and five for ωj). It is nonlinear; therefore, to solve it, we minimize the following nonlinear cost function using the Levenberg-Marquardt algorithm (Moré, 1978):

min_{ωi, ωj}  Σ_{i=1..k−1} Σ_{j=i+1..k} ( µi2 + γi2 + ηi2 + φi2 + µj2 + γj2 + ηj2 + φj2 + ζij2 + ψij2 )    (54)

with k the number of images.

The parameters of this cost function are given by the following expressions:

µi = e11i d12i − d11i e12i
γi = e12i d13i − d12i e13i
ηi = e13i d23i − d13i e23i
φi = e23i d33i − d23i e33i
µj = l11j n12j − n11j l12j
γj = l12j n13j − n12j l13j      (55)
ηj = l13j n23j − n13j l23j
φj = l23j n33j − n23j l33j
ζij = e11i l12j − l11j e12i
ψij = d11i n12j − n11j d12i

The minimization of the cost function (54) is carried out using the Levenberg-Marquardt algorithm (Moré, 1978). The optimization algorithm is nonlinear, so it requires an initialization step. Therefore, we propose the following conditions on the self-calibration system to determine the initial solution:

• The pixels are square, therefore εi = εj = 1 and τi = τj = 0.

• The principal point is located at the center of the image, so u0i, v0i, u0j and v0j are known.

• The focal lengths (fi, fj) are estimated by replacing the parameters (εi, τi, u0i, v0i, εj, τj, u0j, v0j) in the expression (53) and solving this system of equations.
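A basic Levenberg-Marquardt loop of the kind used here can be sketched as follows. The paper's actual residuals (55) require the matrices E, D, L and N built from the image data, so the toy residual below (fitting an exponential curve) is only a stand-in used to illustrate the damped Gauss-Newton iteration; all numbers are made up.

```python
import numpy as np

def levenberg_marquardt(residual, x0, tol=1e-10, max_iter=100):
    """Minimize 0.5 * ||residual(x)||^2 with a basic Levenberg-Marquardt
    loop (numeric forward-difference Jacobian, Marquardt diagonal damping)."""
    x = np.asarray(x0, dtype=float)
    lam = 1e-3
    for _ in range(max_iter):
        r = residual(x)
        # Forward-difference Jacobian of the residual vector.
        eps = 1e-7
        J = np.empty((r.size, x.size))
        for k in range(x.size):
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual(x + dx) - r) / eps
        A = J.T @ J
        g = J.T @ r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x = x + step
            lam *= 0.5       # success: trust the Gauss-Newton step more
        else:
            lam *= 10.0      # failure: fall back toward gradient descent
        if np.linalg.norm(g) < tol:
            break
    return x

# Toy residual (illustrative only): fit f(t) = a * exp(b * t) to samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y

p_hat = levenberg_marquardt(res, x0=[1.0, 0.0])
print(p_hat)    # close to (2.0, -1.5)
```

The damping parameter lam interpolates between a Gauss-Newton step (small lam) and a gradient-descent step (large lam), which is what makes the method robust to a rough initialization such as the one proposed above.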
4.4. Steps of the Self-calibration algorithm

This section summarizes the different steps described above that are needed to compute the varying intrinsic parameters of the cameras.

5. The experimental framework

…stability, convergence and accuracy. Experiments have been done both on synthetic and real data.

Figure 4: Relative errors of u0 according to the number of images.

Figure 6: Relative errors of f according to the number of images.

Figure 7: Relative errors of τ according to the number of images.

Figure 8: Relative errors of ε according to the number of images.

Figure 9: The relative errors of u0, v0, f, ε and τ according to Gaussian noises.
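The matching step in these experiments relies on the ZNCC measure introduced in Section 3.2. A minimal sketch of the score between two equal-size patches is given below; the patches are synthetic and the numbers are made up for illustration.

```python
import numpy as np

def zncc(p, q):
    """Zero-mean Normalized Cross-Correlation between two equal-size patches.

    Returns a score in [-1, 1]; a score of 1 means the patches match up to
    an affine change of brightness (gain and offset)."""
    p = p.astype(float).ravel()
    q = q.astype(float).ravel()
    p = p - p.mean()
    q = q - q.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(q)
    if denom == 0:
        return 0.0     # flat patch: correlation undefined, return 0
    return float(p @ q / denom)

# Synthetic patches: q is p under a gain/offset change, r is unrelated.
rng = np.random.default_rng(1)
p = rng.random((7, 7))
q = 2.0 * p + 10.0     # same texture, different brightness
r = rng.random((7, 7))

print(zncc(p, q))      # ≈ 1.0: invariant to gain and offset
print(zncc(p, r))      # small magnitude for unrelated patches
```

This photometric invariance is why ZNCC is preferred over plain cross-correlation for matching interest points across views taken under different exposures.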
• Camera with free movement

• An unknown 3D scene

5.2. Real Data

Two images of 480 × 480 pixels of an unknown three-dimensional scene were taken by a CCD camera with varying intrinsic parameters from different views, to confirm the robustness of the approach presented in this paper (Fig. 10). This section deals with the experimental results of the different algorithms (Harris, ZNCC, RANSAC, Levenberg-Marquardt, etc.), implemented using the Java object-oriented programming language.

The interest points are produced with the Harris algorithm, as shown in Fig. 11, and the matches between these two images are shown in Fig. 12. The result obtained contains false matches. To eliminate them, we regularized all the matches with the RANSAC algorithm (Fischler & Bolles, 1981; Wang & Zheng, 2008; Raguram et al., 2008). Figures 13 and 14 below show the regularized matches (outlier and inlier points) obtained by this algorithm.

Figure 11: The interest points are shown respectively in red and blue in the images i and j.

Figure 12: The matches are shown and numbered in green in the two images.

The projection of the points of the 3D scene in the two images allows us to estimate the geometric entities (the Fundamental matrix and the projection matrices). Afterwards, the solution of a nonlinear system of equations (formula (51)) provides the parameters of the image of the absolute conic and finally the intrinsic parameters of the camera. Table 2 below presents the intrinsic parameters estimated by the approach proposed in this paper.

Methods                        f      u0    v0    τ      ε
Proposed method   Image 1    1175    232   252   0.27   0.78
                  Image 2    1182    235   242   0.31   0.82
Jiang             Image 1    1168    241   262   0.38   0.93
                  Image 2    1173    249   253   0.27   0.90

Table 2: The results of the intrinsic camera parameters estimated by the two methods.

The experimental results of the intrinsic camera parameters presented in Table 2, obtained by the proposed approach on the real data and on synthetic data, show that they are a little different from those obtained by Jiang & Liu (2012). Furthermore, the present approach is also compared with two other methods, those of Zhang (2000) and Triggs (1998). The results of the present approach are almost identical to those obtained by Zhang (2000), and a little different from those obtained by Triggs (1998). Therefore, this approach provides robust performance, and it is very close to the other well-established methods. In addition, the present approach gives satisfactory results in terms of accuracy and stability, and our algorithms converge rapidly to the optimal solution, which is seen very clearly in the simplicity of the calculations.

6. Conclusion

In this paper, we presented a theoretical and practical study of a new method for the self-calibration of cameras characterized by varying intrinsic parameters, using an unknown three-dimensional scene. Furthermore, we have minimized the constraints on the self-calibration system (the use of any camera with varying intrinsic parameters, a 3D scene, and only two points of the scene and two images to achieve the camera's self-calibration procedure). This method is based on the demonstration of a relationship between two points in the 3D scene and their projections in the planes of the images, and on the relationships between the images of the absolute conic (IAC) for each pair of images. These relationships are formulated as a nonlinear cost function, whose resolution in two steps, initialization and optimization, allows the estimation of the intrinsic parameters of the cameras used. The simulations performed and the experimental results show the performance of the method proposed in this article.

References

Beardsley, P., Torr, P., & Zisserman, A. (1996). 3D model acquisition from extended image sequences. In European Conference on Computer Vision (ECCV'96) (pp. 683–695).

Boufama, B., & Mohr, R. (1995). Epipole and fundamental matrix estimation using virtual parallax. In International Conference on Computer Vision (ICCV'95) (pp. 1030–1036).

Bougnoux, S. (1998). From projective to euclidean space under any practical situation, a criticism of self-calibration. In Sixth International Conference on Computer Vision (ICCV'98) (pp. 790–796).

Chandraker, M., Agarwal, S., Kriegman, D., & Belongie, S. (2010). Globally optimal algorithms for stratified autocalibration. International Journal of Computer Vision (IJCV), 90, 236–254.
Figure 14: Inlier points detected by RANSAC in the two images.

Figure 15: The results of matches detected by Harris, ZNCC and RANSAC.

De Agapito, L., Hartley, R., Hayman, E. et al. (1999). Linear self-calibration of a rotating and zooming camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), volume 1.

De Agapito, L., Hayman, E., & Reid, I. (2002). Self-calibration of rotating and zooming cameras. International Journal of Computer Vision (IJCV), 47, 287–287.

De Agapito, L., Hayman, E., & Reid, I. D. (1998). Self-calibration of a rotating camera with varying intrinsic parameters. In The British Machine Vision Conference (BMVC'98) (pp. 1–10).

Deriche, R., Zhang, Z., Luong, Q.-T., & Faugeras, O. (1994). Robust recovery of the epipolar geometry for an uncalibrated stereo rig. In European Conference on Computer Vision (ECCV'94) (pp. 567–576).

Di Stefano, L., Mattoccia, S., & Tombari, F. (2005). ZNCC-based template matching using bounded partial correlation. Pattern Recognition Letters, 26, 2129–2134.

El Akkad, N., Merras, M., Saaidi, A., & Satori, K. (2014). Camera self-calibration with varying intrinsic parameters by an unknown three-dimensional scene. The Visual Computer, 30, 519–530.
Faugeras, O. D. (1992). What can be seen in three dimensions with an uncalibrated stereo rig? In European Conference on Computer Vision (ECCV'92) (pp. 563–578).

Faugeras, O. D., Luong, Q.-T., & Maybank, S. J. (1992). Camera self-calibration: Theory and experiments. In European Conference on Computer Vision (ECCV'92) (pp. 321–334).

Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24, 381–395.

Fusiello, A., Benedetti, A., Farenzena, M., & Busti, A. (2004). Globally convergent autocalibration using interval analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 26, 1633–1638.

Harris, C., & Stephens, M. (1988). A combined corner and edge detector. In Alvey Vision Conference (p. 50), volume 15.

Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge University Press.

Hartley, R. et al. (1997). In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 19, 580–593.

Hartley, R. I. (1992). Estimation of relative camera positions for uncalibrated cameras. In European Conference on Computer Vision (ECCV'92) (pp. 579–587).

Hartley, R. I. (1994a). Euclidean reconstruction from uncalibrated views. In Applications of Invariance in Computer Vision (pp. 235–256).

Hartley, R. I. (1994b). Self-calibration from multiple views with a rotating camera. In European Conference on Computer Vision (ECCV'94) (pp. 471–478).

Heyden, A., & Åström, K. (1996). Euclidean reconstruction from constant intrinsic parameters. In International Conference on Pattern Recognition (ICPR'96) (pp. 339–343), volume 1.

Heyden, A., & Åström, K. (1997). Euclidean reconstruction from image sequences with varying and unknown focal length and principal point. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 438–443).

Horaud, R., & Csurka, G. (1998). Self-calibration and euclidean reconstruction using motions of a stereo rig. In Sixth International Conference on Computer Vision (ICCV'98) (pp. 96–103).

Jiang, Z., & Liu, S. (2012). Self-calibration of varying internal camera parameters algorithm based on quasi-affine reconstruction. Journal of Computers, 7, 774–778.

Kim, J.-S., Gurdjos, P., & Kweon, I.-S. (2005). Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 27, 637–642.

Knight, J., Zisserman, A., & Reid, I. (2003). Linear auto-calibration for ground plane motion. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03) (pp. I–503), volume 1.

Kruppa, E. (1913). Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung. Hölder.

Leroy, A. M., & Rousseeuw, P. J. (1987). Robust regression and outlier detection. Wiley Series in Probability and Mathematical Statistics, New York: Wiley, 1.

Lhuillier, M., & Quan, L. (2002). Quasi-dense reconstruction from image sequence. In European Conference on Computer Vision (ECCV'02) (pp. 125–139).

Li, H., & Hu, Z. (2002). A linear camera self-calibration technique based on projective reconstruction. Journal of Software, 13, 2286–2295.

Liebowitz, D., & Zisserman, A. (1999). Combining scene and auto-calibration constraints. In The Seventh IEEE International Conference on Computer Vision (ICCV'99) (pp. 293–300), volume 1.

Luong, Q.-T. (1992). Matrice fondamentale et autocalibration en vision par ordinateur. Ph.D. thesis, Paris 11.
Luong, Q.-T., & Faugeras, O. D. (1996). The fundamental matrix: Theory, algorithms, and stability analysis. International Journal of Computer Vision (IJCV), 17, 43–75.

Luong, Q.-T., & Faugeras, O. D. (1997). Self-calibration of a moving camera from point correspondences and fundamental matrices. International Journal of Computer Vision (IJCV), 22, 261–289.

Luong, Q.-T., & Viéville, T. (1994). Canonic representations for the geometries of multiple projective views. In European Conference on Computer Vision (ECCV'94) (pp. 589–599).

Malis, E., & Cipolla, R. (2000a). Multi-view constraints between collineations: application to self-calibration from unknown planar structures. In European Conference on Computer Vision (ECCV'00) (pp. 610–624).

Malis, E., & Cipolla, R. (2000b). Self-calibration of zooming cameras observing an unknown planar structure. In 15th International Conference on Pattern Recognition (ICPR'00) (pp. 85–88), volume 1.

Malis, E., & Cipolla, R. (2002). Camera self-calibration from unknown planar structures enforcing the multiview constraints between collineations. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 24, 1268–1272.

Maybank, S. J., & Faugeras, O. D. (1992). A theory of self-calibration of a moving camera. International Journal of Computer Vision (IJCV), 8, 123–151.

Moré, J. J. (1978). The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis, Lecture Notes in Mathematics (pp. 105–116), volume 630.

Mühlich, M., & Mester, R. (1998). The role of total least squares in motion analysis. In European Conference on Computer Vision (ECCV'98) (pp. 305–321).

Pollefeys, M., Koch, R., & Van Gool, L. (1999). Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters. International Journal of Computer Vision (IJCV), 32, 7–25.

Pollefeys, M., & Van Gool, L. (1997). A stratified approach to metric self-calibration. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 407–412).

Raguram, R., Frahm, J.-M., & Pollefeys, M. (2008). A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In 10th European Conference on Computer Vision (ECCV'08) (pp. 500–513).

Rousseeuw, P. J., & Leroy, A. M. (2005). Robust regression and outlier detection, volume 589. John Wiley & Sons.

Saaidi, A., Halli, A., Tairi, H., & Satori, K. (2009). Self-calibration using a planar scene and parallelogram. Journal of Graphics, Vision and Image Processing (ICGST-GVIP), 1687.

Stein, G. P. (1995). Accurate internal camera calibration using rotation, with analysis of sources of error. In The Fifth International Conference on Computer Vision (ICCV'95) (pp. 230–236).

Sturm, P. F., & Maybank, S. J. (1999). On plane-based camera calibration: A general algorithm, singularities, applications. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), volume 1.

Torr, P. H., Beardsley, P. A., & Murray, D. W. (1994). Robust vision. In The British Machine Vision Conference (BMVC) (pp. 1–10).

Torr, P. H. S. (1995). Motion segmentation and outlier detection. Ph.D. thesis, University of Oxford, England.

Triggs, B. (1997). Autocalibration and the absolute quadric. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 609–614).

Triggs, B. (1998). Autocalibration from planar scenes. In European Conference on Computer Vision (ECCV'98) (pp. 89–105).

Wang, Z.-F., & Zheng, Z.-G. (2008). A region based stereo matching algorithm using cooperative optimization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08) (pp. 1–8).
Yan, C., Zhang, Y., Xu, J., Dai, F., Li, L., Dai, Q., & Wu, F. (2014a). A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. IEEE Signal Processing Letters, 21, 573–576.

Yan, C., Zhang, Y., Xu, J., Dai, F., Zhang, J., Dai, Q., & Wu, F. (2014b). Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Transactions on Circuits and Systems for Video Technology, 24, 2077–2089.

Zeller, C., & Faugeras, O. (1996). Camera self-calibration from video sequences: the Kruppa equations revisited.

Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 22, 1330–1334.
Highlights:
- Works in the domain of self-calibration without any prior knowledge about the scene or the cameras.