
Accepted Manuscript

A Flexible Technique based on Fundamental matrix for Camera Self-Calibration


with Variable Intrinsic Parameters from two views

Bouchra Boudine, Sebastien Kramm, Nabil El Akkad, Abdelaziz Bensrhair,


Abderahim Saaidi, Khalid Satori

PII: S1047-3203(16)30061-X
DOI: http://dx.doi.org/10.1016/j.jvcir.2016.05.003
Reference: YJVCI 1754

To appear in: J. Vis. Commun. Image R.

Received Date: 23 February 2016


Revised Date: 26 April 2016
Accepted Date: 3 May 2016

Please cite this article as: B. Boudine, S. Kramm, N.E. Akkad, A. Bensrhair, A. Saaidi, K. Satori, A Flexible
Technique based on Fundamental matrix for Camera Self-Calibration with Variable Intrinsic Parameters from two
views, J. Vis. Commun. Image R. (2016), doi: http://dx.doi.org/10.1016/j.jvcir.2016.05.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers
we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and
review of the resulting proof before it is published in its final form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
A Flexible Technique based on Fundamental matrix for Camera Self-Calibration
with Variable Intrinsic Parameters from two views

Bouchra Boudinea,b,∗, Sebastien Krammb , Nabil El Akkada , Abdelaziz Bensrhairb , Abderahim Saaidia,c , Khalid
Satoria
a LIIAN, Department of Mathematics and Computer Science, Faculty of Science Dhar El Mahraz Sidi Mohamed Ben Abdellah University, Fez P.O
1796, Morocco
b LITIS, INSA ROUEN, Avenue of the University, 76801 Saint Etienne of Rouvray, Rouen, France
c LSI, Department of Mathematics Physics and Informatics, Polydisciplinary Faculty University of Sidi Mohamed Ben Abdellah University, Taza

P.O 1223, Morocco

Abstract
We propose a new self-calibration technique for cameras with varying intrinsic parameters that can be computed using only information contained in the images themselves. The method does not need any a priori knowledge of the orientations of the camera and is based on the use of a 3D scene containing an unknown isosceles right triangle. The importance of our approach resides in minimizing the constraints on the self-calibration system and in using only two images to estimate these parameters. The method is based on the formulation of a nonlinear cost function from the relationship between two matches, which are the projections of two points representing vertices of an isosceles right triangle, and the relationship between the images of the absolute conic. The resolution of this function enables us to estimate the cameras' intrinsic parameters. The algorithm is implemented and validated on several sets of synthetic and real image data.
Keywords: Self-calibration. Variable intrinsic parameters. Fundamental matrix. Absolute conic. Unknown 3D scene.

1. Introduction

Motivation. Self-calibration is an important task in computer vision, as it provides a powerful method for the recovery of 3D models from image sequences. Many applications require these models, including robotics, medicine (medical imaging) and security (face detection and recognition). In general, these works can be classified into two categories: self-calibration of cameras with constant intrinsic parameters and self-calibration of cameras with varying intrinsic parameters. A variety of self-calibration approaches is discussed in detail in section 2. In this paper, we are interested in the domain of self-calibration without any prior knowledge about the scene and with fewer constraints (the number of images used, the characteristics of the cameras, and the type of scene). This new approach does not require constraints on the scene or on the cameras, which shows that our method minimizes the constraints on the self-calibration system.

Contribution. In this work, we present a new method for self-calibration of cameras with varying intrinsic parameters, using an unknown 3D scene and only two images to estimate the intrinsic parameters of the cameras. Our method is based on the use of an unknown isosceles right triangle ABC belonging to the semicircle C(O, r) to perform the self-calibration procedure. After detecting the control points with the Harris algorithm (Harris & Stephens, 1988) and matching these points with the correlation measure ZNCC (Lhuillier & Quan, 2002; Di Stefano

∗ Corresponding author:
Email address: [email protected] (Bouchra Boudine)

Preprint submitted to Elsevier May 6, 2016


et al., 2005), we eliminate the false matches using the RANSAC algorithm (Fischler & Bolles, 1981; Wang & Zheng, 2008; Raguram et al., 2008). The projection of two points of the 3D scene into image planes taken from different views and the estimation of the Fundamental matrix are exploited to formulate a system of linear equations. Solving these equations allows us to obtain the projection matrices. The relations between the projection matrices, the projections of two points of the isosceles right triangle in the 3D scene, and the images of the absolute conic allow the formulation of a nonlinear cost function. The minimization of this function by the Levenberg-Marquardt algorithm (Moré, 1978) leads to an estimate of the intrinsic parameters of the cameras used.

The remainder of this paper. The paper is organized as follows. Section 2 reviews the main self-calibration techniques of recent years. Section 3 studies the basic structure of the camera model and the camera self-calibration tools. Section 4 presents the proposed camera self-calibration method for a 3D scene. Section 5 describes the experimental framework used to evaluate the performance of the approach. Finally, section 6 concludes with comments based on the study carried out in this paper.

2. Related work on camera self-calibration

Obtaining three-dimensional (3D) models of scenes from images has been a long-lasting research topic in computer vision. Many applications require these models; traditionally, robotics and inspection applications were considered. In this section an overview of the literature on camera self-calibration is given. Self-calibration methods are characterized by two types of parameters: constant or varying intrinsic parameters. Nowadays, however, more and more interest goes to studies based on self-calibration of cameras with varying intrinsic parameters.

Overview. Camera self-calibration was originally introduced by Faugeras et al. (1992) in computer vision. Methods based on the Kruppa equations were later developed by Maybank & Faugeras (1992), Heyden & Åström (1996) and Luong & Faugeras (1997). All these methods introduced the idea of self-calibration, where the camera calibration can be obtained from the image sequences themselves (a camera can be calibrated using only point matches between images), without requiring knowledge of the scene or of the camera motion. This has opened the possibility of reconstructing a scene from pre-recorded image sequences, or of computing the camera calibration during normal vision tasks. The original method by Faugeras et al. (1992) involved the computation of the Fundamental matrix, which encodes the epipolar geometry between two images (Faugeras, 1992; Hartley, 1992; Luong & Faugeras, 1996). From three views, a system of polynomial equations called Kruppa's equations (Kruppa, 1913) is constructed (Kruppa's equations are based on the relationship between the image of the absolute conic and the epipolar transformation). Since then, Luong (1992) has used an iterative search technique to solve the set of polynomial equations, but results were limited by the choice of initial values and the complexity of the equations. The self-calibration technique described in Zeller & Faugeras (1996) generalizes to a large number of images the algorithm developed by Faugeras et al. (1992) based on the Kruppa equations, solving the equations using energy minimisation. Hartley (1994a) uses a set of matched points from images taken with the same camera but from different positions; there is no constraint on the motion allowed between the camera positions. Each set of matched image points has a corresponding point in 3D, and the method involves an iterative search to find a set of calibrated camera matrices and 3D world points consistent with the image points. This approach of Hartley (1994a) was extended to purely rotating cameras with varying intrinsic parameters by Agapito et al. (De Agapito et al., 1998, 1999, 2002). The work of Agapito et al. is the first and most cited approach for self-calibration of a purely rotating camera with varying intrinsic parameters. This technique uses the Frobenius norm of the difference between the mapped dual image of the absolute conic and the dual image of the absolute conic; in De Agapito et al. (1999), the infinite homography constraint for cameras j and i with varying intrinsic parameters was used for self-calibration. Heyden & Åström (1997) proved that self-calibration in the case of continuously focusing/zooming cameras is possible even when the aspect ratio is known and no skew is present. Hartley
(1994b) presented a method of self-calibration using a rotating camera. When there is no translation of the camera between views, there is an image-to-image projective mapping which can be calculated using point matches. Luong & Viéville (1994) showed that a camera can be calibrated using two or more rotated views of an affine structure. A stratification approach was proposed by Pollefeys & Van Gool (1997) and refined recently by Chandraker et al. (2010). Triggs (1998) develops a practical algorithm for the self-calibration of a moving projective camera, from m ≥ 5 views of a planar scene; this technique is based on constraints involving the absolute quadric and the scene-plane to image-plane collineations. Sturm & Maybank (1999) and Zhang (2000) independently proposed to use planar patterns in 3D space to calibrate cameras. Triggs (1997) introduced the absolute (dual) conic as a numerical device for formulating the auto-calibration problem. These early works are, however, quite sensitive to noise and unreliable (Bougnoux, 1998; Hartley & Zisserman, 2003). Li & Hu (2002) propose a linear camera self-calibration technique based on projective reconstruction that can compute the five intrinsic parameters; in this method, the camera undergoes at least a pure translation and two arbitrary motions. A numerical solution using interval analysis was presented in Fusiello et al. (2004), but this method is quite time consuming. Other techniques use different constraints, such as camera motion (Hartley, 1994b; Stein, 1995; Horaud & Csurka, 1998; De Agapito et al., 1999), plane constraints (Triggs, 1998; Sturm & Maybank, 1999; Knight et al., 2003), concentric circles (Kim et al., 2005) and others (Pollefeys et al., 1999; Liebowitz & Zisserman, 1999). Malis & Cipolla (2000b) present a technique to self-calibrate cameras with varying focal length; this method does not need any a priori knowledge of the metric structure of the plane. Moreover, Malis & Cipolla (2002, 2000a) extend their work to allow the recovery of the varying focal length and propose to impose the constraints between collineations using a different iterative method. Jiang & Liu (2012) present a method of self-calibration of varying internal camera parameters based on quasi-affine reconstruction; this method does not require special scene constraints or special movement information to achieve self-calibration. El Akkad et al. (2014) propose a self-calibration method for a camera with varying intrinsic parameters from a sequence of images of an unknown 3D object. Applicative domains are numerous, including robotics or autonomous driving, using efficient video encoding techniques (Yan et al., 2014b,a).

3. Camera Self-calibration tools

3.1. The Camera Model

A camera is modeled by the usual pinhole model, see Fig. 1. The mapping is a perspective projection from the 3D projective space P3 (the world) to the 2D projective space P2 (the image plane). The world point denoted by B = (X, Y, Z, 1)^T is mapped to the image point denoted by b = (x, y, 1)^T by a 3 × 4 matrix Mi in homogeneous coordinates:

b = Mi B    (1)

Mi is called the camera projection matrix, which mixes both intrinsic and extrinsic parameters. The matrix Mi may be decomposed as:

Mi = Ki (Ri  ti)    (2)

where (Ri ti) are the extrinsic parameters: Ri is the rotation and ti the translation which relate the world coordinate system to the camera coordinate system, and Ki is a 3 × 3 calibration matrix containing the intrinsic parameters of the camera, given by:

Ki = ( fi   τi      u0i
       0    εi fi   v0i
       0    0       1   )    (3)

where fi represents the focal length of the camera for the view i, εi is the scale factor, (u0i, v0i) are the coordinates of the principal point in the image i, and τi is the image skew.

3.2. Matching and control Points

In this work, we used the Harris algorithm (Harris & Stephens, 1988) to detect interest points. The matching of interest points between each pair of images is performed by the ZNCC correlation measure (Zero-mean Normalized Cross-Correlation) (Lhuillier & Quan, 2002; Di Stefano et al., 2005), and then we eliminate the false matches by using the RANSAC algorithm (Fischler & Bolles, 1981; Wang & Zheng, 2008; Raguram et al., 2008).
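As a quick illustration of Eqs. (1)-(3), the following sketch builds Mi = Ki (Ri ti) and projects a world point B to an image point b. The numeric values (focal length, principal point, translation, world point) are illustrative only, not taken from the paper:

```python
import numpy as np

# Intrinsic matrix K_i of Eq. (3): focal f, scale factor eps,
# skew tau, principal point (u0, v0). Values are illustrative.
f, eps, tau, u0, v0 = 800.0, 1.0, 0.0, 320.0, 240.0
K = np.array([[f, tau, u0],
              [0.0, eps * f, v0],
              [0.0, 0.0, 1.0]])

# Extrinsic parameters (R_i, t_i): identity rotation, small translation.
R = np.eye(3)
t = np.array([[0.1], [0.0], [0.0]])

# Projection matrix M_i = K_i (R_i | t_i) of Eq. (2) -- a 3x4 matrix.
M = K @ np.hstack([R, t])

# World point B = (X, Y, Z, 1)^T mapped to image point b = M_i B, Eq. (1).
B = np.array([0.5, -0.2, 4.0, 1.0])
b = M @ B
b = b / b[2]          # homogeneous normalization to (x, y, 1)^T
print(b[:2])          # -> [440. 200.]
```

The final division by the third homogeneous coordinate is what turns the projective 3-vector into pixel coordinates (x, y).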
Figure 1: Pinhole model of a camera.

3.3. The Fundamental matrix

A large amount of work has been done on understanding and computing the Fundamental matrix (a 3 × 3 matrix) using matched points (Luong & Faugeras, 1996; Hartley et al., 1997; Mühlich & Mester, 1998; Boufama & Mohr, 1995; Beardsley et al., 1996; Deriche et al., 1994; Hartley, 1992; Torr, 1995). Based on these methods, robust approaches were developed to obtain the Fundamental matrix from real image data (see the work of Torr et al. (1994), Torr (1995) and Zhang et al. (1995)). These approaches use robust techniques such as RANSAC (Fischler & Bolles, 1981) or LMedS (Leroy & Rousseeuw, 1987; Rousseeuw & Leroy, 2005). In this article we used a robust RANSAC algorithm to calculate the Fundamental matrix Fij from eight matches between two views i and j.
The Fundamental matrix Fij between the images i and j is related to the epipole through a homography Hij:

Fij ∼ [ej]∧ Hij    (4)

with the skew-symmetric matrix [ej]∧ defined as:

[ej]∧ = (  0    −e3   e2
           e3    0   −e1
          −e2    e1   0  )

Once Fij is estimated, the epipole ej = (e1, e2, e3)^T of the image j can be estimated from the formula Fij^T ej = 0.

3.4. Absolute Conic and its Image

One of the most important concepts for self-calibration is the absolute conic (AC) and its image (IAC). The absolute conic is an imaginary conic of the plane at infinity, Ω = I3. The projection of the absolute conic in the image plane i (ωi, the Image of the Absolute Conic (IAC)) is directly related to the camera internal matrix Ki by expression (5):

ωi = (Ki Ki^T)^−1    (5)

3.5. Triangle geometry

An isosceles right triangle ABC has two equal sides AC = CB, which make the right angle at C (Fig. 2). These triangles have the following properties:

• The three bisectors of ABC intersect at O (the middle of the hypotenuse AB).
• AB (the hypotenuse) is the diameter of the semicircle and the chord of the arc AB.
• OA = OB = OC = r (the radius of the semicircle).
• The measures of the sides: AC = CB = r√2.
• The median CO is equal to half of the hypotenuse AB (CO = AB/2).

In an isosceles right triangle, the angles are:

• A = B = 45°
• C = 90°

Figure 2: An isosceles right triangle.

4. Method description

4.1. Vision system

We consider two points B and C of the 3D scene and their projections (b1i, b2i) and (b1j, b2j) in the planes of
the images i and j, respectively (Fig. 3). To estimate the projection matrices, we denote by Π the plane of this scene which contains the three vertices A, B and C of the isosceles right triangle ABC, and we consider two references: an Affine reference ℜ(A, Xa, Ya, Za) and a Euclidean reference ℜ(A, Xe, Ye, Ze), fixed on the planar scene and associated to the isosceles right triangle ABC, such that Za ⊥ Π and Ze ⊥ Π.
Table 1 presents the coordinates of the vertices A, B and C of an isosceles right triangle in the two references, Affine and Euclidean. In practice there is no automatic method to determine the triangle in the images, but we can always determine two vertices of the triangle ABC.

Points   Affine plane         Euclidean plane
C        B1 = (0, 1, 1)^T     B′1 = (r, r, 1)^T
B        B2 = (1, 0, 1)^T     B′2 = (2r, 0, 1)^T
A        B3 = (0, 0, 1)^T     B′3 = (0, 0, 1)^T

Table 1: Homogeneous coordinates of the vertices of the triangle in the two references, Affine and Euclidean.

Figure 3: Projection of triangle ABC in the two images i and j by the two matrices Pi and Pj.

4.2. Computing the Projection Matrices using the Fundamental matrix and the Epipoles

Considering two homographies Hi and Hj that project the plane Π in the images i and j, the projection of the two points B and C can be given by the following expression:

bnm ∼ Hm Bn    for m = i, j and n = 1, 2    (6)

This expression gives:

bni ∼ Hi Bn    for n = 1, 2    (7)

bnj ∼ Hj Bn    for n = 1, 2    (8)

where bni and bnj, respectively, represent the points in the images i and j that are the projections of the two vertices B and C of the 3D scene, and Hm, m = i, j, represent the homography matrices (Fig. 3) that project the plane of the scene in the images i and j (Saaidi et al., 2009). The homography matrices are expressed as follows:

Hm = Km Rm ( (1 0; 0 1; 0 0)   Rm^T tm )    for m = i, j    (9)

with Km the matrix of intrinsic parameters, Rm the rotation matrix and tm the translation vector of the displacement between the frame of the 3D plane and the camera frame. Expressions (7) and (8) can be written as follows:

bnm ∼ Hm Q Bn    for m = i, j and n = 1, 2    (10)

where B1 = (0, 1, 1)^T and B2 = (1, 0, 1)^T. The relation (10) can be written as:

bni ∼ Hi Q Bn    for n = 1, 2    (11)

bnj ∼ Hj Q Bn    for n = 1, 2    (12)
with:

Q = ( 2r  r  0
      0   r  0
      0   0  1 )

Q is the passage matrix between the Affine and Euclidean frames of the vertices of the triangle, such that:

B′n = Q Bn    (13)

From equation (10) we can write the projection matrices as:

Pm ∼ Hm Q    for m = i, j    (14)

This leads to:

Pi ∼ Hi Q    and    Pj ∼ Hj Q    (15)

where Pi and Pj are the 3 × 3 projection matrices of the two points B1 and B2 in the images i and j (see Table 1 and Fig. 3).
The homography Hij between the images i and j is given by expression (16) below:

Hij ∼ Hj Hi^−1    (16)

From expression (15), we have Pi ∼ Hi Q, so we can write:

Hi^−1 Pi ∼ Hi^−1 Hi Q    (17)

Knowing that Hi^−1 Hi = I3, with I3 the identity matrix, formula (17) gives:

Hi^−1 Pi ∼ Q    (18)

We replace expression (18) in the expression Pj ∼ Hj Q from (15) and obtain:

Pj ∼ Hj Hi^−1 Pi    (19)

Finally, we replace (16) in expression (19) and obtain the expression of Pj:

Pj ∼ Hij Pi    (20)

From expressions (11), (12) and (14), we have:

bnm ∼ Pm Bn    for m = i, j and n = 1, 2    (21)

This leads to:

bni ∼ Pi Bn    for n = 1, 2    (22)

bnj ∼ Pj Bn    for n = 1, 2    (23)

Furthermore, from equations (20) and (23) we can write:

bnj ∼ Hij Pi Bn    for n = 1, 2    (24)

Similarly, this leads to:

[ej]∧ bnj ∼ [ej]∧ Hij Pi Bn    for n = 1, 2    (25)

As expression (4) gives Fij ∼ [ej]∧ Hij, we can write (25) as:

[ej]∧ bnj ∼ Fij Pi Bn    for n = 1, 2    (26)

Expressions (22) and (26) each relate two homogeneous (3 × 1) vectors, which gives four equations between homogeneous vectors. For each of these, dividing the first two lines by the third one gives two scalar equations per equation. Therefore, each of these expressions gives four equations with eight unknowns. We eventually have eight equations with eight unknowns, which are the Pi parameters; thus, we can compute a numerical value for these. Once we have computed Pi, we can use the following expression to compute Pj:

[ej]∧ Pj ∼ [ej]∧ Hij Pi    (27)

The previous expression leads to:

[ej]∧ Pj ∼ Fij Pi    (28)

The Pj parameters can be estimated from formula (28). This expression gives eight equations with eight unknowns, which are the Pj elements.
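The two ingredients that Eqs. (25)-(28) rely on, the Fundamental matrix Fij and the epipole ej with its skew matrix [ej]∧, can be sketched as follows. This is a minimal normalized 8-point estimate with rank-2 enforcement, not the robust RANSAC variant the paper uses; the epipole is recovered as the null vector of Fij^T, following Fij^T ej = 0, and the two-view data at the bottom are entirely synthetic (invented cameras, not the paper's):

```python
import numpy as np

def eight_point(x1, x2):
    # Normalized 8-point estimate of the Fundamental matrix F_ij from
    # matched points x1 (image i) and x2 (image j), arrays of shape (N, 2).
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]], [0.0, s, -s * c[1]], [0.0, 0.0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each match gives one linear equation p2^T F p1 = 0 in the 9 entries of F.
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce rank 2: a valid F is singular
    F = T2.T @ (U @ np.diag([S[0], S[1], 0.0]) @ Vt) @ T1
    return F / np.linalg.norm(F)           # fix the arbitrary homogeneous scale

def epipole_j(F):
    # Epipole e_j of image j: the null vector of F^T, i.e. F_ij^T e_j = 0.
    return np.linalg.svd(F.T)[2][-1]

def skew(e):
    # Skew-symmetric matrix [e]^ of Eq. (4), so that skew(e) @ x = cross(e, x).
    return np.array([[0.0, -e[2], e[1]], [e[2], 0.0, -e[0]], [-e[1], e[0], 0.0]])

# Synthetic two-view check with invented cameras.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.1
R = np.array([[np.cos(th), 0.0, np.sin(th)], [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([[1.0], [0.2], [0.5]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (30, 2)), rng.uniform(4, 6, 30), np.ones(30)])

def proj(P):
    h = (P @ X.T).T
    return h[:, :2] / h[:, 2:]

x1, x2 = proj(P1), proj(P2)
F = eight_point(x1, x2)
p1 = np.column_stack([x1, np.ones(30)])
p2 = np.column_stack([x2, np.ones(30)])
print(np.abs(np.sum(p2 * (p1 @ F.T), axis=1)).max() < 1e-3)   # epipolar residuals
print(np.linalg.norm(F.T @ epipole_j(F)) < 1e-8)              # F^T e_j = 0
```

With noise-free synthetic matches, the epipolar constraint p2^T F p1 = 0 holds to numerical precision; with real matches, this linear estimate is what a RANSAC loop would score and refine.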
 ‹
4.3. Camera self-calibration equations GT G GT RTj t j
PTj ω j P j ∼ T (34)
tj Rj G tTj t j
In this part, we present our method to calculate the Where ωm = (Km KmT )−1 for m = i, j is the image of the
equations of self-calibration based on the idea of an absolute conic.
isosceles right triangle ABC. The main idea is to demon- From the expression (22), we can write:
strate the relationship between the images of the two ab-
solute conics (ωi and ω j ) and two points of the 3D scene αni bni = Pi Bn for n = 1, 2 (35)
and their projections (b1i , b2i ) and (b1 j , b2 j ) in the planes
of the images i and j, respectively. A nonlinear cost func- Where : Bn = (s1 , s2 , 1) T

tion is obtained by these relations and will be minimized


by the Levenberg-Marquart algorithm (Moré, 1978) to es- §
And:
timate the ωi and ω j elements. n = 1 ⇔ s1 = 0 and s2 = 1
Solving this function gives the varying intrinsic param- n = 2 ⇔ s1 = 1 and s2 = 0
eters of the camera used. „ Ž
The expressions (9) and (14) give: Pi11 Pi12 Pi13
Pi = Pi21 Pi22 Pi23 ; bni = (uni , vni , 1)T
„ Ž pi31 Pi32 Pi33
1 0
Pm = Km Rm 0 1 RTm tm Q for m = i, j αni = s1 Pi31 + s2 Pi32 + Pi33 .
0 0
(29) αni is a nonzero scale factor that is used to switch from
Therefore, from the previous expression we can write: (22) to (35). The latter provides an expression for αni .
To build the final calibration equations, we introduce
„ Ž two (3 × 3) matrices b′ni and Vn to rewrite equation (35).
1 0
Km−1 Pm = Rm 0 1 RTm tm Q for m = i, j ‰ “
0 0 Pi11 Pi13
(30) αni
uni
αni
„ Ž
1 s1 0
If we develop the formula (30) and by using the rotation b′ = Pi21 Pi23
vni ; Vn = 0 s2 0
matrix properties (Rm RTm = I3 = RTm Rm ),we obtain: ni
αni αni
Pi31 Pi33 0 1 1
1
 ‹ αni αni
GT G GT RTm tm
Pm ω m Pm ∼
T
for m = i, j Therefore, formula (35) gives:
tmT Rm G tmT tm
(31)
With: αni b′ni = Pi Vn for n = 1, 2 (36)
„ Ž
2r r  ‹ Formula (36) gives:
4r2 2r2
G= 0 r , G G= T
(32)
2r2 2r2 ′
Pi ∼ bni Vn−1 for n = 1, 2 (37)
0 0

From the previous expression (31), we can write: The same for P j , we can write :

• If m = i: P j ∼ bn j Vn−1 for n = 1, 2 (38)
 ‹ From the relations (33) and (37), we have for m = i the
GT G GT RTi ti
PTi ωi Pi ∼ (33) following expression:
tiT
Ri G tiT ti

• If m = j:

7
(b′ni Vn^−1)^T ωi (b′ni Vn^−1) ∼ ( G^T G        G^T Ri^T ti
                                   ti^T Ri G    ti^T ti     )    for n = 1, 2    (39)

Similarly, from expressions (34) and (38) we have for m = j the following expression:

(b′nj Vn^−1)^T ωj (b′nj Vn^−1) ∼ ( G^T G        G^T Rj^T tj
                                   tj^T Rj G    tj^T tj     )    for n = 1, 2    (40)

By replacing n by 1 and 2 in (39), we have:

(b′1i V1^−1)^T ωi (b′1i V1^−1) ∼ ( G^T G        G^T Ri^T ti
                                   ti^T Ri G    ti^T ti     )    (41)

and:

(b′2i V2^−1)^T ωi (b′2i V2^−1) ∼ ( G^T G        G^T Ri^T ti
                                   ti^T Ri G    ti^T ti     )    (42)

Similarly, (40) gives:

(b′1j V1^−1)^T ωj (b′1j V1^−1) ∼ ( G^T G        G^T Rj^T tj
                                   tj^T Rj G    tj^T tj     )    (43)

and:

(b′2j V2^−1)^T ωj (b′2j V2^−1) ∼ ( G^T G        G^T Rj^T tj
                                   tj^T Rj G    tj^T tj     )    (44)

According to the expressions (41) and (42), we deduce that the following two matrices, denoted E and D, are identical up to scale:

(b′1i V1^−1)^T ωi (b′1i V1^−1) ∼ (b′2i V2^−1)^T ωi (b′2i V2^−1)    (45)
            (E)                              (D)

Let us denote by E the matrix corresponding to (b′1i V1^−1)^T ωi (b′1i V1^−1):

E = ( e11i  e12i  e13i
      e12i  e22i  e23i
      e13i  e23i  e33i )

Similarly, let us denote by D the matrix corresponding to (b′2i V2^−1)^T ωi (b′2i V2^−1):

D = ( d11i  d12i  d13i
      d12i  d22i  d23i
      d13i  d23i  d33i )

Therefore, we deduce from (45) that:

e11i/e12i = d11i/d12i ⇔ e11i d12i − d11i e12i = 0
e12i/e13i = d12i/d13i ⇔ e12i d13i − d12i e13i = 0
e13i/e23i = d13i/d23i ⇔ e13i d23i − d13i e23i = 0    (46)
e23i/e33i = d23i/d33i ⇔ e23i d33i − d23i e33i = 0

According to the expressions (43) and (44), we deduce that the following two matrices, denoted L and N, are identical up to scale:

(b′1j V1^−1)^T ωj (b′1j V1^−1) ∼ (b′2j V2^−1)^T ωj (b′2j V2^−1)    (47)
            (L)                              (N)

Let us denote by L the matrix corresponding to (b′1j V1^−1)^T ωj (b′1j V1^−1):

L = ( l11j  l12j  l13j
      l12j  l22j  l23j
      l13j  l23j  l33j )

Similarly, let us denote by N the matrix corresponding to (b′2j V2^−1)^T ωj (b′2j V2^−1):

N = ( n11j  n12j  n13j
      n12j  n22j  n23j
      n13j  n23j  n33j )

Therefore, we deduce that:
l11j/l12j = n11j/n12j ⇔ l11j n12j − n11j l12j = 0
l12j/l13j = n12j/n13j ⇔ l12j n13j − n12j l13j = 0
l13j/l23j = n13j/n23j ⇔ l13j n23j − n13j l23j = 0    (48)
l23j/l33j = n23j/n33j ⇔ l23j n33j − n23j l33j = 0

Expressions (41) and (43) show that the first lines and columns of the matrices E and L are identical, which gives:

(b′1i V1^−1)^T ωi (b′1i V1^−1) ∼ (b′1j V1^−1)^T ωj (b′1j V1^−1)    (49)
            (E)                              (L)

Therefore, we deduce that:

e11i/e12i = l11j/l12j ⇔ e11i l12j − l11j e12i = 0    (50)

Expressions (42) and (44) show that the first lines and columns of the matrices D and N are identical, which gives:

(b′2i V2^−1)^T ωi (b′2i V2^−1) ∼ (b′2j V2^−1)^T ωj (b′2j V2^−1)    (51)
            (D)                              (N)

The previous expression gives:

d11i/d12i = n11j/n12j ⇔ d11i n12j − n11j d12i = 0    (52)

From the expressions (46), (48), (50) and (52) we obtain the following system of ten equations:

e11i d12i − d11i e12i = 0
e12i d13i − d12i e13i = 0
e13i d23i − d13i e23i = 0
e23i d33i − d23i e33i = 0
l11j n12j − n11j l12j = 0
l12j n13j − n12j l13j = 0    (53)
l13j n23j − n13j l23j = 0
l23j n33j − n23j l33j = 0
e11i l12j − l11j e12i = 0
d11i n12j − n11j d12i = 0

This system of equations has ten unknowns (five for ωi and five for ωj). It is nonlinear; therefore, to solve it, we minimize the following nonlinear cost function using the Levenberg-Marquardt algorithm (Moré, 1978):

min over ωi, ωj of  Σ_{i=1}^{k−1} Σ_{j=i+1}^{k} ( μi² + γi² + ηi² + φi² + μj² + γj² + ηj² + φj² + ζij² + ψij² )    (54)

with k the number of images.

The parameters of this cost function are given by the following expressions:

μi = e11i d12i − d11i e12i
γi = e12i d13i − d12i e13i
ηi = e13i d23i − d13i e23i
φi = e23i d33i − d23i e33i
μj = l11j n12j − n11j l12j
γj = l12j n13j − n12j l13j    (55)
ηj = l13j n23j − n13j l23j
φj = l23j n33j − n23j l33j
ζij = e11i l12j − l11j e12i
ψij = d11i n12j − n11j d12i

The minimization of the cost function (54) is carried out using the Levenberg-Marquardt algorithm (Moré, 1978). The optimization algorithm is nonlinear, so it requires an initialization step. Therefore, we propose the following conditions on the self-calibration system to determine the initial solution:

• The pixels are square, therefore εi = εj = 1 and τi = τj = 0.
• The principal point is located at the center of the image, so u0i, v0i, u0j and v0j are known.
• The focal lengths (fi, fj) are estimated by replacing the parameters (εi, τi, u0i, v0i, εj, τj, u0j, v0j) in expression (53) and by solving this system of equations.
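The initialization conditions above can be sketched as follows. This is a toy illustration rather than the paper's full system: the residual simply matches the IAC of Eq. (5) for a hypothetical "true" focal length, standing in for the stacked bilinear terms of Eqs. (54)-(55), and the image size and focal values are invented. It shows the shape of the optimization step: build K from the square-pixel, centered-principal-point assumptions and refine the remaining unknown by Levenberg-Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares

W, H = 640, 480                       # hypothetical image size
u0, v0 = W / 2.0, H / 2.0             # principal point at the image center
eps, tau = 1.0, 0.0                   # square pixels: eps = 1, tau = 0

def K_of(f):
    # Intrinsic matrix of Eq. (3) under the initialization assumptions.
    return np.array([[f, tau, u0],
                     [0.0, eps * f, v0],
                     [0.0, 0.0, 1.0]])

def iac(f):
    # Image of the absolute conic, Eq. (5): omega = (K K^T)^(-1).
    K = K_of(f)
    return np.linalg.inv(K @ K.T)

# Toy residual standing in for the stacked bilinear terms of Eqs. (54)-(55):
# we ask the optimizer to match the IAC of an unknown "true" focal length.
omega_true = iac(800.0)

def residuals(x):
    # Scaled so the residual entries are O(1) for the optimizer.
    return 1e6 * (iac(x[0]) - omega_true).ravel()

# Levenberg-Marquardt refinement from the initial guess f = 500.
sol = least_squares(residuals, x0=[500.0], method='lm')
print(round(float(abs(sol.x[0]))))    # recovers the focal length, 800
```

In the paper's actual system, the residual vector would instead stack the ten terms of Eq. (55), computed from the matrices E, D, L, N, over all image pairs.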
4.4. Steps of the Self-calibration algorithm

This section summarizes the different steps described above, needed to compute the varying intrinsic parameters of the cameras:

(1) Estimation of the Fundamental matrix
(2) Estimation of the projection matrices
(3) Formulation of the nonlinear cost function
(4) Minimization of the nonlinear cost function
(5) Computation of the intrinsic parameters of the cameras in both images from the set of values obtained through minimization

5. The experimental framework

In this section, we carry out an experimental study. A comparison of our approach with three other efficient methods of the literature, namely Zhang (2000), Triggs (1998) and Jiang & Liu (2012), is presented to show the performance of our proposed approach in terms of stability, convergence and accuracy. Experiments have been done both on synthetic and real data.

5.1. Simulations

In this section, a sequence of ten 480 × 480 images of a checkerboard pattern is simulated to test the performance of our proposed approach. The detection of interest points is done with the Harris algorithm (Harris & Stephens, 1988), and points are matched in each pair of images by the correlation measure ZNCC (Lhuillier & Quan, 2002; Di Stefano et al., 2005). The pattern is projected in images taken from different views, with Gaussian noise of standard deviation σ added to each image pixel. The projection of the pattern points in the image planes allows formulating the linear equations, and the solution of these equations gives the projection matrices.

Figure 4: Relative errors of u0 according to the number of images.

Figure 5: Relative errors of v0 according to the number of images.

In this simulation, we discuss the influence of the number of images used on the relative errors corresponding to u0, v0, f, ε and τ (represented in Figures 4 to 8) for our approach and the approaches of Zhang (2000), Triggs (1998) and Jiang & Liu (2012).

The analysis of these results shows, on one hand, that the relative errors corresponding to the parameters u0, v0, f, ε and τ are close to those calculated by the method of Zhang (2000), and a little different from those obtained by the methods of Triggs (1998) and Jiang & Liu (2012). In addition, Triggs (1998) uses more than four images to estimate the intrinsic parameters; on the contrary, our method estimates the parameters of the cameras used from two images only. On the other hand, the relative errors corresponding to the parameters u0, v0, f, ε and τ decrease as we increase the number of images (up to 7 images): the relative errors of the intrinsic parameters decrease almost linearly when the number of images is between two and six, decrease slowly when the number of images is between six and eight, and become almost
The relative errors of " f " according to the number of images

Proposed method
0.8
Zhang
Jiang
Triggs
0.7
Relative error of " f " (%)

0.6

0.5

0.4

0.3

0.2

0.1
2 3 4 5 6 7 8 9 10
Number of images

Figure 6: Relative errors of f according to the number of images. Figure 8: Relative errors of ε according to the number of images.

Figure 7: Relative errors of τ according to the number of images. Figure 9: The relative errors of u0 , v0 , f, ε and τ according to Gaussian
noises.

stable if the number of images exceeds eight. This shows


the improvement in the accuracy obtained by this method. at infinity and the presence of noise at this step reduces
In Fig. 10 we show the effect of the use of a large im- the quality of this estimation. This acts negatively on the
age number. It shows that when the number of images second step which consists of the estimation of the ho-
increases, the execution time of the different methods in- mography matrix induced by the plane at infinity.
creases implies the increase of parameter number to be es- In addition, the method of Triggs (1998) requires at
timated, which implies the increase of the equation num- least five images form a planar scene to calibrate the cam-
ber. From the Fig. 9, we deduce that our method is more era.
stable than the three other methods. This stability can be Furthermore, our method has several strong points:
partly explained by the fact that the method proposed by • The use of any camera
Jiang & Liu (2012) has two steps: the first is a quasi-
affine reconstruction using scene images to find the plane • The use of only two images to estimate the intrinsic
camera parameters

11
• Camera with free movement

• An unknown 3D scene

5.2. Real Data

Two 480 × 480 images of an unknown three-dimensional scene were taken by a CCD camera with varying intrinsic parameters from different views, to confirm the robustness of the approach presented in this paper (Fig. 10). This section deals with the experimental results of the different algorithms (Harris, ZNCC, RANSAC, Levenberg-Marquardt, etc.), implemented using the Java object-oriented programming language.

The interest points are produced with the Harris algorithm, as shown in Fig. 11, and the matches between these two images are shown in Fig. 12. The result obtained contains false matches. To eliminate them, we regularized all the matches with the RANSAC algorithm (Fischler & Bolles, 1981; Wang & Zheng, 2008; Raguram et al., 2008). Figures 13 and 14 below show the regularized matches (outlier and inlier points) obtained by this algorithm.

The projection of the points of the 3D scene into the two images allows us to estimate the geometric entities (the fundamental matrix and the projection matrices). Afterwards, the solution of a nonlinear system of equations (formula (51)) provides the parameters of the image of the absolute conic and finally the intrinsic parameters of the camera. Table 2 below presents the intrinsic parameters estimated by the proposed approach in this paper.

Methods                       f      u0     v0     τ      ε
Proposed method    Image 1    1175   232    252    0.27   0.78
                   Image 2    1182   235    242    0.31   0.82
Jiang              Image 1    1168   241    262    0.38   0.93
                   Image 2    1173   249    253    0.27   0.90

Table 2: The results of the intrinsic camera parameters estimated by two methods.

The experimental results of the intrinsic camera parameters presented in Table 2, obtained by the proposed approach on the real data and on synthetic data, show that they are a little different from those obtained by Jiang & Liu (2012). Furthermore, the present approach is also compared with two other methods, those of Zhang (2000) and Triggs (1998). The results of the present approach are almost identical to those obtained by Zhang (2000), and a little different from those obtained by Triggs (1998). Therefore, this approach provides a robust performance, very close to that of the other well-established methods. In addition, the present approach gives satisfactory results in terms of accuracy and stability, and our algorithms converge rapidly to the optimal solution, which is seen very clearly in the simplicity of the calculations.

6. Conclusion

In this paper, we presented a theoretical and practical study of a new method for camera self-calibration from an unknown three-dimensional scene, using cameras characterized by varying intrinsic parameters. Furthermore, we have minimized the constraints on the self-calibration system (the use of any camera with varying intrinsic parameters, a 3D scene, and only two points of the scene and two images to achieve the camera self-calibration procedure). This method is based on the demonstration of a relationship between two points of the 3D scene and their projections in the image planes, and on the relationships between the images of the absolute conic (IAC) for each pair of images. These relationships formulate a nonlinear cost function whose resolution, in two steps (initialization and optimization), allows estimating the intrinsic parameters of the cameras used. The simulations performed and the experimental results show the performance of the method proposed in this article.

References

Beardsley, P., Torr, P., & Zisserman, A. (1996). 3D model acquisition from extended image sequences. In European Conference on Computer Vision (ECCV'96) (pp. 683–695).

Boufama, B., & Mohr, R. (1995). Epipole and fundamental matrix estimation using virtual parallax. In International Conference on Computer Vision (ICCV'95) (pp. 1030–1036).
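As a companion to the protocol of Sections 5.1 and 5.2, the following is a minimal, self-contained sketch (in Python/NumPy, not the authors' Java implementation) of the synthetic part of the pipeline: 3D points are projected into two views with different intrinsic parameters, Gaussian pixel noise is added as in Section 5.1, and the fundamental matrix is recovered with the normalized eight-point algorithm. All function names are hypothetical, the intrinsic values are merely illustrative numbers of the same order as Table 2, and a non-planar point cloud is used because the eight-point algorithm degenerates for coplanar points.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_K(f, u0, v0):
    """Intrinsic matrix with focal length f and principal point (u0, v0)."""
    return np.array([[f, 0.0, u0],
                     [0.0, f, v0],
                     [0.0, 0.0, 1.0]])

def rot_y(a):
    """Rotation of angle a (radians) about the Y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project(P, X, sigma=0.0):
    """Project homogeneous 3D points X (N x 4) with a 3x4 matrix P,
    then add Gaussian pixel noise of standard deviation sigma."""
    x = (P @ X.T).T
    x = x[:, :2] / x[:, 2:3]
    return x + rng.normal(0.0, sigma, x.shape)

def normalize(x):
    """Similarity transform mapping points to zero mean, sqrt(2) mean norm."""
    m = x.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(x - m, axis=1))
    T = np.array([[s, 0.0, -s * m[0]], [0.0, s, -s * m[1]], [0.0, 0.0, 1.0]])
    xh = np.hstack([x, np.ones((len(x), 1))])
    return (T @ xh.T).T, T

def fundamental_8pt(x1, x2):
    """Normalized eight-point estimate of F such that x2^T F x1 = 0."""
    a, Ta = normalize(x1)
    b, Tb = normalize(x2)
    A = np.column_stack([b[:, 0] * a[:, 0], b[:, 0] * a[:, 1], b[:, 0],
                         b[:, 1] * a[:, 0], b[:, 1] * a[:, 1], b[:, 1],
                         a[:, 0], a[:, 1], np.ones(len(a))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt      # enforce rank 2
    F = Tb.T @ F @ Ta                            # undo the normalization
    return F / np.linalg.norm(F)

# Non-planar 3D point cloud in front of the cameras (homogeneous coordinates).
pts = rng.uniform(-2.0, 2.0, (64, 3))
X = np.hstack([pts, np.ones((64, 1))])

# Two views with different (illustrative) intrinsics and a small relative motion.
P1 = make_K(1175, 232, 252) @ np.hstack([np.eye(3), [[0.0], [0.0], [20.0]]])
P2 = make_K(1182, 235, 242) @ np.hstack([rot_y(0.1), [[1.0], [0.0], [20.0]]])

x1 = project(P1, X, sigma=0.5)   # Gaussian pixel noise, as in Section 5.1
x2 = project(P2, X, sigma=0.5)

F = fundamental_8pt(x1, x2)

# Mean distance (in pixels) from each point to its epipolar line; it should
# stay close to the injected noise level.
h1 = np.hstack([x1, np.ones((64, 1))])
h2 = np.hstack([x2, np.ones((64, 1))])
lines = (F @ h1.T).T
residual = np.mean(np.abs(np.sum(h2 * lines, axis=1))
                   / np.hypot(lines[:, 0], lines[:, 1]))
print(f"mean epipolar distance: {residual:.3f} px")
```

On top of such an F, the paper's cost function over the images of the absolute conic would then be initialized and minimized (e.g. with Levenberg-Marquardt); that step depends on formula (51) and is not reproduced here.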
Figure 10: Two images (i and j) of an unknown 3D scene.

Figure 11: The interest points are shown respectively in red and blue in images i and j.

Figure 12: The matches are shown and numbered in green in the two images.

Figure 13: Outlier points detected by RANSAC in the two images.

Figure 14: Inlier points detected by RANSAC in the two images.

Bougnoux, S. (1998). From projective to euclidean space under any practical situation, a criticism of self-calibration. In Sixth International Conference on Computer Vision (ICCV'98) (pp. 790–796).

Chandraker, M., Agarwal, S., Kriegman, D., & Belongie, S. (2010). Globally optimal algorithms for stratified autocalibration. International Journal of Computer Vision (IJCV'10), 90, 236–254.

Figure 15: The results of matches detected by Harris, ZNCC and RANSAC.

De Agapito, L., Hartley, R., Hayman, E. et al. (1999). Linear self-calibration of a rotating and zooming camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), volume 1.

De Agapito, L., Hayman, E., & Reid, I. (2002). Self-calibration of rotating and zooming cameras. International Journal of Computer Vision (IJCV'02), 47, 287–287.

De Agapito, L., Hayman, E., & Reid, I. D. (1998). Self-calibration of a rotating camera with varying intrinsic parameters. In The British Machine Vision Conference (BMVC'98) (pp. 1–10).

Deriche, R., Zhang, Z., Luong, Q.-T., & Faugeras, O. (1994). Robust recovery of the epipolar geometry for an uncalibrated stereo rig. In European Conference on Computer Vision (ECCV'94) (pp. 567–576).

Di Stefano, L., Mattoccia, S., & Tombari, F. (2005). ZNCC-based template matching using bounded partial correlation. Pattern Recognition Letters, 26, 2129–2134.

El Akkad, N., Merras, M., Saaidi, A., & Satori, K. (2014). Camera self-calibration with varying intrinsic parameters by an unknown three-dimensional scene. The Visual Computer, 30, 519–530.
Faugeras, O. D. (1992). What can be seen in three dimensions with an uncalibrated stereo rig? In European Conference on Computer Vision (ECCV'92) (pp. 563–578).

Faugeras, O. D., Luong, Q.-T., & Maybank, S. J. (1992). Camera self-calibration: Theory and experiments. In European Conference on Computer Vision (ECCV'92) (pp. 321–334).

Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24, 381–395.

Fusiello, A., Benedetti, A., Farenzena, M., & Busti, A. (2004). Globally convergent autocalibration using interval analysis. The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 26, 1633–1638.

Harris, C., & Stephens, M. (1988). A combined corner and edge detector. In Alvey Vision Conference (p. 50), volume 15.

Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge University Press.

Hartley, R. et al. (1997). In defense of the eight-point algorithm. The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 19, 580–593.

Hartley, R. I. (1992). Estimation of relative camera positions for uncalibrated cameras. In European Conference on Computer Vision (ECCV'92) (pp. 579–587).

Hartley, R. I. (1994a). Euclidean reconstruction from uncalibrated views. In Applications of invariance in computer vision (pp. 235–256).

Hartley, R. I. (1994b). Self-calibration from multiple views with a rotating camera. In European Conference on Computer Vision (ECCV'94) (pp. 471–478).

Heyden, A., & Aström, K. (1996). Euclidean reconstruction from constant intrinsic parameters. In International Conference on Pattern Recognition (ICPR'96) (pp. 339–343), volume 1.

Heyden, A., & Åström, K. (1997). Euclidean reconstruction from image sequences with varying and unknown focal length and principal point. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 438–443).

Horaud, R., & Csurka, G. (1998). Self-calibration and euclidean reconstruction using motions of a stereo rig. In Sixth International Conference on Computer Vision (ICCV'98) (pp. 96–103).

Jiang, Z., & Liu, S. (2012). Self-calibration of varying internal camera parameters algorithm based on quasi-affine reconstruction. Journal of Computers, 7, 774–778.

Kim, J.-S., Gurdjos, P., & Kweon, I.-S. (2005). Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI'05), 27, 637–642.

Knight, J., Zisserman, A., & Reid, I. (2003). Linear auto-calibration for ground plane motion. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'03) (pp. I–503), volume 1.

Kruppa, E. (1913). Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung. Hölder.

Leroy, A. M., & Rousseeuw, P. J. (1987). Robust regression and outlier detection. Wiley Series in Probability and Mathematical Statistics, New York: Wiley, 1.

Lhuillier, M., & Quan, L. (2002). Quasi-dense reconstruction from image sequence. In European Conference on Computer Vision (ECCV'02) (pp. 125–139).

Li, H., & Hu, Z.-y. A. (2002). Linear camera self-calibration technique based on projective reconstruction [J]. Journal of Software, 13, 2286–2295.

Liebowitz, D., & Zisserman, A. (1999). Combining scene and auto-calibration constraints. In The Seventh IEEE International Conference on Computer Vision (ICCV'99) (pp. 293–300), volume 1.

Luong, Q.-T. (1992). Matrice fondamentale et autocalibration en vision par ordinateur. Ph.D. thesis, Paris 11.
Luong, Q.-T., & Faugeras, O. D. (1996). The fundamental matrix: Theory, algorithms, and stability analysis. International Journal of Computer Vision (IJCV'96), 17, 43–75.

Luong, Q.-T., & Faugeras, O. D. (1997). Self-calibration of a moving camera from point correspondences and fundamental matrices. International Journal of Computer Vision (IJCV'97), 22, 261–289.

Luong, Q.-T., & Viéville, T. (1994). Canonic representations for the geometries of multiple projective views. In European Conference on Computer Vision (ECCV'94) (pp. 589–599).

Malis, E., & Cipolla, R. (2000a). Multi-view constraints between collineations: application to self-calibration from unknown planar structures. In European Conference on Computer Vision (ECCV'00) (pp. 610–624).

Malis, E., & Cipolla, R. (2000b). Self-calibration of zooming cameras observing an unknown planar structure. In 15th International Conference on Pattern Recognition (ICPR'00) (pp. 85–88), volume 1.

Malis, E., & Cipolla, R. (2002). Camera self-calibration from unknown planar structures enforcing the multiview constraints between collineations. The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 24, 1268–1272.

Maybank, S. J., & Faugeras, O. D. (1992). A theory of self-calibration of a moving camera. International Journal of Computer Vision (IJCV'92), 8, 123–151.

Moré, J. J. (1978). The Levenberg-Marquardt algorithm: implementation and theory. In Numerical Analysis - Lecture Notes in Mathematics (pp. 105–116), volume 630.

Mühlich, M., & Mester, R. (1998). The role of total least squares in motion analysis. In European Conference on Computer Vision (ECCV'98) (pp. 305–321).

Pollefeys, M., Koch, R., & Van Gool, L. (1999). Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters. International Journal of Computer Vision (IJCV'99), 32, 7–25.

Pollefeys, M., & Van Gool, L. (1997). A stratified approach to metric self-calibration. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 407–412).

Raguram, R., Frahm, J.-M., & Pollefeys, M. (2008). A comparative analysis of RANSAC techniques leading to adaptive real-time random sample consensus. In 10th European Conference on Computer Vision (ECCV'08) (pp. 500–513).

Rousseeuw, P. J., & Leroy, A. M. (2005). Robust regression and outlier detection, volume 589. John Wiley & Sons.

Saaidi, A., Halli, A., Tairi, H., & Satori, K. (2009). Self-calibration using a planar scene and parallelogram. Journal of Graphics, Vision and Image Processing (ICGST-GVIP), 1687.

Stein, G. P. (1995). Accurate internal camera calibration using rotation, with analysis of sources of error. In The Fifth International Conference on Computer Vision (ICCV'95) (pp. 230–236).

Sturm, P. F., & Maybank, S. J. (1999). On plane-based camera calibration: A general algorithm, singularities, applications. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'99), volume 1.

Torr, P. H., Beardsley, P. A., & Murray, D. W. (1994). Robust vision. In The British Machine Vision Conference (BMVC) (pp. 1–10).

Torr, P. H. S. (1995). Motion segmentation and outlier detection. Ph.D. thesis, University of Oxford, England.

Triggs, B. (1997). Autocalibration and the absolute quadric. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97) (pp. 609–614).

Triggs, B. (1998). Autocalibration from planar scenes. In European Conference on Computer Vision (ECCV'98) (pp. 89–105).

Wang, Z.-F., & Zheng, Z.-G. (2008). A region based stereo matching algorithm using cooperative optimization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'08) (pp. 1–8).
Yan, C., Zhang, Y., Xu, J., Dai, F., Li, L., Dai, Q., & Wu, F. (2014a). A highly parallel framework for HEVC coding unit partitioning tree decision on many-core processors. The IEEE Signal Processing Letters, 21, 573–576.

Yan, C., Zhang, Y., Xu, J., Dai, F., Zhang, J., Dai, Q., & Wu, F. (2014b). Efficient parallel framework for HEVC motion estimation on many-core processors. IEEE Transactions on Circuits and Systems for Video Technology, 24, 2077–2089.

Zeller, C., & Faugeras, O. (1996). Camera self-calibration from video sequences: the Kruppa equations revisited.

Zhang, Z. (2000). A flexible new technique for camera calibration. The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 22, 1330–1334.

Zhang, Z., Deriche, R., Faugeras, O., & Luong, Q.-T. (1995). A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry. Artificial Intelligence, 78, 87–119.
Highlights:

- Self-calibration of cameras with varying focal length.

- Automatic estimation of the intrinsic parameters of the camera.

- Works in the domain of self-calibration without any prior knowledge about the scene or the cameras.
